CodeBlocks should be in IsoSubspaces
author     fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Wed, 10 Jan 2018 00:30:38 +0000 (00:30 +0000)
committer  fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Wed, 10 Jan 2018 00:30:38 +0000 (00:30 +0000)
https://bugs.webkit.org/show_bug.cgi?id=180884

Reviewed by Saam Barati.
Source/JavaScriptCore:

This moves CodeBlocks into IsoSubspaces. Doing so means that we no longer need to have the
special CodeBlockSet HashSets of new and old CodeBlocks. We also no longer use
WeakReferenceHarvester or UnconditionalFinalizer. Instead:

- Code block sweeping is now just eager sweeping. This means that it automatically takes
  advantage of our unswept set, which roughly corresponds to what CodeBlockSet used to use
  its eden set for.

- The idea of the Executable "weakly visiting" the CodeBlock is replaced by the Executable
  marking an ExecutableToCodeBlockEdge object. That object being marked corresponds to what
  we used to call CodeBlock "having been weakly visited". This means that CodeBlockSet no
  longer has to clear the set of weakly visited code blocks. This also means that
  determining CodeBlock liveness, propagating CodeBlock transitions, and jettisoning
  CodeBlocks during GC are now the edge's job. The edge is also in an IsoSubspace and it
  has IsoCellSets to tell us which edges have output constraints (what we used to call
  CodeBlock's weak reference harvester) and which have unconditional finalizers.

- CodeBlock now uses an IsoCellSet to tell if it has an unconditional finalizer.

- CodeBlockSet still exists!  It has one unified HashSet of CodeBlocks that we use to
  handle requests from the sampler, debugger, and other facilities. They may want to ask
  if some pointer corresponds to a CodeBlock during stages of execution during which the
  GC is unable to answer isLive() queries. The trickiest is the sampling profiler thread.
  There is no way that the GC's isLive() could tell us whether a CodeBlock that had already
  been allocated has now been fully constructed.
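The unified CodeBlockSet described above can be sketched as a standalone model. This is not JSC's actual implementation (the real class lives in heap/CodeBlockSet.h and uses WTF containers and locking primitives); it is a minimal illustration of the idea that a single locked HashSet lets threads like the sampling profiler validate arbitrary pointers even when the GC cannot answer isLive() queries:

```cpp
#include <cassert>
#include <mutex>
#include <unordered_set>

struct CodeBlock {};

// Simplified stand-in for JSC's CodeBlockSet: one unified set of all
// CodeBlocks, queried under a lock. A candidate pointer from a profiler
// sample is a CodeBlock only if it is present in this set.
class CodeBlockSet {
public:
    void add(CodeBlock* block)
    {
        std::lock_guard<std::mutex> locker(m_lock);
        m_codeBlocks.insert(block);
    }

    void remove(CodeBlock* block)
    {
        std::lock_guard<std::mutex> locker(m_lock);
        m_codeBlocks.erase(block);
    }

    // Safe to call with any pointer-sized value, e.g. one read off a
    // sampled stack; membership in the set is the validity check.
    bool contains(const void* candidate)
    {
        std::lock_guard<std::mutex> locker(m_lock);
        return m_codeBlocks.count(static_cast<CodeBlock*>(const_cast<void*>(candidate))) != 0;
    }

private:
    std::mutex m_lock;
    std::unordered_set<CodeBlock*> m_codeBlocks;
};
```

The CodeBlock constructor and destructor pair calls to add() and remove(), so membership brackets the object's full lifetime, including the window between allocation and full construction that the GC's liveness query cannot cover.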

* JavaScriptCore.xcodeproj/project.pbxproj:
* Sources.txt:
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::CodeBlock):
(JSC::CodeBlock::finishCreation):
(JSC::CodeBlock::finishCreationCommon):
(JSC::CodeBlock::~CodeBlock):
(JSC::CodeBlock::visitChildren):
(JSC::CodeBlock::propagateTransitions):
(JSC::CodeBlock::determineLiveness):
(JSC::CodeBlock::finalizeUnconditionally):
(JSC::CodeBlock::stronglyVisitStrongReferences):
(JSC::CodeBlock::hasInstalledVMTrapBreakpoints const):
(JSC::CodeBlock::installVMTrapBreakpoints):
(JSC::CodeBlock::dumpMathICStats):
(JSC::CodeBlock::visitWeakly): Deleted.
(JSC::CodeBlock::WeakReferenceHarvester::visitWeakReferences): Deleted.
(JSC::CodeBlock::UnconditionalFinalizer::finalizeUnconditionally): Deleted.
* bytecode/CodeBlock.h:
(JSC::CodeBlock::subspaceFor):
(JSC::CodeBlock::ownerEdge const):
(JSC::CodeBlock::clearVisitWeaklyHasBeenCalled): Deleted.
* bytecode/EvalCodeBlock.h:
(JSC::EvalCodeBlock::create): Deleted.
(JSC::EvalCodeBlock::createStructure): Deleted.
(JSC::EvalCodeBlock::variable): Deleted.
(JSC::EvalCodeBlock::numVariables): Deleted.
(JSC::EvalCodeBlock::functionHoistingCandidate): Deleted.
(JSC::EvalCodeBlock::numFunctionHoistingCandidates): Deleted.
(JSC::EvalCodeBlock::EvalCodeBlock): Deleted.
(JSC::EvalCodeBlock::unlinkedEvalCodeBlock const): Deleted.
* bytecode/ExecutableToCodeBlockEdge.cpp: Added.
(JSC::ExecutableToCodeBlockEdge::createStructure):
(JSC::ExecutableToCodeBlockEdge::create):
(JSC::ExecutableToCodeBlockEdge::visitChildren):
(JSC::ExecutableToCodeBlockEdge::visitOutputConstraints):
(JSC::ExecutableToCodeBlockEdge::finalizeUnconditionally):
(JSC::ExecutableToCodeBlockEdge::activate):
(JSC::ExecutableToCodeBlockEdge::deactivate):
(JSC::ExecutableToCodeBlockEdge::deactivateAndUnwrap):
(JSC::ExecutableToCodeBlockEdge::wrap):
(JSC::ExecutableToCodeBlockEdge::wrapAndActivate):
(JSC::ExecutableToCodeBlockEdge::ExecutableToCodeBlockEdge):
(JSC::ExecutableToCodeBlockEdge::runConstraint):
* bytecode/ExecutableToCodeBlockEdge.h: Added.
(JSC::ExecutableToCodeBlockEdge::subspaceFor):
(JSC::ExecutableToCodeBlockEdge::codeBlock const):
(JSC::ExecutableToCodeBlockEdge::unwrap):
* bytecode/FunctionCodeBlock.h:
(JSC::FunctionCodeBlock::subspaceFor):
(JSC::FunctionCodeBlock::createStructure):
* bytecode/ModuleProgramCodeBlock.h:
(JSC::ModuleProgramCodeBlock::create): Deleted.
(JSC::ModuleProgramCodeBlock::createStructure): Deleted.
(JSC::ModuleProgramCodeBlock::ModuleProgramCodeBlock): Deleted.
* bytecode/ProgramCodeBlock.h:
(JSC::ProgramCodeBlock::create): Deleted.
(JSC::ProgramCodeBlock::createStructure): Deleted.
(JSC::ProgramCodeBlock::ProgramCodeBlock): Deleted.
* debugger/Debugger.cpp:
(JSC::Debugger::SetSteppingModeFunctor::operator() const):
(JSC::Debugger::ToggleBreakpointFunctor::operator() const):
(JSC::Debugger::ClearCodeBlockDebuggerRequestsFunctor::operator() const):
(JSC::Debugger::ClearDebuggerRequestsFunctor::operator() const):
* heap/CodeBlockSet.cpp:
(JSC::CodeBlockSet::contains):
(JSC::CodeBlockSet::dump const):
(JSC::CodeBlockSet::add):
(JSC::CodeBlockSet::remove):
(JSC::CodeBlockSet::promoteYoungCodeBlocks): Deleted.
(JSC::CodeBlockSet::clearMarksForFullCollection): Deleted.
(JSC::CodeBlockSet::lastChanceToFinalize): Deleted.
(JSC::CodeBlockSet::deleteUnmarkedAndUnreferenced): Deleted.
* heap/CodeBlockSet.h:
* heap/CodeBlockSetInlines.h:
(JSC::CodeBlockSet::iterate):
(JSC::CodeBlockSet::iterateViaSubspaces):
* heap/ConservativeRoots.cpp:
(JSC::ConservativeRoots::genericAddPointer):
(JSC::DummyMarkHook::markKnownJSCell):
(JSC::CompositeMarkHook::mark):
(JSC::CompositeMarkHook::markKnownJSCell):
* heap/ConservativeRoots.h:
* heap/Heap.cpp:
(JSC::Heap::lastChanceToFinalize):
(JSC::Heap::finalizeMarkedUnconditionalFinalizers):
(JSC::Heap::finalizeUnconditionalFinalizers):
(JSC::Heap::beginMarking):
(JSC::Heap::deleteUnmarkedCompiledCode):
(JSC::Heap::sweepInFinalize):
(JSC::Heap::forEachCodeBlockImpl):
(JSC::Heap::forEachCodeBlockIgnoringJITPlansImpl):
(JSC::Heap::addCoreConstraints):
(JSC::Heap::finalizeUnconditionalFinalizersInIsoSubspace): Deleted.
* heap/Heap.h:
* heap/HeapCell.h:
* heap/HeapCellInlines.h:
(JSC::HeapCell::subspace const):
* heap/HeapInlines.h:
(JSC::Heap::forEachCodeBlock):
(JSC::Heap::forEachCodeBlockIgnoringJITPlans):
* heap/HeapUtil.h:
(JSC::HeapUtil::findGCObjectPointersForMarking):
* heap/IsoCellSet.cpp:
(JSC::IsoCellSet::parallelNotEmptyMarkedBlockSource):
* heap/IsoCellSet.h:
* heap/IsoCellSetInlines.h:
(JSC::IsoCellSet::forEachMarkedCellInParallel):
(JSC::IsoCellSet::forEachLiveCell):
* heap/LargeAllocation.h:
(JSC::LargeAllocation::subspace const):
* heap/MarkStackMergingConstraint.cpp:
(JSC::MarkStackMergingConstraint::executeImpl):
* heap/MarkStackMergingConstraint.h:
* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::parallelNotEmptyBlockSource):
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::Handle::didAddToAllocator):
(JSC::MarkedBlock::Handle::didRemoveFromAllocator):
* heap/MarkedBlock.h:
(JSC::MarkedBlock::subspace const):
* heap/MarkedBlockInlines.h:
(JSC::MarkedBlock::Handle::forEachLiveCell):
* heap/MarkedSpaceInlines.h:
(JSC::MarkedSpace::forEachLiveCell):
* heap/MarkingConstraint.cpp:
(JSC::MarkingConstraint::execute):
(JSC::MarkingConstraint::doParallelWork):
(JSC::MarkingConstraint::finishParallelWork): Deleted.
(JSC::MarkingConstraint::doParallelWorkImpl): Deleted.
(JSC::MarkingConstraint::finishParallelWorkImpl): Deleted.
* heap/MarkingConstraint.h:
* heap/MarkingConstraintSet.cpp:
(JSC::MarkingConstraintSet::add):
* heap/MarkingConstraintSet.h:
(JSC::MarkingConstraintSet::add):
* heap/MarkingConstraintSolver.cpp:
(JSC::MarkingConstraintSolver::execute):
(JSC::MarkingConstraintSolver::addParallelTask):
(JSC::MarkingConstraintSolver::runExecutionThread):
(JSC::MarkingConstraintSolver::didExecute): Deleted.
* heap/MarkingConstraintSolver.h:
(JSC::MarkingConstraintSolver::TaskWithConstraint::TaskWithConstraint):
(JSC::MarkingConstraintSolver::TaskWithConstraint::operator== const):
* heap/SimpleMarkingConstraint.cpp:
(JSC::SimpleMarkingConstraint::SimpleMarkingConstraint):
(JSC::SimpleMarkingConstraint::executeImpl):
* heap/SimpleMarkingConstraint.h:
(JSC::SimpleMarkingConstraint::SimpleMarkingConstraint):
* heap/SlotVisitor.cpp:
(JSC::SlotVisitor::addParallelConstraintTask):
* heap/SlotVisitor.h:
* heap/Subspace.cpp:
(JSC::Subspace::sweep):
* heap/Subspace.h:
* heap/SubspaceInlines.h:
(JSC::Subspace::forEachLiveCell):
* llint/LowLevelInterpreter.asm:
* runtime/EvalExecutable.cpp:
(JSC::EvalExecutable::visitChildren):
* runtime/EvalExecutable.h:
(JSC::EvalExecutable::codeBlock):
* runtime/FunctionExecutable.cpp:
(JSC::FunctionExecutable::baselineCodeBlockFor):
(JSC::FunctionExecutable::visitChildren):
* runtime/FunctionExecutable.h:
* runtime/JSType.h:
* runtime/ModuleProgramExecutable.cpp:
(JSC::ModuleProgramExecutable::visitChildren):
* runtime/ModuleProgramExecutable.h:
* runtime/ProgramExecutable.cpp:
(JSC::ProgramExecutable::visitChildren):
* runtime/ProgramExecutable.h:
* runtime/ScriptExecutable.cpp:
(JSC::ScriptExecutable::installCode):
(JSC::ScriptExecutable::newReplacementCodeBlockFor):
* runtime/VM.cpp:
(JSC::VM::VM):
* runtime/VM.h:
(JSC::VM::SpaceAndFinalizerSet::SpaceAndFinalizerSet):
(JSC::VM::SpaceAndFinalizerSet::finalizerSetFor):
(JSC::VM::forEachCodeBlockSpace):
* runtime/VMTraps.cpp:
(JSC::VMTraps::handleTraps):
* tools/VMInspector.cpp:
(JSC::VMInspector::codeBlockForMachinePC):
(JSC::VMInspector::isValidCodeBlock):

Source/WebCore:

No new tests because no new behavior.

Adopting new parallel constraint API, so that more of the logic of doing parallel
constraint solving is shared between the DOM's output constraints and JSC's output
constraints.
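The shape of the shared parallel-constraint API can be sketched as follows. This is a hypothetical, self-contained model, not the actual MarkingConstraintSolver: instead of each constraint implementing its own doParallelWorkImpl/finishParallelWorkImpl, a constraint's executeImpl hands tasks to a common solver, which drains them on worker threads:

```cpp
#include <atomic>
#include <cassert>
#include <functional>
#include <thread>
#include <vector>

// Toy solver: constraints add tasks; the solver distributes them across
// worker threads, so the parallel-draining logic lives in one place.
class ConstraintSolver {
public:
    void addParallelTask(std::function<void()> task)
    {
        m_tasks.push_back(std::move(task));
    }

    void run(unsigned threadCount)
    {
        std::atomic<size_t> nextTask { 0 };
        std::vector<std::thread> threads;
        for (unsigned i = 0; i < threadCount; ++i) {
            threads.emplace_back([&] {
                // Each worker claims task indices until none remain.
                for (size_t index; (index = nextTask.fetch_add(1)) < m_tasks.size();)
                    m_tasks[index]();
            });
        }
        for (auto& thread : threads)
            thread.join();
    }

private:
    std::vector<std::function<void()>> m_tasks;
};
```

With this shape, the DOM's output constraint and JSC's edge output constraint both reduce to "enumerate work, post tasks", sharing the scheduling machinery.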

* bindings/js/DOMGCOutputConstraint.cpp:
(WebCore::DOMGCOutputConstraint::executeImpl):
(WebCore::DOMGCOutputConstraint::doParallelWorkImpl): Deleted.
(WebCore::DOMGCOutputConstraint::finishParallelWorkImpl): Deleted.
* bindings/js/DOMGCOutputConstraint.h:

Source/WTF:

Deque<>::contains() is helpful for a debug ASSERT.

* wtf/Deque.h:
(WTF::inlineCapacity>::contains):
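The helper's behavior can be illustrated with std::deque standing in for WTF::Deque (a sketch, not the WTF implementation):

```cpp
#include <algorithm>
#include <cassert>
#include <deque>

// Linear-scan membership test, the kind of helper that backs a debug
// ASSERT like ASSERT(!queue.contains(task)).
template<typename T>
bool dequeContains(const std::deque<T>& deque, const T& value)
{
    return std::find(deque.begin(), deque.end(), value) != deque.end();
}
```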

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@226667 268f45cc-cd09-0410-ab3c-d52691b4dbfc

68 files changed:
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/Sources.txt
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/bytecode/CodeBlock.h
Source/JavaScriptCore/bytecode/EvalCodeBlock.h
Source/JavaScriptCore/bytecode/ExecutableToCodeBlockEdge.cpp [new file with mode: 0644]
Source/JavaScriptCore/bytecode/ExecutableToCodeBlockEdge.h [new file with mode: 0644]
Source/JavaScriptCore/bytecode/FunctionCodeBlock.h
Source/JavaScriptCore/bytecode/ModuleProgramCodeBlock.h
Source/JavaScriptCore/bytecode/ProgramCodeBlock.h
Source/JavaScriptCore/debugger/Debugger.cpp
Source/JavaScriptCore/heap/CodeBlockSet.cpp
Source/JavaScriptCore/heap/CodeBlockSet.h
Source/JavaScriptCore/heap/CodeBlockSetInlines.h
Source/JavaScriptCore/heap/ConservativeRoots.cpp
Source/JavaScriptCore/heap/ConservativeRoots.h
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/heap/Heap.h
Source/JavaScriptCore/heap/HeapCell.h
Source/JavaScriptCore/heap/HeapCellInlines.h
Source/JavaScriptCore/heap/HeapInlines.h
Source/JavaScriptCore/heap/HeapUtil.h
Source/JavaScriptCore/heap/IsoCellSet.cpp
Source/JavaScriptCore/heap/IsoCellSet.h
Source/JavaScriptCore/heap/IsoCellSetInlines.h
Source/JavaScriptCore/heap/LargeAllocation.h
Source/JavaScriptCore/heap/MarkStackMergingConstraint.cpp
Source/JavaScriptCore/heap/MarkStackMergingConstraint.h
Source/JavaScriptCore/heap/MarkedAllocator.cpp
Source/JavaScriptCore/heap/MarkedBlock.cpp
Source/JavaScriptCore/heap/MarkedBlock.h
Source/JavaScriptCore/heap/MarkedBlockInlines.h
Source/JavaScriptCore/heap/MarkedSpaceInlines.h
Source/JavaScriptCore/heap/MarkingConstraint.cpp
Source/JavaScriptCore/heap/MarkingConstraint.h
Source/JavaScriptCore/heap/MarkingConstraintSet.cpp
Source/JavaScriptCore/heap/MarkingConstraintSet.h
Source/JavaScriptCore/heap/MarkingConstraintSolver.cpp
Source/JavaScriptCore/heap/MarkingConstraintSolver.h
Source/JavaScriptCore/heap/SimpleMarkingConstraint.cpp
Source/JavaScriptCore/heap/SimpleMarkingConstraint.h
Source/JavaScriptCore/heap/SlotVisitor.cpp
Source/JavaScriptCore/heap/SlotVisitor.h
Source/JavaScriptCore/heap/Subspace.cpp
Source/JavaScriptCore/heap/Subspace.h
Source/JavaScriptCore/heap/SubspaceInlines.h
Source/JavaScriptCore/llint/LowLevelInterpreter.asm
Source/JavaScriptCore/runtime/EvalExecutable.cpp
Source/JavaScriptCore/runtime/EvalExecutable.h
Source/JavaScriptCore/runtime/FunctionExecutable.cpp
Source/JavaScriptCore/runtime/FunctionExecutable.h
Source/JavaScriptCore/runtime/JSType.h
Source/JavaScriptCore/runtime/ModuleProgramExecutable.cpp
Source/JavaScriptCore/runtime/ModuleProgramExecutable.h
Source/JavaScriptCore/runtime/ProgramExecutable.cpp
Source/JavaScriptCore/runtime/ProgramExecutable.h
Source/JavaScriptCore/runtime/ScriptExecutable.cpp
Source/JavaScriptCore/runtime/VM.cpp
Source/JavaScriptCore/runtime/VM.h
Source/JavaScriptCore/runtime/VMTraps.cpp
Source/JavaScriptCore/tools/VMInspector.cpp
Source/WTF/ChangeLog
Source/WTF/wtf/Deque.h
Source/WebCore/ChangeLog
Source/WebCore/bindings/js/DOMGCOutputConstraint.cpp
Source/WebCore/bindings/js/DOMGCOutputConstraint.h
Tools/Scripts/run-jsc-benchmarks

index 4b43dc1..0b59cea 100644 (file)
                0F5CF9891E9ED65200C18692 /* AirStackAllocation.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5CF9871E9ED64E00C18692 /* AirStackAllocation.h */; };
                0F5EF91F16878F7D003E5C25 /* JITThunks.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5EF91C16878F78003E5C25 /* JITThunks.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F5F08CF146C7633000472A9 /* UnconditionalFinalizer.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5F08CE146C762F000472A9 /* UnconditionalFinalizer.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F60FE901FFC37020003320A /* ExecutableToCodeBlockEdge.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F60FE8E1FFC36FD0003320A /* ExecutableToCodeBlockEdge.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F61832A1C45BF070072450B /* AirCCallingConvention.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6183211C45BF070072450B /* AirCCallingConvention.h */; };
                0F61832D1C45BF070072450B /* AirEmitShuffle.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6183241C45BF070072450B /* AirEmitShuffle.h */; };
                0F61832F1C45BF070072450B /* AirLowerAfterRegAlloc.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6183261C45BF070072450B /* AirLowerAfterRegAlloc.h */; };
                0F5EF91B16878F78003E5C25 /* JITThunks.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITThunks.cpp; sourceTree = "<group>"; };
                0F5EF91C16878F78003E5C25 /* JITThunks.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITThunks.h; sourceTree = "<group>"; };
                0F5F08CE146C762F000472A9 /* UnconditionalFinalizer.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = UnconditionalFinalizer.h; sourceTree = "<group>"; };
+               0F60FE8D1FFC36FC0003320A /* ExecutableToCodeBlockEdge.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ExecutableToCodeBlockEdge.cpp; sourceTree = "<group>"; };
+               0F60FE8E1FFC36FD0003320A /* ExecutableToCodeBlockEdge.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ExecutableToCodeBlockEdge.h; sourceTree = "<group>"; };
                0F6183201C45BF070072450B /* AirCCallingConvention.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = AirCCallingConvention.cpp; path = b3/air/AirCCallingConvention.cpp; sourceTree = "<group>"; };
                0F6183211C45BF070072450B /* AirCCallingConvention.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = AirCCallingConvention.h; path = b3/air/AirCCallingConvention.h; sourceTree = "<group>"; };
                0F6183221C45BF070072450B /* AirCustom.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = AirCustom.cpp; path = b3/air/AirCustom.cpp; sourceTree = "<group>"; };
                                14AD91121DCA97FD0014F9FE /* EvalCodeBlock.cpp */,
                                14AD91061DCA92940014F9FE /* EvalCodeBlock.h */,
                                14142E521B796EDD00F4BF4B /* ExecutableInfo.h */,
+                               0F60FE8D1FFC36FC0003320A /* ExecutableToCodeBlockEdge.cpp */,
+                               0F60FE8E1FFC36FD0003320A /* ExecutableToCodeBlockEdge.h */,
                                0F56A1D415001CF2002992B1 /* ExecutionCounter.cpp */,
                                0F56A1D115000F31002992B1 /* ExecutionCounter.h */,
                                0F0332BF18ADFAE1005F979A /* ExitingJITType.cpp */,
                                A53243981856A489002ED692 /* CombinedDomains.json in Headers */,
                                BC18C3F30E16F5CD00B34460 /* CommonIdentifiers.h in Headers */,
                                0F15F15F14B7A73E005DE37D /* CommonSlowPaths.h in Headers */,
+                               0F60FE901FFC37020003320A /* ExecutableToCodeBlockEdge.h in Headers */,
                                6553A33217A1F1EE008CF6F3 /* CommonSlowPathsExceptions.h in Headers */,
                                0FD82E39141AB14D00179C94 /* CompactJITCodeMap.h in Headers */,
                                A7E5A3A81797432D00E893C0 /* CompilationResult.h in Headers */,
index 2c1eba8..8ad3f2a 100644 (file)
@@ -210,6 +210,7 @@ bytecode/DeferredCompilationCallback.cpp
 bytecode/DeferredSourceDump.cpp
 bytecode/DirectEvalCodeCache.cpp
 bytecode/EvalCodeBlock.cpp
+bytecode/ExecutableToCodeBlockEdge.cpp
 bytecode/ExecutionCounter.cpp
 bytecode/ExitKind.cpp
 bytecode/ExitingJITType.cpp
index f9f6ad1..683a968 100644 (file)
@@ -51,6 +51,7 @@
 #include "GetPutInfo.h"
 #include "InlineCallFrame.h"
 #include "InterpreterInlines.h"
+#include "IsoCellSetInlines.h"
 #include "JIT.h"
 #include "JITMathIC.h"
 #include "JSBigInt.h"
@@ -329,20 +330,19 @@ CodeBlock::CodeBlock(VM* vm, Structure* structure, CopyParsedBlockTag, CodeBlock
     , m_optimizationDelayCounter(0)
     , m_reoptimizationRetryCounter(0)
     , m_creationTime(MonotonicTime::now())
-    , m_unconditionalFinalizer(makePoisonedUnique<UnconditionalFinalizer>(*this))
-    , m_weakReferenceHarvester(makePoisonedUnique<WeakReferenceHarvester>(*this))
 {
-    m_visitWeaklyHasBeenCalled = false;
-
     ASSERT(heap()->isDeferred());
     ASSERT(m_scopeRegister.isLocal());
 
     setNumParameters(other.numParameters());
+    
+    vm->heap.codeBlockSet().add(this);
 }
 
 void CodeBlock::finishCreation(VM& vm, CopyParsedBlockTag, CodeBlock& other)
 {
     Base::finishCreation(vm);
+    finishCreationCommon(vm);
 
     optimizeAfterWarmUp();
     jitAfterWarmUp();
@@ -354,8 +354,6 @@ void CodeBlock::finishCreation(VM& vm, CopyParsedBlockTag, CodeBlock& other)
         m_rareData->m_switchJumpTables = other.m_rareData->m_switchJumpTables;
         m_rareData->m_stringSwitchJumpTables = other.m_rareData->m_stringSwitchJumpTables;
     }
-    
-    heap()->m_codeBlocks->add(this);
 }
 
 CodeBlock::CodeBlock(VM* vm, Structure* structure, ScriptExecutable* ownerExecutable, UnlinkedCodeBlock* unlinkedCodeBlock,
@@ -389,16 +387,14 @@ CodeBlock::CodeBlock(VM* vm, Structure* structure, ScriptExecutable* ownerExecut
     , m_optimizationDelayCounter(0)
     , m_reoptimizationRetryCounter(0)
     , m_creationTime(MonotonicTime::now())
-    , m_unconditionalFinalizer(makePoisonedUnique<UnconditionalFinalizer>(*this))
-    , m_weakReferenceHarvester(makePoisonedUnique<WeakReferenceHarvester>(*this))
 {
-    m_visitWeaklyHasBeenCalled = false;
-
     ASSERT(heap()->isDeferred());
     ASSERT(m_scopeRegister.isLocal());
 
     ASSERT(m_source);
     setNumParameters(unlinkedCodeBlock->numParameters());
+    
+    vm->heap.codeBlockSet().add(this);
 }
 
 // The main purpose of this function is to generate linked bytecode from unlinked bytecode. The process
@@ -413,6 +409,7 @@ bool CodeBlock::finishCreation(VM& vm, ScriptExecutable* ownerExecutable, Unlink
     JSScope* scope)
 {
     Base::finishCreation(vm);
+    finishCreationCommon(vm);
 
     auto throwScope = DECLARE_THROW_SCOPE(vm);
 
@@ -849,19 +846,26 @@ bool CodeBlock::finishCreation(VM& vm, ScriptExecutable* ownerExecutable, Unlink
     if (Options::dumpGeneratedBytecodes())
         dumpBytecode();
 
-    heap()->m_codeBlocks->add(this);
     heap()->reportExtraMemoryAllocated(m_instructions.size() * sizeof(Instruction));
 
     return true;
 }
 
+void CodeBlock::finishCreationCommon(VM& vm)
+{
+    m_ownerEdge.set(vm, this, ExecutableToCodeBlockEdge::create(vm, this));
+}
+
 CodeBlock::~CodeBlock()
 {
     VM& vm = *m_poisonedVM;
+
+    vm.heap.codeBlockSet().remove(this);
+    
     if (UNLIKELY(vm.m_perBytecodeProfiler))
         vm.m_perBytecodeProfiler->notifyDestruction(this);
 
-    if (unlinkedCodeBlock()->didOptimize() == MixedTriState)
+    if (!vm.heap.isShuttingDown() && unlinkedCodeBlock()->didOptimize() == MixedTriState)
         unlinkedCodeBlock()->setDidOptimize(FalseTriState);
 
 #if ENABLE(VERBOSE_VALUE_PROFILE)
@@ -975,58 +979,6 @@ CodeBlock* CodeBlock::specialOSREntryBlockOrNull()
 #endif // ENABLE(FTL_JIT)
 }
 
-void CodeBlock::visitWeakly(SlotVisitor& visitor)
-{
-    ConcurrentJSLocker locker(m_lock);
-    if (m_visitWeaklyHasBeenCalled)
-        return;
-    
-    m_visitWeaklyHasBeenCalled = true;
-
-    if (Heap::isMarked(this))
-        return;
-
-    if (shouldVisitStrongly(locker)) {
-        visitor.appendUnbarriered(this);
-        return;
-    }
-    
-    // There are two things that may use unconditional finalizers: inline cache clearing
-    // and jettisoning. The probability of us wanting to do at least one of those things
-    // is probably quite close to 1. So we add one no matter what and when it runs, it
-    // figures out whether it has any work to do.
-    visitor.addUnconditionalFinalizer(m_unconditionalFinalizer.get());
-
-    if (!JITCode::isOptimizingJIT(jitType()))
-        return;
-
-    // If we jettison ourselves we'll install our alternative, so make sure that it
-    // survives GC even if we don't.
-    visitor.append(m_alternative);
-    
-    // There are two things that we use weak reference harvesters for: DFG fixpoint for
-    // jettisoning, and trying to find structures that would be live based on some
-    // inline cache. So it makes sense to register them regardless.
-    visitor.addWeakReferenceHarvester(m_weakReferenceHarvester.get());
-
-#if ENABLE(DFG_JIT)
-    // We get here if we're live in the sense that our owner executable is live,
-    // but we're not yet live for sure in another sense: we may yet decide that this
-    // code block should be jettisoned based on its outgoing weak references being
-    // stale. Set a flag to indicate that we're still assuming that we're dead, and
-    // perform one round of determining if we're live. The GC may determine, based on
-    // either us marking additional objects, or by other objects being marked for
-    // other reasons, that this iteration should run again; it will notify us of this
-    // decision by calling harvestWeakReferences().
-
-    m_allTransitionsHaveBeenMarked = false;
-    propagateTransitions(locker, visitor);
-
-    m_jitCode->dfgCommon()->livenessHasBeenProved = false;
-    determineLiveness(locker, visitor);
-#endif // ENABLE(DFG_JIT)
-}
-
 size_t CodeBlock::estimatedSize(JSCell* cell)
 {
     CodeBlock* thisObject = jsCast<CodeBlock*>(cell);
@@ -1041,18 +993,13 @@ void CodeBlock::visitChildren(JSCell* cell, SlotVisitor& visitor)
     CodeBlock* thisObject = jsCast<CodeBlock*>(cell);
     ASSERT_GC_OBJECT_INHERITS(thisObject, info());
     JSCell::visitChildren(thisObject, visitor);
+    visitor.append(thisObject->m_ownerEdge);
     thisObject->visitChildren(visitor);
 }
 
 void CodeBlock::visitChildren(SlotVisitor& visitor)
 {
     ConcurrentJSLocker locker(m_lock);
-    // There are two things that may use unconditional finalizers: inline cache clearing
-    // and jettisoning. The probability of us wanting to do at least one of those things
-    // is probably quite close to 1. So we add one no matter what and when it runs, it
-    // figures out whether it has any work to do.
-    visitor.addUnconditionalFinalizer(m_unconditionalFinalizer.get());
-
     if (CodeBlock* otherBlock = specialOSREntryBlockOrNull())
         visitor.appendUnbarriered(otherBlock);
 
@@ -1071,9 +1018,8 @@ void CodeBlock::visitChildren(SlotVisitor& visitor)
 
     stronglyVisitStrongReferences(locker, visitor);
     stronglyVisitWeakReferences(locker, visitor);
-
-    m_allTransitionsHaveBeenMarked = false;
-    propagateTransitions(locker, visitor);
+    
+    VM::SpaceAndFinalizerSet::finalizerSetFor(*subspace()).add(this);
 }
 
 bool CodeBlock::shouldVisitStrongly(const ConcurrentJSLocker& locker)
@@ -1164,12 +1110,8 @@ void CodeBlock::propagateTransitions(const ConcurrentJSLocker&, SlotVisitor& vis
 {
     UNUSED_PARAM(visitor);
 
-    if (m_allTransitionsHaveBeenMarked)
-        return;
-
     VM& vm = *m_poisonedVM;
-    bool allAreMarkedSoFar = true;
-        
+
     if (jitType() == JITCode::InterpreterThunk) {
         const Vector<unsigned>& propertyAccessInstructions = m_unlinkedCode->propertyAccessInstructions();
         for (size_t i = 0; i < propertyAccessInstructions.size(); ++i) {
@@ -1186,8 +1128,6 @@ void CodeBlock::propagateTransitions(const ConcurrentJSLocker&, SlotVisitor& vis
                     vm.heap.structureIDTable().get(newStructureID);
                 if (Heap::isMarked(oldStructure))
                     visitor.appendUnbarriered(newStructure);
-                else
-                    allAreMarkedSoFar = false;
                 break;
             }
             default:
@@ -1199,7 +1139,7 @@ void CodeBlock::propagateTransitions(const ConcurrentJSLocker&, SlotVisitor& vis
 #if ENABLE(JIT)
     if (JITCode::isJIT(jitType())) {
         for (auto iter = m_stubInfos.begin(); !!iter; ++iter)
-            allAreMarkedSoFar &= (*iter)->propagateTransitions(visitor);
+            (*iter)->propagateTransitions(visitor);
     }
 #endif // ENABLE(JIT)
     
@@ -1207,7 +1147,7 @@ void CodeBlock::propagateTransitions(const ConcurrentJSLocker&, SlotVisitor& vis
     if (JITCode::isOptimizingJIT(jitType())) {
         DFG::CommonData* dfgCommon = m_jitCode->dfgCommon();
         for (auto& weakReference : dfgCommon->weakStructureReferences)
-            allAreMarkedSoFar &= weakReference->markIfCheap(visitor);
+            weakReference->markIfCheap(visitor);
 
         for (auto& transition : dfgCommon->transitions) {
             if (shouldMarkTransition(transition)) {
@@ -1231,14 +1171,10 @@ void CodeBlock::propagateTransitions(const ConcurrentJSLocker&, SlotVisitor& vis
                 // live).
 
                 visitor.append(transition.m_to);
-            } else
-                allAreMarkedSoFar = false;
+            }
         }
     }
 #endif // ENABLE(DFG_JIT)
-    
-    if (allAreMarkedSoFar)
-        m_allTransitionsHaveBeenMarked = true;
 }
 
 void CodeBlock::determineLiveness(const ConcurrentJSLocker&, SlotVisitor& visitor)
@@ -1246,11 +1182,16 @@ void CodeBlock::determineLiveness(const ConcurrentJSLocker&, SlotVisitor& visito
     UNUSED_PARAM(visitor);
     
 #if ENABLE(DFG_JIT)
-    // Check if we have any remaining work to do.
-    DFG::CommonData* dfgCommon = m_jitCode->dfgCommon();
-    if (dfgCommon->livenessHasBeenProved)
+    if (Heap::isMarked(this))
         return;
     
+    // In rare cases, this can be called on a baseline CodeBlock. One example: we might decide
+    // that the CodeBlock should be jettisoned due to old age, in which case the isMarked check
+    // above doesn't protect us.
+    if (!JITCode::isOptimizingJIT(jitType()))
+        return;
+    
+    DFG::CommonData* dfgCommon = m_jitCode->dfgCommon();
     // Now check all of our weak references. If all of them are live, then we
     // have proved liveness and so we scan our strong references. If at end of
     // GC we still have not proved liveness, then this code block is toast.
@@ -1279,17 +1220,10 @@ void CodeBlock::determineLiveness(const ConcurrentJSLocker&, SlotVisitor& visito
     
     // All weak references are live. Record this information so we don't
     // come back here again, and scan the strong references.
-    dfgCommon->livenessHasBeenProved = true;
     visitor.appendUnbarriered(this);
 #endif // ENABLE(DFG_JIT)
 }
 
-void CodeBlock::WeakReferenceHarvester::visitWeakReferences(SlotVisitor& visitor)
-{
-    codeBlock.propagateTransitions(NoLockingNecessary, visitor);
-    codeBlock.determineLiveness(NoLockingNecessary, visitor);
-}
-
 void CodeBlock::clearLLIntGetByIdCache(Instruction* instruction)
 {
     instruction[0].u.opcode = LLInt::getOpcode(op_get_by_id);
@@ -1421,25 +1355,19 @@ void CodeBlock::finalizeBaselineJITInlineCaches()
 #endif
 }
 
-void CodeBlock::UnconditionalFinalizer::finalizeUnconditionally()
+void CodeBlock::finalizeUnconditionally(VM&)
 {
-    codeBlock.updateAllPredictions();
+    updateAllPredictions();
     
-    if (!Heap::isMarked(&codeBlock)) {
-        if (codeBlock.shouldJettisonDueToWeakReference())
-            codeBlock.jettison(Profiler::JettisonDueToWeakReference);
-        else
-            codeBlock.jettison(Profiler::JettisonDueToOldAge);
-        return;
-    }
-
-    if (JITCode::couldBeInterpreted(codeBlock.jitType()))
-        codeBlock.finalizeLLIntInlineCaches();
+    if (JITCode::couldBeInterpreted(jitType()))
+        finalizeLLIntInlineCaches();
 
 #if ENABLE(JIT)
-    if (!!codeBlock.jitCode())
-        codeBlock.finalizeBaselineJITInlineCaches();
+    if (!!jitCode())
+        finalizeBaselineJITInlineCaches();
 #endif
+
+    VM::SpaceAndFinalizerSet::finalizerSetFor(*subspace()).remove(this);
 }
 
 void CodeBlock::getStubInfoMap(const ConcurrentJSLocker&, StubInfoMap& result)
@@ -1593,7 +1521,7 @@ void CodeBlock::stronglyVisitStrongReferences(const ConcurrentJSLocker& locker,
     UNUSED_PARAM(locker);
     
     visitor.append(m_globalObject);
-    visitor.append(m_ownerExecutable);
+    visitor.append(m_ownerExecutable); // This is extra important since it causes the ExecutableToCodeBlockEdge to be marked.
     visitor.append(m_unlinkedCode);
     if (m_rareData)
         m_rareData->m_directEvalCodeCache.visitAggregate(visitor);
@@ -3132,7 +3060,6 @@ void CodeBlock::jitSoon()
 bool CodeBlock::hasInstalledVMTrapBreakpoints() const
 {
 #if ENABLE(SIGNAL_BASED_VM_TRAPS)
-    
     // This function may be called from a signal handler. We need to be
     // careful to not call anything that is not signal handler safe, e.g.
     // we should not perturb the refCount of m_jitCode.
@@ -3152,7 +3079,8 @@ bool CodeBlock::installVMTrapBreakpoints()
     // we should not perturb the refCount of m_jitCode.
     if (!JITCode::isOptimizingJIT(jitType()))
         return false;
-    m_jitCode->dfgCommon()->installVMTrapBreakpoints(this);
+    auto& commonData = *m_jitCode->dfgCommon();
+    commonData.installVMTrapBreakpoints(this);
     return true;
 #else
     UNREACHABLE_FOR_PLATFORM();
@@ -3192,8 +3120,6 @@ void CodeBlock::dumpMathICStats()
             numSubs++;
             totalSubSize += subIC->codeSize();
         }
-
-        return false;
     };
     heap()->forEachCodeBlock(countICs);
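The `finalizerSetFor(...).add(this)` / `.remove(this)` pair in the hunks above replaces the per-CodeBlock `UnconditionalFinalizer` object with set membership: visiting a block registers it for finalization, and finalization walks only the registered blocks. A minimal sketch of that scheme, using stand-in types (`Cell`, `FinalizerSet`) rather than JSC's actual `IsoCellSet` API:

```cpp
#include <cassert>
#include <unordered_set>

// Hypothetical model of the set-based finalizer scheme: instead of each
// cell owning a finalizer object, visiting a cell registers it in a
// per-subspace set, and finalization touches only registered cells.
struct Cell {
    bool marked { false };
    bool finalized { false };
};

struct FinalizerSet {
    std::unordered_set<Cell*> cells;

    // Called from visitChildren: every visited cell gets a pending finalizer.
    void addDuringVisit(Cell* cell) { cells.insert(cell); }

    // Called at the end of marking: run finalizers only for registered cells.
    void finalizeAll()
    {
        for (Cell* cell : cells)
            cell->finalized = true;
        cells.clear();
    }
};

inline bool demoFinalizerSet()
{
    Cell visited, untouched;
    FinalizerSet set;
    set.addDuringVisit(&visited); // only this cell was visited this cycle
    set.finalizeAll();
    return visited.finalized && !untouched.finalized && set.cells.empty();
}
```

The design choice this models: the probability of a visited CodeBlock needing finalization work is close to 1, so registration is unconditional, but unvisited blocks cost nothing.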
 
index c1079ff..fb7e68b 100644 (file)
@@ -87,6 +87,7 @@ struct OSRExitState;
 class BytecodeLivenessAnalysis;
 class CodeBlockSet;
 class ExecState;
+class ExecutableToCodeBlockEdge;
 class JSModuleEnvironment;
 class LLIntOffsetsExtractor;
 class PCToCodeOriginMap;
@@ -107,27 +108,15 @@ class CodeBlock : public JSCell {
     friend class JIT;
     friend class LLIntOffsetsExtractor;
 
-    struct UnconditionalFinalizer : public JSC::UnconditionalFinalizer {
-        UnconditionalFinalizer(CodeBlock& codeBlock)
-            : codeBlock(codeBlock)
-        { }
-        void finalizeUnconditionally() override;
-        CodeBlock& codeBlock;
-    };
-
-    struct WeakReferenceHarvester : public JSC::WeakReferenceHarvester {
-        WeakReferenceHarvester(CodeBlock& codeBlock)
-            : codeBlock(codeBlock)
-        { }
-        void visitWeakReferences(SlotVisitor&) override;
-        CodeBlock& codeBlock;
-    };
-
 public:
 
     enum CopyParsedBlockTag { CopyParsedBlock };
 
     static const unsigned StructureFlags = Base::StructureFlags | StructureIsImmortal;
+    static const bool needsDestruction = true;
+
+    template<typename>
+    static void subspaceFor(VM&) { }
 
     DECLARE_INFO;
 
@@ -137,6 +126,8 @@ protected:
 
     void finishCreation(VM&, CopyParsedBlockTag, CodeBlock& other);
     bool finishCreation(VM&, ScriptExecutable* ownerExecutable, UnlinkedCodeBlock*, JSScope*);
+    
+    void finishCreationCommon(VM&);
 
     WriteBarrier<JSGlobalObject> m_globalObject;
 
@@ -200,8 +191,7 @@ public:
     static size_t estimatedSize(JSCell*);
     static void visitChildren(JSCell*, SlotVisitor&);
     void visitChildren(SlotVisitor&);
-    void visitWeakly(SlotVisitor&);
-    void clearVisitWeaklyHasBeenCalled();
+    void finalizeUnconditionally(VM&);
 
     void dumpSource();
     void dumpSource(PrintStream&);
@@ -369,6 +359,8 @@ public:
     
     ExecutableBase* ownerExecutable() const { return m_ownerExecutable.get(); }
     ScriptExecutable* ownerScriptExecutable() const { return jsCast<ScriptExecutable*>(m_ownerExecutable.get()); }
+    
+    ExecutableToCodeBlockEdge* ownerEdge() const { return m_ownerEdge.get(); }
 
     VM* vm() const { return m_poisonedVM.unpoisoned(); }
 
@@ -829,8 +821,6 @@ public:
     // concurrent compilation threads finish what they're doing.
     mutable ConcurrentJSLock m_lock;
 
-    bool m_visitWeaklyHasBeenCalled;
-
     bool m_shouldAlwaysBeInlined; // Not a bitfield because the JIT wants to store to it.
 
 #if ENABLE(JIT)
@@ -918,6 +908,7 @@ protected:
 
 private:
     friend class CodeBlockSet;
+    friend class ExecutableToCodeBlockEdge;
 
     BytecodeLivenessAnalysis& livenessAnalysisSlow();
     
@@ -982,6 +973,7 @@ private:
         };
     };
     WriteBarrier<ExecutableBase> m_ownerExecutable;
+    WriteBarrier<ExecutableToCodeBlockEdge> m_ownerEdge;
     ConstExprPoisoned<CodeBlockPoison, VM*> m_poisonedVM;
 
     PoisonedRefCountedArray<CodeBlockPoison, Instruction> m_instructions;
@@ -1046,9 +1038,6 @@ private:
     MonotonicTime m_creationTime;
 
     std::unique_ptr<RareData> m_rareData;
-
-    PoisonedUniquePtr<CodeBlockPoison, UnconditionalFinalizer> m_unconditionalFinalizer;
-    PoisonedUniquePtr<CodeBlockPoison, WeakReferenceHarvester> m_weakReferenceHarvester;
 };
 
 inline Register& ExecState::r(int index)
@@ -1075,11 +1064,6 @@ inline Register& ExecState::uncheckedR(VirtualRegister reg)
     return uncheckedR(reg.offset());
 }
 
-inline void CodeBlock::clearVisitWeaklyHasBeenCalled()
-{
-    m_visitWeaklyHasBeenCalled = false;
-}
-
 template <typename ExecutableType>
 JSObject* ScriptExecutable::prepareForExecution(VM& vm, JSFunction* function, JSScope* scope, CodeSpecializationKind kind, CodeBlock*& resultCodeBlock)
 {
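The header changes above give the abstract `CodeBlock` a `subspaceFor` that returns `void`, while each concrete subtype returns a real `IsoSubspace*`. The effect is that allocating the abstract type through the usual `subspaceFor<T>()` machinery fails to compile, while concrete types route to their own isolated subspace. A sketch of that dispatch pattern with stand-in types (not JSC's real `VM` or allocator):

```cpp
#include <cassert>
#include <string>

// Stand-ins for JSC's IsoSubspace and VM, for illustration only.
struct IsoSubspace { std::string name; };

struct VM {
    IsoSubspace functionCodeBlockSpace { "FunctionCodeBlock" };
    IsoSubspace evalCodeBlockSpace { "EvalCodeBlock" };
};

struct CodeBlock {
    // Returning void (not IsoSubspace*) makes code that tries to allocate
    // the abstract type via subspaceFor<CodeBlock>() ill-formed.
    template<typename> static void subspaceFor(VM&) { }
};

struct FunctionCodeBlock : CodeBlock {
    template<typename>
    static IsoSubspace* subspaceFor(VM& vm) { return &vm.functionCodeBlockSpace; }
};

struct EvalCodeBlock : CodeBlock {
    template<typename>
    static IsoSubspace* subspaceFor(VM& vm) { return &vm.evalCodeBlockSpace; }
};

// The allocator asks the concrete type where its cells live.
template<typename T>
std::string spaceNameFor(VM& vm) { return T::template subspaceFor<T>(vm)->name; }

inline bool demoDispatch()
{
    VM vm;
    return spaceNameFor<FunctionCodeBlock>(vm) == "FunctionCodeBlock"
        && spaceNameFor<EvalCodeBlock>(vm) == "EvalCodeBlock";
}
```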
index 4adb487..6775c36 100644 (file)
 
 namespace JSC {
 
-class EvalCodeBlock : public GlobalCodeBlock {
+class EvalCodeBlock final : public GlobalCodeBlock {
 public:
     typedef GlobalCodeBlock Base;
     DECLARE_INFO;
 
+    template<typename>
+    static IsoSubspace* subspaceFor(VM& vm)
+    {
+        return &vm.evalCodeBlockSpace.space;
+    }
+
     static EvalCodeBlock* create(VM* vm, CopyParsedBlockTag, EvalCodeBlock& other)
     {
         EvalCodeBlock* instance = new (NotNull, allocateCell<EvalCodeBlock>(vm->heap))
@@ -58,7 +64,7 @@ public:
 
     static Structure* createStructure(VM& vm, JSGlobalObject* globalObject, JSValue prototype)
     {
-        return Structure::create(vm, globalObject, prototype, TypeInfo(CellType, StructureFlags), info());
+        return Structure::create(vm, globalObject, prototype, TypeInfo(CodeBlockType, StructureFlags), info());
     }
 
     const Identifier& variable(unsigned index) { return unlinkedEvalCodeBlock()->variable(index); }
diff --git a/Source/JavaScriptCore/bytecode/ExecutableToCodeBlockEdge.cpp b/Source/JavaScriptCore/bytecode/ExecutableToCodeBlockEdge.cpp
new file mode 100644 (file)
index 0000000..0cb614e
--- /dev/null
@@ -0,0 +1,177 @@
+/*
+ * Copyright (C) 2018 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "ExecutableToCodeBlockEdge.h"
+
+#include "IsoCellSetInlines.h"
+
+namespace JSC {
+
+const ClassInfo ExecutableToCodeBlockEdge::s_info = { "ExecutableToCodeBlockEdge", nullptr, nullptr, nullptr, CREATE_METHOD_TABLE(ExecutableToCodeBlockEdge) };
+
+Structure* ExecutableToCodeBlockEdge::createStructure(VM& vm, JSGlobalObject* globalObject, JSValue prototype)
+{
+    return Structure::create(vm, globalObject, prototype, TypeInfo(CellType, StructureFlags), info());
+}
+
+ExecutableToCodeBlockEdge* ExecutableToCodeBlockEdge::create(VM& vm, CodeBlock* codeBlock)
+{
+    ExecutableToCodeBlockEdge* result = new (NotNull, allocateCell<ExecutableToCodeBlockEdge>(vm.heap)) ExecutableToCodeBlockEdge(vm, codeBlock);
+    result->finishCreation(vm);
+    return result;
+}
+
+void ExecutableToCodeBlockEdge::visitChildren(JSCell* cell, SlotVisitor& visitor)
+{
+    VM& vm = visitor.vm();
+    ExecutableToCodeBlockEdge* edge = jsCast<ExecutableToCodeBlockEdge*>(cell);
+    CodeBlock* codeBlock = edge->m_codeBlock.get();
+    
+    if (!edge->m_isActive) {
+        visitor.appendUnbarriered(codeBlock);
+        return;
+    }
+    
+    ConcurrentJSLocker locker(codeBlock->m_lock);
+    
+    if (codeBlock->shouldVisitStrongly(locker))
+        visitor.appendUnbarriered(codeBlock);
+    
+    if (!Heap::isMarked(codeBlock))
+        vm.executableToCodeBlockEdgesWithFinalizers.add(edge);
+    
+    if (JITCode::isOptimizingJIT(codeBlock->jitType())) {
+        // If we jettison ourselves we'll install our alternative, so make sure that it
+        // survives GC even if we don't.
+        visitor.append(codeBlock->m_alternative);
+    }
+    
+    // NOTE: There are two sides to this constraint, with different requirements for correctness.
+    // Because everything is ultimately protected with weak references and jettisoning, it's
+    // always "OK" to claim that something is dead prematurely and it's "OK" to keep things alive.
+    // But both choices could lead to bad perf - either recomp cycles or leaks.
+    //
+    // Determining CodeBlock liveness: This part is the most consequential. We want to keep the
+    // output constraint active so long as we think that we may yet prove that the CodeBlock is
+    // live but we haven't done it yet.
+    //
+    // Marking Structures if profitable: It's important that we do a pass of this. Logically, this
+    // seems like it is a constraint of CodeBlock. But we have always first run this as a result
+    // of the edge being marked even before we determine the liveness of the CodeBlock. This
+    // allows a CodeBlock to mark itself by first proving that all of the Structures it weakly
+    // depends on could be strongly marked. (This part is also called propagateTransitions.)
+    //
+    // As a weird caveat, we only fixpoint the constraints so long as the CodeBlock is not live.
+    // This means that we may overlook structure marking opportunities created by other marking
+    // that happens after the CodeBlock is marked. This was an accidental policy decision from a
+    // long time ago, but it is probably OK, since it's only worthwhile to keep fixpointing the
+    // structure marking if we still have unmarked structures after the first round. We almost
+    // never will because we will mark-if-profitable based on the owning global object being
+    // already marked. We mark it just in case that hadn't happened yet. And if the CodeBlock is
+    // not yet marked because it weakly depends on a structure that we did not yet mark, then we
+    // will keep fixpointing until the end.
+    visitor.appendUnbarriered(codeBlock->globalObject());
+    vm.executableToCodeBlockEdgesWithConstraints.add(edge);
+    edge->runConstraint(locker, vm, visitor);
+}
+
+void ExecutableToCodeBlockEdge::visitOutputConstraints(JSCell* cell, SlotVisitor& visitor)
+{
+    VM& vm = visitor.vm();
+    ExecutableToCodeBlockEdge* edge = jsCast<ExecutableToCodeBlockEdge*>(cell);
+    
+    edge->runConstraint(NoLockingNecessary, vm, visitor);
+}
+
+void ExecutableToCodeBlockEdge::finalizeUnconditionally(VM& vm)
+{
+    CodeBlock* codeBlock = m_codeBlock.get();
+    
+    if (!Heap::isMarked(codeBlock)) {
+        if (codeBlock->shouldJettisonDueToWeakReference())
+            codeBlock->jettison(Profiler::JettisonDueToWeakReference);
+        else
+            codeBlock->jettison(Profiler::JettisonDueToOldAge);
+        m_codeBlock.clear();
+    }
+    
+    vm.executableToCodeBlockEdgesWithFinalizers.remove(this);
+    vm.executableToCodeBlockEdgesWithConstraints.remove(this);
+}
+
+void ExecutableToCodeBlockEdge::activate()
+{
+    m_isActive = true;
+}
+
+void ExecutableToCodeBlockEdge::deactivate()
+{
+    m_isActive = false;
+}
+
+CodeBlock* ExecutableToCodeBlockEdge::deactivateAndUnwrap(ExecutableToCodeBlockEdge* edge)
+{
+    if (!edge)
+        return nullptr;
+    edge->deactivate();
+    return edge->codeBlock();
+}
+
+ExecutableToCodeBlockEdge* ExecutableToCodeBlockEdge::wrap(CodeBlock* codeBlock)
+{
+    if (!codeBlock)
+        return nullptr;
+    return codeBlock->ownerEdge();
+}
+    
+ExecutableToCodeBlockEdge* ExecutableToCodeBlockEdge::wrapAndActivate(CodeBlock* codeBlock)
+{
+    if (!codeBlock)
+        return nullptr;
+    ExecutableToCodeBlockEdge* result = codeBlock->ownerEdge();
+    result->activate();
+    return result;
+}
+
+ExecutableToCodeBlockEdge::ExecutableToCodeBlockEdge(VM& vm, CodeBlock* codeBlock)
+    : Base(vm, vm.executableToCodeBlockEdgeStructure.get())
+    , m_codeBlock(vm, this, codeBlock)
+{
+}
+
+void ExecutableToCodeBlockEdge::runConstraint(const ConcurrentJSLocker& locker, VM& vm, SlotVisitor& visitor)
+{
+    CodeBlock* codeBlock = m_codeBlock.get();
+    
+    codeBlock->propagateTransitions(locker, visitor);
+    codeBlock->determineLiveness(locker, visitor);
+    
+    if (Heap::isMarked(codeBlock))
+        vm.executableToCodeBlockEdgesWithConstraints.remove(this);
+}
+
+} // namespace JSC
+
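The edge's `finalizeUnconditionally()` above encodes the jettison decision: if the GC failed to prove the CodeBlock live, the edge jettisons it (for a stale weak reference if one exists, otherwise for old age) and drops its reference. A minimal model of that logic, with simplified stand-in types rather than the real `CodeBlock`/`Profiler` machinery:

```cpp
#include <cassert>

// Simplified jettison reasons; the real code uses Profiler::JettisonReason.
enum class Jettison { None, WeakReference, OldAge };

struct ModelCodeBlock {
    bool marked { false };
    bool staleWeakReference { false };
    Jettison jettisoned { Jettison::None };
};

struct ModelEdge {
    ModelCodeBlock* codeBlock { nullptr };

    void finalizeUnconditionally()
    {
        if (!codeBlock || codeBlock->marked)
            return; // liveness was proved; nothing to do
        codeBlock->jettisoned = codeBlock->staleWeakReference
            ? Jettison::WeakReference : Jettison::OldAge;
        codeBlock = nullptr; // the edge no longer keeps the block alive
    }
};

inline bool demoEdgeFinalize()
{
    ModelCodeBlock live, dead;
    live.marked = true;
    dead.staleWeakReference = true;
    ModelEdge liveEdge { &live }, deadEdge { &dead };
    liveEdge.finalizeUnconditionally();
    deadEdge.finalizeUnconditionally();
    return liveEdge.codeBlock == &live
        && deadEdge.codeBlock == nullptr
        && dead.jettisoned == Jettison::WeakReference
        && live.jettisoned == Jettison::None;
}
```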
diff --git a/Source/JavaScriptCore/bytecode/ExecutableToCodeBlockEdge.h b/Source/JavaScriptCore/bytecode/ExecutableToCodeBlockEdge.h
new file mode 100644 (file)
index 0000000..a2b9b50
--- /dev/null
@@ -0,0 +1,89 @@
+/*
+ * Copyright (C) 2018 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "ConcurrentJSLock.h"
+#include "IsoSubspace.h"
+#include "JSCell.h"
+#include "VM.h"
+
+namespace JSC {
+
+class CodeBlock;
+class LLIntOffsetsExtractor;
+
+class ExecutableToCodeBlockEdge : public JSCell {
+public:
+    typedef JSCell Base;
+    static const unsigned StructureFlags = Base::StructureFlags | StructureIsImmortal;
+
+    template<typename CellType>
+    static IsoSubspace* subspaceFor(VM& vm)
+    {
+        return &vm.executableToCodeBlockEdgeSpace;
+    }
+
+    static Structure* createStructure(VM&, JSGlobalObject*, JSValue prototype);
+
+    static ExecutableToCodeBlockEdge* create(VM&, CodeBlock*);
+    
+    DECLARE_INFO;
+
+    CodeBlock* codeBlock() const { return m_codeBlock.get(); }
+    
+    static void visitChildren(JSCell*, SlotVisitor&);
+    static void visitOutputConstraints(JSCell*, SlotVisitor&);
+    void finalizeUnconditionally(VM&);
+    
+    void activate();
+    void deactivate();
+    
+    static CodeBlock* unwrap(ExecutableToCodeBlockEdge* edge)
+    {
+        if (!edge)
+            return nullptr;
+        return edge->codeBlock();
+    }
+    
+    static CodeBlock* deactivateAndUnwrap(ExecutableToCodeBlockEdge* edge);
+    
+    static ExecutableToCodeBlockEdge* wrap(CodeBlock* codeBlock);
+    
+    static ExecutableToCodeBlockEdge* wrapAndActivate(CodeBlock* codeBlock);
+    
+private:
+    friend class LLIntOffsetsExtractor;
+
+    ExecutableToCodeBlockEdge(VM&, CodeBlock*);
+    
+    void runConstraint(const ConcurrentJSLocker&, VM&, SlotVisitor&);
+    
+    WriteBarrier<CodeBlock> m_codeBlock;
+    bool m_isActive { false };
+};
+
+} // namespace JSC
+
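The `wrap()`/`unwrap()`/`activate()` protocol declared above can be sketched with plain pointers: an Executable stores the edge rather than the CodeBlock, and the active flag controls whether marking the edge also runs the liveness constraint. Hypothetical stand-in types, null-tolerant like the real static helpers:

```cpp
#include <cassert>

struct CB;

// Stand-in for ExecutableToCodeBlockEdge.
struct Edge {
    CB* block;
    bool isActive { false };
};

// Stand-in for CodeBlock; each block owns its edge, as with m_ownerEdge.
struct CB {
    Edge ownerEdge { this };
};

inline CB* unwrap(Edge* edge) { return edge ? edge->block : nullptr; }

inline Edge* wrapAndActivate(CB* block)
{
    if (!block)
        return nullptr;
    Edge* edge = &block->ownerEdge;
    edge->isActive = true; // marking this edge now implies the constraint
    return edge;
}

inline bool demoWrap()
{
    CB block;
    Edge* edge = wrapAndActivate(&block);
    return unwrap(edge) == &block
        && edge->isActive
        && unwrap(nullptr) == nullptr
        && wrapAndActivate(nullptr) == nullptr;
}
```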
index 1a6c0ba..2b07cf6 100644 (file)
@@ -39,6 +39,12 @@ public:
     typedef CodeBlock Base;
     DECLARE_INFO;
 
+    template<typename>
+    static IsoSubspace* subspaceFor(VM& vm)
+    {
+        return &vm.functionCodeBlockSpace.space;
+    }
+
     static FunctionCodeBlock* create(VM* vm, CopyParsedBlockTag, FunctionCodeBlock& other)
     {
         FunctionCodeBlock* instance = new (NotNull, allocateCell<FunctionCodeBlock>(vm->heap))
@@ -59,7 +65,7 @@ public:
 
     static Structure* createStructure(VM& vm, JSGlobalObject* globalObject, JSValue prototype)
     {
-        return Structure::create(vm, globalObject, prototype, TypeInfo(CellType, StructureFlags), info());
+        return Structure::create(vm, globalObject, prototype, TypeInfo(CodeBlockType, StructureFlags), info());
     }
 
 private:
index 1994b52..ae99f77 100644 (file)
 
 namespace JSC {
 
-class ModuleProgramCodeBlock : public GlobalCodeBlock {
+class ModuleProgramCodeBlock final : public GlobalCodeBlock {
 public:
     typedef GlobalCodeBlock Base;
     DECLARE_INFO;
 
+    template<typename>
+    static IsoSubspace* subspaceFor(VM& vm)
+    {
+        return &vm.moduleProgramCodeBlockSpace.space;
+    }
+
     static ModuleProgramCodeBlock* create(VM* vm, CopyParsedBlockTag, ModuleProgramCodeBlock& other)
     {
         ModuleProgramCodeBlock* instance = new (NotNull, allocateCell<ModuleProgramCodeBlock>(vm->heap))
@@ -59,7 +65,7 @@ public:
 
     static Structure* createStructure(VM& vm, JSGlobalObject* globalObject, JSValue prototype)
     {
-        return Structure::create(vm, globalObject, prototype, TypeInfo(CellType, StructureFlags), info());
+        return Structure::create(vm, globalObject, prototype, TypeInfo(CodeBlockType, StructureFlags), info());
     }
 
 private:
index 2aac5d9..8ea9b4c 100644 (file)
 
 namespace JSC {
 
-class ProgramCodeBlock : public GlobalCodeBlock {
+class ProgramCodeBlock final : public GlobalCodeBlock {
 public:
     typedef GlobalCodeBlock Base;
     DECLARE_INFO;
 
+    template<typename>
+    static IsoSubspace* subspaceFor(VM& vm)
+    {
+        return &vm.programCodeBlockSpace.space;
+    }
+
     static ProgramCodeBlock* create(VM* vm, CopyParsedBlockTag, ProgramCodeBlock& other)
     {
         ProgramCodeBlock* instance = new (NotNull, allocateCell<ProgramCodeBlock>(vm->heap))
@@ -59,7 +65,7 @@ public:
 
     static Structure* createStructure(VM& vm, JSGlobalObject* globalObject, JSValue prototype)
     {
-        return Structure::create(vm, globalObject, prototype, TypeInfo(CellType, StructureFlags), info());
+        return Structure::create(vm, globalObject, prototype, TypeInfo(CodeBlockType, StructureFlags), info());
     }
 
 private:
index 7716e89..c325080 100644 (file)
@@ -207,7 +207,7 @@ public:
     {
     }
 
-    bool operator()(CodeBlock* codeBlock) const
+    void operator()(CodeBlock* codeBlock) const
     {
         if (m_debugger == codeBlock->globalObject()->debugger()) {
             if (m_mode == SteppingModeEnabled)
@@ -215,7 +215,6 @@ public:
             else
                 codeBlock->setSteppingMode(CodeBlock::SteppingModeDisabled);
         }
-        return false;
     }
 
 private:
@@ -315,11 +314,10 @@ public:
     {
     }
 
-    bool operator()(CodeBlock* codeBlock) const
+    void operator()(CodeBlock* codeBlock) const
     {
         if (m_debugger == codeBlock->globalObject()->debugger())
             m_debugger->toggleBreakpoint(codeBlock, m_breakpoint, m_enabledOrNot);
-        return false;
     }
 
 private:
@@ -528,11 +526,10 @@ public:
     {
     }
 
-    bool operator()(CodeBlock* codeBlock) const
+    void operator()(CodeBlock* codeBlock) const
     {
         if (codeBlock->hasDebuggerRequests() && m_debugger == codeBlock->globalObject()->debugger())
             codeBlock->clearDebuggerRequests();
-        return false;
     }
 
 private:
@@ -558,11 +555,10 @@ public:
     {
     }
 
-    bool operator()(CodeBlock* codeBlock) const
+    void operator()(CodeBlock* codeBlock) const
     {
         if (codeBlock->hasDebuggerRequests() && m_globalObject == codeBlock->globalObject())
             codeBlock->clearDebuggerRequests();
-        return false;
     }
 
 private:
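The Debugger functors above drop their `bool` return because CodeBlock iteration no longer supports early termination: with one unified set, `forEachCodeBlock` simply visits every block. A sketch of that iteration contract (stand-in types, not the JSC interface):

```cpp
#include <cassert>
#include <functional>
#include <unordered_set>

struct Block { int debuggerRequests { 0 }; };

struct BlockSet {
    std::unordered_set<Block*> blocks;

    // Callback returns void: there is no early-exit path anymore.
    void forEach(const std::function<void(Block*)>& func)
    {
        for (Block* block : blocks)
            func(block);
    }
};

inline bool demoForEach()
{
    Block a { 2 }, b { 3 };
    BlockSet set;
    set.blocks = { &a, &b };
    set.forEach([](Block* block) { block->debuggerRequests = 0; });
    return a.debuggerRequests == 0 && b.debuggerRequests == 0;
}
```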
index ae79e79..49fcd84 100644 (file)
@@ -41,83 +41,13 @@ CodeBlockSet::~CodeBlockSet()
 {
 }
 
-void CodeBlockSet::add(CodeBlock* codeBlock)
-{
-    LockHolder locker(&m_lock);
-    bool isNewEntry = m_newCodeBlocks.add(codeBlock).isNewEntry;
-    ASSERT_UNUSED(isNewEntry, isNewEntry);
-}
-
-void CodeBlockSet::promoteYoungCodeBlocks(const AbstractLocker&)
-{
-    ASSERT(m_lock.isLocked());
-    m_oldCodeBlocks.add(m_newCodeBlocks.begin(), m_newCodeBlocks.end());
-    m_newCodeBlocks.clear();
-}
-
-void CodeBlockSet::clearMarksForFullCollection()
-{
-    LockHolder locker(&m_lock);
-    for (CodeBlock* codeBlock : m_oldCodeBlocks)
-        codeBlock->clearVisitWeaklyHasBeenCalled();
-}
-
-void CodeBlockSet::lastChanceToFinalize(VM& vm)
-{
-    LockHolder locker(&m_lock);
-    for (CodeBlock* codeBlock : m_newCodeBlocks)
-        codeBlock->structure(vm)->classInfo()->methodTable.destroy(codeBlock);
-
-    for (CodeBlock* codeBlock : m_oldCodeBlocks)
-        codeBlock->structure(vm)->classInfo()->methodTable.destroy(codeBlock);
-}
-
-void CodeBlockSet::deleteUnmarkedAndUnreferenced(VM& vm, CollectionScope scope)
-{
-    LockHolder locker(&m_lock);
-    
-    // Destroying a CodeBlock takes about 1us on average in Speedometer. Full collections in Speedometer
-    // usually have ~2000 CodeBlocks to process. The time it takes to process the whole list varies a
-    // lot. In one extreme case I saw 18ms (on my fast MBP).
-    //
-    // FIXME: use Subspace instead of HashSet and adopt Subspace-based constraint solving. This may
-    // remove the need to eagerly destruct CodeBlocks.
-    // https://bugs.webkit.org/show_bug.cgi?id=180089
-    //
-    // FIXME: make CodeBlock::~CodeBlock a lot faster. It seems insane for that to take 1us or more.
-    // https://bugs.webkit.org/show_bug.cgi?id=180109
-    
-    auto consider = [&] (HashSet<CodeBlock*>& set) {
-        set.removeIf(
-            [&] (CodeBlock* codeBlock) -> bool {
-                if (Heap::isMarked(codeBlock))
-                    return false;
-                codeBlock->structure(vm)->classInfo()->methodTable.destroy(codeBlock);
-                return true;
-            });
-    };
-
-    switch (scope) {
-    case CollectionScope::Eden:
-        consider(m_newCodeBlocks);
-        break;
-    case CollectionScope::Full:
-        consider(m_oldCodeBlocks);
-        consider(m_newCodeBlocks);
-        break;
-    }
-
-    // Any remaining young CodeBlocks are live and need to be promoted to the set of old CodeBlocks.
-    promoteYoungCodeBlocks(locker);
-}
-
 bool CodeBlockSet::contains(const AbstractLocker&, void* candidateCodeBlock)
 {
     RELEASE_ASSERT(m_lock.isLocked());
     CodeBlock* codeBlock = static_cast<CodeBlock*>(candidateCodeBlock);
     if (!HashSet<CodeBlock*>::isValidValue(codeBlock))
         return false;
-    return m_oldCodeBlocks.contains(codeBlock) || m_newCodeBlocks.contains(codeBlock) || m_currentlyExecuting.contains(codeBlock);
+    return m_codeBlocks.contains(codeBlock);
 }
 
 void CodeBlockSet::clearCurrentlyExecuting()
@@ -128,12 +58,8 @@ void CodeBlockSet::clearCurrentlyExecuting()
 void CodeBlockSet::dump(PrintStream& out) const
 {
     CommaPrinter comma;
-    out.print("{old = [");
-    for (CodeBlock* codeBlock : m_oldCodeBlocks)
-        out.print(comma, pointerDump(codeBlock));
-    out.print("], new = [");
-    comma = CommaPrinter();
-    for (CodeBlock* codeBlock : m_newCodeBlocks)
+    out.print("{codeBlocks = [");
+    for (CodeBlock* codeBlock : m_codeBlocks)
         out.print(comma, pointerDump(codeBlock));
     out.print("], currentlyExecuting = [");
     comma = CommaPrinter();
@@ -142,5 +68,19 @@ void CodeBlockSet::dump(PrintStream& out) const
     out.print("]}");
 }
 
+void CodeBlockSet::add(CodeBlock* codeBlock)
+{
+    auto locker = holdLock(m_lock);
+    auto result = m_codeBlocks.add(codeBlock);
+    RELEASE_ASSERT(result);
+}
+
+void CodeBlockSet::remove(CodeBlock* codeBlock)
+{
+    auto locker = holdLock(m_lock);
+    bool result = m_codeBlocks.remove(codeBlock);
+    RELEASE_ASSERT(result);
+}
+
 } // namespace JSC
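The CodeBlockSet changes above collapse the old new/old generation split into a single lock-guarded set whose only remaining job is answering pointer queries for the sampler and debugger. A minimal self-contained sketch of that registry pattern, using standard-library stand-ins (`std::mutex`, `std::unordered_set`) rather than WTF's `Lock`/`HashSet`, with `assert` standing in for `RELEASE_ASSERT` — names here are illustrative, not JSC's:

```cpp
#include <cassert>
#include <mutex>
#include <unordered_set>

// Sketch of the simplified registry: one set, one lock, strict add/remove
// invariants. Liveness is no longer this class's concern; the IsoSubspace
// and eager sweeping handle that in the real patch.
class Registry {
public:
    void add(void* block)
    {
        std::lock_guard<std::mutex> locker(m_lock);
        bool isNewEntry = m_blocks.insert(block).second;
        assert(isNewEntry); // mirrors RELEASE_ASSERT: double-add is a bug
        (void)isNewEntry;
    }

    void remove(void* block)
    {
        std::lock_guard<std::mutex> locker(m_lock);
        bool removed = m_blocks.erase(block) == 1;
        assert(removed); // mirrors RELEASE_ASSERT: removing an unknown block is a bug
        (void)removed;
    }

    bool contains(void* candidate)
    {
        std::lock_guard<std::mutex> locker(m_lock);
        return m_blocks.count(candidate) != 0;
    }

private:
    std::unordered_set<void*> m_blocks;
    std::mutex m_lock;
};
```

The symmetric `add`/`remove` pair replaces the old promote-on-GC lifecycle: a CodeBlock registers itself on construction and unregisters on destruction, so the set is always exact.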
 
index 4c22828..8faef46 100644 (file)
@@ -49,24 +49,7 @@ public:
     CodeBlockSet();
     ~CodeBlockSet();
 
-    void lastChanceToFinalize(VM&);
-    
-    // Add a CodeBlock. This is only called by CodeBlock constructors.
-    void add(CodeBlock*);
-    
-    // Clear all mark bits for all CodeBlocks.
-    void clearMarksForFullCollection();
-
-    // Mark a pointer that may be a CodeBlock that belongs to the set of DFG
-    // blocks. This is defined in CodeBlock.h.
-private:
     void mark(const AbstractLocker&, CodeBlock* candidateCodeBlock);
-public:
-    void mark(const AbstractLocker&, void* candidateCodeBlock);
-    
-    // Delete all code blocks that are only referenced by this set (i.e. owned
-    // by this set), and that have not been marked.
-    void deleteUnmarkedAndUnreferenced(VM&, CollectionScope);
     
     void clearCurrentlyExecuting();
 
@@ -78,16 +61,18 @@ public:
     // visited.
     template<typename Functor> void iterate(const Functor&);
     template<typename Functor> void iterate(const AbstractLocker&, const Functor&);
+
+    template<typename Functor> void iterateViaSubspaces(VM&, const Functor&);
     
     template<typename Functor> void iterateCurrentlyExecuting(const Functor&);
     
     void dump(PrintStream&) const;
+    
+    void add(CodeBlock*);
+    void remove(CodeBlock*);
 
 private:
-    void promoteYoungCodeBlocks(const AbstractLocker&);
-
-    HashSet<CodeBlock*> m_oldCodeBlocks;
-    HashSet<CodeBlock*> m_newCodeBlocks;
+    HashSet<CodeBlock*> m_codeBlocks;
     HashSet<CodeBlock*> m_currentlyExecuting;
     Lock m_lock;
 };
index c3d8c0b..d61bab4 100644 (file)
 
 namespace JSC {
 
-inline void CodeBlockSet::mark(const AbstractLocker& locker, void* candidateCodeBlock)
-{
-    ASSERT(m_lock.isLocked());
-    // We have to check for 0 and -1 because those are used by the HashMap as markers.
-    uintptr_t value = reinterpret_cast<uintptr_t>(candidateCodeBlock);
-    
-    // This checks for both of those nasty cases in one go.
-    // 0 + 1 = 1
-    // -1 + 1 = 0
-    if (value + 1 <= 1)
-        return;
-
-    CodeBlock* codeBlock = static_cast<CodeBlock*>(candidateCodeBlock); 
-    if (!m_oldCodeBlocks.contains(codeBlock) && !m_newCodeBlocks.contains(codeBlock))
-        return;
-
-    mark(locker, codeBlock);
-}
-
 inline void CodeBlockSet::mark(const AbstractLocker&, CodeBlock* codeBlock)
 {
     if (!codeBlock)
@@ -70,17 +51,20 @@ void CodeBlockSet::iterate(const Functor& functor)
 template<typename Functor>
 void CodeBlockSet::iterate(const AbstractLocker&, const Functor& functor)
 {
-    for (auto& codeBlock : m_oldCodeBlocks) {
-        bool done = functor(codeBlock);
-        if (done)
-            return;
-    }
-    
-    for (auto& codeBlock : m_newCodeBlocks) {
-        bool done = functor(codeBlock);
-        if (done)
-            return;
-    }
+    for (CodeBlock* codeBlock : m_codeBlocks)
+        functor(codeBlock);
+}
+
+template<typename Functor>
+void CodeBlockSet::iterateViaSubspaces(VM& vm, const Functor& functor)
+{
+    vm.forEachCodeBlockSpace(
+        [&] (IsoSubspace& space) {
+            space.forEachLiveCell(
+                [&] (HeapCell* cell, HeapCell::Kind) {
+                    functor(jsCast<CodeBlock*>(static_cast<JSCell*>(cell)));
+                });
+        });
 }
 
 template<typename Functor>
index 741792a..1e7e05f 100644 (file)
@@ -72,7 +72,10 @@ inline void ConservativeRoots::genericAddPointer(void* p, HeapVersion markingVer
 
     HeapUtil::findGCObjectPointersForMarking(
         m_heap, markingVersion, newlyAllocatedVersion, filter, p,
-        [&] (void* p) {
+        [&] (void* p, HeapCell::Kind cellKind) {
+            if (cellKind == HeapCell::JSCell)
+                markHook.markKnownJSCell(static_cast<JSCell*>(p));
+            
             if (m_size == m_capacity)
                 grow();
             
@@ -103,6 +106,7 @@ void ConservativeRoots::genericAddSpan(void* begin, void* end, MarkHook& markHoo
 class DummyMarkHook {
 public:
     void mark(void*) { }
+    void markKnownJSCell(JSCell*) { }
 };
 
 void ConservativeRoots::add(void* begin, void* end)
@@ -111,11 +115,6 @@ void ConservativeRoots::add(void* begin, void* end)
     genericAddSpan(begin, end, dummy);
 }
 
-void ConservativeRoots::add(void* begin, void* end, JITStubRoutineSet& jitStubRoutines)
-{
-    genericAddSpan(begin, end, jitStubRoutines);
-}
-
 class CompositeMarkHook {
 public:
     CompositeMarkHook(JITStubRoutineSet& stubRoutines, CodeBlockSet& codeBlocks, const AbstractLocker& locker)
@@ -128,7 +127,12 @@ public:
     void mark(void* address)
     {
         m_stubRoutines.mark(address);
-        m_codeBlocks.mark(m_codeBlocksLocker, address);
+    }
+    
+    void markKnownJSCell(JSCell* cell)
+    {
+        if (cell->type() == CodeBlockType)
+            m_codeBlocks.mark(m_codeBlocksLocker, jsCast<CodeBlock*>(cell));
     }
 
 private:
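The ConservativeRoots hunks above split the single `mark(void*)` hook into two: every candidate pointer still reaches `mark()` (for stub routines), while pointers the heap already knows are JS cells additionally reach `markKnownJSCell()`, which forwards only `CodeBlockType` cells to the CodeBlockSet. That is what lets `CodeBlockSet::mark(const AbstractLocker&, void*)` and its hash-set membership probe be deleted. A self-contained sketch of the dispatch shape, with illustrative stand-ins for `HeapCell::Kind` and `CodeBlockType`:

```cpp
#include <cassert>
#include <vector>

// Illustrative cell kinds / types mirroring the shape of the patch
// (HeapCell::Kind, JSType's CodeBlockType); not JSC's real definitions.
enum class Kind { JSCell, Auxiliary };
enum class CellType { CodeBlock, Other };

struct Cell { CellType type; };

struct MarkHook {
    std::vector<void*> rawMarks;       // everything still flows to the stub-routine filter
    std::vector<Cell*> codeBlockMarks; // only typed CodeBlock cells reach the set

    void mark(void* address) { rawMarks.push_back(address); }

    void markKnownJSCell(Cell* cell)
    {
        if (cell->type == CellType::CodeBlock)
            codeBlockMarks.push_back(cell);
    }
};

// Models the new callback shape: the kind arrives with the pointer, so the
// hook can dispatch on type instead of re-testing set membership by address.
void visitCandidate(MarkHook& hook, void* p, Kind kind)
{
    if (kind == Kind::JSCell)
        hook.markKnownJSCell(static_cast<Cell*>(p));
    hook.mark(p);
}
```

The design win is that the conservative scan already computed the cell kind while proving the pointer points at a live cell, so the CodeBlock check becomes a cheap type test rather than two hash-set lookups under a lock.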
index 5448392..0f9a42a 100644 (file)
@@ -39,7 +39,6 @@ public:
     ~ConservativeRoots();
 
     void add(void* begin, void* end);
-    void add(void* begin, void* end, JITStubRoutineSet&);
     void add(void* begin, void* end, JITStubRoutineSet&, CodeBlockSet&);
     
     size_t size();
index f0baf31..d121122 100644 (file)
@@ -1,5 +1,5 @@
 /*
- *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003-2018 Apple Inc. All rights reserved.
  *  Copyright (C) 2007 Eric Seidel <eric@webkit.org>
  *
  *  This library is free software; you can redistribute it and/or
@@ -367,6 +367,8 @@ void Heap::lastChanceToFinalize()
         dataLog("[GC<", RawPointer(this), ">: shutdown ");
     }
     
+    m_isShuttingDown = true;
+    
     RELEASE_ASSERT(!m_vm->entryScope);
     RELEASE_ASSERT(m_mutatorState == MutatorState::Running);
     
@@ -436,7 +438,6 @@ void Heap::lastChanceToFinalize()
         dataLog("5 ");
     
     m_arrayBuffers.lastChanceToFinalize();
-    m_codeBlocks->lastChanceToFinalize(*m_vm);
     m_objectSpace.stopAllocating();
     m_objectSpace.lastChanceToFinalize();
     releaseDelayedReleasedObjects();
@@ -558,7 +559,7 @@ void Heap::addReference(JSCell* cell, ArrayBuffer* buffer)
 }
 
 template<typename CellType, typename CellSet>
-void Heap::finalizeUnconditionalFinalizers(CellSet& cellSet)
+void Heap::finalizeMarkedUnconditionalFinalizers(CellSet& cellSet)
 {
     cellSet.forEachMarkedCell(
         [&] (HeapCell* cell, HeapCell::Kind) {
@@ -566,21 +567,17 @@ void Heap::finalizeUnconditionalFinalizers(CellSet& cellSet)
         });
 }
 
-template<typename CellType>
-void Heap::finalizeUnconditionalFinalizersInIsoSubspace()
-{
-    JSC::subspaceFor<CellType>(*vm())->forEachMarkedCell(
-        [&] (HeapCell* cell, HeapCell::Kind) {
-            static_cast<CellType*>(cell)->finalizeUnconditionally(*vm());
-        });
-}
-
 void Heap::finalizeUnconditionalFinalizers()
 {
-    finalizeUnconditionalFinalizers<InferredType>(vm()->inferredTypesWithFinalizers);
-    finalizeUnconditionalFinalizers<InferredValue>(vm()->inferredValuesWithFinalizers);
-    finalizeUnconditionalFinalizersInIsoSubspace<JSWeakSet>();
-    finalizeUnconditionalFinalizersInIsoSubspace<JSWeakMap>();
+    finalizeMarkedUnconditionalFinalizers<InferredType>(vm()->inferredTypesWithFinalizers);
+    finalizeMarkedUnconditionalFinalizers<InferredValue>(vm()->inferredValuesWithFinalizers);
+    vm()->forEachCodeBlockSpace(
+        [&] (auto& space) {
+            this->finalizeMarkedUnconditionalFinalizers<CodeBlock>(space.finalizerSet);
+        });
+    finalizeMarkedUnconditionalFinalizers<ExecutableToCodeBlockEdge>(vm()->executableToCodeBlockEdgesWithFinalizers);
+    finalizeMarkedUnconditionalFinalizers<JSWeakSet>(vm()->weakSetSpace);
+    finalizeMarkedUnconditionalFinalizers<JSWeakMap>(vm()->weakMapSpace);
     
     while (m_unconditionalFinalizers.hasNext()) {
         UnconditionalFinalizer* finalizer = m_unconditionalFinalizers.removeNext();
@@ -677,8 +674,6 @@ void Heap::gatherScratchBufferRoots(ConservativeRoots& roots)
 void Heap::beginMarking()
 {
     TimingScope timingScope(*this, "Heap::beginMarking");
-    if (m_collectionScope == CollectionScope::Full)
-        m_codeBlocks->clearMarksForFullCollection();
     m_jitStubRoutines->clearMarks();
     m_objectSpace.beginMarking();
     setMutatorShouldBeFenced(true);
@@ -939,7 +934,6 @@ void Heap::clearUnmarkedExecutables()
 void Heap::deleteUnmarkedCompiledCode()
 {
     clearUnmarkedExecutables();
-    m_codeBlocks->deleteUnmarkedAndUnreferenced(*m_vm, *m_lastCollectionScope);
     m_jitStubRoutines->deleteUnmarkedJettisonedStubRoutines();
 }
 
@@ -2088,12 +2082,8 @@ void Heap::waitForCollection(Ticket ticket)
 void Heap::sweepInFinalize()
 {
     m_objectSpace.sweepLargeAllocations();
-    
-    auto sweepBlock = [&] (MarkedBlock::Handle* handle) {
-        handle->sweep(nullptr);
-    };
-    
-    vm()->eagerlySweptDestructibleObjectSpace.forEachMarkedBlock(sweepBlock);
+    vm()->forEachCodeBlockSpace([] (auto& space) { space.space.sweep(); });
+    vm()->eagerlySweptDestructibleObjectSpace.sweep();
 }
 
 void Heap::suspendCompilerThreads()
@@ -2468,7 +2458,7 @@ size_t Heap::bytesVisited()
     return result;
 }
 
-void Heap::forEachCodeBlockImpl(const ScopedLambda<bool(CodeBlock*)>& func)
+void Heap::forEachCodeBlockImpl(const ScopedLambda<void(CodeBlock*)>& func)
 {
     // We don't know the full set of CodeBlocks until compilation has terminated.
     completeAllJITPlans();
@@ -2476,7 +2466,7 @@ void Heap::forEachCodeBlockImpl(const ScopedLambda<bool(CodeBlock*)>& func)
     return m_codeBlocks->iterate(func);
 }
 
-void Heap::forEachCodeBlockIgnoringJITPlansImpl(const AbstractLocker& locker, const ScopedLambda<bool(CodeBlock*)>& func)
+void Heap::forEachCodeBlockIgnoringJITPlansImpl(const AbstractLocker& locker, const ScopedLambda<void(CodeBlock*)>& func)
 {
     return m_codeBlocks->iterate(locker, func);
 }
@@ -2713,6 +2703,26 @@ void Heap::addCoreConstraints()
         },
         ConstraintVolatility::GreyedByMarking);
     
+    m_constraintSet->add(
+        "O", "Output",
+        [this] (SlotVisitor& slotVisitor) {
+            VM& vm = slotVisitor.vm();
+            
+            auto callOutputConstraint = [] (SlotVisitor& slotVisitor, HeapCell* heapCell, HeapCell::Kind) {
+                VM& vm = slotVisitor.vm();
+                JSCell* cell = static_cast<JSCell*>(heapCell);
+                cell->methodTable(vm)->visitOutputConstraints(cell, slotVisitor);
+            };
+            
+            auto add = [&] (auto& set) {
+                slotVisitor.addParallelConstraintTask(set.forEachMarkedCellInParallel(callOutputConstraint));
+            };
+            
+            add(vm.executableToCodeBlockEdgesWithConstraints);
+        },
+        ConstraintVolatility::GreyedByMarking,
+        ConstraintParallelism::Parallel);
+    
 #if ENABLE(DFG_JIT)
     m_constraintSet->add(
         "Dw", "DFG Worklists",
index 0e97dd4..8ececad 100644 (file)
@@ -167,6 +167,8 @@ public:
 
     void notifyIsSafeToCollect();
     bool isSafeToCollect() const { return m_isSafeToCollect; }
+    
+    bool isShuttingDown() const { return m_isShuttingDown; }
 
     JS_EXPORT_PRIVATE bool isHeapSnapshotting() const;
 
@@ -496,11 +498,8 @@ private:
     void harvestWeakReferences();
 
     template<typename CellType, typename CellSet>
-    void finalizeUnconditionalFinalizers(CellSet&);
+    void finalizeMarkedUnconditionalFinalizers(CellSet&);
 
-    template<typename CellType>
-    void finalizeUnconditionalFinalizersInIsoSubspace();
-    
     void finalizeUnconditionalFinalizers();
     
     void clearUnmarkedExecutables();
@@ -527,8 +526,8 @@ private:
     size_t visitCount();
     size_t bytesVisited();
     
-    void forEachCodeBlockImpl(const ScopedLambda<bool(CodeBlock*)>&);
-    void forEachCodeBlockIgnoringJITPlansImpl(const AbstractLocker& codeBlockSetLocker, const ScopedLambda<bool(CodeBlock*)>&);
+    void forEachCodeBlockImpl(const ScopedLambda<void(CodeBlock*)>&);
+    void forEachCodeBlockIgnoringJITPlansImpl(const AbstractLocker& codeBlockSetLocker, const ScopedLambda<void(CodeBlock*)>&);
     
     void setMutatorShouldBeFenced(bool value);
     
@@ -609,6 +608,7 @@ private:
     FinalizerOwner m_finalizerOwner;
     
     bool m_isSafeToCollect;
+    bool m_isShuttingDown { false };
 
     bool m_mutatorShouldBeFenced { Options::forceFencedBarrier() };
     unsigned m_barrierThreshold { Options::forceFencedBarrier() ? tautologicalThreshold : blackThreshold };
index eb8e4cd..682187e 100644 (file)
@@ -33,6 +33,7 @@ class CellContainer;
 class Heap;
 class LargeAllocation;
 class MarkedBlock;
+class Subspace;
 class VM;
 struct AllocatorAttributes;
 
@@ -68,6 +69,7 @@ public:
     AllocatorAttributes allocatorAttributes() const;
     DestructionMode destructionMode() const;
     Kind cellKind() const;
+    Subspace* subspace() const;
     
     // Call use() after the last point where you need `this` pointer to be kept alive. You usually don't
     // need to use this, but it might be necessary if you're otherwise referring to an object's innards
index 4e69734..563ee01 100644 (file)
@@ -90,5 +90,12 @@ ALWAYS_INLINE HeapCell::Kind HeapCell::cellKind() const
     return allocatorAttributes().cellKind;
 }
 
+ALWAYS_INLINE Subspace* HeapCell::subspace() const
+{
+    if (isLargeAllocation())
+        return largeAllocation().subspace();
+    return markedBlock().subspace();
+}
+
 } // namespace JSC
 
index 2add0ed..d679bff 100644 (file)
@@ -144,12 +144,12 @@ inline void Heap::mutatorFence()
 
 template<typename Functor> inline void Heap::forEachCodeBlock(const Functor& func)
 {
-    forEachCodeBlockImpl(scopedLambdaRef<bool(CodeBlock*)>(func));
+    forEachCodeBlockImpl(scopedLambdaRef<void(CodeBlock*)>(func));
 }
 
 template<typename Functor> inline void Heap::forEachCodeBlockIgnoringJITPlans(const AbstractLocker& codeBlockSetLocker, const Functor& func)
 {
-    forEachCodeBlockIgnoringJITPlansImpl(codeBlockSetLocker, scopedLambdaRef<bool(CodeBlock*)>(func));
+    forEachCodeBlockIgnoringJITPlansImpl(codeBlockSetLocker, scopedLambdaRef<void(CodeBlock*)>(func));
 }
 
 template<typename Functor> inline void Heap::forEachProtectedCell(const Functor& functor)
index 57fcd04..32c455a 100644 (file)
@@ -66,14 +66,16 @@ public:
                     LargeAllocation::fromCell(pointer),
                     [] (LargeAllocation** ptr) -> LargeAllocation* { return *ptr; });
                 if (result) {
-                    if (result > heap.objectSpace().largeAllocationsForThisCollectionBegin()
-                        && result[-1]->contains(pointer))
-                        func(result[-1]->cell());
-                    if (result[0]->contains(pointer))
-                        func(result[0]->cell());
-                    if (result + 1 < heap.objectSpace().largeAllocationsForThisCollectionEnd()
-                        && result[1]->contains(pointer))
-                        func(result[1]->cell());
+                    auto attemptLarge = [&] (LargeAllocation* allocation) {
+                        if (allocation->contains(pointer))
+                            func(allocation->cell(), allocation->attributes().cellKind);
+                    };
+                    
+                    if (result > heap.objectSpace().largeAllocationsForThisCollectionBegin())
+                        attemptLarge(result[-1]);
+                    attemptLarge(result[0]);
+                    if (result + 1 < heap.objectSpace().largeAllocationsForThisCollectionEnd())
+                        attemptLarge(result[1]);
                 }
             }
         }
@@ -89,7 +91,7 @@ public:
                 && previousCandidate->handle().cellKind() == HeapCell::Auxiliary) {
                 previousPointer = static_cast<char*>(previousCandidate->handle().cellAlign(previousPointer));
                 if (previousCandidate->handle().isLiveCell(markingVersion, newlyAllocatedVersion, isMarking, previousPointer))
-                    func(previousPointer);
+                    func(previousPointer, previousCandidate->handle().cellKind());
             }
         }
     
@@ -100,10 +102,12 @@ public:
     
         if (!set.contains(candidate))
             return;
+
+        HeapCell::Kind cellKind = candidate->handle().cellKind();
         
         auto tryPointer = [&] (void* pointer) {
             if (candidate->handle().isLiveCell(markingVersion, newlyAllocatedVersion, isMarking, pointer))
-                func(pointer);
+                func(pointer, cellKind);
         };
     
         if (candidate->handle().cellKind() == HeapCell::JSCell) {
index ab1741f..f828f6e 100644 (file)
@@ -46,6 +46,41 @@ IsoCellSet::~IsoCellSet()
         BasicRawSentinelNode<IsoCellSet>::remove();
 }
 
+RefPtr<SharedTask<MarkedBlock::Handle*()>> IsoCellSet::parallelNotEmptyMarkedBlockSource()
+{
+    class Task : public SharedTask<MarkedBlock::Handle*()> {
+    public:
+        Task(IsoCellSet& set)
+            : m_set(set)
+            , m_allocator(set.m_subspace.m_allocator)
+        {
+        }
+        
+        MarkedBlock::Handle* run() override
+        {
+            if (m_done)
+                return nullptr;
+            auto locker = holdLock(m_lock);
+            auto bits = m_allocator.m_markingNotEmpty & m_set.m_blocksWithBits;
+            m_index = bits.findBit(m_index, true);
+            if (m_index >= m_allocator.m_blocks.size()) {
+                m_done = true;
+                return nullptr;
+            }
+            return m_allocator.m_blocks[m_index++];
+        }
+        
+    private:
+        IsoCellSet& m_set;
+        MarkedAllocator& m_allocator;
+        size_t m_index { 0 };
+        Lock m_lock;
+        bool m_done { false };
+    };
+    
+    return adoptRef(new Task(*this));
+}
+
 NEVER_INLINE Bitmap<MarkedBlock::atomsPerBlock>* IsoCellSet::addSlow(size_t blockIndex)
 {
     auto locker = holdLock(m_subspace.m_allocator.m_bitvectorLock);
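`IsoCellSet::parallelNotEmptyMarkedBlockSource()` above hands out not-empty blocks to competing GC threads: a racy `m_done` fast path avoids taking the lock once the scan is exhausted, and under the lock it walks the intersection of the allocator's marking-not-empty bits with the set's per-block bits. A self-contained model of that hand-out loop, using `std::bitset` in place of WTF's `Bitmap`/`FastBitVector` and a plain struct in place of `MarkedBlock::Handle` (all names illustrative):

```cpp
#include <bitset>
#include <cassert>
#include <cstddef>
#include <mutex>
#include <vector>

struct Block { int id; }; // stand-in for MarkedBlock::Handle

// Hands out blocks whose "marking not empty" bit intersects the set's
// "blocks with bits" bitmap, one at a time, to competing threads.
class BlockSource {
public:
    BlockSource(const std::bitset<64>& notEmpty, const std::bitset<64>& withBits,
                std::vector<Block*>& blocks)
        : m_bits(notEmpty & withBits), m_blocks(blocks) { }

    Block* run()
    {
        if (m_done) // racy fast path, same trick as the patch's m_done flag
            return nullptr;
        std::lock_guard<std::mutex> locker(m_lock);
        while (m_index < m_blocks.size() && !m_bits.test(m_index))
            m_index++; // models bits.findBit(m_index, true)
        if (m_index >= m_blocks.size()) {
            m_done = true;
            return nullptr;
        }
        return m_blocks[m_index++];
    }

private:
    std::bitset<64> m_bits;
    std::vector<Block*>& m_blocks;
    size_t m_index { 0 };
    std::mutex m_lock;
    bool m_done { false };
};
```

Reading `m_done` without the lock is safe here in the same sense as in the patch: a stale false only costs one extra lock acquisition, and the same done-flag fast path is retrofitted onto `MarkedAllocator::parallelNotEmptyBlockSource` later in this diff.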
index faf565f..38695f2 100644 (file)
@@ -48,10 +48,18 @@ public:
     
     bool contains(HeapCell* cell) const;
     
+    JS_EXPORT_PRIVATE RefPtr<SharedTask<MarkedBlock::Handle*()>> parallelNotEmptyMarkedBlockSource();
+    
     // This will have to do a combined search over whatever Subspace::forEachMarkedCell uses and
     // our m_blocksWithBits.
     template<typename Func>
     void forEachMarkedCell(const Func&);
+
+    template<typename Func>
+    RefPtr<SharedTask<void(SlotVisitor&)>> forEachMarkedCellInParallel(const Func&);
+    
+    template<typename Func>
+    void forEachLiveCell(const Func&);
     
 private:
     friend class IsoSubspace;
index f7250b4..d9434b7 100644 (file)
@@ -78,5 +78,61 @@ void IsoCellSet::forEachMarkedCell(const Func& func)
         });
 }
 
+template<typename Func>
+RefPtr<SharedTask<void(SlotVisitor&)>> IsoCellSet::forEachMarkedCellInParallel(const Func& func)
+{
+    class Task : public SharedTask<void(SlotVisitor&)> {
+    public:
+        Task(IsoCellSet& set, const Func& func)
+            : m_set(set)
+            , m_blockSource(set.parallelNotEmptyMarkedBlockSource())
+            , m_func(func)
+        {
+        }
+        
+        void run(SlotVisitor& visitor) override
+        {
+            while (MarkedBlock::Handle* handle = m_blockSource->run()) {
+                size_t blockIndex = handle->index();
+                auto* bits = m_set.m_bits[blockIndex].get();
+                handle->forEachMarkedCell(
+                    [&] (size_t atomNumber, HeapCell* cell, HeapCell::Kind kind) -> IterationStatus {
+                        if (bits->get(atomNumber))
+                            m_func(visitor, cell, kind);
+                        return IterationStatus::Continue;
+                    });
+            }
+        }
+        
+    private:
+        IsoCellSet& m_set;
+        RefPtr<SharedTask<MarkedBlock::Handle*()>> m_blockSource;
+        Func m_func;
+        Lock m_lock;
+    };
+    
+    return adoptRef(new Task(*this, func));
+}
+
+template<typename Func>
+void IsoCellSet::forEachLiveCell(const Func& func)
+{
+    MarkedAllocator& allocator = m_subspace.m_allocator;
+    m_blocksWithBits.forEachSetBit(
+        [&] (size_t blockIndex) {
+            MarkedBlock::Handle* block = allocator.m_blocks[blockIndex];
+
+            // FIXME: We could optimize this by checking our bits before querying isLive.
+            // OOPS! (need bug URL)
+            auto* bits = m_bits[blockIndex].get();
+            block->forEachLiveCell(
+                [&] (size_t atomNumber, HeapCell* cell, HeapCell::Kind kind) -> IterationStatus {
+                    if (bits->get(atomNumber))
+                        func(cell, kind);
+                    return IterationStatus::Continue;
+                });
+        });
+}
+
 } // namespace JSC
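`IsoCellSet::forEachLiveCell` above visits a cell only when the block reports it live *and* the set's per-block atom bitmap contains it. A self-contained sketch of that two-bitmap filter over one block's atoms, with `std::bitset` standing in for the per-block `Bitmap<MarkedBlock::atomsPerBlock>` (names and the block size of 16 are illustrative):

```cpp
#include <bitset>
#include <cassert>
#include <cstddef>
#include <functional>

constexpr size_t atomsPerBlock = 16; // illustrative; real blocks have far more atoms

// Visits each atom that is both live in the block and present in the set's
// bitmap. The real code asks the block for liveness first, then filters by
// the set's bits — the FIXME in the patch notes the reverse order could be
// cheaper.
void forEachLiveCellInSet(const std::bitset<atomsPerBlock>& liveness,
                          const std::bitset<atomsPerBlock>& setBits,
                          const std::function<void(size_t)>& func)
{
    for (size_t atom = 0; atom < atomsPerBlock; ++atom) {
        if (!liveness.test(atom))
            continue; // block-level liveness query comes first, as in the patch
        if (setBits.test(atom))
            func(atom); // cell is live *and* belongs to the set
    }
}
```

This is the shape that lets `CodeBlockSet::iterateViaSubspaces` and the new finalizer/constraint sets find exactly the CodeBlocks (or edges) of interest without maintaining separate side tables.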
 
index a26d9a5..313f3cd 100644 (file)
@@ -58,6 +58,8 @@ public:
         return bitwise_cast<uintptr_t>(cell) & halfAlignment;
     }
     
+    Subspace* subspace() const { return m_subspace; }
+    
     void lastChanceToFinalize();
     
     Heap* heap() const { return m_weakSet.heap(); }
index bab5a43..b3eb45b 100644 (file)
@@ -54,11 +54,10 @@ void MarkStackMergingConstraint::prepareToExecuteImpl(const AbstractLocker&, Slo
         dataLog("(", size, ")");
 }
 
-ConstraintParallelism MarkStackMergingConstraint::executeImpl(SlotVisitor& visitor)
+void MarkStackMergingConstraint::executeImpl(SlotVisitor& visitor)
 {
     m_heap.m_mutatorMarkStack->transferTo(visitor.mutatorMarkStack());
     m_heap.m_raceMarkStack->transferTo(visitor.mutatorMarkStack());
-    return ConstraintParallelism::Sequential;
 }
 
 } // namespace JSC
index ffcac43..29d36e5 100644 (file)
@@ -38,7 +38,7 @@ public:
     
 protected:
     void prepareToExecuteImpl(const AbstractLocker& constraintSolvingLocker, SlotVisitor&) override;
-    ConstraintParallelism executeImpl(SlotVisitor&) override;
+    void executeImpl(SlotVisitor&) override;
     
 private:
     Heap& m_heap;
index de846ac..0b9b4c7 100644 (file)
@@ -458,10 +458,14 @@ RefPtr<SharedTask<MarkedBlock::Handle*()>> MarkedAllocator::parallelNotEmptyBloc
         
         MarkedBlock::Handle* run() override
         {
+            if (m_done)
+                return nullptr;
             auto locker = holdLock(m_lock);
             m_index = m_allocator.m_markingNotEmpty.findBit(m_index, true);
-            if (m_index >= m_allocator.m_blocks.size())
+            if (m_index >= m_allocator.m_blocks.size()) {
+                m_done = true;
                 return nullptr;
+            }
             return m_allocator.m_blocks[m_index++];
         }
         
@@ -469,6 +473,7 @@ RefPtr<SharedTask<MarkedBlock::Handle*()>> MarkedAllocator::parallelNotEmptyBloc
         MarkedAllocator& m_allocator;
         size_t m_index { 0 };
         Lock m_lock;
+        bool m_done { false };
     };
     
     return adoptRef(new Task(*this));
index ab215e0..3eb9ec2 100644 (file)
@@ -329,6 +329,7 @@ void MarkedBlock::Handle::didAddToAllocator(MarkedAllocator* allocator, size_t i
     
     m_index = index;
     m_allocator = allocator;
+    m_block->m_subspace = allocator->subspace();
     
     size_t cellSize = allocator->cellSize();
     m_atomsPerCell = (cellSize + atomSize - 1) / atomSize;
@@ -356,6 +357,7 @@ void MarkedBlock::Handle::didRemoveFromAllocator()
     
     m_index = std::numeric_limits<size_t>::max();
     m_allocator = nullptr;
+    m_block->m_subspace = nullptr;
 }
 
 #if !ASSERT_DISABLED
index 20397a4..11b72c7 100644 (file)
@@ -300,6 +300,8 @@ public:
     const Bitmap<atomsPerBlock>& marks() const;
     
     CountingLock& lock() { return m_lock; }
+    
+    Subspace* subspace() const { return m_subspace; }
 
 private:
     static const size_t atomAlignmentMask = atomSize - 1;
@@ -319,6 +321,7 @@ private:
         
     Handle& m_handle;
     VM* m_vm;
+    Subspace* m_subspace;
 
     CountingLock m_lock;
     
index 3dfbf75..ca735bf 100644 (file)
@@ -483,6 +483,12 @@ inline IterationStatus MarkedBlock::Handle::forEachLiveCell(const Functor& funct
     // happen, we will just overlook objects. I think that because of how aboutToMarkSlow() does things,
     // a race ought to mean that it just returns false when it should have returned true - but this is
     // something that would have to be verified carefully.
+    //
+    // NOTE: Some users of forEachLiveCell require that their callback is called exactly once for
+    // each live cell. We could optimize this function for those users by using a slow loop if the
+    // block is in marks-mean-live mode. That would only affect blocks that had partial survivors
+    // during the last collection and no survivors (yet) during this collection.
+    //
     // https://bugs.webkit.org/show_bug.cgi?id=180315
     
     HeapCell::Kind kind = m_attributes.cellKind;
@@ -491,7 +497,7 @@ inline IterationStatus MarkedBlock::Handle::forEachLiveCell(const Functor& funct
         if (!isLive(cell))
             continue;
 
-        if (functor(cell, kind) == IterationStatus::Done)
+        if (functor(i, cell, kind) == IterationStatus::Done)
             return IterationStatus::Done;
     }
     return IterationStatus::Continue;
index 7fba86d..4c6b119 100644 (file)
@@ -40,7 +40,11 @@ template<typename Functor> inline void MarkedSpace::forEachLiveCell(const Functo
 {
     BlockIterator end = m_blocks.set().end();
     for (BlockIterator it = m_blocks.set().begin(); it != end; ++it) {
-        if ((*it)->handle().forEachLiveCell(functor) == IterationStatus::Done)
+        IterationStatus result = (*it)->handle().forEachLiveCell(
+            [&] (size_t, HeapCell* cell, HeapCell::Kind kind) -> IterationStatus {
+                return functor(cell, kind);
+            });
+        if (result == IterationStatus::Done)
             return;
     }
     for (LargeAllocation* allocation : m_largeAllocations) {
index 64084fe..3bab767 100644 (file)
@@ -51,19 +51,13 @@ void MarkingConstraint::resetStats()
     m_lastVisitCount = 0;
 }
 
-ConstraintParallelism MarkingConstraint::execute(SlotVisitor& visitor)
+void MarkingConstraint::execute(SlotVisitor& visitor)
 {
     VisitCounter visitCounter(visitor);
-    ConstraintParallelism result = executeImpl(visitor);
+    executeImpl(visitor);
     m_lastVisitCount += visitCounter.visitCount();
     if (verboseMarkingConstraint && visitCounter.visitCount())
         dataLog("(", abbreviatedName(), " visited ", visitCounter.visitCount(), " in execute)");
-    if (result == ConstraintParallelism::Parallel) {
-        // It's illegal to produce parallel work if you haven't advertised it upfront because the solver
-        // has optimizations for constraints that promise to never produce parallel work.
-        RELEASE_ASSERT(m_parallelism == ConstraintParallelism::Parallel);
-    }
-    return result;
 }
 
 double MarkingConstraint::quickWorkEstimate(SlotVisitor&)
@@ -87,10 +81,10 @@ void MarkingConstraint::prepareToExecute(const AbstractLocker& constraintSolving
         dataLog("(", abbreviatedName(), " visited ", visitCounter.visitCount(), " in prepareToExecute)");
 }
 
-void MarkingConstraint::doParallelWork(SlotVisitor& visitor)
+void MarkingConstraint::doParallelWork(SlotVisitor& visitor, SharedTask<void(SlotVisitor&)>& task)
 {
     VisitCounter visitCounter(visitor);
-    doParallelWorkImpl(visitor);
+    task.run(visitor);
     if (verboseMarkingConstraint && visitCounter.visitCount())
         dataLog("(", abbreviatedName(), " visited ", visitCounter.visitCount(), " in doParallelWork)");
     {
@@ -99,27 +93,9 @@ void MarkingConstraint::doParallelWork(SlotVisitor& visitor)
     }
 }
 
-void MarkingConstraint::finishParallelWork(SlotVisitor& visitor)
-{
-    VisitCounter visitCounter(visitor);
-    finishParallelWorkImpl(visitor);
-    m_lastVisitCount += visitCounter.visitCount();
-    if (verboseMarkingConstraint && visitCounter.visitCount())
-        dataLog("(", abbreviatedName(), " visited ", visitCounter.visitCount(), " in finishParallelWork)");
-}
-
 void MarkingConstraint::prepareToExecuteImpl(const AbstractLocker&, SlotVisitor&)
 {
 }
 
-void MarkingConstraint::doParallelWorkImpl(SlotVisitor&)
-{
-    UNREACHABLE_FOR_PLATFORM();
-}
-
-void MarkingConstraint::finishParallelWorkImpl(SlotVisitor&)
-{
-}
-
 } // namespace JSC
 
index f0fcc18..5f0effe 100644 (file)
@@ -32,6 +32,7 @@
 #include <wtf/FastMalloc.h>
 #include <wtf/Lock.h>
 #include <wtf/Noncopyable.h>
+#include <wtf/SharedTask.h>
 #include <wtf/text/CString.h>
 
 namespace JSC {
@@ -59,7 +60,7 @@ public:
     
     size_t lastVisitCount() const { return m_lastVisitCount; }
     
-    ConstraintParallelism execute(SlotVisitor&);
+    void execute(SlotVisitor&);
     
     JS_EXPORT_PRIVATE virtual double quickWorkEstimate(SlotVisitor& visitor);
     
@@ -67,8 +68,7 @@ public:
     
     void prepareToExecute(const AbstractLocker& constraintSolvingLocker, SlotVisitor&);
     
-    void doParallelWork(SlotVisitor&);
-    void finishParallelWork(SlotVisitor&);
+    void doParallelWork(SlotVisitor&, SharedTask<void(SlotVisitor&)>&);
     
     ConstraintVolatility volatility() const { return m_volatility; }
     
@@ -76,10 +76,8 @@ public:
     ConstraintParallelism parallelism() const { return m_parallelism; }
 
 protected:
-    virtual ConstraintParallelism executeImpl(SlotVisitor&) = 0;
+    virtual void executeImpl(SlotVisitor&) = 0;
     JS_EXPORT_PRIVATE virtual void prepareToExecuteImpl(const AbstractLocker& constraintSolvingLocker, SlotVisitor&);
-    virtual void doParallelWorkImpl(SlotVisitor&);
-    virtual void finishParallelWorkImpl(SlotVisitor&);
     
 private:
     friend class MarkingConstraintSet; // So it can set m_index.
index 27c907c..b3abd2c 100644 (file)
@@ -65,9 +65,9 @@ void MarkingConstraintSet::didStartMarking()
     m_iteration = 1;
 }
 
-void MarkingConstraintSet::add(CString abbreviatedName, CString name, ::Function<void(SlotVisitor&)> function, ConstraintVolatility volatility, ConstraintConcurrency concurrency)
+void MarkingConstraintSet::add(CString abbreviatedName, CString name, ::Function<void(SlotVisitor&)> function, ConstraintVolatility volatility, ConstraintConcurrency concurrency, ConstraintParallelism parallelism)
 {
-    add(std::make_unique<SimpleMarkingConstraint>(WTFMove(abbreviatedName), WTFMove(name), WTFMove(function), volatility, concurrency));
+    add(std::make_unique<SimpleMarkingConstraint>(WTFMove(abbreviatedName), WTFMove(name), WTFMove(function), volatility, concurrency, parallelism));
 }
 
 void MarkingConstraintSet::add(
index 079eee5..6a980ec 100644 (file)
@@ -46,7 +46,17 @@ public:
         CString name,
         ::Function<void(SlotVisitor&)>,
         ConstraintVolatility,
-        ConstraintConcurrency = ConstraintConcurrency::Concurrent);
+        ConstraintConcurrency = ConstraintConcurrency::Concurrent,
+        ConstraintParallelism = ConstraintParallelism::Sequential);
+    
+    void add(
+        CString abbreviatedName, CString name,
+        ::Function<void(SlotVisitor&)> func,
+        ConstraintVolatility volatility,
+        ConstraintParallelism parallelism)
+    {
+        add(abbreviatedName, name, WTFMove(func), volatility, ConstraintConcurrency::Concurrent, parallelism);
+    }
     
     void add(std::unique_ptr<MarkingConstraint>);
     
index 89c606a..29892df 100644 (file)
@@ -78,10 +78,6 @@ void MarkingConstraintSolver::execute(SchedulerPreference preference, ScopedLamb
     RELEASE_ASSERT(!m_pickNextIsStillActive);
     RELEASE_ASSERT(!m_numThreadsThatMayProduceWork);
         
-    for (unsigned indexToRun : m_didExecuteInParallel)
-        m_set.m_set[indexToRun]->finishParallelWork(m_mainVisitor);
-    m_didExecuteInParallel.clear();
-    
     if (!m_toExecuteSequentially.isEmpty()) {
         for (unsigned indexToRun : m_toExecuteSequentially)
             execute(*m_set.m_set[indexToRun]);
@@ -89,7 +85,6 @@ void MarkingConstraintSolver::execute(SchedulerPreference preference, ScopedLamb
     }
         
     RELEASE_ASSERT(m_toExecuteInParallel.isEmpty());
-    RELEASE_ASSERT(!m_toExecuteInParallelSet.bitCount());
 }
 
 void MarkingConstraintSolver::drain(BitVector& unexecuted)
@@ -156,15 +151,23 @@ void MarkingConstraintSolver::execute(MarkingConstraint& constraint)
         return;
     
     constraint.prepareToExecute(NoLockingNecessary, m_mainVisitor);
-    ConstraintParallelism parallelism = constraint.execute(m_mainVisitor);
-    didExecute(parallelism, constraint.index());
+    constraint.execute(m_mainVisitor);
+    m_executed.set(constraint.index());
+}
+
+void MarkingConstraintSolver::addParallelTask(RefPtr<SharedTask<void(SlotVisitor&)>> task, MarkingConstraint& constraint)
+{
+    auto locker = holdLock(m_lock);
+    m_toExecuteInParallel.append(TaskWithConstraint(WTFMove(task), &constraint));
 }
 
 void MarkingConstraintSolver::runExecutionThread(SlotVisitor& visitor, SchedulerPreference preference, ScopedLambda<std::optional<unsigned>()> pickNext)
 {
     for (;;) {
         bool doParallelWorkMode;
-        unsigned indexToRun;
+        MarkingConstraint* constraint = nullptr;
+        unsigned indexToRun = UINT_MAX;
+        TaskWithConstraint task;
         {
             auto locker = holdLock(m_lock);
                         
@@ -173,11 +176,12 @@ void MarkingConstraintSolver::runExecutionThread(SlotVisitor& visitor, Scheduler
                     if (m_toExecuteInParallel.isEmpty())
                         return false;
                     
-                    indexToRun = m_toExecuteInParallel.first();
+                    task = m_toExecuteInParallel.first();
+                    constraint = task.constraint;
                     doParallelWorkMode = true;
                     return true;
                 };
-                            
+                
                 auto tryNextConstraint = [&] () -> bool {
                     if (!m_pickNextIsStillActive)
                         return false;
@@ -192,16 +196,17 @@ void MarkingConstraintSolver::runExecutionThread(SlotVisitor& visitor, Scheduler
                         if (m_executed.get(*pickResult))
                             continue;
                                     
-                        MarkingConstraint& constraint = *m_set.m_set[*pickResult];
-                        if (constraint.concurrency() == ConstraintConcurrency::Sequential) {
+                        MarkingConstraint& candidateConstraint = *m_set.m_set[*pickResult];
+                        if (candidateConstraint.concurrency() == ConstraintConcurrency::Sequential) {
                             m_toExecuteSequentially.append(*pickResult);
                             continue;
                         }
-                        if (constraint.parallelism() == ConstraintParallelism::Parallel)
+                        if (candidateConstraint.parallelism() == ConstraintParallelism::Parallel)
                             m_numThreadsThatMayProduceWork++;
                         indexToRun = *pickResult;
+                        constraint = &candidateConstraint;
                         doParallelWorkMode = false;
-                        constraint.prepareToExecute(locker, visitor);
+                        constraint->prepareToExecute(locker, visitor);
                         return true;
                     }
                 };
@@ -226,34 +231,33 @@ void MarkingConstraintSolver::runExecutionThread(SlotVisitor& visitor, Scheduler
             }
         }
                     
-        ConstraintParallelism parallelism = ConstraintParallelism::Sequential;
-                    
-        MarkingConstraint& constraint = *m_set.m_set[indexToRun];
-                    
         if (doParallelWorkMode)
-            constraint.doParallelWork(visitor);
-        else
-            parallelism = constraint.execute(visitor);
-                    
+            constraint->doParallelWork(visitor, *task.task);
+        else {
+            if (constraint->parallelism() == ConstraintParallelism::Parallel) {
+                visitor.m_currentConstraint = constraint;
+                visitor.m_currentSolver = this;
+            }
+            
+            constraint->execute(visitor);
+            
+            visitor.m_currentConstraint = nullptr;
+            visitor.m_currentSolver = nullptr;
+        }
+        
         {
             auto locker = holdLock(m_lock);
-                        
+            
             if (doParallelWorkMode) {
-                if (m_toExecuteInParallelSet.get(indexToRun)) {
-                    m_didExecuteInParallel.append(indexToRun);
-                                
-                    m_toExecuteInParallel.takeFirst(
-                        [&] (unsigned value) { return value == indexToRun; });
-                    m_toExecuteInParallelSet.clear(indexToRun);
-                }
+                if (!m_toExecuteInParallel.isEmpty()
+                    && task == m_toExecuteInParallel.first())
+                    m_toExecuteInParallel.takeFirst();
+                else
+                    ASSERT(!m_toExecuteInParallel.contains(task));
             } else {
-                if (constraint.parallelism() == ConstraintParallelism::Parallel)
+                if (constraint->parallelism() == ConstraintParallelism::Parallel)
                     m_numThreadsThatMayProduceWork--;
                 m_executed.set(indexToRun);
-                if (parallelism == ConstraintParallelism::Parallel) {
-                    m_toExecuteInParallel.append(indexToRun);
-                    m_toExecuteInParallelSet.set(indexToRun);
-                }
             }
                         
             m_condition.notifyAll();
@@ -261,14 +265,5 @@ void MarkingConstraintSolver::runExecutionThread(SlotVisitor& visitor, Scheduler
     }
 }
 
-void MarkingConstraintSolver::didExecute(ConstraintParallelism parallelism, unsigned index)
-{
-    m_executed.set(index);
-    if (parallelism == ConstraintParallelism::Parallel) {
-        m_toExecuteInParallel.append(index);
-        m_toExecuteInParallelSet.set(index);
-    }
-}
-
 } // namespace JSC
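The solver hunks above replace the index-based `m_toExecuteInParallel`/`m_toExecuteInParallelSet` pair with a deque of `TaskWithConstraint`, and the post-run bookkeeping pops a task only if it is still at the head. A small sketch of that head-only retirement, under the assumption (as in the patch) that another worker may already have removed the task (illustrative types, not JSC code):

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <memory>

// Illustrative stand-ins for the solver's new types; the names mirror the
// patch but this is not the real JSC implementation.
struct Constraint { const char* name; };
using Task = std::shared_ptr<std::function<void()>>;

struct TaskWithConstraint {
    Task task;
    Constraint* constraint = nullptr;

    bool operator==(const TaskWithConstraint& other) const
    {
        return task == other.task && constraint == other.constraint;
    }
};

// Mirrors the post-run bookkeeping in runExecutionThread: a worker retires
// the task it just ran only if that task is still at the head of the
// shared deque; otherwise it must already have been taken by a peer.
void finishParallelTask(std::deque<TaskWithConstraint>& queue, const TaskWithConstraint& ran)
{
    if (!queue.empty() && ran == queue.front())
        queue.pop_front();
}
```

Retiring twice is a harmless no-op, which is what the patch's `ASSERT(!m_toExecuteInParallel.contains(task))` branch relies on.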
 
index 16c9a56..08b38db 100644 (file)
@@ -64,18 +64,36 @@ public:
     
     void execute(MarkingConstraint&);
     
+    // Parallel constraints can add parallel tasks.
+    void addParallelTask(RefPtr<SharedTask<void(SlotVisitor&)>>, MarkingConstraint&);
+    
 private:
     void runExecutionThread(SlotVisitor&, SchedulerPreference, ScopedLambda<std::optional<unsigned>()> pickNext);
     
-    void didExecute(ConstraintParallelism, unsigned index);
-
+    struct TaskWithConstraint {
+        TaskWithConstraint() { }
+        
+        TaskWithConstraint(RefPtr<SharedTask<void(SlotVisitor&)>> task, MarkingConstraint* constraint)
+            : task(WTFMove(task))
+            , constraint(constraint)
+        {
+        }
+        
+        bool operator==(const TaskWithConstraint& other) const
+        {
+            return task == other.task
+                && constraint == other.constraint;
+        }
+        
+        RefPtr<SharedTask<void(SlotVisitor&)>> task;
+        MarkingConstraint* constraint { nullptr };
+    };
+    
     Heap& m_heap;
     SlotVisitor& m_mainVisitor;
     MarkingConstraintSet& m_set;
     BitVector m_executed;
-    Deque<unsigned, 32> m_toExecuteInParallel;
-    BitVector m_toExecuteInParallelSet;
-    Vector<unsigned, 32> m_didExecuteInParallel;
+    Deque<TaskWithConstraint, 32> m_toExecuteInParallel;
     Vector<unsigned, 32> m_toExecuteSequentially;
     Lock m_lock;
     Condition m_condition;
index 0d80e3f..49bd4a0 100644 (file)
@@ -31,8 +31,9 @@ namespace JSC {
 SimpleMarkingConstraint::SimpleMarkingConstraint(
     CString abbreviatedName, CString name,
     ::Function<void(SlotVisitor&)> executeFunction,
-    ConstraintVolatility volatility, ConstraintConcurrency concurrency)
-    : MarkingConstraint(WTFMove(abbreviatedName), WTFMove(name), volatility, concurrency, ConstraintParallelism::Sequential)
+    ConstraintVolatility volatility, ConstraintConcurrency concurrency,
+    ConstraintParallelism parallelism)
+    : MarkingConstraint(WTFMove(abbreviatedName), WTFMove(name), volatility, concurrency, parallelism)
     , m_executeFunction(WTFMove(executeFunction))
 {
 }
@@ -41,10 +42,9 @@ SimpleMarkingConstraint::~SimpleMarkingConstraint()
 {
 }
 
-ConstraintParallelism SimpleMarkingConstraint::executeImpl(SlotVisitor& visitor)
+void SimpleMarkingConstraint::executeImpl(SlotVisitor& visitor)
 {
     m_executeFunction(visitor);
-    return ConstraintParallelism::Sequential;
 }
 
 } // namespace JSC
index 99bb569..f02188c 100644 (file)
 namespace JSC {
 
 // This allows for an informal way to define constraints. Just pass a lambda to the constructor. The only
-// downside is that this makes it hard for constraints to be stateful, which is necessary for them to be
-// parallel. In those cases, it's easier to just subclass MarkingConstraint.
+// downside is that this makes it hard for constraints to override any functions in MarkingConstraint
+// other than executeImpl. In those cases, just subclass MarkingConstraint.
 class SimpleMarkingConstraint : public MarkingConstraint {
 public:
     JS_EXPORT_PRIVATE SimpleMarkingConstraint(
         CString abbreviatedName, CString name,
         ::Function<void(SlotVisitor&)>,
         ConstraintVolatility,
-        ConstraintConcurrency = ConstraintConcurrency::Concurrent);
+        ConstraintConcurrency = ConstraintConcurrency::Concurrent,
+        ConstraintParallelism = ConstraintParallelism::Sequential);
+    
+    SimpleMarkingConstraint(
+        CString abbreviatedName, CString name,
+        ::Function<void(SlotVisitor&)> func,
+        ConstraintVolatility volatility,
+        ConstraintParallelism parallelism)
+        : SimpleMarkingConstraint(abbreviatedName, name, WTFMove(func), volatility, ConstraintConcurrency::Concurrent, parallelism)
+    {
+    }
     
     JS_EXPORT_PRIVATE ~SimpleMarkingConstraint();
     
 private:
-    ConstraintParallelism executeImpl(SlotVisitor&) override;
+    void executeImpl(SlotVisitor&) override;
 
     ::Function<void(SlotVisitor&)> m_executeFunction;
 };
index 40ff95c..13c48fc 100644 (file)
@@ -786,4 +786,13 @@ MarkStackArray& SlotVisitor::correspondingGlobalStack(MarkStackArray& stack)
     return *m_heap.m_sharedMutatorMarkStack;
 }
 
+void SlotVisitor::addParallelConstraintTask(RefPtr<SharedTask<void(SlotVisitor&)>> task)
+{
+    RELEASE_ASSERT(m_currentSolver);
+    RELEASE_ASSERT(m_currentConstraint);
+    RELEASE_ASSERT(task);
+    
+    m_currentSolver->addParallelTask(task, *m_currentConstraint);
+}
+
 } // namespace JSC
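The `SlotVisitor::addParallelConstraintTask` hunk above only works because the solver stamps the visitor with the current constraint and itself before executing a parallel constraint. A sketch of that routing, with toy stand-ins for `SlotVisitor` and `MarkingConstraintSolver` (assumed shapes modeled on the patch, not real JSC API):

```cpp
#include <cassert>
#include <deque>
#include <functional>
#include <memory>
#include <utility>

// Illustrative stand-ins (not JSC API) for how a parallel constraint posts
// extra work through the visitor it runs on.
struct Constraint { const char* name; };
using Task = std::shared_ptr<std::function<void()>>;

struct Solver {
    std::deque<std::pair<Task, Constraint*>> toExecuteInParallel;

    void addParallelTask(Task task, Constraint& constraint)
    {
        toExecuteInParallel.push_back({ std::move(task), &constraint });
    }
};

struct Visitor {
    // The solver sets these before calling a parallel constraint's
    // execute(), and clears them afterwards.
    Constraint* currentConstraint = nullptr;
    Solver* currentSolver = nullptr;

    void addParallelConstraintTask(Task task)
    {
        // Same preconditions the patch release-asserts.
        assert(currentSolver && currentConstraint && task);
        currentSolver->addParallelTask(std::move(task), *currentConstraint);
    }
};
```

This is why the real patch only sets `m_currentConstraint`/`m_currentSolver` on the non-parallel-work path when the constraint's parallelism is `Parallel`: only those constraints are allowed to enqueue tasks.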
index d6f5665..a551c1e 100644 (file)
@@ -40,6 +40,7 @@ class Heap;
 class HeapCell;
 class HeapSnapshotBuilder;
 class MarkedBlock;
+class MarkingConstraintSolver;
 class UnconditionalFinalizer;
 template<typename T> class Weak;
 class WeakReferenceHarvester;
@@ -170,9 +171,12 @@ public:
     void donateAll();
     
     const char* codeName() const { return m_codeName.data(); }
+    
+    JS_EXPORT_PRIVATE void addParallelConstraintTask(RefPtr<SharedTask<void(SlotVisitor&)>>);
 
 private:
     friend class ParallelModeEnabler;
+    friend class MarkingConstraintSolver;
     
     void appendJSCellOrAuxiliary(HeapCell*);
 
@@ -229,6 +233,9 @@ private:
     
     CString m_codeName;
     
+    MarkingConstraint* m_currentConstraint { nullptr };
+    MarkingConstraintSolver* m_currentSolver { nullptr };
+    
 public:
 #if !ASSERT_DISABLED
     bool m_isCheckingForDefaultMarkViolation;
index c4b4588..2d4c023 100644 (file)
@@ -126,6 +126,14 @@ RefPtr<SharedTask<MarkedBlock::Handle*()>> Subspace::parallelNotEmptyMarkedBlock
         });
 }
 
+void Subspace::sweep()
+{
+    forEachAllocator(
+        [&] (MarkedAllocator& allocator) {
+            allocator.sweep();
+        });
+}
+
 void Subspace::didResizeBits(size_t)
 {
 }
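The new `Subspace::sweep` above is a plain fan-out over the subspace's allocators, which is what lets CodeBlock sweeping become ordinary eager sweeping once CodeBlocks live in an `IsoSubspace`. A toy sketch of that shape (illustrative types, not JSC code):

```cpp
#include <cassert>
#include <vector>

// Illustrative stand-in for a MarkedAllocator: just records being swept.
struct MarkedAllocator {
    bool swept = false;
    void sweep() { swept = true; }
};

struct Subspace {
    std::vector<MarkedAllocator> allocators;

    template<typename Func>
    void forEachAllocator(const Func& func)
    {
        for (MarkedAllocator& allocator : allocators)
            func(allocator);
    }

    // Mirrors the patch: sweep() simply sweeps every allocator.
    void sweep()
    {
        forEachAllocator([](MarkedAllocator& allocator) { allocator.sweep(); });
    }
};
```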
index a8e0a21..e9e463e 100644 (file)
@@ -91,6 +91,8 @@ public:
     template<typename Func>
     void forEachLiveCell(const Func&);
     
+    void sweep();
+    
     Subspace* nextSubspaceInAlignedMemoryAllocator() const { return m_nextSubspaceInAlignedMemoryAllocator; }
     void setNextSubspaceInAlignedMemoryAllocator(Subspace* subspace) { m_nextSubspaceInAlignedMemoryAllocator = subspace; }
     
index 1aa8aca..b80a6e3 100644 (file)
@@ -136,7 +136,7 @@ void Subspace::forEachLiveCell(const Func& func)
     forEachMarkedBlock(
         [&] (MarkedBlock::Handle* handle) {
             handle->forEachLiveCell(
-                [&] (HeapCell* cell, HeapCell::Kind kind) -> IterationStatus { 
+                [&] (size_t, HeapCell* cell, HeapCell::Kind kind) -> IterationStatus {
                     func(cell, kind);
                     return IterationStatus::Continue;
                 });
index 52042f4..41d18a5 100644 (file)
@@ -947,6 +947,7 @@ macro functionForCallCodeBlockGetter(targetRegister)
     end
     loadp JSFunction::m_executable[targetRegister], targetRegister
     loadp FunctionExecutable::m_codeBlockForCall[targetRegister], targetRegister
+    loadp ExecutableToCodeBlockEdge::m_codeBlock[targetRegister], targetRegister
 end
 
 macro functionForConstructCodeBlockGetter(targetRegister)
@@ -957,6 +958,7 @@ macro functionForConstructCodeBlockGetter(targetRegister)
     end
     loadp JSFunction::m_executable[targetRegister], targetRegister
     loadp FunctionExecutable::m_codeBlockForConstruct[targetRegister], targetRegister
+    loadp ExecutableToCodeBlockEdge::m_codeBlock[targetRegister], targetRegister
 end
 
 macro notFunctionCodeBlockGetter(targetRegister)
index 341ab1d..3386d0e 100644 (file)
@@ -50,8 +50,7 @@ void EvalExecutable::visitChildren(JSCell* cell, SlotVisitor& visitor)
     ASSERT_GC_OBJECT_INHERITS(thisObject, info());
     ScriptExecutable::visitChildren(thisObject, visitor);
     visitor.append(thisObject->m_unlinkedEvalCodeBlock);
-    if (EvalCodeBlock* evalCodeBlock = thisObject->m_evalCodeBlock.get())
-        evalCodeBlock->visitWeakly(visitor);
+    visitor.append(thisObject->m_evalCodeBlock);
 }
 
 } // namespace JSC
index 7604bbd..ea2fde0 100644 (file)
@@ -25,6 +25,7 @@
 
 #pragma once
 
+#include "ExecutableToCodeBlockEdge.h"
 #include "ScriptExecutable.h"
 #include "UnlinkedEvalCodeBlock.h"
 
@@ -40,7 +41,7 @@ public:
     
     EvalCodeBlock* codeBlock()
     {
-        return m_evalCodeBlock.get();
+        return bitwise_cast<EvalCodeBlock*>(ExecutableToCodeBlockEdge::unwrap(m_evalCodeBlock.get()));
     }
 
     Ref<JITCode> generatedJITCode()
@@ -70,7 +71,7 @@ protected:
 
     static void visitChildren(JSCell*, SlotVisitor&);
 
-    WriteBarrier<EvalCodeBlock> m_evalCodeBlock;
+    WriteBarrier<ExecutableToCodeBlockEdge> m_evalCodeBlock;
     WriteBarrier<UnlinkedEvalCodeBlock> m_unlinkedEvalCodeBlock;
 };
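The EvalExecutable hunks above show the new indirection: the executable stores a `WriteBarrier<ExecutableToCodeBlockEdge>` instead of the CodeBlock itself, and `codeBlock()` unwraps the edge. A minimal sketch of that unwrap, null-tolerant the way the accessors need it to be before any CodeBlock is installed (shapes modeled on the patch, not the real JSC cell):

```cpp
#include <cassert>

// Illustrative sketch: the edge cell sits between the Executable and its
// CodeBlock so that GC visiting of the edge, not of the CodeBlock, is what
// records "this CodeBlock was reached from its Executable".
struct CodeBlock { int id = 0; };

struct ExecutableToCodeBlockEdge {
    CodeBlock* m_codeBlock = nullptr;

    CodeBlock* codeBlock() const { return m_codeBlock; }

    // Null-tolerant unwrap, matching how the codeBlock() accessors use it
    // when no CodeBlock has been installed yet.
    static CodeBlock* unwrap(ExecutableToCodeBlockEdge* edge)
    {
        return edge ? edge->codeBlock() : nullptr;
    }
};
```

This also explains the offlineasm hunks later in the patch: the interpreter's code-block getters now need one extra `loadp` through `ExecutableToCodeBlockEdge::m_codeBlock`.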
 
index f158672..7196426 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2009, 2010, 2013, 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2009-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -68,16 +68,16 @@ void FunctionExecutable::destroy(JSCell* cell)
 
 FunctionCodeBlock* FunctionExecutable::baselineCodeBlockFor(CodeSpecializationKind kind)
 {
-    FunctionCodeBlock* result;
+    ExecutableToCodeBlockEdge* edge;
     if (kind == CodeForCall)
-        result = m_codeBlockForCall.get();
+        edge = m_codeBlockForCall.get();
     else {
         RELEASE_ASSERT(kind == CodeForConstruct);
-        result = m_codeBlockForConstruct.get();
+        edge = m_codeBlockForConstruct.get();
     }
-    if (!result)
+    if (!edge)
         return 0;
-    return static_cast<FunctionCodeBlock*>(result->baselineAlternative());
+    return static_cast<FunctionCodeBlock*>(edge->codeBlock()->baselineAlternative());
 }
 
 void FunctionExecutable::visitChildren(JSCell* cell, SlotVisitor& visitor)
@@ -85,10 +85,8 @@ void FunctionExecutable::visitChildren(JSCell* cell, SlotVisitor& visitor)
     FunctionExecutable* thisObject = jsCast<FunctionExecutable*>(cell);
     ASSERT_GC_OBJECT_INHERITS(thisObject, info());
     ScriptExecutable::visitChildren(thisObject, visitor);
-    if (FunctionCodeBlock* codeBlockForCall = thisObject->m_codeBlockForCall.get())
-        codeBlockForCall->visitWeakly(visitor);
-    if (FunctionCodeBlock* codeBlockForConstruct = thisObject->m_codeBlockForConstruct.get())
-        codeBlockForConstruct->visitWeakly(visitor);
+    visitor.append(thisObject->m_codeBlockForCall);
+    visitor.append(thisObject->m_codeBlockForConstruct);
     visitor.append(thisObject->m_unlinkedExecutable);
     visitor.append(thisObject->m_singletonFunction);
     visitor.append(thisObject->m_cachedPolyProtoStructure);
index 7782c42..646fac0 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2009-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2009-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -25,6 +25,7 @@
 
 #pragma once
 
+#include "ExecutableToCodeBlockEdge.h"
 #include "ScriptExecutable.h"
 #include "SourceCode.h"
 #include <wtf/Box.h>
@@ -68,9 +69,12 @@ public:
     // for example, argumentsRegister().
     FunctionCodeBlock* eitherCodeBlock()
     {
+        ExecutableToCodeBlockEdge* edge;
         if (m_codeBlockForCall)
-            return m_codeBlockForCall.get();
-        return m_codeBlockForConstruct.get();
+            edge = m_codeBlockForCall.get();
+        else
+            edge = m_codeBlockForConstruct.get();
+        return bitwise_cast<FunctionCodeBlock*>(ExecutableToCodeBlockEdge::unwrap(edge));
     }
         
     bool isGeneratedForCall() const
@@ -80,17 +84,17 @@ public:
 
     FunctionCodeBlock* codeBlockForCall()
     {
-        return m_codeBlockForCall.get();
+        return bitwise_cast<FunctionCodeBlock*>(ExecutableToCodeBlockEdge::unwrap(m_codeBlockForCall.get()));
     }
 
     bool isGeneratedForConstruct() const
     {
-        return m_codeBlockForConstruct.get();
+        return !!m_codeBlockForConstruct;
     }
 
     FunctionCodeBlock* codeBlockForConstruct()
     {
-        return m_codeBlockForConstruct.get();
+        return bitwise_cast<FunctionCodeBlock*>(ExecutableToCodeBlockEdge::unwrap(m_codeBlockForConstruct.get()));
     }
         
     bool isGeneratedFor(CodeSpecializationKind kind)
@@ -204,8 +208,8 @@ private:
     
     unsigned m_parametersStartOffset;
     WriteBarrier<UnlinkedFunctionExecutable> m_unlinkedExecutable;
-    WriteBarrier<FunctionCodeBlock> m_codeBlockForCall;
-    WriteBarrier<FunctionCodeBlock> m_codeBlockForConstruct;
+    WriteBarrier<ExecutableToCodeBlockEdge> m_codeBlockForCall;
+    WriteBarrier<ExecutableToCodeBlockEdge> m_codeBlockForConstruct;
     RefPtr<TypeSet> m_returnStatementTypeSet;
     WriteBarrier<InferredValue> m_singletonFunction;
     WriteBarrier<Structure> m_cachedPolyProtoStructure;
index 1ed8470..0adf492 100644 (file)
@@ -43,6 +43,8 @@ enum JSType : uint8_t {
     UnlinkedModuleProgramCodeBlockType,
     UnlinkedEvalCodeBlockType,
     UnlinkedFunctionCodeBlockType,
+        
+    CodeBlockType,
 
     JSFixedArrayType,
     JSSourceCodeType,
index d4e8306..eaeef8a 100644 (file)
@@ -93,8 +93,7 @@ void ModuleProgramExecutable::visitChildren(JSCell* cell, SlotVisitor& visitor)
     ScriptExecutable::visitChildren(thisObject, visitor);
     visitor.append(thisObject->m_unlinkedModuleProgramCodeBlock);
     visitor.append(thisObject->m_moduleEnvironmentSymbolTable);
-    if (ModuleProgramCodeBlock* moduleProgramCodeBlock = thisObject->m_moduleProgramCodeBlock.get())
-        moduleProgramCodeBlock->visitWeakly(visitor);
+    visitor.append(thisObject->m_moduleProgramCodeBlock);
 }
 
 } // namespace JSC
index 40b8d96..a1fe864 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2009, 2010, 2013-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2009-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -25,6 +25,7 @@
 
 #pragma once
 
+#include "ExecutableToCodeBlockEdge.h"
 #include "ScriptExecutable.h"
 
 namespace JSC {
@@ -47,7 +48,7 @@ public:
 
     ModuleProgramCodeBlock* codeBlock()
     {
-        return m_moduleProgramCodeBlock.get();
+        return bitwise_cast<ModuleProgramCodeBlock*>(ExecutableToCodeBlockEdge::unwrap(m_moduleProgramCodeBlock.get()));
     }
 
     Ref<JITCode> generatedJITCode()
@@ -78,7 +79,7 @@ private:
 
     WriteBarrier<UnlinkedModuleProgramCodeBlock> m_unlinkedModuleProgramCodeBlock;
     WriteBarrier<SymbolTable> m_moduleEnvironmentSymbolTable;
-    WriteBarrier<ModuleProgramCodeBlock> m_moduleProgramCodeBlock;
+    WriteBarrier<ExecutableToCodeBlockEdge> m_moduleProgramCodeBlock;
 };
 
 } // namespace JSC
index 0e4034c..9c4a803 100644 (file)
@@ -209,8 +209,7 @@ void ProgramExecutable::visitChildren(JSCell* cell, SlotVisitor& visitor)
     ASSERT_GC_OBJECT_INHERITS(thisObject, info());
     ScriptExecutable::visitChildren(thisObject, visitor);
     visitor.append(thisObject->m_unlinkedProgramCodeBlock);
-    if (ProgramCodeBlock* programCodeBlock = thisObject->m_programCodeBlock.get())
-        programCodeBlock->visitWeakly(visitor);
+    visitor.append(thisObject->m_programCodeBlock);
 }
 
 } // namespace JSC
index 240f763..9837c88 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2009, 2010, 2013-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2009-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -25,6 +25,7 @@
 
 #pragma once
 
+#include "ExecutableToCodeBlockEdge.h"
 #include "ScriptExecutable.h"
 
 namespace JSC {
@@ -55,7 +56,7 @@ public:
 
     ProgramCodeBlock* codeBlock()
     {
-        return m_programCodeBlock.get();
+        return bitwise_cast<ProgramCodeBlock*>(ExecutableToCodeBlockEdge::unwrap(m_programCodeBlock.get()));
     }
 
     JSObject* checkSyntax(ExecState*);
@@ -83,7 +84,7 @@ private:
     static void visitChildren(JSCell*, SlotVisitor&);
 
     WriteBarrier<UnlinkedProgramCodeBlock> m_unlinkedProgramCodeBlock;
-    WriteBarrier<ProgramCodeBlock> m_programCodeBlock;
+    WriteBarrier<ExecutableToCodeBlockEdge> m_programCodeBlock;
 };
 
 } // namespace JSC
index bb2aa70..c34021a 100644 (file)
@@ -89,8 +89,8 @@ void ScriptExecutable::installCode(VM& vm, CodeBlock* genericCodeBlock, CodeType
         
         ASSERT(kind == CodeForCall);
         
-        oldCodeBlock = executable->m_programCodeBlock.get();
-        executable->m_programCodeBlock.setMayBeNull(vm, this, codeBlock);
+        oldCodeBlock = ExecutableToCodeBlockEdge::deactivateAndUnwrap(executable->m_programCodeBlock.get());
+        executable->m_programCodeBlock.setMayBeNull(vm, this, ExecutableToCodeBlockEdge::wrapAndActivate(codeBlock));
         break;
     }
 
@@ -100,8 +100,8 @@ void ScriptExecutable::installCode(VM& vm, CodeBlock* genericCodeBlock, CodeType
 
         ASSERT(kind == CodeForCall);
 
-        oldCodeBlock = executable->m_moduleProgramCodeBlock.get();
-        executable->m_moduleProgramCodeBlock.setMayBeNull(vm, this, codeBlock);
+        oldCodeBlock = ExecutableToCodeBlockEdge::deactivateAndUnwrap(executable->m_moduleProgramCodeBlock.get());
+        executable->m_moduleProgramCodeBlock.setMayBeNull(vm, this, ExecutableToCodeBlockEdge::wrapAndActivate(codeBlock));
         break;
     }
 
@@ -111,8 +111,8 @@ void ScriptExecutable::installCode(VM& vm, CodeBlock* genericCodeBlock, CodeType
         
         ASSERT(kind == CodeForCall);
         
-        oldCodeBlock = executable->m_evalCodeBlock.get();
-        executable->m_evalCodeBlock.setMayBeNull(vm, this, codeBlock);
+        oldCodeBlock = ExecutableToCodeBlockEdge::deactivateAndUnwrap(executable->m_evalCodeBlock.get());
+        executable->m_evalCodeBlock.setMayBeNull(vm, this, ExecutableToCodeBlockEdge::wrapAndActivate(codeBlock));
         break;
     }
         
@@ -122,12 +122,12 @@ void ScriptExecutable::installCode(VM& vm, CodeBlock* genericCodeBlock, CodeType
         
         switch (kind) {
         case CodeForCall:
-            oldCodeBlock = executable->m_codeBlockForCall.get();
-            executable->m_codeBlockForCall.setMayBeNull(vm, this, codeBlock);
+            oldCodeBlock = ExecutableToCodeBlockEdge::deactivateAndUnwrap(executable->m_codeBlockForCall.get());
+            executable->m_codeBlockForCall.setMayBeNull(vm, this, ExecutableToCodeBlockEdge::wrapAndActivate(codeBlock));
             break;
         case CodeForConstruct:
-            oldCodeBlock = executable->m_codeBlockForConstruct.get();
-            executable->m_codeBlockForConstruct.setMayBeNull(vm, this, codeBlock);
+            oldCodeBlock = ExecutableToCodeBlockEdge::deactivateAndUnwrap(executable->m_codeBlockForConstruct.get());
+            executable->m_codeBlockForConstruct.setMayBeNull(vm, this, ExecutableToCodeBlockEdge::wrapAndActivate(codeBlock));
             break;
         }
         break;
@@ -268,7 +268,7 @@ CodeBlock* ScriptExecutable::newReplacementCodeBlockFor(
         RELEASE_ASSERT(kind == CodeForCall);
         EvalExecutable* executable = jsCast<EvalExecutable*>(this);
         EvalCodeBlock* baseline = static_cast<EvalCodeBlock*>(
-            executable->m_evalCodeBlock->baselineVersion());
+            executable->codeBlock()->baselineVersion());
         EvalCodeBlock* result = EvalCodeBlock::create(&vm,
             CodeBlock::CopyParsedBlock, *baseline);
         result->setAlternative(vm, baseline);
@@ -279,7 +279,7 @@ CodeBlock* ScriptExecutable::newReplacementCodeBlockFor(
         RELEASE_ASSERT(kind == CodeForCall);
         ProgramExecutable* executable = jsCast<ProgramExecutable*>(this);
         ProgramCodeBlock* baseline = static_cast<ProgramCodeBlock*>(
-            executable->m_programCodeBlock->baselineVersion());
+            executable->codeBlock()->baselineVersion());
         ProgramCodeBlock* result = ProgramCodeBlock::create(&vm,
             CodeBlock::CopyParsedBlock, *baseline);
         result->setAlternative(vm, baseline);
@@ -290,7 +290,7 @@ CodeBlock* ScriptExecutable::newReplacementCodeBlockFor(
         RELEASE_ASSERT(kind == CodeForCall);
         ModuleProgramExecutable* executable = jsCast<ModuleProgramExecutable*>(this);
         ModuleProgramCodeBlock* baseline = static_cast<ModuleProgramCodeBlock*>(
-            executable->m_moduleProgramCodeBlock->baselineVersion());
+            executable->codeBlock()->baselineVersion());
         ModuleProgramCodeBlock* result = ModuleProgramCodeBlock::create(&vm,
             CodeBlock::CopyParsedBlock, *baseline);
         result->setAlternative(vm, baseline);
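The `installCode` hunks above swap code blocks through the edge: `deactivateAndUnwrap` retires the old edge and recovers its CodeBlock, and `wrapAndActivate` installs the new one. A sketch of that swap, where a boolean "active" flag stands in for the edge participating in GC liveness decisions (names modeled on the patch; not real JSC code):

```cpp
#include <cassert>

struct CodeBlock;

// Illustrative edge: points at a CodeBlock and knows whether it is active.
struct Edge {
    CodeBlock* block = nullptr;
    bool active = false;
};

struct CodeBlock {
    int id = 0;
    Edge edge; // in this sketch each CodeBlock owns its edge
};

// Retire the old edge and recover the CodeBlock it guarded.
CodeBlock* deactivateAndUnwrap(Edge* edge)
{
    if (!edge)
        return nullptr;
    edge->active = false;
    return edge->block;
}

// Install a CodeBlock by activating (and returning) its edge.
Edge* wrapAndActivate(CodeBlock* block)
{
    if (!block)
        return nullptr;
    block->edge.block = block;
    block->edge.active = true;
    return &block->edge;
}
```

In the real patch the slot being swapped is a `WriteBarrier<ExecutableToCodeBlockEdge>` on the executable, so the GC only ever sees edges there, never raw CodeBlocks.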
index 1be8b02..45d083c 100644 (file)
@@ -44,6 +44,7 @@
 #include "ErrorInstance.h"
 #include "EvalCodeBlock.h"
 #include "Exception.h"
+#include "ExecutableToCodeBlockEdge.h"
 #include "FTLThunks.h"
 #include "FastMallocAlignedMemoryAllocator.h"
 #include "FunctionCodeBlock.h"
@@ -249,6 +250,7 @@ VM::VM(VMType vmType, HeapType heapType)
     , webAssemblyCodeBlockSpace("JSWebAssemblyCodeBlockSpace", heap, webAssemblyCodeBlockHeapCellType.get(), fastMallocAllocator.get())
 #endif
     , directEvalExecutableSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), DirectEvalExecutable)
+    , executableToCodeBlockEdgeSpace ISO_SUBSPACE_INIT(heap, cellHeapCellType.get(), ExecutableToCodeBlockEdge)
     , functionExecutableSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), FunctionExecutable)
     , indirectEvalExecutableSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), IndirectEvalExecutable)
     , inferredTypeSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), InferredType)
@@ -261,8 +263,14 @@ VM::VM(VMType vmType, HeapType heapType)
     , structureSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), Structure)
     , weakSetSpace ISO_SUBSPACE_INIT(heap, destructibleObjectHeapCellType.get(), JSWeakSet)
     , weakMapSpace ISO_SUBSPACE_INIT(heap, destructibleObjectHeapCellType.get(), JSWeakMap)
+    , executableToCodeBlockEdgesWithConstraints(executableToCodeBlockEdgeSpace)
+    , executableToCodeBlockEdgesWithFinalizers(executableToCodeBlockEdgeSpace)
     , inferredTypesWithFinalizers(inferredTypeSpace)
     , inferredValuesWithFinalizers(inferredValueSpace)
+    , evalCodeBlockSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), EvalCodeBlock)
+    , functionCodeBlockSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), FunctionCodeBlock)
+    , moduleProgramCodeBlockSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), ModuleProgramCodeBlock)
+    , programCodeBlockSpace ISO_SUBSPACE_INIT(heap, destructibleCellHeapCellType.get(), ProgramCodeBlock)
     , vmType(vmType)
     , clientData(0)
     , topEntryFrame(nullptr)
@@ -352,6 +360,7 @@ VM::VM(VMType vmType, HeapType heapType)
     setIteratorStructure.set(*this, JSSetIterator::createStructure(*this, 0, jsNull()));
     mapIteratorStructure.set(*this, JSMapIterator::createStructure(*this, 0, jsNull()));
     bigIntStructure.set(*this, JSBigInt::createStructure(*this, 0, jsNull()));
+    executableToCodeBlockEdgeStructure.set(*this, ExecutableToCodeBlockEdge::createStructure(*this, nullptr, jsNull()));
 
     sentinelSetBucket.set(*this, JSSet::BucketType::createSentinel(*this));
     sentinelMapBucket.set(*this, JSMap::BucketType::createSentinel(*this));
index c29073d..be2f786 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2008-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2008-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -339,6 +339,7 @@ public:
 #endif
     
     IsoSubspace directEvalExecutableSpace;
+    IsoSubspace executableToCodeBlockEdgeSpace;
     IsoSubspace functionExecutableSpace;
     IsoSubspace indirectEvalExecutableSpace;
     IsoSubspace inferredTypeSpace;
@@ -352,9 +353,47 @@ public:
     IsoSubspace weakSetSpace;
     IsoSubspace weakMapSpace;
     
+    IsoCellSet executableToCodeBlockEdgesWithConstraints;
+    IsoCellSet executableToCodeBlockEdgesWithFinalizers;
     IsoCellSet inferredTypesWithFinalizers;
     IsoCellSet inferredValuesWithFinalizers;
+    
+    struct SpaceAndFinalizerSet {
+        IsoSubspace space;
+        IsoCellSet finalizerSet;
+        
+        template<typename... Arguments>
+        SpaceAndFinalizerSet(Arguments&&... arguments)
+            : space(std::forward<Arguments>(arguments)...)
+            , finalizerSet(space)
+        {
+        }
+        
+        static IsoCellSet& finalizerSetFor(Subspace& space)
+        {
+            return *bitwise_cast<IsoCellSet*>(
+                bitwise_cast<char*>(&space) -
+                OBJECT_OFFSETOF(SpaceAndFinalizerSet, space) +
+                OBJECT_OFFSETOF(SpaceAndFinalizerSet, finalizerSet));
+        }
+    };
+    
+    SpaceAndFinalizerSet evalCodeBlockSpace;
+    SpaceAndFinalizerSet functionCodeBlockSpace;
+    SpaceAndFinalizerSet moduleProgramCodeBlockSpace;
+    SpaceAndFinalizerSet programCodeBlockSpace;
 
+    template<typename Func>
+    void forEachCodeBlockSpace(const Func& func)
+    {
+        // This should not include webAssemblyCodeBlockSpace because this is about subclasses of
+        // JSC::CodeBlock.
+        func(evalCodeBlockSpace);
+        func(functionCodeBlockSpace);
+        func(moduleProgramCodeBlockSpace);
+        func(programCodeBlockSpace);
+    }
+    
     VMType vmType;
     ClientData* clientData;
     EntryFrame* topEntryFrame;
@@ -418,6 +457,7 @@ public:
     Strong<Structure> setIteratorStructure;
     Strong<Structure> mapIteratorStructure;
     Strong<Structure> bigIntStructure;
+    Strong<Structure> executableToCodeBlockEdgeStructure;
 
     Strong<JSCell> emptyPropertyNameEnumerator;
     Strong<JSCell> sentinelSetBucket;
index a20d094..921cd94 100644 (file)
@@ -230,7 +230,6 @@ public:
 
                         codeBlock->jettison(Profiler::JettisonDueToVMTraps);
                     }
-                    return false;
                 });
                 RELEASE_ASSERT(sawCurrentCodeBlock);
                 
@@ -341,7 +340,6 @@ void VMTraps::handleTraps(ExecState* exec, VMTraps::Mask mask)
             // We want to jettison all code blocks that have vm traps breakpoints, otherwise we could hit them later.
             if (codeBlock->hasInstalledVMTrapBreakpoints())
                 codeBlock->jettison(Profiler::JettisonDueToVMTraps);
-            return false;
         });
     }
 
index 888ca77..75973e2 100644 (file)
@@ -171,17 +171,16 @@ auto VMInspector::codeBlockForMachinePC(const VMInspector::Locker&, void* machin
                 // If the codeBlock is a replacement codeBlock which is in the process of being
                 // compiled, its jitCode will be null, and we can disregard it as a match for
                 // the machinePC we're searching for.
-                return false;
+                return;
             }
 
             if (!JITCode::isJIT(jitCode->jitType()))
-                return false;
+                return;
 
             if (jitCode->contains(machinePC)) {
                 codeBlock = cb;
-                return true;
+                return;
             }
-            return false;
         });
         if (codeBlock)
             return FunctorStatus::Done;
@@ -276,11 +275,10 @@ bool VMInspector::isValidCodeBlock(ExecState* exec, CodeBlock* candidate)
         {
         }
 
-        bool operator()(CodeBlock* codeBlock) const
+        void operator()(CodeBlock* codeBlock) const
         {
             if (codeBlock == candidate)
                 found = true;
-            return found;
         }
 
         CodeBlock* candidate;
index abe9dc6..814f64e 100644 (file)
@@ -1,3 +1,15 @@
+2018-01-04  Filip Pizlo  <fpizlo@apple.com>
+
+        CodeBlocks should be in IsoSubspaces
+        https://bugs.webkit.org/show_bug.cgi?id=180884
+
+        Reviewed by Saam Barati.
+        
+        Deque<>::contains() is helpful for a debug ASSERT.
+
+        * wtf/Deque.h:
+        (WTF::Deque<T, inlineCapacity>::contains):
+
 2018-01-08  Don Olmstead  <don.olmstead@sony.com>
 
         Add WTF_EXPORTs to UniStdExtras
index 8556f75..9d1cf0f 100644 (file)
@@ -75,6 +75,8 @@ public:
     reverse_iterator rend() { return reverse_iterator(begin()); }
     const_reverse_iterator rbegin() const { return const_reverse_iterator(end()); }
     const_reverse_iterator rend() const { return const_reverse_iterator(begin()); }
+    
+    template<typename U> bool contains(const U&);
 
     T& first() { ASSERT(m_start != m_end); return m_buffer.buffer()[m_start]; }
     const T& first() const { ASSERT(m_start != m_end); return m_buffer.buffer()[m_start]; }
@@ -438,6 +440,17 @@ void Deque<T, inlineCapacity>::expandCapacity()
 }
 
 template<typename T, size_t inlineCapacity>
+template<typename U>
+bool Deque<T, inlineCapacity>::contains(const U& searchValue)
+{
+    for (auto& value : *this) {
+        if (value == searchValue)
+            return true;
+    }
+    return false;
+}
+
+template<typename T, size_t inlineCapacity>
 inline auto Deque<T, inlineCapacity>::takeFirst() -> T
 {
     T oldFirst = WTFMove(first());
index f3f230f..7d4ef66 100644 (file)
@@ -1,3 +1,22 @@
+2018-01-04  Filip Pizlo  <fpizlo@apple.com>
+
+        CodeBlocks should be in IsoSubspaces
+        https://bugs.webkit.org/show_bug.cgi?id=180884
+
+        Reviewed by Saam Barati.
+
+        No new tests because no new behavior.
+        
+        Adopting new parallel constraint API, so that more of the logic of doing parallel
+        constraint solving is shared between the DOM's output constraints and JSC's output
+        constraints.
+
+        * bindings/js/DOMGCOutputConstraint.cpp:
+        (WebCore::DOMGCOutputConstraint::executeImpl):
+        (WebCore::DOMGCOutputConstraint::doParallelWorkImpl): Deleted.
+        (WebCore::DOMGCOutputConstraint::finishParallelWorkImpl): Deleted.
+        * bindings/js/DOMGCOutputConstraint.h:
+
 2018-01-08  Simon Fraser  <simon.fraser@apple.com>
 
         Clean up Marquee-related enums
index f422f0d..90aa336 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2017-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -49,16 +49,15 @@ DOMGCOutputConstraint::~DOMGCOutputConstraint()
 {
 }
 
-ConstraintParallelism DOMGCOutputConstraint::executeImpl(SlotVisitor&)
+void DOMGCOutputConstraint::executeImpl(SlotVisitor& visitor)
 {
     Heap& heap = m_vm.heap;
     
     if (heap.mutatorExecutionVersion() == m_lastExecutionVersion)
-        return ConstraintParallelism::Sequential;
+        return;
     
     m_lastExecutionVersion = heap.mutatorExecutionVersion();
     
-    RELEASE_ASSERT(m_tasks.isEmpty());
     m_clientData.forEachOutputConstraintSpace(
         [&] (Subspace& subspace) {
             auto func = [] (SlotVisitor& visitor, HeapCell* heapCell, HeapCell::Kind) {
@@ -66,22 +65,9 @@ ConstraintParallelism DOMGCOutputConstraint::executeImpl(SlotVisitor&)
                 cell->methodTable(visitor.vm())->visitOutputConstraints(cell, visitor);
             };
             
-            m_tasks.append(subspace.forEachMarkedCellInParallel(func));
+            visitor.addParallelConstraintTask(subspace.forEachMarkedCellInParallel(func));
         });
-    
-    return ConstraintParallelism::Parallel;
-}
-        
-void DOMGCOutputConstraint::doParallelWorkImpl(SlotVisitor& visitor)
-{
-    for (auto& task : m_tasks)
-        task->run(visitor);
 }
         
-void DOMGCOutputConstraint::finishParallelWorkImpl(SlotVisitor&)
-{
-    m_tasks.clear();
-}
-
 } // namespace WebCore
 
index c5d7816..4c8f203 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2017-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -39,15 +39,12 @@ public:
     ~DOMGCOutputConstraint();
     
 protected:
-    JSC::ConstraintParallelism executeImpl(JSC::SlotVisitor&) override;
-    void doParallelWorkImpl(JSC::SlotVisitor&) override;
-    void finishParallelWorkImpl(JSC::SlotVisitor&) override;
+    void executeImpl(JSC::SlotVisitor&) override;
 
 private:
     JSC::VM& m_vm;
     JSVMClientData& m_clientData;
     uint64_t m_lastExecutionVersion;
-    Vector<RefPtr<SharedTask<void(SlotVisitor&)>>> m_tasks;
 };
 
 } // namespace WebCore
index d76fcd4..31f0ed4 100755 (executable)
@@ -216,7 +216,7 @@ $warmup=1
 $outer=4
 $quantum=1000
 $includeSunSpider=true
-$includeLongSpider=true
+$includeLongSpider=false
 $includeV8=true
 $includeKraken=true
 $includeJSBench=true
@@ -226,8 +226,8 @@ $includeDSPJS=true
 $includeBrowsermarkJS=false
 $includeBrowsermarkDOM=false
 $includeOctane=true
-$includeCompressionBench = true
-$includeSixSpeed = true
+$includeCompressionBench = false
+$includeSixSpeed = false
 $includeTailBench = true
 $measureGC=false
 $benchmarkPattern=nil