Make opaque root scanning truly constraint-based
author    fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 18 Jan 2017 04:22:45 +0000 (04:22 +0000)
committer fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 18 Jan 2017 04:22:45 +0000 (04:22 +0000)
https://bugs.webkit.org/show_bug.cgi?id=165760

Reviewed by Geoffrey Garen.

JSTests:

Added this test, which demonstrates the benefit of having a dedicated string subspace.

* microbenchmarks/stringalloc.js: Added.

Source/JavaScriptCore:

We have bugs when visitChildren() changes its mind about what opaque root to add, since
we don't have barriers on opaque roots. This supposedly once worked for generational GC,
and I started adding more barriers to support concurrent GC. But I think that the real
bug here is that we want the JSObject->OpaqueRoot edge to be evaluated as a constraint that
participates in the fixpoint. I like to think of this as an *output* constraint, because it
is concerned with outgoing edges in the heap from the object that registered the constraint.
An *input* constraint is like what Weak<> does when deciding whether the thing it points to
should be live.

Whether or not an object has output constraints depends on its type. So, we want the GC to
have a feature where we rapidly call some function on all marked objects of some type.

It's easy to rapidly scan all marked objects in a MarkedBlock. So, we want to allocate all
objects that have output constraints in their own MarkedBlocks and we want to track the set
of MarkedBlocks with output constraints.

This patch makes it easy to have clients of JSC's internal C++ APIs create a Subspace - like
what we used to call MarkedSpace::Subspace but now it's in the JSC namespace - which is
a collection of objects that you can easily scan during GC from a MarkingConstraint. It's
now possible for internal C++ API clients to register their own MarkingConstraints. The DOM
now uses this to create two Subspaces (more on why two below) and it calls
JSCell::visitOutputConstraints() on all of the marked objects in those subspaces using a new
MarkingConstraint. That MarkingConstraint uses a new style of volatility, called
SeldomGreyed, which is like GreyedByExecution except that such constraints are
opportunistically not executed as roots, in the hope that their sole execution will happen
at the snapshot-at-the-end. I also
converted the CodeBlock rescan constraint to SeldomGreyed, since that's also an output
constraint.
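
As a rough sketch of how these pieces fit together - here clientData stands for the WebCore
JSVMClientData, and the MarkingConstraint constructor arguments and lambda signature are
assumptions inferred from the names added in this patch, not verbatim code:

    // Hedged sketch: register an output constraint that rescans every marked
    // cell in the DOM's output-constraint Subspaces during each fixpoint
    // iteration. addMarkingConstraint, forEachOutputConstraintSpace,
    // forEachMarkedCell, visitOutputConstraints, and
    // ConstraintVolatility::SeldomGreyed are names from this patch; the exact
    // signatures are assumptions.
    vm.heap.addMarkingConstraint(std::make_unique<MarkingConstraint>(
        "Wdoc", "WebCore DOM Output Constraints",
        [clientData] (SlotVisitor& visitor, const VisitingTimeout&) {
            clientData->forEachOutputConstraintSpace(
                [&] (Subspace& subspace) {
                    subspace.forEachMarkedCell(
                        [&] (HeapCell* cell, HeapCell::Kind) {
                            static_cast<JSCell*>(cell)->visitOutputConstraints(visitor);
                        });
                });
        },
        ConstraintVolatility::SeldomGreyed));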

This patch also uses Subspace for something pretty obvious: knowing how to call the
destructor. Subspaces can specialize the sweep for their way of invoking destructors (a
sketch follows the lists below). We have the following subspaces:

- auxiliary
- cell
- destructibleCell - for JSCell subclasses that have destructors and StructureIsImmortal
- stringSpace - inlines ~JSString into the sweep, making string allocation 7% faster
- destructibleObjectSpace - for JSDestructibleObject subclasses

And WebCore adds:

- outputConstraint - for JSDOMObjects that have a visitAdditionalChildren
- globalObjectOutputConstraint - for JSDOMGlobalObjects that have a visitAdditionalChildren,
  since JSDOMGlobalObjects are not JSDestructibleObjects
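
Here is a minimal sketch of what a destructor-specializing sweep can look like, loosely
modeled on the JSStringSubspace added in this patch; finishSweep and
finishSweepKnowingSubspace are names from the patch, but the exact signatures and the
DestroyFunc shape are assumptions:

    // Hedged sketch: a Subspace that knows every cell is a JSString can inline
    // the destructor call directly into the sweep.
    struct DestroyFunc {
        void operator()(VM&, JSCell* cell) const
        {
            // Inlining ~JSString here is what buys the ~7% string allocation win.
            static_cast<JSString*>(cell)->JSString::~JSString();
        }
    };

    void JSStringSubspace::finishSweep(MarkedBlock::Handle& handle, FreeList* freeList)
    {
        handle.finishSweepKnowingSubspace(freeList, DestroyFunc());
    }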

The Subspace for a type is selected by saying JSC::subspaceFor<Type>(vm). This calls
Type::subspaceFor<Type>(vm). This allows cell classes to override subspaceFor<> and it
allows any subspaceFor<> implementation to query static flags in the type. This is how
JSCell::subspaceFor<> can select either cellSpace or destructibleCellSpace.
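
A hedged sketch of that dispatch, using the space names listed above; the static flag
consulted inside JSCell::subspaceFor<> (written here as needsDestruction) is an assumption,
while the other names come from this patch:

    // Free function: defers to the cell type, so classes can override it.
    template<typename Type>
    Subspace* subspaceFor(VM& vm)
    {
        return Type::template subspaceFor<Type>(vm);
    }

    // JSCell's default picks a space based on static flags in the type.
    template<typename CellType>
    Subspace* JSCell::subspaceFor(VM& vm)
    {
        if (CellType::needsDestruction)
            return &vm.destructibleCellSpace;
        return &vm.cellSpace;
    }

    // JSString overrides the default to land in its dedicated subspace.
    template<typename CellType>
    Subspace* JSString::subspaceFor(VM& vm)
    {
        return &vm.stringSpace;
    }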

This patch is mostly about:

- Moving MarkedSpace::Subspace out of MarkedSpace and making it a nice class with a nice
  API. Almost all of its functionality is just taken out of MarkedSpace.
- Converting users of the old API for allocating objects and getting MarkedAllocators, like
  heap.allocatorForObjectWithoutDestructor() and its friends, over to the new Subspace-based
  API; such a call site would now say vm.cellSpace.allocatorFor(), as sketched below.
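
Concretely, the conversion at a typical call site looks roughly like this (cellSize is an
illustrative parameter name; both API names appear in this patch):

    // Before: ask the Heap for an allocator keyed on destructor-ness.
    MarkedAllocator* oldAllocator = heap.allocatorForObjectWithoutDestructor(cellSize);

    // After: ask the relevant Subspace, e.g. the plain cell space.
    MarkedAllocator* newAllocator = vm.cellSpace.allocatorFor(cellSize);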

Altogether, this means that we only have a small regression on Dromaeo. The regression is
due to the fact that we scan output constraints. Before the Subspace optimizations (see
r209766, which was rolled out in r209812), this regression on Dromaeo/jslib was 2x but after
the optimizations in this patch it's only 1.12x. Note that Dromaeo/jslib creates gigabytes of
DOM nodes. Compared to web pages, this is a very extreme synthetic microbenchmark. Still, we
like optimizing these because we don't want to presume what web pages will look like.

The use of Subspaces to specialize destructors happened not because it's super necessary but
because I wanted to introduce a single unified way of communicating to the GC how to treat
different types. Any Subspace feature that allowed us to collect some types together would
have to be mindful of the destructorness of objects. I could have turned this into a
liability where each Subspace has two subsubspaces - one for destructor objects and one for
non-destructor objects, which would have allowed me to keep the old sweep specialization
code. Just days prior, mlam wanted to do something that was hard because of that old sweep
specializer, so I decided to take the opportunity to fix the sweep specializer while also
making Subspace be the one true way of teaching the GC about types. To validate that this
actually does things, I added a JSStringSubspace and a test that shows that this is a 7%
string allocation progression.

In bug 167066, I'm getting rid of the rest of the code in JSC that would special-case for
JSDestructibleObject vs StructureIsImmortal by using the GC's DestructionMode. After that,
Subspace will be the only mechanism by which JSC uses the GC to encode types.

Prior to this change, having multiple MarkedSpace::Subspaces would have been expensive
because they create a bunch of MarkedAllocators upfront. We now have the ability to create
MarkedAllocators lazily. We create them on the first allocation from that size class or when
a JIT asks for the MarkedAllocator. The concurrent JITs can ask for MarkedAllocators because
their creation is under a lock.
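
A minimal sketch of that lazy path: allocatorForSlow, allocatorLock, and addMarkedAllocator
appear in this patch, but the size-class lookup, the field names, and the addMarkedAllocator
argument list below are assumptions:

    // Hedged sketch: create the MarkedAllocator for a size class the first time
    // anyone - including a concurrent JIT thread - asks for it.
    MarkedAllocator* Subspace::allocatorForSlow(size_t size)
    {
        size_t index = MarkedSpace::sizeClassToIndex(size);
        size_t sizeClass = MarkedSpace::s_sizeClassForSizeStep[index];
        if (!sizeClass)
            return nullptr;

        // Creation is serialized by the MarkedSpace allocator lock; re-check
        // after taking it in case another thread won the race.
        LockHolder locker(m_space.allocatorLock());
        if (MarkedAllocator* allocator = m_allocatorForSizeStep[index])
            return allocator;

        MarkedAllocator* allocator = m_space.addMarkedAllocator(locker, this, sizeClass);
        m_allocatorForSizeStep[index] = allocator;
        return allocator;
    }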

On my machine, this might be a 1.1% JetStream speed-up with 87% confidence and it might be
a 0.4% PLT3 slow-down with 92% confidence. Note that 0.4% on PLT3 is the level of systematic
error on PLT3 on my computer: I've seen definite 0.4% speed-ups and slow-downs that were not
confirmed by any bot. Let's see what the bots say.

* CMakeLists.txt:
* JavaScriptCore.xcodeproj/project.pbxproj:
* bytecode/ObjectAllocationProfile.h:
(JSC::ObjectAllocationProfile::initialize):
* bytecode/PolymorphicAccess.cpp:
(JSC::AccessCase::generateImpl):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::emitAllocateRawObject):
(JSC::DFG::SpeculativeJIT::compileMakeRope):
(JSC::DFG::SpeculativeJIT::compileAllocatePropertyStorage):
(JSC::DFG::SpeculativeJIT::compileReallocatePropertyStorage):
(JSC::DFG::SpeculativeJIT::compileNewTypedArray):
(JSC::DFG::SpeculativeJIT::emitAllocateButterfly):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* ftl/FTLAbstractHeapRepository.h:
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::compileNewTypedArray):
(JSC::FTL::DFG::LowerDFGToB3::compileMakeRope):
(JSC::FTL::DFG::LowerDFGToB3::compileMaterializeNewObject):
(JSC::FTL::DFG::LowerDFGToB3::allocatePropertyStorageWithSizeImpl):
(JSC::FTL::DFG::LowerDFGToB3::allocateObject):
(JSC::FTL::DFG::LowerDFGToB3::allocatorForSize):
(JSC::FTL::DFG::LowerDFGToB3::allocateVariableSizedObject):
(JSC::FTL::DFG::LowerDFGToB3::allocateVariableSizedCell):
(JSC::FTL::DFG::LowerDFGToB3::allocateJSArray):
* heap/AllocatorAttributes.h:
(JSC::AllocatorAttributes::AllocatorAttributes):
* heap/ConstraintVolatility.h: Added.
(WTF::printInternal):
* heap/GCActivityCallback.cpp:
* heap/Heap.cpp:
(JSC::Heap::Heap):
(JSC::Heap::lastChanceToFinalize):
(JSC::Heap::markToFixpoint):
(JSC::Heap::updateObjectCounts):
(JSC::Heap::collectAllGarbage):
(JSC::Heap::collectInThread):
(JSC::Heap::stopTheWorld):
(JSC::Heap::updateAllocationLimits):
(JSC::Heap::bytesVisited):
(JSC::Heap::addCoreConstraints):
(JSC::Heap::addMarkingConstraint):
(JSC::Heap::notifyIsSafeToCollect):
(JSC::Heap::preventCollection):
(JSC::Heap::allowCollection):
(JSC::Heap::setMutatorShouldBeFenced):
(JSC::Heap::buildConstraintSet): Deleted.
(JSC::Heap::writeBarrierOpaqueRootSlow): Deleted.
(JSC::Heap::addMutatorShouldBeFencedCache): Deleted.
* heap/Heap.h:
(JSC::Heap::mutatorExecutionVersion):
(JSC::Heap::numOpaqueRoots):
(JSC::Heap::vm): Deleted.
(JSC::Heap::subspaceForObjectWithoutDestructor): Deleted.
(JSC::Heap::subspaceForObjectDestructor): Deleted.
(JSC::Heap::subspaceForAuxiliaryData): Deleted.
(JSC::Heap::allocatorForObjectWithoutDestructor): Deleted.
(JSC::Heap::allocatorForObjectWithDestructor): Deleted.
(JSC::Heap::allocatorForAuxiliaryData): Deleted.
* heap/HeapInlines.h:
(JSC::Heap::vm):
(JSC::Heap::allocateWithDestructor): Deleted.
(JSC::Heap::allocateWithoutDestructor): Deleted.
(JSC::Heap::allocateObjectOfType): Deleted.
(JSC::Heap::subspaceForObjectOfType): Deleted.
(JSC::Heap::allocatorForObjectOfType): Deleted.
(JSC::Heap::allocateAuxiliary): Deleted.
(JSC::Heap::tryAllocateAuxiliary): Deleted.
(JSC::Heap::tryReallocateAuxiliary): Deleted.
(JSC::Heap::ascribeOwner): Deleted.
(JSC::Heap::writeBarrierOpaqueRoot): Deleted.
* heap/LargeAllocation.cpp:
(JSC::LargeAllocation::tryCreate):
(JSC::LargeAllocation::LargeAllocation):
(JSC::LargeAllocation::~LargeAllocation):
(JSC::LargeAllocation::sweep):
* heap/LargeAllocation.h:
* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::MarkedAllocator):
(JSC::MarkedAllocator::tryAllocateWithoutCollecting):
(JSC::MarkedAllocator::tryAllocateIn):
(JSC::MarkedAllocator::allocateSlowCaseImpl):
(JSC::MarkedAllocator::tryAllocateBlock):
(JSC::MarkedAllocator::shrink):
(JSC::MarkedAllocator::markedSpace):
* heap/MarkedAllocator.h:
(JSC::MarkedAllocator::nextAllocatorInSubspace):
(JSC::MarkedAllocator::setNextAllocatorInSubspace):
(JSC::MarkedAllocator::subspace):
(JSC::MarkedAllocator::tryAllocate): Deleted.
(JSC::MarkedAllocator::allocate): Deleted.
(JSC::MarkedAllocator::forEachBlock): Deleted.
* heap/MarkedAllocatorInlines.h: Added.
(JSC::MarkedAllocator::tryAllocate):
(JSC::MarkedAllocator::allocate):
(JSC::MarkedAllocator::forEachBlock):
(JSC::MarkedAllocator::forEachNotEmptyBlock):
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::Handle::subspace):
(JSC::MarkedBlock::Handle::sweep):
(JSC::MarkedBlock::Handle::specializedSweep): Deleted.
(JSC::MarkedBlock::Handle::sweepHelperSelectScribbleMode): Deleted.
(JSC::MarkedBlock::Handle::sweepHelperSelectEmptyMode): Deleted.
(JSC::MarkedBlock::Handle::sweepHelperSelectHasNewlyAllocated): Deleted.
(JSC::MarkedBlock::Handle::sweepHelperSelectSweepMode): Deleted.
(JSC::MarkedBlock::Handle::sweepHelperSelectMarksMode): Deleted.
* heap/MarkedBlock.h:
(JSC::MarkedBlock::Handle::visitWeakSet):
* heap/MarkedBlockInlines.h:
(JSC::MarkedBlock::Handle::isNewlyAllocatedStale):
(JSC::MarkedBlock::Handle::hasAnyNewlyAllocated):
(JSC::MarkedBlock::heap):
(JSC::MarkedBlock::space):
(JSC::MarkedBlock::Handle::space):
(JSC::MarkedBlock::Handle::specializedSweep):
(JSC::MarkedBlock::Handle::finishSweepKnowingSubspace):
(JSC::MarkedBlock::Handle::sweepDestructionMode):
(JSC::MarkedBlock::Handle::emptyMode):
(JSC::MarkedBlock::Handle::scribbleMode):
(JSC::MarkedBlock::Handle::newlyAllocatedMode):
(JSC::MarkedBlock::Handle::marksMode):
(JSC::MarkedBlock::Handle::forEachMarkedCell):
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::initializeSizeClassForStepSize):
(JSC::MarkedSpace::MarkedSpace):
(JSC::MarkedSpace::lastChanceToFinalize):
(JSC::MarkedSpace::addMarkedAllocator):
(JSC::MarkedSpace::allocate): Deleted.
(JSC::MarkedSpace::tryAllocate): Deleted.
(JSC::MarkedSpace::allocateLarge): Deleted.
(JSC::MarkedSpace::tryAllocateLarge): Deleted.
* heap/MarkedSpace.h:
(JSC::MarkedSpace::heap):
(JSC::MarkedSpace::allocatorLock):
(JSC::MarkedSpace::subspaceForObjectsWithDestructor): Deleted.
(JSC::MarkedSpace::subspaceForObjectsWithoutDestructor): Deleted.
(JSC::MarkedSpace::subspaceForAuxiliaryData): Deleted.
(JSC::MarkedSpace::allocatorFor): Deleted.
(JSC::MarkedSpace::destructorAllocatorFor): Deleted.
(JSC::MarkedSpace::auxiliaryAllocatorFor): Deleted.
(JSC::MarkedSpace::allocateWithoutDestructor): Deleted.
(JSC::MarkedSpace::allocateWithDestructor): Deleted.
(JSC::MarkedSpace::allocateAuxiliary): Deleted.
(JSC::MarkedSpace::tryAllocateAuxiliary): Deleted.
(JSC::MarkedSpace::forEachSubspace): Deleted.
* heap/MarkingConstraint.cpp:
(JSC::MarkingConstraint::MarkingConstraint):
* heap/MarkingConstraint.h:
(JSC::MarkingConstraint::volatility):
* heap/MarkingConstraintSet.cpp:
(JSC::MarkingConstraintSet::resetStats):
(JSC::MarkingConstraintSet::add):
(JSC::MarkingConstraintSet::executeConvergenceImpl):
* heap/MarkingConstraintSet.h:
* heap/SlotVisitor.cpp:
(JSC::SlotVisitor::visitChildren):
(JSC::SlotVisitor::visitAsConstraint):
(JSC::SlotVisitor::drain):
(JSC::SlotVisitor::addOpaqueRoot):
(JSC::SlotVisitor::mergeIfNecessary):
(JSC::SlotVisitor::mergeOpaqueRootsIfNecessary): Deleted.
* heap/SlotVisitor.h:
(JSC::SlotVisitor::setIgnoreNewOpaqueRoots):
* heap/SlotVisitorInlines.h:
(JSC::SlotVisitor::reportExtraMemoryVisited):
(JSC::SlotVisitor::reportExternalMemoryVisited):
* heap/Subspace.cpp: Added.
(JSC::Subspace::Subspace):
(JSC::Subspace::~Subspace):
(JSC::Subspace::finishSweep):
(JSC::Subspace::destroy):
(JSC::Subspace::allocate):
(JSC::Subspace::tryAllocate):
(JSC::Subspace::allocatorForSlow):
(JSC::Subspace::allocateSlow):
(JSC::Subspace::tryAllocateSlow):
* heap/Subspace.h: Added.
(JSC::Subspace::tryAllocatorFor):
(JSC::Subspace::allocatorFor):
* heap/SubspaceInlines.h: Added.
(JSC::Subspace::forEachMarkedBlock):
(JSC::Subspace::forEachNotEmptyMarkedBlock):
(JSC::Subspace::forEachLargeAllocation):
(JSC::Subspace::forEachMarkedCell):
* heap/WeakBlock.cpp:
(JSC::WeakBlock::specializedVisit):
* heap/WeakBlock.h:
* heap/WeakSet.h:
(JSC::WeakSet::visit):
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::emitAllocateJSObjectWithKnownSize):
(JSC::AssemblyHelpers::emitAllocateVariableSized):
(JSC::AssemblyHelpers::emitAllocateVariableSizedCell):
* jit/JITOpcodes.cpp:
(JSC::JIT::emit_op_new_object):
* jsc.cpp:
* runtime/ButterflyInlines.h:
(JSC::Butterfly::createUninitialized):
(JSC::Butterfly::growArrayRight):
* runtime/ClassInfo.h:
* runtime/ClonedArguments.cpp:
(JSC::ClonedArguments::createEmpty):
* runtime/DirectArguments.cpp:
(JSC::DirectArguments::overrideThings):
* runtime/GenericArgumentsInlines.h:
(JSC::GenericArguments<Type>::initModifiedArgumentsDescriptor):
* runtime/HashMapImpl.h:
(JSC::HashMapBuffer::create):
* runtime/JSArray.cpp:
(JSC::JSArray::tryCreateUninitialized):
(JSC::JSArray::unshiftCountSlowCase):
* runtime/JSArrayBufferView.cpp:
(JSC::JSArrayBufferView::ConstructionContext::ConstructionContext):
* runtime/JSCell.h:
(JSC::subspaceFor):
* runtime/JSCellInlines.h:
(JSC::JSCell::visitOutputConstraints):
(JSC::JSCell::subspaceFor):
(JSC::allocateCell):
* runtime/JSDestructibleObject.h:
(JSC::JSDestructibleObject::subspaceFor):
* runtime/JSDestructibleObjectSubspace.cpp: Added.
(JSC::JSDestructibleObjectSubspace::JSDestructibleObjectSubspace):
(JSC::JSDestructibleObjectSubspace::~JSDestructibleObjectSubspace):
(JSC::JSDestructibleObjectSubspace::finishSweep):
(JSC::JSDestructibleObjectSubspace::destroy):
* runtime/JSDestructibleObjectSubspace.h: Added.
* runtime/JSObject.h:
(JSC::JSObject::JSObject):
* runtime/JSObjectInlines.h:
* runtime/JSSegmentedVariableObject.h:
* runtime/JSString.h:
(JSC::JSString::subspaceFor):
* runtime/JSStringSubspace.cpp: Added.
(JSC::JSStringSubspace::JSStringSubspace):
(JSC::JSStringSubspace::~JSStringSubspace):
(JSC::JSStringSubspace::finishSweep):
(JSC::JSStringSubspace::destroy):
* runtime/JSStringSubspace.h: Added.
* runtime/RegExpMatchesArray.h:
(JSC::tryCreateUninitializedRegExpMatchesArray):
* runtime/VM.cpp:
(JSC::VM::VM):
* runtime/VM.h:

Source/WebCore:

No new tests yet. I think that writing tests for this will be a big investigation of its own:
https://bugs.webkit.org/show_bug.cgi?id=165808

Remove the previous advancing-wavefront DOM write barrier. I don't think that approach
would have scaled very well, and it was super confusing.

This change makes it so that visitAdditionalChildren can run as a GC constraint that
executes as part of the fixpoint: all WebCore visitAdditionalChildren implementations become
output constraints, using the new JSC API for Subspaces and MarkingConstraints.
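
For a wrapper class that has a visitAdditionalChildren, the generated bindings can route its
cells into the DOM output-constraint Subspace so the constraint above finds them during
marking. A hedged sketch of the generated override - the exact generated shape is an
assumption, while outputConstraintSubspaceFor and subspaceFor<> are names from this patch:

    // Hedged sketch of a generated wrapper-class member:
    template<typename CellType>
    static JSC::Subspace* subspaceFor(JSC::VM& vm)
    {
        return WebCore::outputConstraintSubspaceFor(vm);
    }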

* ForwardingHeaders/heap/MarkedAllocatorInlines.h: Added.
* ForwardingHeaders/heap/MarkedBlockInlines.h: Added.
* ForwardingHeaders/heap/MarkingConstraint.h: Added.
* ForwardingHeaders/heap/SubspaceInlines.h: Added.
* ForwardingHeaders/heap/VisitingTimeout.h: Added.
* WebCore.xcodeproj/project.pbxproj:
* bindings/js/CommonVM.cpp:
(WebCore::commonVMSlow):
(WebCore::writeBarrierOpaqueRootSlow): Deleted.
* bindings/js/CommonVM.h:
(WebCore::writeBarrierOpaqueRoot): Deleted.
* bindings/js/JSDOMGlobalObject.cpp:
(WebCore::JSDOMGlobalObject::finishCreation):
(WebCore::JSDOMGlobalObject::scriptExecutionContext):
* bindings/js/JSDOMWrapper.cpp:
(WebCore::outputConstraintSubspaceFor):
(WebCore::globalObjectOutputConstraintSubspaceFor):
* bindings/js/JSDOMWrapper.h:
* bindings/js/WebCoreJSClientData.cpp: Added.
(WebCore::JSVMClientData::JSVMClientData):
(WebCore::JSVMClientData::~JSVMClientData):
(WebCore::JSVMClientData::getAllWorlds):
(WebCore::initNormalWorldClientData):
* bindings/js/WebCoreJSClientData.h:
(WebCore::JSVMClientData::outputConstraintSpace):
(WebCore::JSVMClientData::globalObjectOutputConstraintSpace):
(WebCore::JSVMClientData::forEachOutputConstraintSpace):
(WebCore::JSVMClientData::JSVMClientData): Deleted.
(WebCore::JSVMClientData::~JSVMClientData): Deleted.
(WebCore::JSVMClientData::getAllWorlds): Deleted.
(WebCore::initNormalWorldClientData): Deleted.
* bindings/scripts/CodeGeneratorJS.pm:
(GenerateHeader):
(GenerateImplementation):
* dom/ContainerNodeAlgorithms.cpp:
(WebCore::notifyChildNodeInserted):
(WebCore::notifyChildNodeRemoved):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@210844 268f45cc-cd09-0410-ab3c-d52691b4dbfc

85 files changed:
JSTests/ChangeLog
JSTests/microbenchmarks/stringalloc.js [new file with mode: 0644]
Source/JavaScriptCore/CMakeLists.txt
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/bytecode/ObjectAllocationProfile.h
Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h
Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
Source/JavaScriptCore/heap/AllocatorAttributes.h
Source/JavaScriptCore/heap/ConstraintVolatility.h [new file with mode: 0644]
Source/JavaScriptCore/heap/GCActivityCallback.cpp
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/heap/Heap.h
Source/JavaScriptCore/heap/HeapInlines.h
Source/JavaScriptCore/heap/LargeAllocation.cpp
Source/JavaScriptCore/heap/LargeAllocation.h
Source/JavaScriptCore/heap/MarkedAllocator.cpp
Source/JavaScriptCore/heap/MarkedAllocator.h
Source/JavaScriptCore/heap/MarkedAllocatorInlines.h [new file with mode: 0644]
Source/JavaScriptCore/heap/MarkedBlock.cpp
Source/JavaScriptCore/heap/MarkedBlock.h
Source/JavaScriptCore/heap/MarkedBlockInlines.h
Source/JavaScriptCore/heap/MarkedSpace.cpp
Source/JavaScriptCore/heap/MarkedSpace.h
Source/JavaScriptCore/heap/MarkingConstraint.cpp
Source/JavaScriptCore/heap/MarkingConstraint.h
Source/JavaScriptCore/heap/MarkingConstraintSet.cpp
Source/JavaScriptCore/heap/MarkingConstraintSet.h
Source/JavaScriptCore/heap/SlotVisitor.cpp
Source/JavaScriptCore/heap/SlotVisitor.h
Source/JavaScriptCore/heap/SlotVisitorInlines.h
Source/JavaScriptCore/heap/Subspace.cpp [new file with mode: 0644]
Source/JavaScriptCore/heap/Subspace.h [new file with mode: 0644]
Source/JavaScriptCore/heap/SubspaceInlines.h [new file with mode: 0644]
Source/JavaScriptCore/heap/WeakBlock.cpp
Source/JavaScriptCore/heap/WeakBlock.h
Source/JavaScriptCore/heap/WeakSet.h
Source/JavaScriptCore/jit/AssemblyHelpers.h
Source/JavaScriptCore/jit/JITOpcodes.cpp
Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
Source/JavaScriptCore/jsc.cpp
Source/JavaScriptCore/runtime/ButterflyInlines.h
Source/JavaScriptCore/runtime/ClassInfo.h
Source/JavaScriptCore/runtime/ClonedArguments.cpp
Source/JavaScriptCore/runtime/DirectArguments.cpp
Source/JavaScriptCore/runtime/GenericArgumentsInlines.h
Source/JavaScriptCore/runtime/HashMapImpl.h
Source/JavaScriptCore/runtime/JSArray.cpp
Source/JavaScriptCore/runtime/JSArrayBufferView.cpp
Source/JavaScriptCore/runtime/JSCell.h
Source/JavaScriptCore/runtime/JSCellInlines.h
Source/JavaScriptCore/runtime/JSDestructibleObject.h
Source/JavaScriptCore/runtime/JSDestructibleObjectSubspace.cpp [new file with mode: 0644]
Source/JavaScriptCore/runtime/JSDestructibleObjectSubspace.h [new file with mode: 0644]
Source/JavaScriptCore/runtime/JSObject.h
Source/JavaScriptCore/runtime/JSObjectInlines.h
Source/JavaScriptCore/runtime/JSSegmentedVariableObject.h
Source/JavaScriptCore/runtime/JSString.h
Source/JavaScriptCore/runtime/JSStringSubspace.cpp [new file with mode: 0644]
Source/JavaScriptCore/runtime/JSStringSubspace.h [new file with mode: 0644]
Source/JavaScriptCore/runtime/RegExpMatchesArray.h
Source/JavaScriptCore/runtime/VM.cpp
Source/JavaScriptCore/runtime/VM.h
Source/WebCore/CMakeLists.txt
Source/WebCore/ChangeLog
Source/WebCore/ForwardingHeaders/heap/MarkedAllocatorInlines.h [new file with mode: 0644]
Source/WebCore/ForwardingHeaders/heap/MarkedBlockInlines.h [new file with mode: 0644]
Source/WebCore/ForwardingHeaders/heap/MarkingConstraint.h [new file with mode: 0644]
Source/WebCore/ForwardingHeaders/heap/SubspaceInlines.h [new file with mode: 0644]
Source/WebCore/ForwardingHeaders/heap/VisitingTimeout.h [new file with mode: 0644]
Source/WebCore/WebCore.xcodeproj/project.pbxproj
Source/WebCore/bindings/js/CommonVM.cpp
Source/WebCore/bindings/js/CommonVM.h
Source/WebCore/bindings/js/JSDOMGlobalObject.cpp
Source/WebCore/bindings/js/JSDOMWrapper.cpp
Source/WebCore/bindings/js/JSDOMWrapper.h
Source/WebCore/bindings/js/WebCoreJSClientData.cpp [new file with mode: 0644]
Source/WebCore/bindings/js/WebCoreJSClientData.h
Source/WebCore/bindings/js/WorkerScriptController.cpp
Source/WebCore/bindings/scripts/CodeGeneratorJS.pm
Source/WebCore/dom/ContainerNodeAlgorithms.cpp

diff --git a/JSTests/microbenchmarks/stringalloc.js b/JSTests/microbenchmarks/stringalloc.js
new file mode 100644 (file)
index 0000000..b0fec2f
--- /dev/null
@@ -0,0 +1,4 @@
+var global;
+var array = ["a", "b"];
+for (var i = 0; i < 10000000; ++i)
+    global = array[i & 1] + "c";
index b686500..997edc1 100644 (file)
@@ -504,6 +504,7 @@ set(JavaScriptCore_SOURCES
     heap/SlotVisitor.cpp
     heap/SpaceTimeMutatorScheduler.cpp
     heap/StopIfNecessaryTimer.cpp
+    heap/Subspace.cpp
     heap/SynchronousStopTheWorldMutatorScheduler.cpp
     heap/VisitRaceKey.cpp
     heap/Weak.cpp
@@ -752,6 +753,7 @@ set(JavaScriptCore_SOURCES
     runtime/JSDataView.cpp
     runtime/JSDataViewPrototype.cpp
     runtime/JSDateMath.cpp
+    runtime/JSDestructibleObjectSubspace.cpp
     runtime/JSEnvironmentRecord.cpp
     runtime/JSFixedArray.cpp
     runtime/JSFunction.cpp
@@ -792,6 +794,7 @@ set(JavaScriptCore_SOURCES
     runtime/JSString.cpp
     runtime/JSStringIterator.cpp
     runtime/JSStringJoiner.cpp
+    runtime/JSStringSubspace.cpp
     runtime/JSSymbolTableObject.cpp
     runtime/JSTemplateRegistryKey.cpp
     runtime/JSTypedArrayConstructors.cpp
index 7cfeb99..0986b95 100644 (file)
                0F1FB38E1E173A6500A9BE50 /* SynchronousStopTheWorldMutatorScheduler.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F1FB38A1E173A6200A9BE50 /* SynchronousStopTheWorldMutatorScheduler.cpp */; };
                0F1FB38F1E173A6700A9BE50 /* SynchronousStopTheWorldMutatorScheduler.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F1FB38B1E173A6200A9BE50 /* SynchronousStopTheWorldMutatorScheduler.h */; };
                0F1FB3901E173A6B00A9BE50 /* MutatorScheduler.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F1FB38C1E173A6200A9BE50 /* MutatorScheduler.cpp */; };
-               0F1FB3931E177A7200A9BE50 /* VisitingTimeout.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F1FB3921E177A6F00A9BE50 /* VisitingTimeout.h */; };
+               0F1FB3931E177A7200A9BE50 /* VisitingTimeout.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F1FB3921E177A6F00A9BE50 /* VisitingTimeout.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F1FB3961E1AF7E100A9BE50 /* DFGPlanInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F1FB3941E1AF7DF00A9BE50 /* DFGPlanInlines.h */; };
                0F1FB3971E1AF7E300A9BE50 /* DFGWorklistInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F1FB3951E1AF7DF00A9BE50 /* DFGWorklistInlines.h */; };
                0F1FB3991E1F65FB00A9BE50 /* MutatorScheduler.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F1FB3981E1F65F900A9BE50 /* MutatorScheduler.h */; };
                0F64B27A1A7957B2006E4E66 /* CallEdge.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F64B2781A7957B2006E4E66 /* CallEdge.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F64EAF31C4ECD0600621E9B /* AirArgInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F64EAF21C4ECD0600621E9B /* AirArgInlines.h */; };
                0F660E371E0517B90031462C /* MarkingConstraint.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F660E331E0517B70031462C /* MarkingConstraint.cpp */; };
-               0F660E381E0517BB0031462C /* MarkingConstraint.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F660E341E0517B70031462C /* MarkingConstraint.h */; };
+               0F660E381E0517BB0031462C /* MarkingConstraint.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F660E341E0517B70031462C /* MarkingConstraint.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F660E391E0517BF0031462C /* MarkingConstraintSet.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F660E351E0517B70031462C /* MarkingConstraintSet.cpp */; };
                0F660E3A1E0517C10031462C /* MarkingConstraintSet.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F660E361E0517B80031462C /* MarkingConstraintSet.h */; };
                0F664CE81DA304EF00B00A11 /* CodeBlockSetInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F664CE71DA304ED00B00A11 /* CodeBlockSetInlines.h */; };
                0F7C39FB1C8F629300480151 /* RegExpInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C39FA1C8F629300480151 /* RegExpInlines.h */; };
                0F7C39FD1C8F659500480151 /* RegExpObjectInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C39FC1C8F659500480151 /* RegExpObjectInlines.h */; };
                0F7C39FF1C90C55B00480151 /* DFGOpInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C39FE1C90C55B00480151 /* DFGOpInfo.h */; };
-               0F7C5FB81D888A0C0044F5E2 /* MarkedBlockInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C5FB71D888A010044F5E2 /* MarkedBlockInlines.h */; };
-               0F7C5FBA1D8895070044F5E2 /* MarkedSpaceInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C5FB91D8895050044F5E2 /* MarkedSpaceInlines.h */; };
+               0F7C5FB81D888A0C0044F5E2 /* MarkedBlockInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C5FB71D888A010044F5E2 /* MarkedBlockInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F7CF94F1DBEEE880098CC12 /* ReleaseHeapAccessScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7CF94E1DBEEE860098CC12 /* ReleaseHeapAccessScope.h */; };
                0F7CF9521DC027D90098CC12 /* StopIfNecessaryTimer.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7CF9511DC027D70098CC12 /* StopIfNecessaryTimer.h */; };
                0F7CF9531DC027DB0098CC12 /* StopIfNecessaryTimer.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7CF9501DC027D70098CC12 /* StopIfNecessaryTimer.cpp */; };
                0F7CF9561DC1258D0098CC12 /* AtomicsObject.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7CF9541DC1258B0098CC12 /* AtomicsObject.cpp */; };
                0F7CF9571DC125900098CC12 /* AtomicsObject.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7CF9551DC1258B0098CC12 /* AtomicsObject.h */; };
+               0F7DF1341E2970D70095951B /* ConstraintVolatility.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7DF12F1E2970D50095951B /* ConstraintVolatility.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F7DF1351E2970DC0095951B /* MarkedSpaceInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7DF1301E2970D50095951B /* MarkedSpaceInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F7DF1361E2970DF0095951B /* Subspace.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7DF1311E2970D50095951B /* Subspace.cpp */; };
+               0F7DF1371E2970E10095951B /* Subspace.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7DF1321E2970D50095951B /* Subspace.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F7DF1381E2970E40095951B /* SubspaceInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7DF1331E2970D50095951B /* SubspaceInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F7DF13B1E2971110095951B /* JSDestructibleObjectSubspace.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7DF1391E29710E0095951B /* JSDestructibleObjectSubspace.cpp */; };
+               0F7DF13C1E2971130095951B /* JSDestructibleObjectSubspace.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7DF13A1E29710E0095951B /* JSDestructibleObjectSubspace.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F7DF13F1E2AFC4D0095951B /* JSStringSubspace.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7DF13E1E2AFC4B0095951B /* JSStringSubspace.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F7DF1401E2AFC500095951B /* JSStringSubspace.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7DF13D1E2AFC4B0095951B /* JSStringSubspace.cpp */; };
+               0F7DF1461E2BEF6A0095951B /* MarkedAllocatorInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7DF1451E2BEF680095951B /* MarkedAllocatorInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F7F988B1D9596C500F4F12E /* DFGStoreBarrierClusteringPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7F98891D9596C300F4F12E /* DFGStoreBarrierClusteringPhase.cpp */; };
                0F7F988C1D9596C800F4F12E /* DFGStoreBarrierClusteringPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7F988A1D9596C300F4F12E /* DFGStoreBarrierClusteringPhase.h */; };
                0F8023EA1613832B00A0BA45 /* ByValInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F8023E91613832300A0BA45 /* ByValInfo.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F7C39FC1C8F659500480151 /* RegExpObjectInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RegExpObjectInlines.h; sourceTree = "<group>"; };
                0F7C39FE1C90C55B00480151 /* DFGOpInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGOpInfo.h; path = dfg/DFGOpInfo.h; sourceTree = "<group>"; };
                0F7C5FB71D888A010044F5E2 /* MarkedBlockInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MarkedBlockInlines.h; sourceTree = "<group>"; };
-               0F7C5FB91D8895050044F5E2 /* MarkedSpaceInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MarkedSpaceInlines.h; sourceTree = "<group>"; };
                0F7CF94E1DBEEE860098CC12 /* ReleaseHeapAccessScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ReleaseHeapAccessScope.h; sourceTree = "<group>"; };
                0F7CF9501DC027D70098CC12 /* StopIfNecessaryTimer.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = StopIfNecessaryTimer.cpp; sourceTree = "<group>"; };
                0F7CF9511DC027D70098CC12 /* StopIfNecessaryTimer.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StopIfNecessaryTimer.h; sourceTree = "<group>"; };
                0F7CF9541DC1258B0098CC12 /* AtomicsObject.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = AtomicsObject.cpp; sourceTree = "<group>"; };
                0F7CF9551DC1258B0098CC12 /* AtomicsObject.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AtomicsObject.h; sourceTree = "<group>"; };
+               0F7DF12F1E2970D50095951B /* ConstraintVolatility.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ConstraintVolatility.h; sourceTree = "<group>"; };
+               0F7DF1301E2970D50095951B /* MarkedSpaceInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MarkedSpaceInlines.h; sourceTree = "<group>"; };
+               0F7DF1311E2970D50095951B /* Subspace.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = Subspace.cpp; sourceTree = "<group>"; };
+               0F7DF1321E2970D50095951B /* Subspace.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Subspace.h; sourceTree = "<group>"; };
+               0F7DF1331E2970D50095951B /* SubspaceInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SubspaceInlines.h; sourceTree = "<group>"; };
+               0F7DF1391E29710E0095951B /* JSDestructibleObjectSubspace.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSDestructibleObjectSubspace.cpp; sourceTree = "<group>"; };
+               0F7DF13A1E29710E0095951B /* JSDestructibleObjectSubspace.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSDestructibleObjectSubspace.h; sourceTree = "<group>"; };
+               0F7DF13D1E2AFC4B0095951B /* JSStringSubspace.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSStringSubspace.cpp; sourceTree = "<group>"; };
+               0F7DF13E1E2AFC4B0095951B /* JSStringSubspace.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSStringSubspace.h; sourceTree = "<group>"; };
+               0F7DF1451E2BEF680095951B /* MarkedAllocatorInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MarkedAllocatorInlines.h; sourceTree = "<group>"; };
                0F7F98891D9596C300F4F12E /* DFGStoreBarrierClusteringPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGStoreBarrierClusteringPhase.cpp; path = dfg/DFGStoreBarrierClusteringPhase.cpp; sourceTree = "<group>"; };
                0F7F988A1D9596C300F4F12E /* DFGStoreBarrierClusteringPhase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGStoreBarrierClusteringPhase.h; path = dfg/DFGStoreBarrierClusteringPhase.h; sourceTree = "<group>"; };
                0F8023E91613832300A0BA45 /* ByValInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ByValInfo.h; sourceTree = "<group>"; };
                                0FA762011DB9242300B7A2FD /* CollectionScope.h */,
                                146B14DB12EB5B12001BEC1B /* ConservativeRoots.cpp */,
                                149DAAF212EB559D0083B12B /* ConservativeRoots.h */,
+                               0F7DF12F1E2970D50095951B /* ConstraintVolatility.h */,
                                2A7A58EE1808A4C40020BDF7 /* DeferGC.cpp */,
                                0F136D4B174AD69B0075B354 /* DeferGC.h */,
                                0FBB73BA1DEF8644002C009E /* DeleteAllCodeEffort.h */,
                                14B7234012D7D0DA003BD5ED /* MachineStackMarker.h */,
                                C2B916C414DA040C00CBAC86 /* MarkedAllocator.cpp */,
                                C2B916C114DA014E00CBAC86 /* MarkedAllocator.h */,
+                               0F7DF1451E2BEF680095951B /* MarkedAllocatorInlines.h */,
                                142D6F0613539A2800B02E86 /* MarkedBlock.cpp */,
                                142D6F0713539A2800B02E86 /* MarkedBlock.h */,
                                0F7C5FB71D888A010044F5E2 /* MarkedBlockInlines.h */,
                                141448CA13A176EC00F5BA1A /* MarkedBlockSet.h */,
                                14D2F3D8139F4BE200491031 /* MarkedSpace.cpp */,
                                14D2F3D9139F4BE200491031 /* MarkedSpace.h */,
-                               0F7C5FB91D8895050044F5E2 /* MarkedSpaceInlines.h */,
+                               0F7DF1301E2970D50095951B /* MarkedSpaceInlines.h */,
                                0F660E331E0517B70031462C /* MarkingConstraint.cpp */,
                                0F660E341E0517B70031462C /* MarkingConstraint.h */,
                                0F660E351E0517B70031462C /* MarkingConstraintSet.cpp */,
                                0F7CF9511DC027D70098CC12 /* StopIfNecessaryTimer.h */,
                                142E3132134FF0A600AFADB5 /* Strong.h */,
                                145722851437E140005FDE26 /* StrongInlines.h */,
+                               0F7DF1311E2970D50095951B /* Subspace.cpp */,
+                               0F7DF1321E2970D50095951B /* Subspace.h */,
+                               0F7DF1331E2970D50095951B /* SubspaceInlines.h */,
                                0F1FB38A1E173A6200A9BE50 /* SynchronousStopTheWorldMutatorScheduler.cpp */,
                                0F1FB38B1E173A6200A9BE50 /* SynchronousStopTheWorldMutatorScheduler.h */,
                                141448CC13A1783700F5BA1A /* TinyBloomFilter.h */,
                                E178633F0D9BEC0000D74E75 /* InitializeThreading.h */,
                                E35E035D1B7AB43E0073AD2A /* InspectorInstrumentationObject.cpp */,
                                E35E035E1B7AB43E0073AD2A /* InspectorInstrumentationObject.h */,
+                               A7A8AF2B17ADB5F3005AB174 /* Int8Array.h */,
                                A7A8AF2C17ADB5F3005AB174 /* Int16Array.h */,
                                A7A8AF2D17ADB5F3005AB174 /* Int32Array.h */,
-                               A7A8AF2B17ADB5F3005AB174 /* Int8Array.h */,
                                BC9BB95B0E19680600DF8855 /* InternalFunction.cpp */,
                                BC11667A0E199C05008066DD /* InternalFunction.h */,
                                A1B9E2331B4E0D6700BC7FED /* IntlCollator.cpp */,
                                9788FC221471AD0C0068CE2D /* JSDateMath.cpp */,
                                9788FC231471AD0C0068CE2D /* JSDateMath.h */,
                                C2A7F687160432D400F76B98 /* JSDestructibleObject.h */,
+                               0F7DF1391E29710E0095951B /* JSDestructibleObjectSubspace.cpp */,
+                               0F7DF13A1E29710E0095951B /* JSDestructibleObjectSubspace.h */,
                                BC22A39A0E16E14800AF21C8 /* JSEnvironmentRecord.cpp */,
                                14F252560D08DD8D004ECFFF /* JSEnvironmentRecord.h */,
                                A7B4ACAE1484C9CE00B38A36 /* JSExportMacros.h */,
                                BC756FC60E2031B200DE7D12 /* JSGlobalObjectFunctions.cpp */,
                                BC756FC70E2031B200DE7D12 /* JSGlobalObjectFunctions.h */,
                                79B819921DD25CF500DDC714 /* JSGlobalObjectInlines.h */,
+                               0F2B66C917B6B5AB00A7AE3F /* JSInt8Array.h */,
                                0F2B66CA17B6B5AB00A7AE3F /* JSInt16Array.h */,
                                0F2B66CB17B6B5AB00A7AE3F /* JSInt32Array.h */,
-                               0F2B66C917B6B5AB00A7AE3F /* JSInt8Array.h */,
                                E33F507E1B8429A400413856 /* JSInternalPromise.cpp */,
                                E33F507F1B8429A400413856 /* JSInternalPromise.h */,
                                E33F50761B84225700413856 /* JSInternalPromiseConstructor.cpp */,
                                70EC0EBD1AA0D7DA00B6AAFA /* JSStringIterator.h */,
                                2600B5A4152BAAA70091EE5F /* JSStringJoiner.cpp */,
                                2600B5A5152BAAA70091EE5F /* JSStringJoiner.h */,
+                               0F7DF13D1E2AFC4B0095951B /* JSStringSubspace.cpp */,
+                               0F7DF13E1E2AFC4B0095951B /* JSStringSubspace.h */,
                                0F919D09157EE09D004A4E7D /* JSSymbolTableObject.cpp */,
                                0F919D0A157EE09D004A4E7D /* JSSymbolTableObject.h */,
                                70ECA6001AFDBEA200449739 /* JSTemplateRegistryKey.cpp */,
                                53F256E11B87E28000B4B768 /* JSTypedArrayViewPrototype.cpp */,
                                53917E7C1B791106000EBD33 /* JSTypedArrayViewPrototype.h */,
                                6507D2970E871E4A00D7D896 /* JSTypeInfo.h */,
-                               0F2B66D417B6B5AB00A7AE3F /* JSUint16Array.h */,
-                               0F2B66D517B6B5AB00A7AE3F /* JSUint32Array.h */,
                                0F2B66D217B6B5AB00A7AE3F /* JSUint8Array.h */,
                                0F2B66D317B6B5AB00A7AE3F /* JSUint8ClampedArray.h */,
+                               0F2B66D417B6B5AB00A7AE3F /* JSUint16Array.h */,
+                               0F2B66D517B6B5AB00A7AE3F /* JSUint32Array.h */,
                                A7CA3AE117DA41AE006538AF /* JSWeakMap.cpp */,
                                A7CA3AE217DA41AE006538AF /* JSWeakMap.h */,
                                709FB8611AE335C60039D069 /* JSWeakSet.cpp */,
                                0F5B4A321C84F0D600F1B17E /* SlowPathReturnType.h */,
                                93303FE80E6A72B500786E6A /* SmallStrings.cpp */,
                                93303FEA0E6A72C000786E6A /* SmallStrings.h */,
+                               425BA1337E4344E1B269A671 /* SourceOrigin.h */,
                                0F0CD4C315F6B6B50032F1C0 /* SparseArrayValueMap.cpp */,
                                0FB7F39215ED8E3800F167B2 /* SparseArrayValueMap.h */,
                                0F3AC751183EA1040032029F /* StackAlignment.h */,
                                0F2D4DE019832D91007D4B19 /* TypeProfilerLog.h */,
                                0F2D4DE319832D91007D4B19 /* TypeSet.cpp */,
                                0F2D4DE419832D91007D4B19 /* TypeSet.h */,
+                               A7A8AF3017ADB5F3005AB174 /* Uint8Array.h */,
+                               A7A8AF3117ADB5F3005AB174 /* Uint8ClampedArray.h */,
                                A7A8AF3217ADB5F3005AB174 /* Uint16Array.h */,
                                866739D113BFDE710023D87C /* Uint16WithFraction.h */,
                                A7A8AF3317ADB5F3005AB174 /* Uint32Array.h */,
-                               A7A8AF3017ADB5F3005AB174 /* Uint8Array.h */,
-                               A7A8AF3117ADB5F3005AB174 /* Uint8ClampedArray.h */,
                                0FE050231AA9095600D33B33 /* VarOffset.cpp */,
                                0FE050241AA9095600D33B33 /* VarOffset.h */,
                                E18E3A570DF9278C00D90B34 /* VM.cpp */,
                                709FB8661AE335C60039D069 /* WeakSetPrototype.h */,
                                A7DCB77912E3D90500911940 /* WriteBarrier.h */,
                                C2B6D75218A33793004A9301 /* WriteBarrierInlines.h */,
-                               425BA1337E4344E1B269A671 /* SourceOrigin.h */,
                                F73926918DC64330AFCDF0D7 /* JSSourceCode.cpp */,
                                3032175DF1AD47D8998B34E1 /* JSSourceCode.h */,
                                11C197C2624848EDA84CED7F /* JSScriptFetcher.cpp */,
                                BCD203E80E1718F4002C7E82 /* DatePrototype.lut.h in Headers */,
                                BC18C3FA0E16F5CD00B34460 /* Debugger.h in Headers */,
                                BC18C3FB0E16F5CD00B34460 /* DebuggerCallFrame.h in Headers */,
+                               0F7DF1351E2970DC0095951B /* MarkedSpaceInlines.h in Headers */,
                                6AD2CB4D19B9140100065719 /* DebuggerEvalEnabler.h in Headers */,
                                A5FC84B21D1DDAD6006B5C46 /* DebuggerLocation.h in Headers */,
                                A5A1A0941D8CB33E004C2EB8 /* DebuggerParseData.h in Headers */,
                                0FC0977114693AF500CF2442 /* DFGOSRExitCompiler.h in Headers */,
                                0F7025AA1714B0FC00382C0E /* DFGOSRExitCompilerCommon.h in Headers */,
                                0F392C8A1B46188400844728 /* DFGOSRExitFuzz.h in Headers */,
+                               0F7DF13F1E2AFC4D0095951B /* JSStringSubspace.h in Headers */,
                                0FEFC9AB1681A3B600567F53 /* DFGOSRExitJumpPlaceholder.h in Headers */,
                                0F235BEE17178E7300690C7F /* DFGOSRExitPreparation.h in Headers */,
                                0F6237981AE45CA700D402EA /* DFGPhantomInsertionPhase.h in Headers */,
                                A54E8EB018BFFBBB00556D28 /* GCSegmentedArray.h in Headers */,
                                A54E8EB118BFFBBE00556D28 /* GCSegmentedArrayInlines.h in Headers */,
                                0F86A26F1D6F7B3300CB0C92 /* GCTypeMap.h in Headers */,
+                               0F7DF1381E2970E40095951B /* SubspaceInlines.h in Headers */,
                                9959E9311BD18272001AA413 /* generate-combined-inspector-json.py in Headers */,
                                C4703CC0192844960013FBEA /* generate-inspector-protocol-bindings.py in Headers */,
+                               0F7DF1461E2BEF6A0095951B /* MarkedAllocatorInlines.h in Headers */,
                                99DA00AF1BD5994E00F4575C /* generate-js-builtins.py in Headers */,
                                A5EA70EC19F5B3EA0098F5EC /* generate_cpp_alternate_backend_dispatcher_header.py in Headers */,
                                A5EF9B141A1D43F600702E90 /* generate_cpp_backend_dispatcher_header.py in Headers */,
                                FE187A021BFBE5610038BBCA /* JITMulGenerator.h in Headers */,
                                FE99B2491C24C3D300C82159 /* JITNegGenerator.h in Headers */,
                                0F24E54D17EE274900ABB217 /* JITOperations.h in Headers */,
+                               0F7DF1371E2970E10095951B /* Subspace.h in Headers */,
                                FE3A06C01C11041A00390FDD /* JITRightShiftGenerator.h in Headers */,
                                0F766D3115AA8112008F363E /* JITStubRoutine.h in Headers */,
                                0F766D2C15A8CC3A008F363E /* JITStubRoutineSet.h in Headers */,
                                7C184E1F17BEE22E007CB63A /* JSPromisePrototype.h in Headers */,
                                996B731F1BDA08EF00331B84 /* JSPromisePrototype.lut.h in Headers */,
                                2A05ABD61961DF2400341750 /* JSPropertyNameEnumerator.h in Headers */,
+                               0F7DF13C1E2971130095951B /* JSDestructibleObjectSubspace.h in Headers */,
                                E3EF88751B66DF23003F26CB /* JSPropertyNameIterator.h in Headers */,
                                862553D216136E1A009F17D0 /* JSProxy.h in Headers */,
                                A552C3801ADDB8FE00139726 /* JSRemoteInspector.h in Headers */,
                                A7C0C4AC168103020017011D /* JSScriptRefPrivate.h in Headers */,
                                0F919D11157F332C004A4E7D /* JSSegmentedVariableObject.h in Headers */,
                                A7299D9E17D12837005F5FF9 /* JSSet.h in Headers */,
+                               0F7DF1341E2970D70095951B /* ConstraintVolatility.h in Headers */,
                                A790DD70182F499700588807 /* JSSetIterator.h in Headers */,
                                BC18C4270E16F5CD00B34460 /* JSString.h in Headers */,
                                86E85539111B9968001AF51E /* JSStringBuilder.h in Headers */,
                                0F7C5FB81D888A0C0044F5E2 /* MarkedBlockInlines.h in Headers */,
                                141448CB13A176EC00F5BA1A /* MarkedBlockSet.h in Headers */,
                                14D2F3DB139F4BE200491031 /* MarkedSpace.h in Headers */,
-                               0F7C5FBA1D8895070044F5E2 /* MarkedSpaceInlines.h in Headers */,
                                142D6F1213539A4100B02E86 /* MarkStack.h in Headers */,
                                8612E4CD152389EC00C836BE /* MatchResult.h in Headers */,
                                4340A4851A9051AF00D73CCA /* MathCommon.h in Headers */,
                                0F7F988B1D9596C500F4F12E /* DFGStoreBarrierClusteringPhase.cpp in Sources */,
                                0F9E32631B05AB0400801ED5 /* DFGStoreBarrierInsertionPhase.cpp in Sources */,
                                0FC20CB51852E2C600C9E954 /* DFGStrengthReductionPhase.cpp in Sources */,
+                               0F7DF13B1E2971110095951B /* JSDestructibleObjectSubspace.cpp in Sources */,
                                0F893BDB1936E23C001211F4 /* DFGStructureAbstractValue.cpp in Sources */,
                                0F79085519A290B200F6310C /* DFGStructureRegistrationPhase.cpp in Sources */,
                                0F2FCCFE18A60070001A27F8 /* DFGThreadData.cpp in Sources */,
                                5B70CFDF1DB69E6600EC23F9 /* JSAsyncFunction.cpp in Sources */,
                                1421359B0A677F4F00A8195E /* JSBase.cpp in Sources */,
                                86FA9E91142BBB2E001773B7 /* JSBoundFunction.cpp in Sources */,
+                               0F7DF1401E2AFC500095951B /* JSStringSubspace.cpp in Sources */,
                                1440F8AF0A508D200005F061 /* JSCallbackConstructor.cpp in Sources */,
                                1440F8920A508B100005F061 /* JSCallbackFunction.cpp in Sources */,
                                14ABDF600A437FEF00ECCA01 /* JSCallbackObject.cpp in Sources */,
                                A5C3A1A518C0490200C9593A /* JSGlobalObjectConsoleClient.cpp in Sources */,
                                A59455921824744700CC3843 /* JSGlobalObjectDebuggable.cpp in Sources */,
                                A57D23E91891B0770031C7FA /* JSGlobalObjectDebuggerAgent.cpp in Sources */,
+                               0F7DF1361E2970DF0095951B /* Subspace.cpp in Sources */,
                                ADE8029B1E08F1DE0058DE78 /* WebAssemblyLinkErrorPrototype.cpp in Sources */,
                                14E9D17B107EC469004DDA21 /* JSGlobalObjectFunctions.cpp in Sources */,
                                A51007C0187CC3C600B38879 /* JSGlobalObjectInspectorController.cpp in Sources */,
index 843c9ee..c4da4c7 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -82,7 +82,7 @@ public:
         ASSERT(inlineCapacity <= JSFinalObject::maxInlineCapacity());
 
         size_t allocationSize = JSFinalObject::allocationSize(inlineCapacity);
-        MarkedAllocator* allocator = vm.heap.allocatorForObjectWithoutDestructor(allocationSize);
+        MarkedAllocator* allocator = vm.cellSpace.allocatorFor(allocationSize);
         
         // Take advantage of extra inline capacity available in the size class.
         if (allocator) {
index bc62ec1..3cd1984 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -1264,7 +1264,7 @@ void AccessCase::generateImpl(AccessGenerationState& state)
             size_t newSize = newStructure()->outOfLineCapacity() * sizeof(JSValue);
             
             if (allocatingInline) {
-                MarkedAllocator* allocator = vm.heap.allocatorForAuxiliaryData(newSize);
+                MarkedAllocator* allocator = vm.auxiliarySpace.allocatorFor(newSize);
                 
                 if (!allocator) {
                     // Yuck, this case would suck!
index e98e121..05d76fa 100644 (file)
@@ -110,7 +110,7 @@ void SpeculativeJIT::emitAllocateRawObject(GPRReg resultGPR, Structure* structur
     m_jit.move(TrustedImmPtr(0), storageGPR);
     
     if (size) {
-        if (MarkedAllocator* allocator = m_jit.vm()->heap.allocatorForAuxiliaryData(size)) {
+        if (MarkedAllocator* allocator = m_jit.vm()->auxiliarySpace.allocatorFor(size)) {
             m_jit.move(TrustedImmPtr(allocator), scratchGPR);
             m_jit.emitAllocate(storageGPR, allocator, scratchGPR, scratch2GPR, slowCases);
             
@@ -125,7 +125,7 @@ void SpeculativeJIT::emitAllocateRawObject(GPRReg resultGPR, Structure* structur
     }
 
     size_t allocationSize = JSFinalObject::allocationSize(inlineCapacity);
-    MarkedAllocator* allocatorPtr = m_jit.vm()->heap.allocatorForObjectWithoutDestructor(allocationSize);
+    MarkedAllocator* allocatorPtr = subspaceFor<JSFinalObject>(*m_jit.vm())->allocatorFor(allocationSize);
     if (allocatorPtr) {
         m_jit.move(TrustedImmPtr(allocatorPtr), scratchGPR);
         emitAllocateJSObject(resultGPR, allocatorPtr, scratchGPR, TrustedImmPtr(structure), storageGPR, scratch2GPR, slowCases);
@@ -3897,7 +3897,7 @@ void SpeculativeJIT::compileMakeRope(Node* node)
     GPRReg scratchGPR = scratch.gpr();
     
     JITCompiler::JumpList slowPath;
-    MarkedAllocator* markedAllocator = m_jit.vm()->heap.allocatorForObjectWithDestructor(sizeof(JSRopeString));
+    MarkedAllocator* markedAllocator = subspaceFor<JSString>(*m_jit.vm())->allocatorFor(sizeof(JSRopeString));
     RELEASE_ASSERT(markedAllocator);
     m_jit.move(TrustedImmPtr(markedAllocator), allocatorGPR);
     emitAllocateJSCell(resultGPR, markedAllocator, allocatorGPR, TrustedImmPtr(m_jit.vm()->stringStructure.get()), scratchGPR, slowPath);
@@ -7555,7 +7555,7 @@ void SpeculativeJIT::compileAllocatePropertyStorage(Node* node)
     
     size_t size = initialOutOfLineCapacity * sizeof(JSValue);
 
-    MarkedAllocator* allocator = m_jit.vm()->heap.allocatorForAuxiliaryData(size);
+    MarkedAllocator* allocator = m_jit.vm()->auxiliarySpace.allocatorFor(size);
 
     if (!allocator || node->transition()->previous->couldHaveIndexingHeader()) {
         SpeculateCellOperand base(this, node->child1());
@@ -7600,7 +7600,7 @@ void SpeculativeJIT::compileReallocatePropertyStorage(Node* node)
     size_t newSize = oldSize * outOfLineGrowthFactor;
     ASSERT(newSize == node->transition()->next->outOfLineCapacity() * sizeof(JSValue));
     
-    MarkedAllocator* allocator = m_jit.vm()->heap.allocatorForAuxiliaryData(newSize);
+    MarkedAllocator* allocator = m_jit.vm()->auxiliarySpace.allocatorFor(newSize);
 
     if (!allocator || node->transition()->previous->couldHaveIndexingHeader()) {
         SpeculateCellOperand base(this, node->child1());
@@ -7986,7 +7986,7 @@ void SpeculativeJIT::compileNewTypedArray(Node* node)
         m_jit.and32(TrustedImm32(~7), scratchGPR);
     }
     m_jit.emitAllocateVariableSized(
-        storageGPR, m_jit.vm()->heap.subspaceForAuxiliaryData(), scratchGPR, scratchGPR,
+        storageGPR, m_jit.vm()->auxiliarySpace, scratchGPR, scratchGPR,
         scratchGPR2, slowCases);
     
     MacroAssembler::Jump done = m_jit.branchTest32(MacroAssembler::Zero, sizeGPR);
@@ -9564,7 +9564,7 @@ void SpeculativeJIT::emitAllocateButterfly(GPRReg storageResultGPR, GPRReg sizeG
     m_jit.lshift32(TrustedImm32(3), scratch1);
     m_jit.add32(TrustedImm32(sizeof(IndexingHeader)), scratch1, scratch2);
     m_jit.emitAllocateVariableSized(
-        storageResultGPR, m_jit.vm()->heap.subspaceForAuxiliaryData(), scratch2, scratch1, scratch3, slowCases);
+        storageResultGPR, m_jit.vm()->auxiliarySpace, scratch2, scratch1, scratch3, slowCases);
     m_jit.addPtr(TrustedImm32(sizeof(IndexingHeader)), storageResultGPR);
 
     m_jit.store32(sizeGPR, MacroAssembler::Address(storageResultGPR, Butterfly::offsetOfPublicLength()));
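For reference, a minimal sketch (not part of the patch) of the allocator-lookup idiom the JIT hunks above migrate to. The subspaceFor<>(), allocatorFor(), and VM::auxiliarySpace names are taken from the diff itself; the wrapper functions, the include choice, and the ExampleClient namespace are illustrative only.

    #include "JSCInlines.h"

    namespace ExampleClient {

    // Cell allocations now ask the type's Subspace for an allocator instead of going
    // through Heap::allocatorForObjectWithoutDestructor().
    JSC::MarkedAllocator* finalObjectAllocator(JSC::VM& vm, size_t allocationSize)
    {
        return JSC::subspaceFor<JSC::JSFinalObject>(vm)->allocatorFor(allocationSize);
    }

    // Butterfly/auxiliary allocations ask the dedicated auxiliary Subspace directly,
    // replacing Heap::allocatorForAuxiliaryData().
    JSC::MarkedAllocator* auxiliaryAllocator(JSC::VM& vm, size_t bytes)
    {
        return vm.auxiliarySpace.allocatorFor(bytes);
    }

    } // namespace ExampleClient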
index 9cde8da..ab09229 100644 (file)
@@ -4173,7 +4173,7 @@ void SpeculativeJIT::compile(Node* node)
         
         Structure* structure = node->structure();
         size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
-        MarkedAllocator* allocatorPtr = m_jit.vm()->heap.allocatorForObjectWithoutDestructor(allocationSize);
+        MarkedAllocator* allocatorPtr = subspaceFor<JSFinalObject>(*m_jit.vm())->allocatorFor(allocationSize);
 
         m_jit.move(TrustedImmPtr(allocatorPtr), allocatorGPR);
         emitAllocateJSObject(resultGPR, allocatorPtr, allocatorGPR, TrustedImmPtr(structure), TrustedImmPtr(0), scratchGPR, slowPath);
index 606064c..3594e49 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -4145,7 +4145,7 @@ void SpeculativeJIT::compile(Node* node)
 
         Structure* structure = node->structure();
         size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
-        MarkedAllocator* allocatorPtr = m_jit.vm()->heap.allocatorForObjectWithoutDestructor(allocationSize);
+        MarkedAllocator* allocatorPtr = subspaceFor<JSFinalObject>(*m_jit.vm())->allocatorFor(allocationSize);
 
         m_jit.move(TrustedImmPtr(allocatorPtr), allocatorGPR);
         emitAllocateJSObject(resultGPR, allocatorPtr, allocatorGPR, TrustedImmPtr(structure), TrustedImmPtr(0), scratchGPR, slowPath);
index 0880698..4268179 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -124,8 +124,8 @@ namespace JSC { namespace FTL {
     macro(JSEnvironmentRecord_variables, JSEnvironmentRecord::offsetOfVariables(), sizeof(EncodedJSValue)) \
     macro(JSPropertyNameEnumerator_cachedPropertyNamesVectorContents, 0, sizeof(WriteBarrier<JSString>)) \
     macro(JSRopeString_fibers, JSRopeString::offsetOfFibers(), sizeof(WriteBarrier<JSString>)) \
-    macro(MarkedSpace_Subspace_allocatorForSizeStep, OBJECT_OFFSETOF(MarkedSpace::Subspace, allocatorForSizeStep), sizeof(MarkedAllocator*)) \
     macro(ScopedArguments_overflowStorage, ScopedArguments::overflowStorageOffset(), sizeof(EncodedJSValue)) \
+    macro(Subspace_allocatorForSizeStep, Subspace::offsetOfAllocatorForSizeStep(), sizeof(MarkedAllocator*)) \
     macro(WriteBarrierBuffer_bufferContents, 0, sizeof(JSCell*)) \
     macro(characters8, 0, sizeof(LChar)) \
     macro(characters16, 0, sizeof(UChar)) \
index 8db38d9..f3ecad1 100644 (file)
@@ -4751,8 +4751,7 @@ private:
                     m_out.constIntPtr(~static_cast<intptr_t>(7)));
             }
         
-            LValue allocator = allocatorForSize(
-                vm().heap.subspaceForAuxiliaryData(), byteSize, slowCase);
+            LValue allocator = allocatorForSize(vm().auxiliarySpace, byteSize, slowCase);
             LValue storage = allocateHeapCell(allocator, slowCase);
             
             splatWords(
@@ -4993,8 +4992,7 @@ private:
         
         LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
         
-        MarkedAllocator* allocator =
-            vm().heap.allocatorForObjectWithDestructor(sizeof(JSRopeString));
+        MarkedAllocator* allocator = subspaceFor<JSRopeString>(vm())->allocatorFor(sizeof(JSRopeString));
         DFG_ASSERT(m_graph, m_node, allocator);
         
         LValue result = allocateCell(
@@ -8636,7 +8634,7 @@ private:
             
             if (structure->outOfLineCapacity() || hasIndexedProperties(structure->indexingType())) {
                 size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
-                MarkedAllocator* cellAllocator = vm().heap.allocatorForObjectWithoutDestructor(allocationSize);
+                MarkedAllocator* cellAllocator = subspaceFor<JSFinalObject>(vm())->allocatorFor(allocationSize);
                 DFG_ASSERT(m_graph, m_node, cellAllocator);
 
                 bool hasIndexingHeader = hasIndexedProperties(structure->indexingType());
@@ -8676,7 +8674,7 @@ private:
                 ValueFromBlock noButterfly = m_out.anchor(m_out.intPtrZero);
                 
                 LValue startOfStorage = allocateHeapCell(
-                    allocatorForSize(vm().heap.subspaceForAuxiliaryData(), butterflySize, slowPath),
+                    allocatorForSize(vm().auxiliarySpace, butterflySize, slowPath),
                     slowPath);
 
                 LValue fastButterflyValue = m_out.add(
@@ -9622,7 +9620,7 @@ private:
         LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
 
         size_t sizeInBytes = sizeInValues * sizeof(JSValue);
-        MarkedAllocator* allocator = vm().heap.allocatorForAuxiliaryData(sizeInBytes);
+        MarkedAllocator* allocator = vm().auxiliarySpace.allocatorFor(sizeInBytes);
         LValue startOfStorage = allocateHeapCell(m_out.constIntPtr(allocator), slowPath);
         ValueFromBlock fastButterfly = m_out.anchor(
             m_out.add(m_out.constIntPtr(sizeInBytes + sizeof(IndexingHeader)), startOfStorage));
@@ -10530,7 +10528,7 @@ private:
     LValue allocateObject(
         size_t size, StructureType structure, LValue butterfly, LBasicBlock slowPath)
     {
-        MarkedAllocator* allocator = vm().heap.allocatorForObjectOfType<ClassType>(size);
+        MarkedAllocator* allocator = subspaceFor<ClassType>(vm())->allocatorFor(size);
         return allocateObject(m_out.constIntPtr(allocator), structure, butterfly, slowPath);
     }
     
@@ -10547,10 +10545,10 @@ private:
         
         // Try to do some constant-folding here.
         if (subspace->hasIntPtr() && size->hasIntPtr()) {
-            MarkedSpace::Subspace* actualSubspace = bitwise_cast<MarkedSpace::Subspace*>(subspace->asIntPtr());
+            Subspace* actualSubspace = bitwise_cast<Subspace*>(subspace->asIntPtr());
             size_t actualSize = size->asIntPtr();
             
-            MarkedAllocator* actualAllocator = MarkedSpace::allocatorFor(*actualSubspace, actualSize);
+            MarkedAllocator* actualAllocator = actualSubspace->allocatorFor(actualSize);
             if (!actualAllocator) {
                 LBasicBlock continuation = m_out.newBlock();
                 LBasicBlock lastNext = m_out.insertNewBlocksBefore(continuation);
@@ -10580,11 +10578,11 @@ private:
         
         return m_out.loadPtr(
             m_out.baseIndex(
-                m_heaps.MarkedSpace_Subspace_allocatorForSizeStep,
+                m_heaps.Subspace_allocatorForSizeStep,
                 subspace, m_out.sub(sizeClassIndex, m_out.intPtrOne)));
     }
     
-    LValue allocatorForSize(MarkedSpace::Subspace& subspace, LValue size, LBasicBlock slowPath)
+    LValue allocatorForSize(Subspace& subspace, LValue size, LBasicBlock slowPath)
     {
         return allocatorForSize(m_out.constIntPtr(&subspace), size, slowPath);
     }
@@ -10594,7 +10592,7 @@ private:
         LValue size, Structure* structure, LValue butterfly, LBasicBlock slowPath)
     {
         LValue allocator = allocatorForSize(
-            vm().heap.subspaceForObjectOfType<ClassType>(), size, slowPath);
+            *subspaceFor<ClassType>(vm()), size, slowPath);
         return allocateObject(allocator, structure, butterfly, slowPath);
     }
 
@@ -10603,14 +10601,14 @@ private:
         LValue size, Structure* structure, LBasicBlock slowPath)
     {
         LValue allocator = allocatorForSize(
-            vm().heap.subspaceForObjectOfType<ClassType>(), size, slowPath);
+            *subspaceFor<ClassType>(vm()), size, slowPath);
         return allocateCell(allocator, structure, slowPath);
     }
     
     LValue allocateObject(Structure* structure)
     {
         size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
-        MarkedAllocator* allocator = vm().heap.allocatorForObjectWithoutDestructor(allocationSize);
+        MarkedAllocator* allocator = subspaceFor<JSFinalObject>(vm())->allocatorFor(allocationSize);
         
         // FIXME: If the allocator is null, we could simply emit a normal C call to the allocator
         // instead of putting it on the slow path.
@@ -10712,8 +10710,7 @@ private:
         LValue butterflySize = m_out.add(
             payloadSize, m_out.constIntPtr(sizeof(IndexingHeader)));
             
-        LValue allocator = allocatorForSize(
-            vm().heap.subspaceForAuxiliaryData(), butterflySize, failCase);
+        LValue allocator = allocatorForSize(vm().auxiliarySpace, butterflySize, failCase);
         LValue startOfStorage = allocateHeapCell(allocator, failCase);
             
         LValue butterfly = m_out.add(startOfStorage, m_out.constIntPtr(sizeof(IndexingHeader)));
index cb28d1a..6d5299f 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -25,6 +25,7 @@
 
 #pragma once
 
+#include "ConstraintVolatility.h"
 #include "DestructionMode.h"
 #include "HeapCell.h"
 #include <wtf/PrintStream.h>
@@ -34,6 +35,12 @@ namespace JSC {
 struct AllocatorAttributes {
     AllocatorAttributes() { }
     
+    AllocatorAttributes(DestructionMode destruction, HeapCell::Kind cellKind)
+        : destruction(destruction)
+        , cellKind(cellKind)
+    {
+    }
+    
     void dump(PrintStream& out) const;
     
     DestructionMode destruction { DoesNotNeedDestruction };
diff --git a/Source/JavaScriptCore/heap/ConstraintVolatility.h b/Source/JavaScriptCore/heap/ConstraintVolatility.h
new file mode 100644 (file)
index 0000000..5cf986f
--- /dev/null
@@ -0,0 +1,73 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include <wtf/PrintStream.h>
+
+namespace JSC {
+
+enum class ConstraintVolatility : uint8_t {
+    // The constraint needs to be validated, but it is unlikely to ever produce information.
+    // It's best to run it at the bitter end.
+    SeldomGreyed,
+    
+    // FIXME: We could introduce a new kind of volatility called GreyedByResumption, which
+    // would mean running all of the times that GreyedByExecution runs except as a root in a
+    // full GC.
+    // https://bugs.webkit.org/show_bug.cgi?id=166830
+    
+    // The constraint needs to be reevaluated anytime the mutator runs: so at GC start and
+    // whenever the GC resuspends after a resumption. This is almost always something that
+    // you'd call a "root" in a traditional GC.
+    GreyedByExecution,
+    
+    // The constraint needs to be reevaluated any time any object is marked and anytime the
+    // mutator resumes.
+    GreyedByMarking
+};
+    
+} // namespace JSC
+
+namespace WTF {
+
+inline void printInternal(PrintStream& out, JSC::ConstraintVolatility volatility)
+{
+    switch (volatility) {
+    case JSC::ConstraintVolatility::SeldomGreyed:
+        out.print("SeldomGreyed");
+        return;
+    case JSC::ConstraintVolatility::GreyedByExecution:
+        out.print("GreyedByExecuction");
+        return;
+    case JSC::ConstraintVolatility::GreyedByMarking:
+        out.print("GreyedByMarking");
+        return;
+    }
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
+} // namespace WTF
+
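The WTF::printInternal() overload above is the standard hook that lets ConstraintVolatility values be passed straight to WTF's dataLog()/PrintStream machinery, which the GC logging in this patch leans on. A tiny, hypothetical usage sketch (the function name is not from the patch):

    #include "ConstraintVolatility.h"
    #include <wtf/DataLog.h>

    static void logChosenVolatility(JSC::ConstraintVolatility volatility)
    {
        // Prints, e.g., "constraint volatility: SeldomGreyed" via the printInternal() overload.
        WTF::dataLog("constraint volatility: ", volatility, "\n");
    }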
index d17200a..e55dc6e 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2010 Apple Inc. All rights reserved.
+ * Copyright (C) 2010-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -29,7 +29,7 @@
 #include "config.h"
 #include "GCActivityCallback.h"
 
-#include "Heap.h"
+#include "HeapInlines.h"
 #include "JSLock.h"
 #include "JSObject.h"
 #include "VM.h"
index 828e0f8..5ffde68 100644 (file)
@@ -192,7 +192,7 @@ public:
             double timing = after - m_before;
             SimpleStats& stats = timingStats(m_name, *m_scope);
             stats.add(timing);
-            dataLog("[GC:", *m_scope, "] ", m_name, " took: ", timing, " ms (average ", stats.mean(), " ms).\n");
+            dataLog("[GC:", *m_scope, "] ", m_name, " took: ", timing, "ms (average ", stats.mean(), "ms).\n");
         }
     }
 private:
@@ -260,6 +260,7 @@ Heap::Heap(VM* vm, HeapType heapType)
     , m_collectorSlotVisitor(std::make_unique<SlotVisitor>(*this))
     , m_mutatorMarkStack(std::make_unique<MarkStackArray>())
     , m_raceMarkStack(std::make_unique<MarkStackArray>())
+    , m_constraintSet(std::make_unique<MarkingConstraintSet>())
     , m_handleSet(vm)
     , m_codeBlocks(std::make_unique<CodeBlockSet>())
     , m_jitStubRoutines(std::make_unique<JITStubRoutineSet>())
@@ -361,6 +362,7 @@ void Heap::lastChanceToFinalize()
     
     m_arrayBuffers.lastChanceToFinalize();
     m_codeBlocks->lastChanceToFinalize();
+    m_objectSpace.stopAllocating();
     m_objectSpace.lastChanceToFinalize();
     releaseDelayedReleasedObjects();
 
@@ -585,13 +587,21 @@ void Heap::markToFixpoint(double gcStartTime)
     // checks because bootstrap would have put things into the visitor. So, we should fall
     // through to draining.
     
-    for (unsigned iteration = 1; ; ++iteration) {
+    unsigned iteration = 1;
+    for (;;) {
         if (Options::logGC())
-            dataLog("i#", iteration, " b=", m_barriersExecuted, " ");
+            dataLog("v=", bytesVisited() / 1024, "kb o=", m_opaqueRoots.size(), " b=", m_barriersExecuted, " ");
         
         if (slotVisitor.didReachTermination()) {
+            if (Options::logGC())
+                dataLog("i#", iteration, " ");
+        
             assertSharedMarkStacksEmpty();
             
+            slotVisitor.mergeIfNecessary();
+            for (auto& parallelVisitor : m_parallelSlotVisitors)
+                parallelVisitor->mergeIfNecessary();
+            
             // FIXME: Take m_mutatorDidRun into account when scheduling constraints. Most likely,
             // we don't have to execute root constraints again unless the mutator did run. At a
             // minimum, we could use this for work estimates - but it's probably more than just an
@@ -603,14 +613,15 @@ void Heap::markToFixpoint(double gcStartTime)
             // when we have deep stacks or a lot of DOM stuff.
             // https://bugs.webkit.org/show_bug.cgi?id=166831
             
-            bool executedEverything =
+            bool converged =
                 m_constraintSet->executeConvergence(slotVisitor, MonotonicTime::infinity());
-            if (executedEverything && slotVisitor.isEmpty()) {
+            if (converged && slotVisitor.isEmpty()) {
                 assertSharedMarkStacksEmpty();
                 break;
             }
             
             m_scheduler->didExecuteConstraints();
+            iteration++;
         }
         
         if (Options::logGC())
@@ -628,7 +639,7 @@ void Heap::markToFixpoint(double gcStartTime)
         
         if (Options::logGC()) {
             double thisPauseMS = (MonotonicTime::now() - m_stopTime).milliseconds();
-            dataLog("p=", thisPauseMS, " ms (max ", maxPauseMS(thisPauseMS), ")...]\n");
+            dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), ")...]\n");
         }
         
         resumeTheWorld();
@@ -780,9 +791,7 @@ void Heap::updateObjectCounts(double gcStartTime)
     if (m_collectionScope == CollectionScope::Full)
         m_totalBytesVisited = 0;
 
-    m_totalBytesVisitedThisCycle =
-        m_collectorSlotVisitor->bytesVisited() +
-        threadBytesVisited();
+    m_totalBytesVisitedThisCycle = bytesVisited();
     
     m_totalBytesVisited += m_totalBytesVisitedThisCycle;
 }
@@ -1011,14 +1020,14 @@ void Heap::collectAllGarbage()
     else {
         double before = 0;
         if (Options::logGC()) {
-            dataLog("[Full sweep: ", capacity() / 1024, " kb ");
+            dataLog("[Full sweep: ", capacity() / 1024, "kb ");
             before = currentTimeMS();
         }
         m_objectSpace.sweep();
         m_objectSpace.shrink();
         if (Options::logGC()) {
             double after = currentTimeMS();
-            dataLog("=> ", capacity() / 1024, " kb, ", after - before, " ms]\n");
+            dataLog("=> ", capacity() / 1024, "kb, ", after - before, "ms]\n");
         }
     }
     m_objectSpace.assertNoUnswept();
@@ -1101,7 +1110,7 @@ void Heap::collectInThread()
 
     MonotonicTime before;
     if (Options::logGC()) {
-        dataLog("[GC: START ", capacity() / 1024, " kb ");
+        dataLog("[GC: START ", capacity() / 1024, "kb ");
         before = MonotonicTime::now();
     }
     
@@ -1170,7 +1179,7 @@ void Heap::collectInThread()
     if (Options::logGC()) {
         MonotonicTime after = MonotonicTime::now();
         double thisPauseMS = (after - m_stopTime).milliseconds();
-        dataLog("p=", thisPauseMS, " ms (max ", maxPauseMS(thisPauseMS), "), cycle ", (after - before).milliseconds(), " ms END]\n");
+        dataLog("p=", thisPauseMS, "ms (max ", maxPauseMS(thisPauseMS), "), cycle ", (after - before).milliseconds(), "ms END]\n");
     }
     
     {
@@ -1196,6 +1205,12 @@ void Heap::stopTheWorld()
     RELEASE_ASSERT(!m_collectorBelievesThatTheWorldIsStopped);
     waitWhileNeedFinalize();
     stopTheMutator();
+    
+    if (m_mutatorDidRun)
+        m_mutatorExecutionVersion++;
+    
+    m_mutatorDidRun = false;
+    
     suspendCompilerThreads();
     m_collectorBelievesThatTheWorldIsStopped = true;
 
@@ -1769,7 +1784,7 @@ void Heap::updateAllocationLimits()
     m_bytesAllocatedThisCycle = 0;
 
     if (Options::logGC())
-        dataLog("=> ", currentHeapSize / 1024, " kb, ");
+        dataLog("=> ", currentHeapSize / 1024, "kb, ");
 }
 
 void Heap::didFinishCollection(double gcStartTime)
@@ -1976,6 +1991,11 @@ size_t Heap::threadVisitCount()
     return result;
 }
 
+size_t Heap::bytesVisited()
+{
+    return m_collectorSlotVisitor->bytesVisited() + threadBytesVisited();
+}
+
 size_t Heap::threadBytesVisited()
 {       
     size_t result = 0;
@@ -2125,10 +2145,8 @@ void Heap::setRunLoop(CFRunLoopRef runLoop)
 }
 #endif // USE(CF)
 
-void Heap::buildConstraintSet()
+void Heap::addCoreConstraints()
 {
-    m_constraintSet = std::make_unique<MarkingConstraintSet>();
-    
     m_constraintSet->add(
         "Cs", "Conservative Scan",
         [this] (SlotVisitor& slotVisitor, const VisitingTimeout&) {
@@ -2141,7 +2159,7 @@ void Heap::buildConstraintSet()
             gatherScratchBufferRoots(conservativeRoots);
             slotVisitor.append(conservativeRoots);
         },
-        MarkingConstraint::GreyedByExecution);
+        ConstraintVolatility::GreyedByExecution);
     
     m_constraintSet->add(
         "Msr", "Misc Small Roots",
@@ -2162,7 +2180,7 @@ void Heap::buildConstraintSet()
             slotVisitor.appendUnbarriered(m_vm->exception());
             slotVisitor.appendUnbarriered(m_vm->lastException());
         },
-        MarkingConstraint::GreyedByExecution);
+        ConstraintVolatility::GreyedByExecution);
     
     m_constraintSet->add(
         "Sh", "Strong Handles",
@@ -2170,7 +2188,7 @@ void Heap::buildConstraintSet()
             m_handleSet.visitStrongHandles(slotVisitor);
             m_handleStack.visit(slotVisitor);
         },
-        MarkingConstraint::GreyedByExecution);
+        ConstraintVolatility::GreyedByExecution);
     
     m_constraintSet->add(
         "D", "Debugger",
@@ -2190,25 +2208,21 @@ void Heap::buildConstraintSet()
             
             m_vm->shadowChicken().visitChildren(slotVisitor);
         },
-        MarkingConstraint::GreyedByExecution);
+        ConstraintVolatility::GreyedByExecution);
     
     m_constraintSet->add(
         "Jsr", "JIT Stub Routines",
         [this] (SlotVisitor& slotVisitor, const VisitingTimeout&) {
             m_jitStubRoutines->traceMarkedStubRoutines(slotVisitor);
         },
-        MarkingConstraint::GreyedByExecution);
+        ConstraintVolatility::GreyedByExecution);
     
     m_constraintSet->add(
         "Ws", "Weak Sets",
         [this] (SlotVisitor& slotVisitor, const VisitingTimeout&) {
-            slotVisitor.mergeOpaqueRootsIfNecessary();
-            for (auto& parallelVisitor : m_parallelSlotVisitors)
-                parallelVisitor->mergeOpaqueRootsIfNecessary();
-            
             m_objectSpace.visitWeakSets(slotVisitor);
         },
-        MarkingConstraint::GreyedByMarking);
+        ConstraintVolatility::GreyedByMarking);
     
     m_constraintSet->add(
         "Wrh", "Weak Reference Harvesters",
@@ -2216,7 +2230,7 @@ void Heap::buildConstraintSet()
             for (WeakReferenceHarvester* current = m_weakReferenceHarvesters.head(); current; current = current->next())
                 current->visitWeakReferences(slotVisitor);
         },
-        MarkingConstraint::GreyedByMarking);
+        ConstraintVolatility::GreyedByMarking);
     
 #if ENABLE(DFG_JIT)
     m_constraintSet->add(
@@ -2236,7 +2250,7 @@ void Heap::buildConstraintSet()
             if (Options::logGC() == GCLogging::Verbose)
                 dataLog("DFG Worklists:\n", slotVisitor);
         },
-        MarkingConstraint::GreyedByMarking);
+        ConstraintVolatility::GreyedByMarking);
 #endif
     
     m_constraintSet->add(
@@ -2250,7 +2264,7 @@ void Heap::buildConstraintSet()
                         slotVisitor.visitAsConstraint(codeBlock);
                 });
         },
-        MarkingConstraint::GreyedByExecution);
+        ConstraintVolatility::SeldomGreyed);
     
     m_constraintSet->add(
         "Mrms", "Mutator+Race Mark Stack",
@@ -2268,12 +2282,18 @@ void Heap::buildConstraintSet()
         [this] (SlotVisitor&) -> double {
             return m_mutatorMarkStack->size() + m_raceMarkStack->size();
         },
-        MarkingConstraint::GreyedByExecution);
+        ConstraintVolatility::GreyedByExecution);
+}
+
+void Heap::addMarkingConstraint(std::unique_ptr<MarkingConstraint> constraint)
+{
+    PreventCollectionScope preventCollectionScope(*this);
+    m_constraintSet->add(WTFMove(constraint));
 }
 
 void Heap::notifyIsSafeToCollect()
 {
-    buildConstraintSet();
+    addCoreConstraints();
     
     m_isSafeToCollect = true;
     
@@ -2311,6 +2331,9 @@ void Heap::notifyIsSafeToCollect()
 
 void Heap::preventCollection()
 {
+    if (!m_isSafeToCollect)
+        return;
+    
     // This prevents the collectContinuously thread from starting a collection.
     m_collectContinuouslyLock.lock();
     
@@ -2327,6 +2350,9 @@ void Heap::preventCollection()
 
 void Heap::allowCollection()
 {
+    if (!m_isSafeToCollect)
+        return;
+    
     m_collectContinuouslyLock.unlock();
 }
 
@@ -2339,27 +2365,10 @@ void Heap::forEachSlotVisitor(const Func& func)
         func(*slotVisitor);
 }
 
-void Heap::writeBarrierOpaqueRootSlow(void* root)
-{
-    ASSERT(mutatorShouldBeFenced());
-    
-    auto locker = holdLock(m_opaqueRootsMutex);
-    m_opaqueRoots.add(root);
-}
-
-void Heap::addMutatorShouldBeFencedCache(bool& cache)
-{
-    ASSERT(hasHeapAccess());
-    cache = m_mutatorShouldBeFenced;
-    m_mutatorShouldBeFencedCaches.append(&cache);
-}
-
 void Heap::setMutatorShouldBeFenced(bool value)
 {
     m_mutatorShouldBeFenced = value;
     m_barrierThreshold = value ? tautologicalThreshold : blackThreshold;
-    for (bool* cache : m_mutatorShouldBeFencedCaches)
-        *cache = value;
 }
     
 } // namespace JSC
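A hedged sketch (not part of the patch) of how an internal C++ API client could register its own output constraint through the new Heap::addMarkingConstraint() added above. It assumes MarkingConstraint can be constructed from the same (abbreviated name, full name, functor, volatility) arguments that the MarkingConstraintSet::add() calls in this file pass; that constructor is not shown in this diff, so treat the exact signature as an assumption. The constraint name and body are purely illustrative.

    #include "Heap.h"
    #include "MarkingConstraint.h"
    #include <memory>

    static void registerExampleOutputConstraint(JSC::Heap& heap)
    {
        heap.addMarkingConstraint(std::make_unique<JSC::MarkingConstraint>(
            "Exo", "Example Output Constraint",
            [] (JSC::SlotVisitor& slotVisitor, const JSC::VisitingTimeout&) {
                // Re-visit output edges that visitChildren() alone cannot pin down, such as
                // opaque roots an object may add or drop as the program runs.
                UNUSED_PARAM(slotVisitor);
            },
            // SeldomGreyed: only needs to hold by the snapshot-at-the-end validation, so the
            // GC need not treat it as an eager root.
            JSC::ConstraintVolatility::SeldomGreyed));
    }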
index 8447588..12df6c3 100644 (file)
@@ -31,7 +31,6 @@
 #include "HeapObserver.h"
 #include "ListableHandler.h"
 #include "MachineStackMarker.h"
-#include "MarkedAllocator.h"
 #include "MarkedBlock.h"
 #include "MarkedBlockSet.h"
 #include "MarkedSpace.h"
@@ -71,7 +70,9 @@ class JSCell;
 class JSValue;
 class LLIntOffsetsExtractor;
 class MarkStackArray;
+class MarkedAllocator;
 class MarkedArgumentBuffer;
+class MarkingConstraint;
 class MarkingConstraintSet;
 class MutatorScheduler;
 class SlotVisitor;
@@ -124,17 +125,16 @@ public:
     // Take this if you know that from->cellState() < barrierThreshold.
     JS_EXPORT_PRIVATE void writeBarrierSlowPath(const JSCell* from);
 
-    void writeBarrierOpaqueRoot(void*);
-
     Heap(VM*, HeapType);
     ~Heap();
     void lastChanceToFinalize();
     void releaseDelayedReleasedObjects();
 
+    VM* vm() const;
+
     // Set a hard limit where JSC will crash if live heap size exceeds it.
     void setMaxLiveSize(size_t size) { m_maxLiveSize = size; }
 
-    VM* vm() const { return m_vm; }
     MarkedSpace& objectSpace() { return m_objectSpace; }
     MachineThreads& machineThreads() { return m_machineThreads; }
 
@@ -159,20 +159,6 @@ public:
     // helping heap.
     JS_EXPORT_PRIVATE bool isCurrentThreadBusy();
     
-    MarkedSpace::Subspace& subspaceForObjectWithoutDestructor() { return m_objectSpace.subspaceForObjectsWithoutDestructor(); }
-    MarkedSpace::Subspace& subspaceForObjectDestructor() { return m_objectSpace.subspaceForObjectsWithDestructor(); }
-    MarkedSpace::Subspace& subspaceForAuxiliaryData() { return m_objectSpace.subspaceForAuxiliaryData(); }
-    template<typename ClassType> MarkedSpace::Subspace& subspaceForObjectOfType();
-    MarkedAllocator* allocatorForObjectWithoutDestructor(size_t bytes) { return m_objectSpace.allocatorFor(bytes); }
-    MarkedAllocator* allocatorForObjectWithDestructor(size_t bytes) { return m_objectSpace.destructorAllocatorFor(bytes); }
-    template<typename ClassType> MarkedAllocator* allocatorForObjectOfType(size_t bytes);
-    MarkedAllocator* allocatorForAuxiliaryData(size_t bytes) { return m_objectSpace.auxiliaryAllocatorFor(bytes); }
-    void* allocateAuxiliary(JSCell* intendedOwner, size_t);
-    void* tryAllocateAuxiliary(JSCell* intendedOwner, size_t);
-    void* tryAllocateAuxiliary(GCDeferralContext*, JSCell* intendedOwner, size_t);
-    void* tryReallocateAuxiliary(JSCell* intendedOwner, void* oldBase, size_t oldSize, size_t newSize);
-    void ascribeOwner(JSCell* intendedOwner, void*);
-
     typedef void (*Finalizer)(JSCell*);
     JS_EXPORT_PRIVATE void addFinalizer(JSCell*, Finalizer);
     void addExecutable(ExecutableBase*);
@@ -350,8 +336,14 @@ public:
     void preventCollection();
     void allowCollection();
     
-    JS_EXPORT_PRIVATE void addMutatorShouldBeFencedCache(bool&);
+    size_t bytesVisited();
+    
+    uint64_t mutatorExecutionVersion() const { return m_mutatorExecutionVersion; }
+    
+    JS_EXPORT_PRIVATE void addMarkingConstraint(std::unique_ptr<MarkingConstraint>);
     
+    size_t numOpaqueRoots() const { return m_opaqueRoots.size(); }
+
 #if USE(CF)
     CFRunLoopRef runLoop() const { return m_runLoop.get(); }
     JS_EXPORT_PRIVATE void setRunLoop(CFRunLoopRef);
@@ -384,18 +376,6 @@ private:
     class Thread;
     friend class Thread;
 
-    template<typename T> friend void* allocateCell(Heap&);
-    template<typename T> friend void* allocateCell(Heap&, size_t);
-    template<typename T> friend void* allocateCell(Heap&, GCDeferralContext*);
-    template<typename T> friend void* allocateCell(Heap&, GCDeferralContext*, size_t);
-
-    void* allocateWithDestructor(size_t); // For use with objects with destructors.
-    void* allocateWithoutDestructor(size_t); // For use with objects without destructors.
-    void* allocateWithDestructor(GCDeferralContext*, size_t);
-    void* allocateWithoutDestructor(GCDeferralContext*, size_t);
-    template<typename ClassType> void* allocateObjectOfType(size_t); // Chooses one of the methods above based on type.
-    template<typename ClassType> void* allocateObjectOfType(GCDeferralContext*, size_t);
-
     static const size_t minExtraMemory = 256;
     
     class FinalizerOwner : public WeakHandleOwner {
@@ -490,11 +470,9 @@ private:
     
     void forEachCodeBlockImpl(const ScopedLambda<bool(CodeBlock*)>&);
     
-    JS_EXPORT_PRIVATE void writeBarrierOpaqueRootSlow(void*);
-    
     void setMutatorShouldBeFenced(bool value);
     
-    void buildConstraintSet();
+    void addCoreConstraints();
     
     template<typename Func>
     void iterateExecutingAndCompilingCodeBlocks(const Func&);
@@ -566,7 +544,6 @@ private:
 
     bool m_mutatorShouldBeFenced { Options::forceFencedBarrier() };
     unsigned m_barrierThreshold { Options::forceFencedBarrier() ? tautologicalThreshold : blackThreshold };
-    Vector<bool*> m_mutatorShouldBeFencedCaches;
 
     VM* m_vm;
     double m_lastFullGCLength;
@@ -610,7 +587,7 @@ private:
     bool m_parallelMarkersShouldExit { false };
 
     Lock m_opaqueRootsMutex;
-    HashSet<void*> m_opaqueRoots;
+    HashSet<const void*> m_opaqueRoots;
 
     static const size_t s_blockFragmentLength = 32;
 
@@ -645,6 +622,7 @@ private:
     bool m_threadShouldStop { false };
     bool m_threadIsStopping { false };
     bool m_mutatorDidRun { true };
+    uint64_t m_mutatorExecutionVersion { 0 };
     Box<Lock> m_threadLock;
     RefPtr<AutomaticThreadCondition> m_threadCondition; // The mutator must not wait on this. It would cause a deadlock.
     RefPtr<AutomaticThread> m_thread;
index 3e01134..c592b0d 100644 (file)
 
 namespace JSC {
 
+ALWAYS_INLINE VM* Heap::vm() const
+{
+    return bitwise_cast<VM*>(bitwise_cast<uintptr_t>(this) - OBJECT_OFFSETOF(VM, heap));
+}
+
 ALWAYS_INLINE Heap* Heap::heap(const HeapCell* cell)
 {
     return cell->heap();
@@ -175,136 +180,6 @@ template<typename Functor> inline void Heap::forEachProtectedCell(const Functor&
     m_handleSet.forEachStrongHandle(functor, m_protectedValues);
 }
 
-inline void* Heap::allocateWithDestructor(size_t bytes)
-{
-#if ENABLE(ALLOCATION_LOGGING)
-    dataLogF("JSC GC allocating %lu bytes with normal destructor.\n", bytes);
-#endif
-    ASSERT(isValidAllocation(bytes));
-    return m_objectSpace.allocateWithDestructor(bytes);
-}
-
-inline void* Heap::allocateWithoutDestructor(size_t bytes)
-{
-#if ENABLE(ALLOCATION_LOGGING)
-    dataLogF("JSC GC allocating %lu bytes without destructor.\n", bytes);
-#endif
-    ASSERT(isValidAllocation(bytes));
-    return m_objectSpace.allocateWithoutDestructor(bytes);
-}
-
-inline void* Heap::allocateWithDestructor(GCDeferralContext* deferralContext, size_t bytes)
-{
-    ASSERT(isValidAllocation(bytes));
-    return m_objectSpace.allocateWithDestructor(deferralContext, bytes);
-}
-
-inline void* Heap::allocateWithoutDestructor(GCDeferralContext* deferralContext, size_t bytes)
-{
-    ASSERT(isValidAllocation(bytes));
-    return m_objectSpace.allocateWithoutDestructor(deferralContext, bytes);
-}
-
-template<typename ClassType>
-inline void* Heap::allocateObjectOfType(size_t bytes)
-{
-    // JSCell::classInfo() expects objects allocated with normal destructor to derive from JSDestructibleObject.
-    ASSERT((!ClassType::needsDestruction || (ClassType::StructureFlags & StructureIsImmortal) || std::is_convertible<ClassType, JSDestructibleObject>::value));
-
-    if (ClassType::needsDestruction)
-        return allocateWithDestructor(bytes);
-    return allocateWithoutDestructor(bytes);
-}
-
-template<typename ClassType>
-inline void* Heap::allocateObjectOfType(GCDeferralContext* deferralContext, size_t bytes)
-{
-    ASSERT((!ClassType::needsDestruction || (ClassType::StructureFlags & StructureIsImmortal) || std::is_convertible<ClassType, JSDestructibleObject>::value));
-
-    if (ClassType::needsDestruction)
-        return allocateWithDestructor(deferralContext, bytes);
-    return allocateWithoutDestructor(deferralContext, bytes);
-}
-
-template<typename ClassType>
-inline MarkedSpace::Subspace& Heap::subspaceForObjectOfType()
-{
-    // JSCell::classInfo() expects objects allocated with normal destructor to derive from JSDestructibleObject.
-    ASSERT((!ClassType::needsDestruction || (ClassType::StructureFlags & StructureIsImmortal) || std::is_convertible<ClassType, JSDestructibleObject>::value));
-    
-    if (ClassType::needsDestruction)
-        return subspaceForObjectDestructor();
-    return subspaceForObjectWithoutDestructor();
-}
-
-template<typename ClassType>
-inline MarkedAllocator* Heap::allocatorForObjectOfType(size_t bytes)
-{
-    // JSCell::classInfo() expects objects allocated with normal destructor to derive from JSDestructibleObject.
-    ASSERT((!ClassType::needsDestruction || (ClassType::StructureFlags & StructureIsImmortal) || std::is_convertible<ClassType, JSDestructibleObject>::value));
-
-    MarkedAllocator* result;
-    if (ClassType::needsDestruction)
-        result = allocatorForObjectWithDestructor(bytes);
-    else
-        result = allocatorForObjectWithoutDestructor(bytes);
-    
-    ASSERT(result || !ClassType::info()->isSubClassOf(JSCallee::info()));
-    return result;
-}
-
-inline void* Heap::allocateAuxiliary(JSCell* intendedOwner, size_t bytes)
-{
-    void* result = m_objectSpace.allocateAuxiliary(bytes);
-#if ENABLE(ALLOCATION_LOGGING)
-    dataLogF("JSC GC allocating %lu bytes of auxiliary for %p: %p.\n", bytes, intendedOwner, result);
-#else
-    UNUSED_PARAM(intendedOwner);
-#endif
-    return result;
-}
-
-inline void* Heap::tryAllocateAuxiliary(JSCell* intendedOwner, size_t bytes)
-{
-    void* result = m_objectSpace.tryAllocateAuxiliary(bytes);
-#if ENABLE(ALLOCATION_LOGGING)
-    dataLogF("JSC GC allocating %lu bytes of auxiliary for %p: %p.\n", bytes, intendedOwner, result);
-#else
-    UNUSED_PARAM(intendedOwner);
-#endif
-    return result;
-}
-
-inline void* Heap::tryAllocateAuxiliary(GCDeferralContext* deferralContext, JSCell* intendedOwner, size_t bytes)
-{
-    void* result = m_objectSpace.tryAllocateAuxiliary(deferralContext, bytes);
-#if ENABLE(ALLOCATION_LOGGING)
-    dataLogF("JSC GC allocating %lu bytes of auxiliary for %p: %p.\n", bytes, intendedOwner, result);
-#else
-    UNUSED_PARAM(intendedOwner);
-#endif
-    return result;
-}
-
-inline void* Heap::tryReallocateAuxiliary(JSCell* intendedOwner, void* oldBase, size_t oldSize, size_t newSize)
-{
-    void* newBase = tryAllocateAuxiliary(intendedOwner, newSize);
-    if (!newBase)
-        return nullptr;
-    memcpy(newBase, oldBase, oldSize);
-    return newBase;
-}
-
-inline void Heap::ascribeOwner(JSCell* intendedOwner, void* storage)
-{
-#if ENABLE(ALLOCATION_LOGGING)
-    dataLogF("JSC GC ascribing %p as owner of storage %p.\n", intendedOwner, storage);
-#else
-    UNUSED_PARAM(intendedOwner);
-    UNUSED_PARAM(storage);
-#endif
-}
-
 #if USE(FOUNDATION)
 template <typename T>
 inline void Heap::releaseSoon(RetainPtr<T>&& object)
@@ -405,10 +280,4 @@ inline void Heap::stopIfNecessary()
         stopIfNecessarySlow();
 }
 
-inline void Heap::writeBarrierOpaqueRoot(void* root)
-{
-    if (mutatorShouldBeFenced())
-        writeBarrierOpaqueRootSlow(root);
-}
-
 } // namespace JSC
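The hunk above deletes Heap's type-switched allocation helpers (allocateWithDestructor(), allocateObjectOfType(), subspaceForObjectOfType(), and the auxiliary allocators). A rough sketch of the direction of the replacement; note that Subspace's own allocation entry points do not appear in this diff, so the allocate() call below is an assumption, while subspaceFor<>() and allocatorFor() do appear in the hunks of this patch.

    // Hypothetical caller-side shape after the patch: the cell class, not the Heap, decides
    // which Subspace (and therefore which destruction/sweep behavior) is used.
    template<typename CellType>
    void* exampleAllocateCell(JSC::VM& vm, size_t size)
    {
        return JSC::subspaceFor<CellType>(vm)->allocate(size); // allocate() assumed, see note above
    }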
index fa1ebd5..839c616 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 
 namespace JSC {
 
-LargeAllocation* LargeAllocation::tryCreate(Heap& heap, size_t size, const AllocatorAttributes& attributes)
+LargeAllocation* LargeAllocation::tryCreate(Heap& heap, size_t size, Subspace* subspace)
 {
     void* space = tryFastAlignedMalloc(alignment, headerSize() + size);
     if (!space)
         return nullptr;
     if (scribbleFreeCells())
         scribble(space, size);
-    return new (NotNull, space) LargeAllocation(heap, size, attributes);
+    return new (NotNull, space) LargeAllocation(heap, size, subspace);
 }
 
-LargeAllocation::LargeAllocation(Heap& heap, size_t size, const AllocatorAttributes& attributes)
+LargeAllocation::LargeAllocation(Heap& heap, size_t size, Subspace* subspace)
     : m_cellSize(size)
     , m_isNewlyAllocated(true)
     , m_hasValidCell(true)
-    , m_attributes(attributes)
+    , m_attributes(subspace->attributes())
+    , m_subspace(subspace)
     , m_weakSet(heap.vm(), *this)
 {
     m_isMarked.store(0);
 }
 
+LargeAllocation::~LargeAllocation()
+{
+    if (isOnList())
+        remove();
+}
+
 void LargeAllocation::lastChanceToFinalize()
 {
     m_weakSet.lastChanceToFinalize();
@@ -92,7 +99,7 @@ void LargeAllocation::sweep()
     
     if (m_hasValidCell && !isLive()) {
         if (m_attributes.destruction == NeedsDestruction)
-            static_cast<JSCell*>(cell())->callDestructor(*vm());
+            m_subspace->destroy(*vm(), static_cast<JSCell*>(cell()));
         m_hasValidCell = false;
     }
 }
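
With destruction routed through m_subspace->destroy(), large allocations use the same destructor dispatch as block-allocated cells. A sketch of what one destructor-owning subspace's hook could look like; the class name and body are illustrative assumptions, not taken from this patch:

    void JSDestructibleObjectSubspace::destroy(VM&, JSCell* cell)
    {
        // Assumed shape: dispatch through the method table, much like the old
        // JSCell::callDestructor() path did for JSDestructibleObject subclasses.
        static_cast<JSDestructibleObject*>(cell)->classInfo()->methodTable.destroy(cell);
    }
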
index f00d598..528575b 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -37,9 +37,11 @@ class SlotVisitor;
 // objects directly using malloc, and put the LargeAllocation header just before them. We can detect
 // when a HeapCell* is a LargeAllocation because it will have the MarkedBlock::atomSize / 2 bit set.
 
-class LargeAllocation {
+class LargeAllocation : public BasicRawSentinelNode<LargeAllocation> {
 public:
-    static LargeAllocation* tryCreate(Heap&, size_t, const AllocatorAttributes&);
+    static LargeAllocation* tryCreate(Heap&, size_t, Subspace*);
+    
+    ~LargeAllocation();
     
     static LargeAllocation* fromCell(const void* cell)
     {
@@ -136,7 +138,7 @@ public:
     void dump(PrintStream&) const;
     
 private:
-    LargeAllocation(Heap&, size_t, const AllocatorAttributes&);
+    LargeAllocation(Heap&, size_t, Subspace*);
     
     static const unsigned alignment = MarkedBlock::atomSize;
     static const unsigned halfAlignment = alignment / 2;
@@ -148,6 +150,7 @@ private:
     bool m_hasValidCell;
     Atomic<bool> m_isMarked;
     AllocatorAttributes m_attributes;
+    Subspace* m_subspace;
     WeakSet m_weakSet;
 };
 
index 101760b..952db02 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2013, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -31,6 +31,7 @@
 #include "Heap.h"
 #include "IncrementalSweeper.h"
 #include "JSCInlines.h"
+#include "MarkedAllocatorInlines.h"
 #include "MarkedBlockInlines.h"
 #include "SuperSampler.h"
 #include "VM.h"
 
 namespace JSC {
 
-MarkedAllocator::MarkedAllocator(Heap* heap, MarkedSpace* markedSpace, size_t cellSize, const AllocatorAttributes& attributes)
+MarkedAllocator::MarkedAllocator(Heap* heap, Subspace* subspace, size_t cellSize)
     : m_currentBlock(0)
     , m_lastActiveBlock(0)
     , m_cellSize(static_cast<unsigned>(cellSize))
-    , m_attributes(attributes)
+    , m_attributes(subspace->attributes())
     , m_heap(heap)
-    , m_markedSpace(markedSpace)
+    , m_subspace(subspace)
 {
 }
 
@@ -111,7 +112,7 @@ void* MarkedAllocator::tryAllocateWithoutCollecting()
     
     if (Options::stealEmptyBlocksFromOtherAllocators()
         && shouldStealEmptyBlocksFromOtherAllocators()) {
-        if (MarkedBlock::Handle* block = m_markedSpace->findEmptyBlockToSteal()) {
+        if (MarkedBlock::Handle* block = markedSpace().findEmptyBlockToSteal()) {
             block->sweep();
             
             // It's good that this clears canAllocateButNotEmpty as well as all other bits,
@@ -166,7 +167,7 @@ void* MarkedAllocator::tryAllocateIn(MarkedBlock::Handle* block)
     }
     RELEASE_ASSERT(result);
     setIsEden(NoLockingNecessary, m_currentBlock, true);
-    m_markedSpace->didAllocateInBlock(m_currentBlock);
+    markedSpace().didAllocateInBlock(m_currentBlock);
     return result;
 }
 
@@ -206,7 +207,7 @@ void* MarkedAllocator::allocateSlowCaseImpl(GCDeferralContext* deferralContext,
     ASSERT(m_heap->vm()->currentThreadIsHoldingAPILock());
     doTestCollectionsIfNeeded(deferralContext);
 
-    ASSERT(!m_markedSpace->isIterating());
+    ASSERT(!markedSpace().isIterating());
     m_heap->didAllocate(m_freeList.originalSize);
     
     didConsumeFreeList();
@@ -254,7 +255,7 @@ MarkedBlock::Handle* MarkedAllocator::tryAllocateBlock()
     if (!handle)
         return nullptr;
     
-    m_markedSpace->didAddBlock(handle);
+    markedSpace().didAddBlock(handle);
     
     return handle;
 }
@@ -444,7 +445,7 @@ void MarkedAllocator::shrink()
 {
     m_empty.forEachSetBit(
         [&] (size_t index) {
-            m_markedSpace->freeBlock(m_blocks[index]);
+            markedSpace().freeBlock(m_blocks[index]);
         });
 }
 
@@ -486,5 +487,10 @@ void MarkedAllocator::dumpBits(PrintStream& out)
         });
 }
 
+MarkedSpace& MarkedAllocator::markedSpace() const
+{
+    return m_subspace->space();
+}
+
 } // namespace JSC
 
index 96bf656..09903e4 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2013, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -135,7 +135,7 @@ public:
     static ptrdiff_t offsetOfFreeList();
     static ptrdiff_t offsetOfCellSize();
 
-    MarkedAllocator(Heap*, MarkedSpace*, size_t cellSize, const AllocatorAttributes&);
+    MarkedAllocator(Heap*, Subspace*, size_t cellSize);
     void lastChanceToFinalize();
     void prepareForAllocation();
     void stopAllocating();
@@ -163,6 +163,7 @@ public:
     }
     
     template<typename Functor> void forEachBlock(const Functor&);
+    template<typename Functor> void forEachNotEmptyBlock(const Functor&);
     
     void addBlock(MarkedBlock::Handle*);
     void removeBlock(MarkedBlock::Handle*);
@@ -200,14 +201,18 @@ public:
     }
     
     MarkedAllocator* nextAllocator() const { return m_nextAllocator; }
+    MarkedAllocator* nextAllocatorInSubspace() const { return m_nextAllocatorInSubspace; }
     
-    // MarkedSpace calls this during init.
     void setNextAllocator(MarkedAllocator* allocator) { m_nextAllocator = allocator; }
+    void setNextAllocatorInSubspace(MarkedAllocator* allocator) { m_nextAllocatorInSubspace = allocator; }
     
     MarkedBlock::Handle* findEmptyBlockToSteal();
     
     MarkedBlock::Handle* findBlockToSweep();
     
+    Subspace* subspace() const { return m_subspace; }
+    MarkedSpace& markedSpace() const;
+    
     void dump(PrintStream&) const;
     void dumpBits(PrintStream& = WTF::dataFile());
     
@@ -253,9 +258,12 @@ private:
     Lock m_lock;
     unsigned m_cellSize;
     AllocatorAttributes m_attributes;
+    // FIXME: All of these should probably be references.
+    // https://bugs.webkit.org/show_bug.cgi?id=166988
     Heap* m_heap;
-    MarkedSpace* m_markedSpace;
-    MarkedAllocator* m_nextAllocator;
+    Subspace* m_subspace;
+    MarkedAllocator* m_nextAllocator { nullptr };
+    MarkedAllocator* m_nextAllocatorInSubspace { nullptr };
 };
 
 inline ptrdiff_t MarkedAllocator::offsetOfFreeList()
@@ -268,48 +276,4 @@ inline ptrdiff_t MarkedAllocator::offsetOfCellSize()
     return OBJECT_OFFSETOF(MarkedAllocator, m_cellSize);
 }
 
-ALWAYS_INLINE void* MarkedAllocator::tryAllocate(GCDeferralContext* deferralContext)
-{
-    unsigned remaining = m_freeList.remaining;
-    if (remaining) {
-        unsigned cellSize = m_cellSize;
-        remaining -= cellSize;
-        m_freeList.remaining = remaining;
-        return m_freeList.payloadEnd - remaining - cellSize;
-    }
-    
-    FreeCell* head = m_freeList.head;
-    if (UNLIKELY(!head))
-        return tryAllocateSlowCase(deferralContext);
-    
-    m_freeList.head = head->next;
-    return head;
-}
-
-ALWAYS_INLINE void* MarkedAllocator::allocate(GCDeferralContext* deferralContext)
-{
-    unsigned remaining = m_freeList.remaining;
-    if (remaining) {
-        unsigned cellSize = m_cellSize;
-        remaining -= cellSize;
-        m_freeList.remaining = remaining;
-        return m_freeList.payloadEnd - remaining - cellSize;
-    }
-    
-    FreeCell* head = m_freeList.head;
-    if (UNLIKELY(!head))
-        return allocateSlowCase(deferralContext);
-    
-    m_freeList.head = head->next;
-    return head;
-}
-
-template <typename Functor> inline void MarkedAllocator::forEachBlock(const Functor& functor)
-{
-    m_live.forEachSetBit(
-        [&] (size_t index) {
-            functor(m_blocks[index]);
-        });
-}
-
 } // namespace JSC
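
MarkedAllocator now carries its Subspace and sits on two chains: the global allocator list threaded through setNextAllocator(), and a per-subspace list threaded through setNextAllocatorInSubspace(). A sketch of how a subspace could walk only its own allocators, assuming it keeps a head pointer (m_firstAllocator here is a hypothetical member, not from this patch):

    template<typename Func>
    void Subspace::forEachMarkedBlock(const Func& func)
    {
        // Visit only this subspace's allocators, skipping the rest of MarkedSpace.
        for (MarkedAllocator* allocator = m_firstAllocator; allocator;
             allocator = allocator->nextAllocatorInSubspace())
            allocator->forEachBlock(func);
    }
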
diff --git a/Source/JavaScriptCore/heap/MarkedAllocatorInlines.h b/Source/JavaScriptCore/heap/MarkedAllocatorInlines.h
new file mode 100644
index 0000000..bd9d707
--- /dev/null
@@ -0,0 +1,85 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "MarkedAllocator.h"
+
+namespace JSC {
+
+ALWAYS_INLINE void* MarkedAllocator::tryAllocate(GCDeferralContext* deferralContext)
+{
+    unsigned remaining = m_freeList.remaining;
+    if (remaining) {
+        unsigned cellSize = m_cellSize;
+        remaining -= cellSize;
+        m_freeList.remaining = remaining;
+        return m_freeList.payloadEnd - remaining - cellSize;
+    }
+    
+    FreeCell* head = m_freeList.head;
+    if (UNLIKELY(!head))
+        return tryAllocateSlowCase(deferralContext);
+    
+    m_freeList.head = head->next;
+    return head;
+}
+
+ALWAYS_INLINE void* MarkedAllocator::allocate(GCDeferralContext* deferralContext)
+{
+    unsigned remaining = m_freeList.remaining;
+    if (remaining) {
+        unsigned cellSize = m_cellSize;
+        remaining -= cellSize;
+        m_freeList.remaining = remaining;
+        return m_freeList.payloadEnd - remaining - cellSize;
+    }
+    
+    FreeCell* head = m_freeList.head;
+    if (UNLIKELY(!head))
+        return allocateSlowCase(deferralContext);
+    
+    m_freeList.head = head->next;
+    return head;
+}
+
+template <typename Functor> inline void MarkedAllocator::forEachBlock(const Functor& functor)
+{
+    m_live.forEachSetBit(
+        [&] (size_t index) {
+            functor(m_blocks[index]);
+        });
+}
+
+template <typename Functor> inline void MarkedAllocator::forEachNotEmptyBlock(const Functor& functor)
+{
+    m_markingNotEmpty.forEachSetBit(
+        [&] (size_t index) {
+            functor(m_blocks[index]);
+        });
+}
+
+} // namespace JSC
+
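
forEachNotEmptyBlock() is what keeps the output-constraint scan cheap: it visits only blocks that have had something marked in them. A sketch of how a subspace-backed marking constraint could combine it with MarkedBlock::Handle::forEachMarkedCell() (defined later in this patch); visitOutputConstraints() and its SlotVisitor argument are assumptions about the JSCell-side API, and the wrapper function is hypothetical:

    // Hypothetical constraint body: re-run output constraints for every marked
    // cell in one of this subspace's not-empty blocks.
    static void scanOutputConstraints(MarkedAllocator* allocator, SlotVisitor& visitor)
    {
        allocator->forEachNotEmptyBlock(
            [&] (MarkedBlock::Handle* handle) {
                handle->forEachMarkedCell(
                    [&] (HeapCell* cell, HeapCell::Kind) {
                        static_cast<JSCell*>(cell)->visitOutputConstraints(visitor);
                        return IterationStatus::Continue;
                    });
            });
    }
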
index 262d501..88c631d 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -89,194 +89,6 @@ MarkedBlock::MarkedBlock(VM& vm, Handle& handle)
         dataLog(RawPointer(this), ": Allocated.\n");
 }
 
-template<MarkedBlock::Handle::EmptyMode emptyMode, MarkedBlock::Handle::SweepMode sweepMode, MarkedBlock::Handle::SweepDestructionMode destructionMode, MarkedBlock::Handle::ScribbleMode scribbleMode, MarkedBlock::Handle::NewlyAllocatedMode newlyAllocatedMode, MarkedBlock::Handle::MarksMode marksMode>
-FreeList MarkedBlock::Handle::specializedSweep()
-{
-    RELEASE_ASSERT(!(destructionMode == BlockHasNoDestructors && sweepMode == SweepOnly));
-    
-    SuperSamplerScope superSamplerScope(false);
-
-    MarkedBlock& block = this->block();
-    
-    if (false)
-        dataLog(RawPointer(this), "/", RawPointer(&block), ": MarkedBlock::Handle::specializedSweep!\n");
-    
-    if (Options::useBumpAllocator()
-        && emptyMode == IsEmpty
-        && newlyAllocatedMode == DoesNotHaveNewlyAllocated) {
-        
-        // This is an incredibly powerful assertion that checks the sanity of our block bits.
-        if (marksMode == MarksNotStale && !block.m_marks.isEmpty()) {
-            WTF::dataFile().atomically(
-                [&] (PrintStream& out) {
-                    out.print("Block ", RawPointer(&block), ": marks not empty!\n");
-                    out.print("Block lock is held: ", block.m_lock.isHeld(), "\n");
-                    out.print("Marking version of block: ", block.m_markingVersion, "\n");
-                    out.print("Marking version of heap: ", space()->markingVersion(), "\n");
-                    UNREACHABLE_FOR_PLATFORM();
-                });
-        }
-        
-        char* startOfLastCell = static_cast<char*>(cellAlign(block.atoms() + m_endAtom - 1));
-        char* payloadEnd = startOfLastCell + cellSize();
-        RELEASE_ASSERT(payloadEnd - MarkedBlock::blockSize <= bitwise_cast<char*>(&block));
-        char* payloadBegin = bitwise_cast<char*>(block.atoms() + firstAtom());
-        if (scribbleMode == Scribble)
-            scribble(payloadBegin, payloadEnd - payloadBegin);
-        if (sweepMode == SweepToFreeList)
-            setIsFreeListed();
-        else
-            m_allocator->setIsEmpty(NoLockingNecessary, this, true);
-        if (space()->isMarking())
-            block.m_lock.unlock();
-        FreeList result = FreeList::bump(payloadEnd, payloadEnd - payloadBegin);
-        if (false)
-            dataLog("Quickly swept block ", RawPointer(this), " with cell size ", cellSize(), " and attributes ", m_attributes, ": ", result, "\n");
-        return result;
-    }
-
-    // This produces a free list that is ordered in reverse through the block.
-    // This is fine, since the allocation code makes no assumptions about the
-    // order of the free list.
-    FreeCell* head = 0;
-    size_t count = 0;
-    bool isEmpty = true;
-    Vector<size_t> deadCells;
-    auto handleDeadCell = [&] (size_t i) {
-        HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&block.atoms()[i]);
-
-        if (destructionMode != BlockHasNoDestructors && emptyMode == NotEmpty)
-            static_cast<JSCell*>(cell)->callDestructor(*vm());
-
-        if (sweepMode == SweepToFreeList) {
-            FreeCell* freeCell = reinterpret_cast_ptr<FreeCell*>(cell);
-            if (scribbleMode == Scribble)
-                scribble(freeCell, cellSize());
-            freeCell->next = head;
-            head = freeCell;
-            ++count;
-        }
-    };
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
-        if (emptyMode == NotEmpty
-            && ((marksMode == MarksNotStale && block.m_marks.get(i))
-                || (newlyAllocatedMode == HasNewlyAllocated && m_newlyAllocated.get(i)))) {
-            isEmpty = false;
-            continue;
-        }
-        
-        if (destructionMode == BlockHasDestructorsAndCollectorIsRunning)
-            deadCells.append(i);
-        else
-            handleDeadCell(i);
-    }
-    
-    // We only want to discard the newlyAllocated bits if we're creating a FreeList,
-    // otherwise we would lose information on what's currently alive.
-    if (sweepMode == SweepToFreeList && newlyAllocatedMode == HasNewlyAllocated)
-        m_newlyAllocatedVersion = MarkedSpace::nullVersion;
-    
-    if (space()->isMarking())
-        block.m_lock.unlock();
-    
-    if (destructionMode == BlockHasDestructorsAndCollectorIsRunning) {
-        for (size_t i : deadCells)
-            handleDeadCell(i);
-    }
-
-    FreeList result = FreeList::list(head, count * cellSize());
-    if (sweepMode == SweepToFreeList)
-        setIsFreeListed();
-    else if (isEmpty)
-        m_allocator->setIsEmpty(NoLockingNecessary, this, true);
-    if (false)
-        dataLog("Slowly swept block ", RawPointer(&block), " with cell size ", cellSize(), " and attributes ", m_attributes, ": ", result, "\n");
-    return result;
-}
-
-FreeList MarkedBlock::Handle::sweep(SweepMode sweepMode)
-{
-    // FIXME: Maybe HelpingGCScope should just be called SweepScope?
-    HelpingGCScope helpingGCScope(*heap());
-    
-    m_allocator->setIsUnswept(NoLockingNecessary, this, false);
-    
-    m_weakSet.sweep();
-
-    if (sweepMode == SweepOnly && m_attributes.destruction == DoesNotNeedDestruction)
-        return FreeList();
-
-    if (UNLIKELY(m_isFreeListed)) {
-        RELEASE_ASSERT(sweepMode == SweepToFreeList);
-        return FreeList();
-    }
-    
-    ASSERT(!m_allocator->isAllocated(NoLockingNecessary, this));
-    
-    if (space()->isMarking())
-        block().m_lock.lock();
-    
-    if (m_attributes.destruction == NeedsDestruction) {
-        if (space()->isMarking())
-            return sweepHelperSelectScribbleMode<BlockHasDestructorsAndCollectorIsRunning>(sweepMode);
-        return sweepHelperSelectScribbleMode<BlockHasDestructors>(sweepMode);
-    }
-    return sweepHelperSelectScribbleMode<BlockHasNoDestructors>(sweepMode);
-}
-
-template<MarkedBlock::Handle::SweepDestructionMode destructionMode>
-FreeList MarkedBlock::Handle::sweepHelperSelectScribbleMode(SweepMode sweepMode)
-{
-    if (scribbleFreeCells())
-        return sweepHelperSelectEmptyMode<destructionMode, Scribble>(sweepMode);
-    return sweepHelperSelectEmptyMode<destructionMode, DontScribble>(sweepMode);
-}
-
-template<MarkedBlock::Handle::SweepDestructionMode destructionMode, MarkedBlock::Handle::ScribbleMode scribbleMode>
-FreeList MarkedBlock::Handle::sweepHelperSelectEmptyMode(SweepMode sweepMode)
-{
-    // It's not obvious, but this is the only way to know if the block is empty. It's the only
-    // bit that captures these caveats:
-    // - It's true when the block is freshly allocated.
-    // - It's true if the block had been swept in the past, all destructors were called, and that
-    //   sweep proved that the block is empty.
-    // - It's false if there are any destructors that need to be called, even if the block has no
-    //   live objects.
-    if (m_allocator->isEmpty(NoLockingNecessary, this))
-        return sweepHelperSelectHasNewlyAllocated<IsEmpty, destructionMode, scribbleMode>(sweepMode);
-    return sweepHelperSelectHasNewlyAllocated<NotEmpty, destructionMode, scribbleMode>(sweepMode);
-}
-
-template<MarkedBlock::Handle::EmptyMode emptyMode, MarkedBlock::Handle::SweepDestructionMode destructionMode, MarkedBlock::Handle::ScribbleMode scribbleMode>
-FreeList MarkedBlock::Handle::sweepHelperSelectHasNewlyAllocated(SweepMode sweepMode)
-{
-    if (hasAnyNewlyAllocated())
-        return sweepHelperSelectSweepMode<emptyMode, destructionMode, scribbleMode, HasNewlyAllocated>(sweepMode);
-    return sweepHelperSelectSweepMode<emptyMode, destructionMode, scribbleMode, DoesNotHaveNewlyAllocated>(sweepMode);
-}
-
-template<MarkedBlock::Handle::EmptyMode emptyMode, MarkedBlock::Handle::SweepDestructionMode destructionMode, MarkedBlock::Handle::ScribbleMode scribbleMode, MarkedBlock::Handle::NewlyAllocatedMode newlyAllocatedMode>
-FreeList MarkedBlock::Handle::sweepHelperSelectSweepMode(SweepMode sweepMode)
-{
-    if (sweepMode == SweepToFreeList)
-        return sweepHelperSelectMarksMode<emptyMode, SweepToFreeList, destructionMode, scribbleMode, newlyAllocatedMode>();
-    return sweepHelperSelectMarksMode<emptyMode, SweepOnly, destructionMode, scribbleMode, newlyAllocatedMode>();
-}
-
-template<MarkedBlock::Handle::EmptyMode emptyMode, MarkedBlock::Handle::SweepMode sweepMode, MarkedBlock::Handle::SweepDestructionMode destructionMode, MarkedBlock::Handle::ScribbleMode scribbleMode, MarkedBlock::Handle::NewlyAllocatedMode newlyAllocatedMode>
-FreeList MarkedBlock::Handle::sweepHelperSelectMarksMode()
-{
-    HeapVersion markingVersion = space()->markingVersion();
-    bool marksAreUseful = !block().areMarksStale(markingVersion);
-    
-    if (space()->isMarking())
-        marksAreUseful |= block().marksConveyLivenessDuringMarking(markingVersion);
-    
-    if (!marksAreUseful)
-        return specializedSweep<emptyMode, sweepMode, destructionMode, scribbleMode, newlyAllocatedMode, MarksStale>();
-    return specializedSweep<emptyMode, sweepMode, destructionMode, scribbleMode, newlyAllocatedMode, MarksNotStale>();
-}
-
 void MarkedBlock::Handle::unsweepWithNoNewlyAllocated()
 {
     RELEASE_ASSERT(m_isFreeListed);
@@ -590,6 +402,86 @@ void MarkedBlock::Handle::dumpState(PrintStream& out)
         });
 }
 
+Subspace* MarkedBlock::Handle::subspace() const
+{
+    return allocator()->subspace();
+}
+
+FreeList MarkedBlock::Handle::sweep(SweepMode sweepMode)
+{
+    // FIXME: Maybe HelpingGCScope should just be called SweepScope?
+    HelpingGCScope helpingGCScope(*heap());
+    
+    m_allocator->setIsUnswept(NoLockingNecessary, this, false);
+    
+    m_weakSet.sweep();
+
+    if (sweepMode == SweepOnly && m_attributes.destruction == DoesNotNeedDestruction)
+        return FreeList();
+
+    if (UNLIKELY(m_isFreeListed)) {
+        RELEASE_ASSERT(sweepMode == SweepToFreeList);
+        return FreeList();
+    }
+    
+    ASSERT(!m_allocator->isAllocated(NoLockingNecessary, this));
+    
+    if (space()->isMarking())
+        block().m_lock.lock();
+    
+    if (m_attributes.destruction == NeedsDestruction)
+        return subspace()->finishSweep(*this, sweepMode);
+    
+    // Handle the no-destructor specializations here, since we have the most of those. This
+    // ensures that they don't get re-specialized for every destructor space.
+    
+    EmptyMode emptyMode = this->emptyMode();
+    ScribbleMode scribbleMode = this->scribbleMode();
+    NewlyAllocatedMode newlyAllocatedMode = this->newlyAllocatedMode();
+    MarksMode marksMode = this->marksMode();
+    
+    FreeList result;
+    auto trySpecialized = [&] () -> bool {
+        if (sweepMode != SweepToFreeList)
+            return false;
+        if (scribbleMode != DontScribble)
+            return false;
+        if (newlyAllocatedMode != DoesNotHaveNewlyAllocated)
+            return false;
+        
+        switch (emptyMode) {
+        case IsEmpty:
+            switch (marksMode) {
+            case MarksNotStale:
+                result = specializedSweep<true, IsEmpty, SweepToFreeList, BlockHasNoDestructors, DontScribble, DoesNotHaveNewlyAllocated, MarksNotStale>(IsEmpty, SweepToFreeList, BlockHasNoDestructors, DontScribble, DoesNotHaveNewlyAllocated, MarksNotStale, [] (VM&, JSCell*) { });
+                return true;
+            case MarksStale:
+                result = specializedSweep<true, IsEmpty, SweepToFreeList, BlockHasNoDestructors, DontScribble, DoesNotHaveNewlyAllocated, MarksStale>(IsEmpty, SweepToFreeList, BlockHasNoDestructors, DontScribble, DoesNotHaveNewlyAllocated, MarksStale, [] (VM&, JSCell*) { });
+                return true;
+            }
+            break;
+        case NotEmpty:
+            switch (marksMode) {
+            case MarksNotStale:
+                result = specializedSweep<true, NotEmpty, SweepToFreeList, BlockHasNoDestructors, DontScribble, DoesNotHaveNewlyAllocated, MarksNotStale>(IsEmpty, SweepToFreeList, BlockHasNoDestructors, DontScribble, DoesNotHaveNewlyAllocated, MarksNotStale, [] (VM&, JSCell*) { });
+                return true;
+            case MarksStale:
+                result = specializedSweep<true, NotEmpty, SweepToFreeList, BlockHasNoDestructors, DontScribble, DoesNotHaveNewlyAllocated, MarksStale>(IsEmpty, SweepToFreeList, BlockHasNoDestructors, DontScribble, DoesNotHaveNewlyAllocated, MarksStale, [] (VM&, JSCell*) { });
+                return true;
+            }
+            break;
+        }
+        
+        return false;
+    };
+    
+    if (trySpecialized())
+        return result;
+
+    // The template arguments don't matter because the first one is false.
+    return specializedSweep<false, IsEmpty, SweepOnly, BlockHasNoDestructors, DontScribble, HasNewlyAllocated, MarksStale>(emptyMode, sweepMode, BlockHasNoDestructors, scribbleMode, newlyAllocatedMode, marksMode, [] (VM&, JSCell*) { });
+}
+
 } // namespace JSC
 
 namespace WTF {
index b0dd4a6..b105941 100644
@@ -1,7 +1,7 @@
 /*
  *  Copyright (C) 1999-2000 Harri Porten (porten@kde.org)
  *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
- *  Copyright (C) 2003-2009, 2011, 2016 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
  *
  *  This library is free software; you can redistribute it and/or
  *  modify it under the terms of the GNU Lesser General Public
@@ -41,6 +41,7 @@ class JSCell;
 class MarkedAllocator;
 class MarkedSpace;
 class SlotVisitor;
+class Subspace;
 
 typedef uintptr_t Bits;
 typedef uint32_t HeapVersion;
@@ -110,6 +111,7 @@ public:
         void lastChanceToFinalize();
 
         MarkedAllocator* allocator() const;
+        Subspace* subspace() const;
         Heap* heap() const;
         inline MarkedSpace* space() const;
         VM* vm() const;
@@ -127,13 +129,17 @@ public:
         enum SweepMode { SweepOnly, SweepToFreeList };
         FreeList sweep(SweepMode = SweepOnly);
         
+        // This is to be called by Subspace.
+        template<typename DestroyFunc>
+        FreeList finishSweepKnowingSubspace(SweepMode, const DestroyFunc&);
+        
         void unsweepWithNoNewlyAllocated();
         
         void zap(const FreeList&);
         
         void shrink();
             
-        unsigned visitWeakSet(SlotVisitor&);
+        void visitWeakSet(SlotVisitor&);
         void reapWeakSet();
             
         // While allocating from a free list, MarkedBlock temporarily has bogus
@@ -174,8 +180,9 @@ public:
         template <typename Functor> IterationStatus forEachCell(const Functor&);
         template <typename Functor> inline IterationStatus forEachLiveCell(const Functor&);
         template <typename Functor> inline IterationStatus forEachDeadCell(const Functor&);
+        template <typename Functor> inline IterationStatus forEachMarkedCell(const Functor&);
             
-        bool areMarksStale();
+        JS_EXPORT_PRIVATE bool areMarksStale();
         
         void assertMarksNotStale();
             
@@ -194,33 +201,20 @@ public:
         Handle(Heap&, void*);
         
         enum SweepDestructionMode { BlockHasNoDestructors, BlockHasDestructors, BlockHasDestructorsAndCollectorIsRunning };
-        
-        template<SweepDestructionMode>
-        FreeList sweepHelperSelectScribbleMode(SweepMode = SweepOnly);
-            
         enum ScribbleMode { DontScribble, Scribble };
-            
-        template<SweepDestructionMode, ScribbleMode>
-        FreeList sweepHelperSelectEmptyMode(SweepMode = SweepOnly);
-            
         enum EmptyMode { IsEmpty, NotEmpty };
-        
-        template<EmptyMode, SweepDestructionMode, ScribbleMode>
-        FreeList sweepHelperSelectHasNewlyAllocated(SweepMode = SweepOnly);
-        
         enum NewlyAllocatedMode { HasNewlyAllocated, DoesNotHaveNewlyAllocated };
+        enum MarksMode { MarksStale, MarksNotStale };
         
-        template<EmptyMode, SweepDestructionMode, ScribbleMode, NewlyAllocatedMode>
-        FreeList sweepHelperSelectSweepMode(SweepMode = SweepOnly);
+        SweepDestructionMode sweepDestructionMode();
+        EmptyMode emptyMode();
+        ScribbleMode scribbleMode();
+        NewlyAllocatedMode newlyAllocatedMode();
+        MarksMode marksMode();
         
-        template<EmptyMode, SweepMode, SweepDestructionMode, ScribbleMode, NewlyAllocatedMode>
-        FreeList sweepHelperSelectMarksMode();
+        template<bool, EmptyMode, SweepMode, SweepDestructionMode, ScribbleMode, NewlyAllocatedMode, MarksMode, typename DestroyFunc>
+        FreeList specializedSweep(EmptyMode, SweepMode, SweepDestructionMode, ScribbleMode, NewlyAllocatedMode, MarksMode, const DestroyFunc&);
         
-        enum MarksMode { MarksStale, MarksNotStale };
-        
-        template<EmptyMode, SweepMode, SweepDestructionMode, ScribbleMode, NewlyAllocatedMode, MarksMode>
-        FreeList specializedSweep();
-            
         template<typename Func>
         void forEachFreeCell(const FreeList&, const Func&);
         
@@ -282,7 +276,7 @@ public:
         
     WeakSet& weakSet();
 
-    bool areMarksStale();
+    JS_EXPORT_PRIVATE bool areMarksStale();
     bool areMarksStale(HeapVersion markingVersion);
     struct MarksWithDependency {
         bool areStale;
@@ -432,7 +426,7 @@ inline void MarkedBlock::Handle::shrink()
     m_weakSet.shrink();
 }
 
-inline unsigned MarkedBlock::Handle::visitWeakSet(SlotVisitor& visitor)
+inline void MarkedBlock::Handle::visitWeakSet(SlotVisitor& visitor)
 {
     return m_weakSet.visit(visitor);
 }
index 3581579..78379f3 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 
 #pragma once
 
+#include "JSCell.h"
 #include "MarkedAllocator.h"
 #include "MarkedBlock.h"
 #include "MarkedSpace.h"
+#include "Operations.h"
+#include "SuperSampler.h"
 #include "VM.h"
 
 namespace JSC {
@@ -37,6 +40,31 @@ inline unsigned MarkedBlock::Handle::cellsPerBlock()
     return MarkedSpace::blockPayload / cellSize();
 }
 
+inline bool MarkedBlock::Handle::isNewlyAllocatedStale() const
+{
+    return m_newlyAllocatedVersion != space()->newlyAllocatedVersion();
+}
+
+inline bool MarkedBlock::Handle::hasAnyNewlyAllocated()
+{
+    return !isNewlyAllocatedStale();
+}
+
+inline Heap* MarkedBlock::heap() const
+{
+    return &vm()->heap;
+}
+
+inline MarkedSpace* MarkedBlock::space() const
+{
+    return &heap()->objectSpace();
+}
+
+inline MarkedSpace* MarkedBlock::Handle::space() const
+{
+    return &heap()->objectSpace();
+}
+
 inline bool MarkedBlock::marksConveyLivenessDuringMarking(HeapVersion markingVersion)
 {
     // This returns true if any of these is true:
@@ -89,14 +117,227 @@ inline bool MarkedBlock::Handle::isLiveCell(HeapVersion markingVersion, bool isM
     return isLive(markingVersion, isMarking, static_cast<const HeapCell*>(p));
 }
 
-inline bool MarkedBlock::Handle::isNewlyAllocatedStale() const
+// The following has to be true for specialization to kick in:
+//
+// sweepMode == SweepToFreeList
+// scribbleMode == DontScribble
+// newlyAllocatedMode == DoesNotHaveNewlyAllocated
+// destructionMode != BlockHasDestructorsAndCollectorIsRunning
+//
+// emptyMode = IsEmpty
+//     destructionMode = DoesNotNeedDestruction
+//         marksMode = MarksNotStale (1)
+//         marksMode = MarksStale (2)
+// emptyMode = NotEmpty
+//     destructionMode = DoesNotNeedDestruction
+//         marksMode = MarksNotStale (3)
+//         marksMode = MarksStale (4)
+//     destructionMode = NeedsDestruction
+//         marksMode = MarksNotStale (5)
+//         marksMode = MarksStale (6)
+//
+// Only the DoesNotNeedDestruction one should be specialized by MarkedBlock.
+
+template<bool specialize, MarkedBlock::Handle::EmptyMode specializedEmptyMode, MarkedBlock::Handle::SweepMode specializedSweepMode, MarkedBlock::Handle::SweepDestructionMode specializedDestructionMode, MarkedBlock::Handle::ScribbleMode specializedScribbleMode, MarkedBlock::Handle::NewlyAllocatedMode specializedNewlyAllocatedMode, MarkedBlock::Handle::MarksMode specializedMarksMode, typename DestroyFunc>
+FreeList MarkedBlock::Handle::specializedSweep(MarkedBlock::Handle::EmptyMode emptyMode, MarkedBlock::Handle::SweepMode sweepMode, MarkedBlock::Handle::SweepDestructionMode destructionMode, MarkedBlock::Handle::ScribbleMode scribbleMode, MarkedBlock::Handle::NewlyAllocatedMode newlyAllocatedMode, MarkedBlock::Handle::MarksMode marksMode, const DestroyFunc& destroyFunc)
 {
-    return m_newlyAllocatedVersion != space()->newlyAllocatedVersion();
+    if (specialize) {
+        emptyMode = specializedEmptyMode;
+        sweepMode = specializedSweepMode;
+        destructionMode = specializedDestructionMode;
+        scribbleMode = specializedScribbleMode;
+        newlyAllocatedMode = specializedNewlyAllocatedMode;
+        marksMode = specializedMarksMode;
+    }
+    
+    RELEASE_ASSERT(!(destructionMode == BlockHasNoDestructors && sweepMode == SweepOnly));
+    
+    SuperSamplerScope superSamplerScope(false);
+
+    MarkedBlock& block = this->block();
+    
+    if (false)
+        dataLog(RawPointer(this), "/", RawPointer(&block), ": MarkedBlock::Handle::specializedSweep!\n");
+    
+    if (Options::useBumpAllocator()
+        && emptyMode == IsEmpty
+        && newlyAllocatedMode == DoesNotHaveNewlyAllocated) {
+        
+        // This is an incredibly powerful assertion that checks the sanity of our block bits.
+        if (marksMode == MarksNotStale && !block.m_marks.isEmpty()) {
+            WTF::dataFile().atomically(
+                [&] (PrintStream& out) {
+                    out.print("Block ", RawPointer(&block), ": marks not empty!\n");
+                    out.print("Block lock is held: ", block.m_lock.isHeld(), "\n");
+                    out.print("Marking version of block: ", block.m_markingVersion, "\n");
+                    out.print("Marking version of heap: ", space()->markingVersion(), "\n");
+                    UNREACHABLE_FOR_PLATFORM();
+                });
+        }
+        
+        char* startOfLastCell = static_cast<char*>(cellAlign(block.atoms() + m_endAtom - 1));
+        char* payloadEnd = startOfLastCell + cellSize();
+        RELEASE_ASSERT(payloadEnd - MarkedBlock::blockSize <= bitwise_cast<char*>(&block));
+        char* payloadBegin = bitwise_cast<char*>(block.atoms() + firstAtom());
+        if (scribbleMode == Scribble)
+            scribble(payloadBegin, payloadEnd - payloadBegin);
+        if (sweepMode == SweepToFreeList)
+            setIsFreeListed();
+        else
+            m_allocator->setIsEmpty(NoLockingNecessary, this, true);
+        if (space()->isMarking())
+            block.m_lock.unlock();
+        FreeList result = FreeList::bump(payloadEnd, payloadEnd - payloadBegin);
+        if (false)
+            dataLog("Quickly swept block ", RawPointer(this), " with cell size ", cellSize(), " and attributes ", m_attributes, ": ", result, "\n");
+        return result;
+    }
+
+    // This produces a free list that is ordered in reverse through the block.
+    // This is fine, since the allocation code makes no assumptions about the
+    // order of the free list.
+    FreeCell* head = 0;
+    size_t count = 0;
+    bool isEmpty = true;
+    Vector<size_t> deadCells;
+    VM& vm = *this->vm();
+    auto handleDeadCell = [&] (size_t i) {
+        HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&block.atoms()[i]);
+
+        if (destructionMode != BlockHasNoDestructors && emptyMode == NotEmpty) {
+            JSCell* jsCell = static_cast<JSCell*>(cell);
+            if (!jsCell->isZapped()) {
+                destroyFunc(vm, jsCell);
+                jsCell->zap();
+            }
+        }
+
+        if (sweepMode == SweepToFreeList) {
+            FreeCell* freeCell = reinterpret_cast_ptr<FreeCell*>(cell);
+            if (scribbleMode == Scribble)
+                scribble(freeCell, cellSize());
+            freeCell->next = head;
+            head = freeCell;
+            ++count;
+        }
+    };
+    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+        if (emptyMode == NotEmpty
+            && ((marksMode == MarksNotStale && block.m_marks.get(i))
+                || (newlyAllocatedMode == HasNewlyAllocated && m_newlyAllocated.get(i)))) {
+            isEmpty = false;
+            continue;
+        }
+        
+        if (destructionMode == BlockHasDestructorsAndCollectorIsRunning)
+            deadCells.append(i);
+        else
+            handleDeadCell(i);
+    }
+    
+    // We only want to discard the newlyAllocated bits if we're creating a FreeList,
+    // otherwise we would lose information on what's currently alive.
+    if (sweepMode == SweepToFreeList && newlyAllocatedMode == HasNewlyAllocated)
+        m_newlyAllocatedVersion = MarkedSpace::nullVersion;
+    
+    if (space()->isMarking())
+        block.m_lock.unlock();
+    
+    if (destructionMode == BlockHasDestructorsAndCollectorIsRunning) {
+        for (size_t i : deadCells)
+            handleDeadCell(i);
+    }
+
+    FreeList result = FreeList::list(head, count * cellSize());
+    if (sweepMode == SweepToFreeList)
+        setIsFreeListed();
+    else if (isEmpty)
+        m_allocator->setIsEmpty(NoLockingNecessary, this, true);
+    if (false)
+        dataLog("Slowly swept block ", RawPointer(&block), " with cell size ", cellSize(), " and attributes ", m_attributes, ": ", result, "\n");
+    return result;
 }
 
-inline bool MarkedBlock::Handle::hasAnyNewlyAllocated()
+template<typename DestroyFunc>
+FreeList MarkedBlock::Handle::finishSweepKnowingSubspace(SweepMode sweepMode, const DestroyFunc& destroyFunc)
 {
-    return !isNewlyAllocatedStale();
+    SweepDestructionMode destructionMode = this->sweepDestructionMode();
+    EmptyMode emptyMode = this->emptyMode();
+    ScribbleMode scribbleMode = this->scribbleMode();
+    NewlyAllocatedMode newlyAllocatedMode = this->newlyAllocatedMode();
+    MarksMode marksMode = this->marksMode();
+
+    FreeList result;
+    auto trySpecialized = [&] () -> bool {
+        if (sweepMode != SweepToFreeList)
+            return false;
+        if (scribbleMode != DontScribble)
+            return false;
+        if (newlyAllocatedMode != DoesNotHaveNewlyAllocated)
+            return false;
+        if (destructionMode != BlockHasDestructors)
+            return false;
+        if (emptyMode == IsEmpty)
+            return false;
+        
+        switch (marksMode) {
+        case MarksNotStale:
+            result = specializedSweep<true, NotEmpty, SweepToFreeList, BlockHasDestructors, DontScribble, DoesNotHaveNewlyAllocated, MarksNotStale>(IsEmpty, SweepToFreeList, BlockHasDestructors, DontScribble, DoesNotHaveNewlyAllocated, MarksNotStale, destroyFunc);
+            return true;
+        case MarksStale:
+            result = specializedSweep<true, NotEmpty, SweepToFreeList, BlockHasDestructors, DontScribble, DoesNotHaveNewlyAllocated, MarksStale>(IsEmpty, SweepToFreeList, BlockHasDestructors, DontScribble, DoesNotHaveNewlyAllocated, MarksStale, destroyFunc);
+            return true;
+        }
+        
+        return false;
+    };
+    
+    if (trySpecialized())
+        return result;
+    
+    // The template arguments don't matter because the first one is false.
+    return specializedSweep<false, IsEmpty, SweepOnly, BlockHasNoDestructors, DontScribble, HasNewlyAllocated, MarksStale>(emptyMode, sweepMode, destructionMode, scribbleMode, newlyAllocatedMode, marksMode, destroyFunc);
+}
+
+inline MarkedBlock::Handle::SweepDestructionMode MarkedBlock::Handle::sweepDestructionMode()
+{
+    if (m_attributes.destruction == NeedsDestruction) {
+        if (space()->isMarking())
+            return BlockHasDestructorsAndCollectorIsRunning;
+        return BlockHasDestructors;
+    }
+    return BlockHasNoDestructors;
+}
+
+inline MarkedBlock::Handle::EmptyMode MarkedBlock::Handle::emptyMode()
+{
+    // It's not obvious, but this is the only way to know if the block is empty. It's the only
+    // bit that captures these caveats:
+    // - It's true when the block is freshly allocated.
+    // - It's true if the block had been swept in the past, all destructors were called, and that
+    //   sweep proved that the block is empty.
+    // - It's false if there are any destructors that need to be called, even if the block has no
+    //   live objects.
+    return m_allocator->isEmpty(NoLockingNecessary, this) ? IsEmpty : NotEmpty;
+}
+
+inline MarkedBlock::Handle::ScribbleMode MarkedBlock::Handle::scribbleMode()
+{
+    return scribbleFreeCells() ? Scribble : DontScribble;
+}
+
+inline MarkedBlock::Handle::NewlyAllocatedMode MarkedBlock::Handle::newlyAllocatedMode()
+{
+    return hasAnyNewlyAllocated() ? HasNewlyAllocated : DoesNotHaveNewlyAllocated;
+}
+
+inline MarkedBlock::Handle::MarksMode MarkedBlock::Handle::marksMode()
+{
+    HeapVersion markingVersion = space()->markingVersion();
+    bool marksAreUseful = !block().areMarksStale(markingVersion);
+    if (space()->isMarking())
+        marksAreUseful |= block().marksConveyLivenessDuringMarking(markingVersion);
+    return marksAreUseful ? MarksNotStale : MarksStale;
 }
 
 template <typename Functor>
@@ -129,19 +370,24 @@ inline IterationStatus MarkedBlock::Handle::forEachDeadCell(const Functor& funct
     return IterationStatus::Continue;
 }
 
-inline Heap* MarkedBlock::heap() const
-{
-    return &vm()->heap;
-}
-
-inline MarkedSpace* MarkedBlock::space() const
+template <typename Functor>
+inline IterationStatus MarkedBlock::Handle::forEachMarkedCell(const Functor& functor)
 {
-    return &heap()->objectSpace();
-}
+    HeapCell::Kind kind = m_attributes.cellKind;
+    MarkedBlock& block = this->block();
+    bool areMarksStale = block.areMarksStale();
+    WTF::loadLoadFence();
+    if (areMarksStale)
+        return IterationStatus::Continue;
+    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+        HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
+        if (!block.isMarkedRaw(cell))
+            continue;
 
-inline MarkedSpace* MarkedBlock::Handle::space() const
-{
-    return &heap()->objectSpace();
+        if (functor(cell, kind) == IterationStatus::Done)
+            return IterationStatus::Done;
+    }
+    return IterationStatus::Continue;
 }
 
 } // namespace JSC
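
finishSweepKnowingSubspace() is the other half of the destructor-inlining handshake: a subspace's finishSweep() forwards here with a DestroyFunc, and specializedSweep() invokes that functor from handleDeadCell(). A sketch of the string side of that handshake; the JSStringSubspace class and its method body are assumptions for illustration, not the patch's actual implementation:

    FreeList JSStringSubspace::finishSweep(MarkedBlock::Handle& handle, MarkedBlock::Handle::SweepMode sweepMode)
    {
        return handle.finishSweepKnowingSubspace(
            sweepMode,
            [] (VM&, JSCell* cell) {
                // The destructor call is inlined into the sweep loop rather than
                // going through a generic callDestructor() dispatch.
                static_cast<JSString*>(cell)->JSString::~JSString();
            });
    }
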
index e46d550..ab2bdc1 100644
@@ -1,5 +1,5 @@
 /*
- *  Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2016 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
  *  Copyright (C) 2007 Eric Seidel <eric@webkit.org>
  *
  *  This library is free software; you can redistribute it and/or
@@ -25,6 +25,7 @@
 #include "IncrementalSweeper.h"
 #include "JSObject.h"
 #include "JSCInlines.h"
+#include "MarkedAllocatorInlines.h"
 #include "MarkedBlockInlines.h"
 #include <wtf/ListDump.h>
 
@@ -177,16 +178,18 @@ void buildSizeClassTable(TableType& table, const SizeClassCons& cons, const Defa
 
 void MarkedSpace::initializeSizeClassForStepSize()
 {
-    // We call this multiple times and we may call it simultaneously from multiple threads. That's
-    // OK, since it always stores the same values into the table.
-    
-    buildSizeClassTable(
-        s_sizeClassForSizeStep,
-        [&] (size_t sizeClass) -> size_t {
-            return sizeClass;
-        },
-        [&] (size_t sizeClass) -> size_t {
-            return sizeClass;
+    static std::once_flag flag;
+    std::call_once(
+        flag,
+        [] {
+            buildSizeClassTable(
+                s_sizeClassForSizeStep,
+                [&] (size_t sizeClass) -> size_t {
+                    return sizeClass;
+                },
+                [&] (size_t sizeClass) -> size_t {
+                    return sizeClass;
+                });
         });
 }
 
@@ -196,35 +199,6 @@ MarkedSpace::MarkedSpace(Heap* heap)
     , m_isIterating(false)
 {
     initializeSizeClassForStepSize();
-    
-    forEachSubspace(
-        [&] (Subspace& subspace, AllocatorAttributes attributes) -> IterationStatus {
-            subspace.attributes = attributes;
-            
-            buildSizeClassTable(
-                subspace.allocatorForSizeStep,
-                [&] (size_t sizeClass) -> MarkedAllocator* {
-                    return subspace.bagOfAllocators.add(heap, this, sizeClass, attributes);
-                },
-                [&] (size_t) -> MarkedAllocator* {
-                    return nullptr;
-                });
-            
-            return IterationStatus::Continue;
-        });
-    
-    MarkedAllocator* previous = nullptr;
-    forEachSubspace(
-        [&] (Subspace& subspace, AllocatorAttributes) -> IterationStatus {
-            for (MarkedAllocator* allocator : subspace.bagOfAllocators) {
-                allocator->setNextAllocator(previous);
-                previous = allocator;
-            }
-            
-            return IterationStatus::Continue;
-        });
-    m_firstAllocator = previous;
-    m_allocatorForEmptyAllocation = previous;
 }
 
 MarkedSpace::~MarkedSpace()
@@ -240,7 +214,6 @@ MarkedSpace::~MarkedSpace()
 
 void MarkedSpace::lastChanceToFinalize()
 {
-    stopAllocating();
     forEachAllocator(
         [&] (MarkedAllocator& allocator) -> IterationStatus {
             allocator.lastChanceToFinalize();
@@ -250,72 +223,6 @@ void MarkedSpace::lastChanceToFinalize()
         allocation->lastChanceToFinalize();
 }
 
-void* MarkedSpace::allocate(Subspace& subspace, size_t bytes)
-{
-    if (false)
-        dataLog("Allocating ", bytes, " bytes in ", subspace.attributes, ".\n");
-    if (MarkedAllocator* allocator = allocatorFor(subspace, bytes)) {
-        void* result = allocator->allocate();
-        return result;
-    }
-    return allocateLarge(subspace, nullptr, bytes);
-}
-
-void* MarkedSpace::allocate(Subspace& subspace, GCDeferralContext* deferralContext, size_t bytes)
-{
-    if (false)
-        dataLog("Allocating ", bytes, " deferred bytes in ", subspace.attributes, ".\n");
-    if (MarkedAllocator* allocator = allocatorFor(subspace, bytes)) {
-        void* result = allocator->allocate(deferralContext);
-        return result;
-    }
-    return allocateLarge(subspace, deferralContext, bytes);
-}
-
-void* MarkedSpace::tryAllocate(Subspace& subspace, size_t bytes)
-{
-    if (false)
-        dataLog("Try-allocating ", bytes, " bytes in ", subspace.attributes, ".\n");
-    if (MarkedAllocator* allocator = allocatorFor(subspace, bytes)) {
-        void* result = allocator->tryAllocate();
-        return result;
-    }
-    return tryAllocateLarge(subspace, nullptr, bytes);
-}
-
-void* MarkedSpace::tryAllocate(Subspace& subspace, GCDeferralContext* deferralContext, size_t bytes)
-{
-    if (false)
-        dataLog("Try-allocating ", bytes, " deferred bytes in ", subspace.attributes, ".\n");
-    if (MarkedAllocator* allocator = allocatorFor(subspace, bytes)) {
-        void* result = allocator->tryAllocate(deferralContext);
-        return result;
-    }
-    return tryAllocateLarge(subspace, deferralContext, bytes);
-}
-
-void* MarkedSpace::allocateLarge(Subspace& subspace, GCDeferralContext* deferralContext, size_t size)
-{
-    void* result = tryAllocateLarge(subspace, deferralContext, size);
-    RELEASE_ASSERT(result);
-    return result;
-}
-
-void* MarkedSpace::tryAllocateLarge(Subspace& subspace, GCDeferralContext* deferralContext, size_t size)
-{
-    m_heap->collectIfNecessaryOrDefer(deferralContext);
-    
-    size = WTF::roundUpToMultipleOf<sizeStep>(size);
-    LargeAllocation* allocation = LargeAllocation::tryCreate(*m_heap, size, subspace.attributes);
-    if (!allocation)
-        return nullptr;
-    
-    m_largeAllocations.append(allocation);
-    m_heap->didAllocate(size);
-    m_capacity += size;
-    return allocation->cell();
-}
-
 void MarkedSpace::sweep()
 {
     m_heap->sweeper()->willFinishSweeping();
@@ -654,4 +561,24 @@ void MarkedSpace::dumpBits(PrintStream& out)
         });
 }
 
+MarkedAllocator* MarkedSpace::addMarkedAllocator(
+    const AbstractLocker&, Subspace* subspace, size_t sizeClass)
+{
+    MarkedAllocator* allocator = m_bagOfAllocators.add(heap(), subspace, sizeClass);
+    allocator->setNextAllocator(nullptr);
+    
+    WTF::storeStoreFence();
+
+    if (!m_firstAllocator) {
+        m_firstAllocator = allocator;
+        m_lastAllocator = allocator;
+        m_allocatorForEmptyAllocation = allocator;
+    } else {
+        m_lastAllocator->setNextAllocator(allocator);
+        m_lastAllocator = allocator;
+    }
+    
+    return allocator;
+}
+
 } // namespace JSC
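
addMarkedAllocator() is how allocators now come into existence: lazily, one per (subspace, size class), appended to the global chain under allocatorLock() with a store-store fence so a concurrent walker of the chain sees fully initialized nodes. A sketch of the expected caller, assuming the subspace caches allocators per size step (the table name and helper function are hypothetical):

    MarkedAllocator* Subspace::allocatorForSlow(size_t size)
    {
        size_t index = MarkedSpace::sizeClassToIndex(size);
        size_t sizeClass = MarkedSpace::s_sizeClassForSizeStep[index];
        LockHolder locker(space().allocatorLock());
        // Another thread may have created this allocator while we waited for the lock.
        if (MarkedAllocator* allocator = m_allocatorForSizeStep[index])
            return allocator;
        MarkedAllocator* allocator = space().addMarkedAllocator(locker, this, sizeClass);
        WTF::storeStoreFence();
        // Publish for this size step only; a real implementation might also fill
        // neighboring steps that map to the same size class.
        m_allocatorForSizeStep[index] = allocator;
        return allocator;
    }
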
index 1261dc2..26be5e3 100644
@@ -1,7 +1,7 @@
 /*
  *  Copyright (C) 1999-2000 Harri Porten (porten@kde.org)
  *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
- *  Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2011, 2016 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
  *
  *  This library is free software; you can redistribute it and/or
  *  modify it under the terms of the GNU Lesser General Public
@@ -39,6 +39,7 @@ namespace JSC {
 class Heap;
 class HeapIterationScope;
 class LLIntOffsetsExtractor;
+class Subspace;
 class WeakSet;
 
 typedef uint32_t HeapVersion;
@@ -86,46 +87,14 @@ public:
         return (index + 1) * sizeStep;
     }
     
-    // Each Subspace corresponds to all of the blocks for all of the sizes for some "class" of
-    // objects. There are three classes: non-destructor JSCells, destructor JSCells, and auxiliary.
-    // MarkedSpace is set up to make it relatively easy to add new Subspaces.
-    struct Subspace {
-        std::array<MarkedAllocator*, numSizeClasses> allocatorForSizeStep;
-        
-        // Each MarkedAllocator is a size class.
-        Bag<MarkedAllocator> bagOfAllocators;
-        
-        AllocatorAttributes attributes;
-    };
-    
     MarkedSpace(Heap*);
     ~MarkedSpace();
-    void lastChanceToFinalize();
-
-    static size_t optimalSizeFor(size_t);
     
-    static MarkedAllocator* allocatorFor(Subspace&, size_t);
-
-    MarkedAllocator* allocatorFor(size_t);
-    MarkedAllocator* destructorAllocatorFor(size_t);
-    MarkedAllocator* auxiliaryAllocatorFor(size_t);
-
-    JS_EXPORT_PRIVATE void* allocate(Subspace&, size_t);
-    JS_EXPORT_PRIVATE void* allocate(Subspace&, GCDeferralContext*, size_t);
-    JS_EXPORT_PRIVATE void* tryAllocate(Subspace&, size_t);
-    JS_EXPORT_PRIVATE void* tryAllocate(Subspace&, GCDeferralContext*, size_t);
+    Heap* heap() const { return m_heap; }
     
-    void* allocateWithDestructor(size_t);
-    void* allocateWithoutDestructor(size_t);
-    void* allocateWithDestructor(GCDeferralContext*, size_t);
-    void* allocateWithoutDestructor(GCDeferralContext*, size_t);
-    void* allocateAuxiliary(size_t);
-    void* tryAllocateAuxiliary(size_t);
-    void* tryAllocateAuxiliary(GCDeferralContext*, size_t);
-    
-    Subspace& subspaceForObjectsWithDestructor() { return m_destructorSpace; }
-    Subspace& subspaceForObjectsWithoutDestructor() { return m_normalSpace; }
-    Subspace& subspaceForAuxiliaryData() { return m_auxiliarySpace; }
+    void lastChanceToFinalize(); // You must call stopAllocating before you call this.
+
+    static size_t optimalSizeFor(size_t);
     
     void prepareForAllocation();
 
@@ -190,42 +159,35 @@ public:
     
     MarkedBlock::Handle* findEmptyBlockToSteal();
     
+    Lock& allocatorLock() { return m_allocatorLock; }
+    MarkedAllocator* addMarkedAllocator(const AbstractLocker&, Subspace*, size_t cellSize);
+    
     // When this is true it means that we have flipped but the mark bits haven't converged yet.
     bool isMarking() const { return m_isMarking; }
     
     void dumpBits(PrintStream& = WTF::dataFile());
     
+    JS_EXPORT_PRIVATE static std::array<size_t, numSizeClasses> s_sizeClassForSizeStep;
+    
 private:
     friend class LLIntOffsetsExtractor;
     friend class JIT;
     friend class WeakSet;
+    friend class Subspace;
     
-    JS_EXPORT_PRIVATE static std::array<size_t, numSizeClasses> s_sizeClassForSizeStep;
-    
-    void* allocateLarge(Subspace&, GCDeferralContext*, size_t);
-    void* tryAllocateLarge(Subspace&, GCDeferralContext*, size_t);
+    void* allocateSlow(Subspace&, GCDeferralContext*, size_t);
+    void* tryAllocateSlow(Subspace&, GCDeferralContext*, size_t);
 
     static void initializeSizeClassForStepSize();
     
     void initializeSubspace(Subspace&);
 
     template<typename Functor> inline void forEachAllocator(const Functor&);
-    template<typename Functor> inline void forEachSubspace(const Functor&);
     
     void addActiveWeakSet(WeakSet*);
 
-    Subspace m_destructorSpace;
-    Subspace m_normalSpace;
-    Subspace m_auxiliarySpace;
+    Vector<Subspace*> m_subspaces;
 
-    Heap* m_heap;
-    HeapVersion m_markingVersion { initialVersion };
-    HeapVersion m_newlyAllocatedVersion { initialVersion };
-    size_t m_capacity;
-    bool m_isIterating;
-    bool m_isMarking { false };
-    MarkedBlockSet m_blocks;
-    
     Vector<LargeAllocation*> m_largeAllocations;
     unsigned m_largeAllocationsNurseryOffset { 0 };
     unsigned m_largeAllocationsOffsetForThisCollection { 0 };
@@ -233,72 +195,25 @@ private:
     LargeAllocation** m_largeAllocationsForThisCollectionBegin { nullptr };
     LargeAllocation** m_largeAllocationsForThisCollectionEnd { nullptr };
     unsigned m_largeAllocationsForThisCollectionSize { 0 };
+
+    Heap* m_heap;
+    HeapVersion m_markingVersion { initialVersion };
+    HeapVersion m_newlyAllocatedVersion { initialVersion };
+    size_t m_capacity;
+    bool m_isIterating;
+    bool m_isMarking { false };
+    MarkedBlockSet m_blocks;
     
     SentinelLinkedList<WeakSet, BasicRawSentinelNode<WeakSet>> m_activeWeakSets;
     SentinelLinkedList<WeakSet, BasicRawSentinelNode<WeakSet>> m_newActiveWeakSets;
-    
+
+    Lock m_allocatorLock;
+    Bag<MarkedAllocator> m_bagOfAllocators;
     MarkedAllocator* m_firstAllocator { nullptr };
+    MarkedAllocator* m_lastAllocator { nullptr };
     MarkedAllocator* m_allocatorForEmptyAllocation { nullptr };
 };
 
-inline MarkedAllocator* MarkedSpace::allocatorFor(Subspace& space, size_t bytes)
-{
-    ASSERT(bytes);
-    if (bytes <= largeCutoff)
-        return space.allocatorForSizeStep[sizeClassToIndex(bytes)];
-    return nullptr;
-}
-
-inline MarkedAllocator* MarkedSpace::allocatorFor(size_t bytes)
-{
-    return allocatorFor(m_normalSpace, bytes);
-}
-
-inline MarkedAllocator* MarkedSpace::destructorAllocatorFor(size_t bytes)
-{
-    return allocatorFor(m_destructorSpace, bytes);
-}
-
-inline MarkedAllocator* MarkedSpace::auxiliaryAllocatorFor(size_t bytes)
-{
-    return allocatorFor(m_auxiliarySpace, bytes);
-}
-
-inline void* MarkedSpace::allocateWithoutDestructor(size_t bytes)
-{
-    return allocate(m_normalSpace, bytes);
-}
-
-inline void* MarkedSpace::allocateWithDestructor(size_t bytes)
-{
-    return allocate(m_destructorSpace, bytes);
-}
-
-inline void* MarkedSpace::allocateWithoutDestructor(GCDeferralContext* deferralContext, size_t bytes)
-{
-    return allocate(m_normalSpace, deferralContext, bytes);
-}
-
-inline void* MarkedSpace::allocateWithDestructor(GCDeferralContext* deferralContext, size_t bytes)
-{
-    return allocate(m_destructorSpace, deferralContext, bytes);
-}
-
-inline void* MarkedSpace::allocateAuxiliary(size_t bytes)
-{
-    return allocate(m_auxiliarySpace, bytes);
-}
-
-inline void* MarkedSpace::tryAllocateAuxiliary(size_t bytes)
-{
-    return tryAllocate(m_auxiliarySpace, bytes);
-}
-
-inline void* MarkedSpace::tryAllocateAuxiliary(GCDeferralContext* deferralContext, size_t bytes)
-{
-    return tryAllocate(m_auxiliarySpace, deferralContext, bytes);
-}
-
 template <typename Functor> inline void MarkedSpace::forEachBlock(const Functor& functor)
 {
     forEachAllocator(
@@ -317,26 +232,6 @@ void MarkedSpace::forEachAllocator(const Functor& functor)
     }
 }
 
-template<typename Functor>
-inline void MarkedSpace::forEachSubspace(const Functor& func)
-{
-    AllocatorAttributes attributes;
-    
-    attributes.destruction = NeedsDestruction;
-    attributes.cellKind = HeapCell::JSCell;
-    if (func(m_destructorSpace, attributes) == IterationStatus::Done)
-        return;
-    
-    attributes.destruction = DoesNotNeedDestruction;
-    attributes.cellKind = HeapCell::JSCell;
-    if (func(m_normalSpace, attributes) == IterationStatus::Done)
-        return;
-
-    attributes.destruction = DoesNotNeedDestruction;
-    attributes.cellKind = HeapCell::Auxiliary;
-    func(m_auxiliarySpace, attributes);
-}
-
 ALWAYS_INLINE size_t MarkedSpace::optimalSizeFor(size_t bytes)
 {
     ASSERT(bytes);
index 84f2e9f..39a3081 100644 (file)
@@ -33,7 +33,7 @@ namespace JSC {
 MarkingConstraint::MarkingConstraint(
     CString abbreviatedName, CString name,
     ::Function<void(SlotVisitor&, const VisitingTimeout&)> executeFunction,
-    Volatility volatility)
+    ConstraintVolatility volatility)
     : m_abbreviatedName(abbreviatedName)
     , m_name(WTFMove(name))
     , m_executeFunction(WTFMove(executeFunction))
@@ -45,7 +45,7 @@ MarkingConstraint::MarkingConstraint(
     CString abbreviatedName, CString name,
     ::Function<void(SlotVisitor&, const VisitingTimeout&)> executeFunction,
     ::Function<double(SlotVisitor&)> quickWorkEstimateFunction,
-    Volatility volatility)
+    ConstraintVolatility volatility)
     : m_abbreviatedName(abbreviatedName)
     , m_name(WTFMove(name))
     , m_executeFunction(WTFMove(executeFunction))
index 01095fc..4c57419 100644 (file)
@@ -25,6 +25,7 @@
 
 #pragma once
 
+#include "ConstraintVolatility.h"
 #include "VisitingTimeout.h"
 #include <wtf/FastMalloc.h>
 #include <wtf/Function.h>
@@ -41,34 +42,18 @@ class MarkingConstraint {
     WTF_MAKE_NONCOPYABLE(MarkingConstraint);
     WTF_MAKE_FAST_ALLOCATED;
 public:
-    enum Volatility {
-        // FIXME: We could introduce a new kind of volatility called GreyedByResumption, which
-        // would mean running all of the times that GreyedByExecution runs except as a root in a
-        // full GC.
-        // https://bugs.webkit.org/show_bug.cgi?id=166830
-        
-        // The constraint needs to be reevaluated anytime the mutator runs: so at GC start and
-        // whenever the GC resuspends after a resumption. This is almost always something that
-        // you'd call a "root" in a traditional GC.
-        GreyedByExecution,
-        
-        // The constraint needs to be reevaluated any time any object is marked and anytime the
-        // mutator resumes.
-        GreyedByMarking
-    };
-    
-    MarkingConstraint(
+    JS_EXPORT_PRIVATE MarkingConstraint(
         CString abbreviatedName, CString name,
         ::Function<void(SlotVisitor&, const VisitingTimeout&)>,
-        Volatility);
+        ConstraintVolatility);
     
-    MarkingConstraint(
+    JS_EXPORT_PRIVATE MarkingConstraint(
         CString abbreviatedName, CString name,
         ::Function<void(SlotVisitor&, const VisitingTimeout&)>,
         ::Function<double(SlotVisitor&)>,
-        Volatility);
+        ConstraintVolatility);
     
-    ~MarkingConstraint();
+    JS_EXPORT_PRIVATE ~MarkingConstraint();
     
     unsigned index() const { return m_index; }
     
@@ -93,7 +78,7 @@ public:
         return lastVisitCount() + quickWorkEstimate(visitor);
     }
     
-    Volatility volatility() const { return m_volatility; }
+    ConstraintVolatility volatility() const { return m_volatility; }
     
 private:
     friend class MarkingConstraintSet; // So it can set m_index.
@@ -103,7 +88,7 @@ private:
     CString m_name;
     ::Function<void(SlotVisitor&, const VisitingTimeout& timeout)> m_executeFunction;
     ::Function<double(SlotVisitor&)> m_quickWorkEstimateFunction;
-    Volatility m_volatility;
+    ConstraintVolatility m_volatility;
     size_t m_lastVisitCount { 0 };
 };
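
For reference, a hedged sketch of how the new ConstraintVolatility values are used at registration time. The abbreviated names and lambda bodies are placeholders, and constraintSet stands in for the heap's MarkingConstraintSet; this is not code from the patch.

    constraintSet.add(
        "Cs", "Conservative Scan",
        [] (SlotVisitor&, const VisitingTimeout&) {
            // Re-executed whenever the mutator has run: a classic GC root.
        },
        ConstraintVolatility::GreyedByExecution);

    constraintSet.add(
        "Or", "Output Constraint Rescan",
        [] (SlotVisitor&, const VisitingTimeout&) {
            // Not eagerly re-executed as a root; suited to constraints that
            // rarely need to run more than once per collection.
        },
        ConstraintVolatility::SeldomGreyed);
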
 
index cb72876..f6ef027 100644 (file)
@@ -93,17 +93,19 @@ void MarkingConstraintSet::resetStats()
     for (auto& constraint : m_set) {
         constraint->resetStats();
         switch (constraint->volatility()) {
-        case MarkingConstraint::GreyedByExecution:
+        case ConstraintVolatility::GreyedByExecution:
             m_unexecutedRoots.set(constraint->index());
             break;
-        case MarkingConstraint::GreyedByMarking:
+        case ConstraintVolatility::GreyedByMarking:
             m_unexecutedOutgrowths.set(constraint->index());
             break;
+        case ConstraintVolatility::SeldomGreyed:
+            break;
         }
     }
 }
 
-void MarkingConstraintSet::add(CString abbreviatedName, CString name, Function<void(SlotVisitor&, const VisitingTimeout&)> function, MarkingConstraint::Volatility volatility)
+void MarkingConstraintSet::add(CString abbreviatedName, CString name, Function<void(SlotVisitor&, const VisitingTimeout&)> function, ConstraintVolatility volatility)
 {
     add(std::make_unique<MarkingConstraint>(WTFMove(abbreviatedName), WTFMove(name), WTFMove(function), volatility));
 }
@@ -112,7 +114,7 @@ void MarkingConstraintSet::add(
     CString abbreviatedName, CString name,
     Function<void(SlotVisitor&, const VisitingTimeout&)> executeFunction,
     Function<double(SlotVisitor&)> quickWorkEstimateFunction,
-    MarkingConstraint::Volatility volatility)
+    ConstraintVolatility volatility)
 {
     add(std::make_unique<MarkingConstraint>(WTFMove(abbreviatedName), WTFMove(name), WTFMove(executeFunction), WTFMove(quickWorkEstimateFunction), volatility));
 }
@@ -122,7 +124,7 @@ void MarkingConstraintSet::add(
 {
     constraint->m_index = m_set.size();
     m_ordered.append(constraint.get());
-    if (constraint->volatility() == MarkingConstraint::GreyedByMarking)
+    if (constraint->volatility() == ConstraintVolatility::GreyedByMarking)
         m_outgrowths.append(constraint.get());
     m_set.append(WTFMove(constraint));
 }
@@ -194,14 +196,29 @@ bool MarkingConstraintSet::executeConvergenceImpl(SlotVisitor& visitor, Monotoni
         m_ordered.begin(), m_ordered.end(),
         [&] (MarkingConstraint* a, MarkingConstraint* b) -> bool {
             // Remember: return true if a should come before b.
-            if (a->volatility() != b->volatility()) {
+            
+            auto volatilityScore = [] (MarkingConstraint* constraint) -> unsigned {
+                return constraint->volatility() == ConstraintVolatility::GreyedByMarking ? 1 : 0;
+            };
+            
+            unsigned aVolatilityScore = volatilityScore(a);
+            unsigned bVolatilityScore = volatilityScore(b);
+            
+            if (aVolatilityScore != bVolatilityScore) {
                 if (isWavefrontAdvancing)
-                    return a->volatility() > b->volatility(); // GreyedByMarking should come before GreyedByExecution.
+                    return aVolatilityScore > bVolatilityScore;
                 else
-                    return a->volatility() < b->volatility(); // GreyedByExecution should come before GreyedByMarking.
+                    return aVolatilityScore < bVolatilityScore;
             }
             
-            return a->workEstimate(visitor) > b->workEstimate(visitor);
+            double aWorkEstimate = a->workEstimate(visitor);
+            double bWorkEstimate = b->workEstimate(visitor);
+            
+            if (aWorkEstimate != bWorkEstimate)
+                return aWorkEstimate > bWorkEstimate;
+            
+            // This causes us to use SeldomGreyed vs GreyedByExecution as a final tie-breaker.
+            return a->volatility() > b->volatility();
         });
     
     for (MarkingConstraint* constraint : m_ordered) {
index c979972..0937d43 100644 (file)
@@ -42,14 +42,14 @@ public:
         CString abbreviatedName,
         CString name,
         ::Function<void(SlotVisitor&, const VisitingTimeout&)>,
-        MarkingConstraint::Volatility);
+        ConstraintVolatility);
     
     void add(
         CString abbreviatedName,
         CString name,
         ::Function<void(SlotVisitor&, const VisitingTimeout&)>,
         ::Function<double(SlotVisitor&)>,
-        MarkingConstraint::Volatility);
+        ConstraintVolatility);
     
     void add(std::unique_ptr<MarkingConstraint>);
     
@@ -62,8 +62,8 @@ public:
     // that you'll do some draining after this and then use executeConvergence().
     bool executeBootstrap(SlotVisitor&, MonotonicTime timeout = MonotonicTime::infinity());
     
-    // Returns true if all constraints were executed. This assumes that you've alraedy
-    // visited roots and drained from there.
+    // Returns true if this executed all constraints and none of them produced new work. This
+    // assumes that you've already visited roots and drained from there.
     bool executeConvergence(
         SlotVisitor&,
         MonotonicTime timeout = MonotonicTime::infinity());
index 7591c7d..1094733 100644 (file)
@@ -346,12 +346,6 @@ private:
     SlotVisitor& m_visitor;
 };
 
-void SlotVisitor::visitAsConstraint(const JSCell* cell)
-{
-    m_isVisitingMutatorStack = true;
-    visitChildren(cell);
-}
-
 ALWAYS_INLINE void SlotVisitor::visitChildren(const JSCell* cell)
 {
     ASSERT(Heap::isMarkedConcurrently(cell));
@@ -360,8 +354,8 @@ ALWAYS_INLINE void SlotVisitor::visitChildren(const JSCell* cell)
     
     if (false) {
         dataLog("Visiting ", RawPointer(cell));
-        if (m_isVisitingMutatorStack)
-            dataLog(" (mutator)");
+        if (!m_isFirstVisit)
+            dataLog(" (subsequent)");
         dataLog("\n");
     }
     
@@ -395,11 +389,17 @@ ALWAYS_INLINE void SlotVisitor::visitChildren(const JSCell* cell)
     }
     
     if (UNLIKELY(m_heapSnapshotBuilder)) {
-        if (!m_isVisitingMutatorStack)
+        if (m_isFirstVisit)
             m_heapSnapshotBuilder->appendNode(const_cast<JSCell*>(cell));
     }
 }
 
+void SlotVisitor::visitAsConstraint(const JSCell* cell)
+{
+    m_isFirstVisit = false;
+    visitChildren(cell);
+}
+
 void SlotVisitor::donateKnownParallel(MarkStackArray& from, MarkStackArray& to)
 {
     // NOTE: Because we re-try often, we can afford to be conservative, and
@@ -469,7 +469,7 @@ void SlotVisitor::drain(MonotonicTime timeout)
         updateMutatorIsStopped(locker);
         if (!m_collectorStack.isEmpty()) {
             m_collectorStack.refill();
-            m_isVisitingMutatorStack = false;
+            m_isFirstVisit = true;
             for (unsigned countdown = Options::minimumNumberOfScansBetweenRebalance(); m_collectorStack.canRemoveLast() && countdown--;)
                 visitChildren(m_collectorStack.removeLast());
         } else if (!m_mutatorStack.isEmpty()) {
@@ -477,7 +477,7 @@ void SlotVisitor::drain(MonotonicTime timeout)
             // We know for sure that we are visiting objects because of the barrier, not because of
             // marking. Marking will visit an object exactly once. The barrier will visit it
             // possibly many times, and always after it was already marked.
-            m_isVisitingMutatorStack = true;
+            m_isFirstVisit = false;
             for (unsigned countdown = Options::minimumNumberOfScansBetweenRebalance(); m_mutatorStack.canRemoveLast() && countdown--;)
                 visitChildren(m_mutatorStack.removeLast());
         } else
@@ -486,7 +486,7 @@ void SlotVisitor::drain(MonotonicTime timeout)
         donateKnownParallel();
     }
     
-    mergeOpaqueRootsIfNecessary();
+    mergeIfNecessary();
 }
 
 bool SlotVisitor::didReachTermination()
@@ -614,15 +614,18 @@ void SlotVisitor::addOpaqueRoot(void* root)
     if (!root)
         return;
     
+    if (m_ignoreNewOpaqueRoots)
+        return;
+    
     if (Options::numberOfGCMarkers() == 1) {
         // Put directly into the shared HashSet.
-        m_visitCount += m_heap.m_opaqueRoots.add(root).isNewEntry;
+        m_heap.m_opaqueRoots.add(root);
         return;
     }
     // Put into the local set, but merge with the shared one every once in
     // a while to make sure that the local sets don't grow too large.
     mergeOpaqueRootsIfProfitable();
-    m_visitCount += m_opaqueRoots.add(root);
+    m_opaqueRoots.add(root);
 }
 
 bool SlotVisitor::containsOpaqueRoot(void* root) const
@@ -647,13 +650,13 @@ TriState SlotVisitor::containsOpaqueRootTriState(void* root) const
     return MixedTriState;
 }
 
-void SlotVisitor::mergeOpaqueRootsIfNecessary()
+void SlotVisitor::mergeIfNecessary()
 {
     if (m_opaqueRoots.isEmpty())
         return;
     mergeOpaqueRoots();
 }
-    
+
 void SlotVisitor::mergeOpaqueRootsIfProfitable()
 {
     if (static_cast<unsigned>(m_opaqueRoots.size()) < Options::opaqueRootMergeThreshold())
index fda72de..2e4655d 100644 (file)
@@ -92,6 +92,7 @@ public:
     void append(const Weak<T>& weak);
     
     JS_EXPORT_PRIVATE void addOpaqueRoot(void*);
+    
     JS_EXPORT_PRIVATE bool containsOpaqueRoot(void*) const;
     TriState containsOpaqueRootTriState(void*) const;
 
@@ -116,6 +117,8 @@ public:
 
     SharedDrainResult drainInParallel(MonotonicTime timeout = MonotonicTime::infinity());
     SharedDrainResult drainInParallelPassively(MonotonicTime timeout = MonotonicTime::infinity());
+    
+    JS_EXPORT_PRIVATE void mergeIfNecessary();
 
     // This informs the GC about auxiliary of some size that we are keeping alive. If you don't do
     // this then the space will be freed at end of GC.
@@ -135,8 +138,6 @@ public:
     
     HeapVersion markingVersion() const { return m_markingVersion; }
 
-    void mergeOpaqueRootsIfNecessary();
-    
     bool mutatorIsStopped() const { return m_mutatorIsStopped; }
     
     Lock& rightToRun() { return m_rightToRun; }
@@ -155,6 +156,8 @@ public:
     void visitAsConstraint(const JSCell*);
     
     bool didReachTermination();
+    
+    void setIgnoreNewOpaqueRoots(bool value) { m_ignoreNewOpaqueRoots = value; }
 
 private:
     friend class ParallelModeEnabler;
@@ -176,7 +179,8 @@ private:
     
     void noteLiveAuxiliaryCell(HeapCell*);
     
-    JS_EXPORT_PRIVATE void mergeOpaqueRoots();
+    void mergeOpaqueRoots();
+
     void mergeOpaqueRootsIfProfitable();
 
     void visitChildren(const JSCell*);
@@ -190,6 +194,7 @@ private:
     MarkStackArray m_collectorStack;
     MarkStackArray m_mutatorStack;
     OpaqueRootSet m_opaqueRoots; // Handle-owning data structures not visible to the garbage collector.
+    bool m_ignoreNewOpaqueRoots { false }; // Useful as a debugging mode.
     
     size_t m_bytesVisited;
     size_t m_visitCount;
@@ -201,7 +206,7 @@ private:
 
     HeapSnapshotBuilder* m_heapSnapshotBuilder { nullptr };
     JSCell* m_currentCell { nullptr };
-    bool m_isVisitingMutatorStack { false };
+    bool m_isFirstVisit { false };
     bool m_mutatorIsStopped { false };
     bool m_canOptimizeForStoppedMutator { false };
     Lock m_rightToRun;
index 0fb365f..1fd1d83 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -81,14 +81,14 @@ inline void SlotVisitor::appendValuesHidden(const WriteBarrierBase<Unknown>* bar
 
 inline void SlotVisitor::reportExtraMemoryVisited(size_t size)
 {
-    if (!m_isVisitingMutatorStack)
+    if (m_isFirstVisit)
         heap()->reportExtraMemoryVisited(size);
 }
 
 #if ENABLE(RESOURCE_USAGE)
 inline void SlotVisitor::reportExternalMemoryVisited(size_t size)
 {
-    if (!m_isVisitingMutatorStack)
+    if (m_isFirstVisit)
         heap()->reportExternalMemoryVisited(size);
 }
 #endif
diff --git a/Source/JavaScriptCore/heap/Subspace.cpp b/Source/JavaScriptCore/heap/Subspace.cpp
new file mode 100644 (file)
index 0000000..ed6c19e
--- /dev/null
@@ -0,0 +1,196 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "Subspace.h"
+
+#include "JSCInlines.h"
+#include "MarkedAllocatorInlines.h"
+#include "MarkedBlockInlines.h"
+#include "PreventCollectionScope.h"
+#include "SubspaceInlines.h"
+
+namespace JSC {
+
+namespace {
+
+// Writing it this way ensures that when you pass this as a functor, the callee is specialized for
+// this callback. If you wrote this as a normal function then the callee would be specialized for
+// the function's type and it would have indirect calls to that function. And unlike a lambda, it's
+// possible to mark this ALWAYS_INLINE.
+struct DestroyFunc {
+    ALWAYS_INLINE void operator()(VM& vm, JSCell* cell) const
+    {
+        ASSERT(cell->structureID());
+        ASSERT(cell->inlineTypeFlags() & StructureIsImmortal);
+        Structure* structure = cell->structure(vm);
+        const ClassInfo* classInfo = structure->classInfo();
+        MethodTable::DestroyFunctionPtr destroy = classInfo->methodTable.destroy;
+        destroy(cell);
+    }
+};
+
+} // anonymous namespace
+
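
The comment above DestroyFunc makes a general C++ point: passing a distinct functor type specializes the callee for that callback, so the call can be inlined instead of going through an indirect call. A minimal standalone illustration, not JSC code:

    #include <cstdio>

    struct PrintFunc {
        // Can be marked ALWAYS_INLINE in WebKit code; a lambda could not be.
        void operator()(int value) const { std::printf("%d\n", value); }
    };

    template<typename Func>
    void forEachUpTo(int n, const Func& func)
    {
        for (int i = 0; i < n; ++i)
            func(i); // Instantiated per Func, so PrintFunc's call inlines directly.
    }

    int main()
    {
        forEachUpTo(3, PrintFunc());
        return 0;
    }
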
+Subspace::Subspace(CString name, Heap& heap, AllocatorAttributes attributes)
+    : m_space(heap.objectSpace())
+    , m_name(name)
+    , m_attributes(attributes)
+{
+    // It's remotely possible that we're GCing right now even if the client is careful to only
+    // create subspaces right after VM creation, since collectContinuously (and probably other
+    // things) could cause a GC to be launched at pretty much any time and it's not 100% obvious
+    // that all clients would be able to ensure that there are zero safepoints between when they
+    // create VM and when they do this. Preventing GC while we're creating the Subspace ensures
+    // that we don't have to worry about whether it's OK for the GC to ever see a brand new
+    // subspace.
+    PreventCollectionScope preventCollectionScope(heap);
+    heap.objectSpace().m_subspaces.append(this);
+    
+    for (size_t i = MarkedSpace::numSizeClasses; i--;)
+        m_allocatorForSizeStep[i] = nullptr;
+}
+
+Subspace::~Subspace()
+{
+}
+
+FreeList Subspace::finishSweep(MarkedBlock::Handle& block, MarkedBlock::Handle::SweepMode sweepMode)
+{
+    return block.finishSweepKnowingSubspace(sweepMode, DestroyFunc());
+}
+
+void Subspace::destroy(VM& vm, JSCell* cell)
+{
+    DestroyFunc()(vm, cell);
+}
+
+// The reason why we distinguish between allocate and tryAllocate is to minimize the number of
+// checks on the allocation path in both cases. Likewise, the reason why we have overloads with and
+// without deferralContext is to minimize the amount of code for calling allocate when you don't
+// need the deferralContext.
+void* Subspace::allocate(size_t size)
+{
+    if (MarkedAllocator* allocator = tryAllocatorFor(size))
+        return allocator->allocate();
+    return allocateSlow(nullptr, size);
+}
+
+void* Subspace::allocate(GCDeferralContext* deferralContext, size_t size)
+{
+    if (MarkedAllocator* allocator = tryAllocatorFor(size))
+        return allocator->allocate(deferralContext);
+    return allocateSlow(deferralContext, size);
+}
+
+void* Subspace::tryAllocate(size_t size)
+{
+    if (MarkedAllocator* allocator = tryAllocatorFor(size))
+        return allocator->tryAllocate();
+    return tryAllocateSlow(nullptr, size);
+}
+
+void* Subspace::tryAllocate(GCDeferralContext* deferralContext, size_t size)
+{
+    if (MarkedAllocator* allocator = tryAllocatorFor(size))
+        return allocator->tryAllocate(deferralContext);
+    return tryAllocateSlow(deferralContext, size);
+}
+
+MarkedAllocator* Subspace::allocatorForSlow(size_t size)
+{
+    size_t index = MarkedSpace::sizeClassToIndex(size);
+    size_t sizeClass = MarkedSpace::s_sizeClassForSizeStep[index];
+    if (!sizeClass)
+        return nullptr;
+    
+    // This is written in such a way that it's OK for the JIT threads to end up here if they want
+    // to generate code that uses some allocator that hadn't been used yet. Note that a possibly-
+    // just-as-good solution would be to return null if we're in the JIT since the JIT treats null
+    // allocator as "please always take the slow path". But, that could lead to performance
+    // surprises and the algorithm here is pretty easy. Only this code has to hold the lock, to
+    // prevent simultaneous MarkedAllocator creations from multiple threads. This code ensures
+    // that any "forEachAllocator" traversals will only see this allocator after it's initialized
+    // enough: its size-class entries and next pointer are set before the storeStoreFence() below
+    // publishes it at the head of the allocator list.
+    auto locker = holdLock(m_space.allocatorLock());
+    if (MarkedAllocator* allocator = m_allocatorForSizeStep[index])
+        return allocator;
+
+    if (false)
+        dataLog("Creating marked allocator for ", m_name, ", ", m_attributes, ", ", sizeClass, ".\n");
+    MarkedAllocator* allocator = m_space.addMarkedAllocator(locker, this, sizeClass);
+    index = MarkedSpace::sizeClassToIndex(sizeClass);
+    for (;;) {
+        if (MarkedSpace::s_sizeClassForSizeStep[index] != sizeClass)
+            break;
+
+        m_allocatorForSizeStep[index] = allocator;
+        
+        if (!index--)
+            break;
+    }
+    allocator->setNextAllocatorInSubspace(m_firstAllocator);
+    WTF::storeStoreFence();
+    m_firstAllocator = allocator;
+    return allocator;
+}
+
+void* Subspace::allocateSlow(GCDeferralContext* deferralContext, size_t size)
+{
+    void* result = tryAllocateSlow(deferralContext, size);
+    RELEASE_ASSERT(result);
+    return result;
+}
+
+void* Subspace::tryAllocateSlow(GCDeferralContext* deferralContext, size_t size)
+{
+    if (MarkedAllocator* allocator = allocatorFor(size))
+        return allocator->tryAllocate(deferralContext);
+    
+    if (size <= Options::largeAllocationCutoff()
+        && size <= MarkedSpace::largeCutoff) {
+        dataLog("FATAL: attampting to allocate small object using large allocation.\n");
+        dataLog("Requested allocation size: ", size, "\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+    
+    m_space.heap()->collectIfNecessaryOrDefer(deferralContext);
+    
+    size = WTF::roundUpToMultipleOf<MarkedSpace::sizeStep>(size);
+    LargeAllocation* allocation = LargeAllocation::tryCreate(*m_space.m_heap, size, this);
+    if (!allocation)
+        return nullptr;
+    
+    m_space.m_largeAllocations.append(allocation);
+    m_space.m_heap->didAllocate(size);
+    m_space.m_capacity += size;
+    
+    m_largeAllocations.append(allocation);
+        
+    return allocation->cell();
+}
+
+} // namespace JSC
+
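
As a rough usage sketch of the allocate/tryAllocate split described above (MyCell, initialize, handleOutOfMemory, and deferralContext are placeholders, not code from the patch):

    // Infallible path: allocateSlow() RELEASE_ASSERTs if the allocation fails.
    void* cell = subspace.allocate(sizeof(MyCell));

    // Fallible path: for callers that can recover, e.g. by throwing an out-of-memory
    // error as the HashMapBuffer and Butterfly call sites later in this patch do.
    if (void* maybeCell = subspace.tryAllocate(sizeof(MyCell)))
        initialize(maybeCell);
    else
        handleOutOfMemory();

    // The GCDeferralContext overloads exist so that only callers that already
    // carry a deferral context pay for passing one.
    void* deferred = subspace.allocate(&deferralContext, sizeof(MyCell));
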
diff --git a/Source/JavaScriptCore/heap/Subspace.h b/Source/JavaScriptCore/heap/Subspace.h
new file mode 100644 (file)
index 0000000..d95f71b
--- /dev/null
@@ -0,0 +1,122 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "MarkedBlock.h"
+#include "MarkedSpace.h"
+#include <wtf/text/CString.h>
+
+namespace JSC {
+
+// The idea of subspaces is that you can provide some custom behavior for your objects if you
+// allocate them from a custom Subspace in which you override some of the virtual methods. This
+// class is the baseclass of Subspaces and it provides a reasonable default implementation, where
+// sweeping assumes immortal structure. The common ways of overriding this are:
+//
+// - Provide customized destructor behavior. You can change how the destructor is called. You can
+//   also specialize the destructor call in the loop.
+//
+// - Use the Subspace as a quick way to iterate all of the objects in that subspace.
+class Subspace {
+    WTF_MAKE_NONCOPYABLE(Subspace);
+    WTF_MAKE_FAST_ALLOCATED;
+public:
+    JS_EXPORT_PRIVATE Subspace(CString name, Heap&, AllocatorAttributes);
+    JS_EXPORT_PRIVATE virtual ~Subspace();
+    
+    const char *name() const { return m_name.data(); }
+    MarkedSpace& space() const { return m_space; }
+    
+    const AllocatorAttributes& attributes() const { return m_attributes; }
+    
+    // The purpose of overriding this is to specialize the sweep for your destructors. This won't
+    // be called for no-destructor blocks. This must call MarkedBlock::finishSweepKnowingSubspace.
+    virtual FreeList finishSweep(MarkedBlock::Handle&, MarkedBlock::Handle::SweepMode);
+    
+    // These get called for large objects.
+    virtual void destroy(VM&, JSCell*);
+    
+    MarkedAllocator* tryAllocatorFor(size_t);
+    MarkedAllocator* allocatorFor(size_t);
+    
+    JS_EXPORT_PRIVATE void* allocate(size_t);
+    JS_EXPORT_PRIVATE void* allocate(GCDeferralContext*, size_t);
+    
+    JS_EXPORT_PRIVATE void* tryAllocate(size_t);
+    JS_EXPORT_PRIVATE void* tryAllocate(GCDeferralContext*, size_t);
+    
+    template<typename Func>
+    void forEachMarkedBlock(const Func&);
+    
+    template<typename Func>
+    void forEachNotEmptyMarkedBlock(const Func&);
+    
+    template<typename Func>
+    void forEachLargeAllocation(const Func&);
+    
+    template<typename Func>
+    void forEachMarkedCell(const Func&);
+    
+    static ptrdiff_t offsetOfAllocatorForSizeStep() { return OBJECT_OFFSETOF(Subspace, m_allocatorForSizeStep); }
+    
+    MarkedAllocator** allocatorForSizeStep() { return &m_allocatorForSizeStep[0]; }
+
+private:
+    MarkedAllocator* allocatorForSlow(size_t);
+    
+    // These slow paths are concerned with large allocations and allocator creation.
+    void* allocateSlow(GCDeferralContext*, size_t);
+    void* tryAllocateSlow(GCDeferralContext*, size_t);
+    
+    MarkedSpace& m_space;
+    
+    CString m_name;
+    AllocatorAttributes m_attributes;
+    
+    std::array<MarkedAllocator*, MarkedSpace::numSizeClasses> m_allocatorForSizeStep;
+    MarkedAllocator* m_firstAllocator { nullptr };
+    SentinelLinkedList<LargeAllocation, BasicRawSentinelNode<LargeAllocation>> m_largeAllocations;
+};
+
+ALWAYS_INLINE MarkedAllocator* Subspace::tryAllocatorFor(size_t size)
+{
+    if (size <= MarkedSpace::largeCutoff)
+        return m_allocatorForSizeStep[MarkedSpace::sizeClassToIndex(size)];
+    return nullptr;
+}
+
+ALWAYS_INLINE MarkedAllocator* Subspace::allocatorFor(size_t size)
+{
+    if (size <= MarkedSpace::largeCutoff) {
+        if (MarkedAllocator* result = m_allocatorForSizeStep[MarkedSpace::sizeClassToIndex(size)])
+            return result;
+        return allocatorForSlow(size);
+    }
+    return nullptr;
+}
+
+} // namespace JSC
+
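
The class comment above describes overriding finishSweep() and destroy() to customize destruction. A condensed, hedged sketch of that pattern follows; MyCell and MyCellSubspace are placeholders, and the JSStringSubspace and JSDestructibleObjectSubspace added later in this patch are the real instances.

    class MyCellSubspace : public Subspace {
    public:
        MyCellSubspace(CString name, Heap& heap)
            : Subspace(name, heap, AllocatorAttributes(NeedsDestruction, HeapCell::JSCell))
        {
        }

        FreeList finishSweep(MarkedBlock::Handle& handle, MarkedBlock::Handle::SweepMode sweepMode) override
        {
            // The sweep loop is specialized for this functor, so ~MyCell is called
            // directly instead of going through the method table.
            struct DestroyFunc {
                void operator()(VM&, JSCell* cell) const
                {
                    static_cast<MyCell*>(cell)->MyCell::~MyCell();
                }
            };
            return handle.finishSweepKnowingSubspace(sweepMode, DestroyFunc());
        }

        void destroy(VM&, JSCell* cell) override
        {
            // Large allocations take this path instead of the block sweep.
            static_cast<MyCell*>(cell)->MyCell::~MyCell();
        }
    };
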
diff --git a/Source/JavaScriptCore/heap/SubspaceInlines.h b/Source/JavaScriptCore/heap/SubspaceInlines.h
new file mode 100644 (file)
index 0000000..b6851c3
--- /dev/null
@@ -0,0 +1,76 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "JSCell.h"
+#include "MarkedAllocator.h"
+#include "MarkedBlock.h"
+#include "MarkedSpace.h"
+#include "Subspace.h"
+
+namespace JSC {
+
+template<typename Func>
+void Subspace::forEachMarkedBlock(const Func& func)
+{
+    for (MarkedAllocator* allocator = m_firstAllocator; allocator; allocator = allocator->nextAllocatorInSubspace())
+        allocator->forEachBlock(func);
+}
+
+template<typename Func>
+void Subspace::forEachNotEmptyMarkedBlock(const Func& func)
+{
+    for (MarkedAllocator* allocator = m_firstAllocator; allocator; allocator = allocator->nextAllocatorInSubspace())
+        allocator->forEachNotEmptyBlock(func);
+}
+
+template<typename Func>
+void Subspace::forEachLargeAllocation(const Func& func)
+{
+    for (LargeAllocation* allocation = m_largeAllocations.begin(); allocation != m_largeAllocations.end(); allocation = allocation->next())
+        func(allocation);
+}
+
+template<typename Func>
+void Subspace::forEachMarkedCell(const Func& func)
+{
+    forEachNotEmptyMarkedBlock(
+        [&] (MarkedBlock::Handle* handle) {
+            handle->forEachMarkedCell(
+                [&] (HeapCell* cell, HeapCell::Kind kind) -> IterationStatus { 
+                    func(cell, kind);
+                    return IterationStatus::Continue;
+                });
+        });
+    forEachLargeAllocation(
+        [&] (LargeAllocation* allocation) {
+            if (allocation->isMarked())
+                func(allocation->cell(), m_attributes.cellKind);
+        });
+}
+
+} // namespace JSC
+
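
forEachMarkedCell() is what makes a Subspace usable as the scan domain for an output constraint. A hedged sketch of a constraint body that rescans every marked cell in a dedicated subspace (myOutputConstraintSpace and the enclosing lambda are assumptions, not code from the patch):

    auto rescanOutputConstraints = [&] (SlotVisitor& visitor, const VisitingTimeout&) {
        myOutputConstraintSpace.forEachMarkedCell(
            [&] (HeapCell* heapCell, HeapCell::Kind) {
                JSCell* cell = static_cast<JSCell*>(heapCell);
                // Re-evaluate the cell's outgoing (output) constraint edges.
                cell->methodTable(visitor.vm())->visitOutputConstraints(cell, visitor);
            });
    };
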
index 8845d6f..0ac3180 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -100,7 +100,8 @@ void WeakBlock::specializedVisit(ContainerType& container, SlotVisitor& visitor)
 {
     HeapVersion markingVersion = visitor.markingVersion();
 
-    for (size_t i = 0; i < weakImplCount(); ++i) {
+    size_t count = weakImplCount();
+    for (size_t i = 0; i < count; ++i) {
         WeakImpl* weakImpl = &weakImpls()[i];
         if (weakImpl->state() != WeakImpl::Live)
             continue;
index 8519e23..6d4ff07 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -64,6 +64,7 @@ public:
     SweepResult takeSweepResult();
 
     void visit(SlotVisitor&);
+
     void reap();
 
     void lastChanceToFinalize();
index 35d45db..ddcf743 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -53,7 +53,8 @@ public:
 
     bool isEmpty() const;
 
-    unsigned visit(SlotVisitor&);
+    void visit(SlotVisitor&);
+
     void reap();
     void sweep();
     void shrink();
@@ -106,14 +107,10 @@ inline void WeakSet::lastChanceToFinalize()
         block->lastChanceToFinalize();
 }
 
-inline unsigned WeakSet::visit(SlotVisitor& visitor)
+inline void WeakSet::visit(SlotVisitor& visitor)
 {
-    unsigned count = 0;
-    for (WeakBlock* block = m_blocks.head(); block; block = block->next()) {
-        count++;
+    for (WeakBlock* block = m_blocks.head(); block; block = block->next())
         block->visit(visitor);
-    }
-    return count;
 }
 
 inline void WeakSet::reap()
index ee74155..dbc8684 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2013-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -1542,7 +1542,7 @@ public:
         GPRReg resultGPR, StructureType structure, StorageType storage, GPRReg scratchGPR1,
         GPRReg scratchGPR2, JumpList& slowPath, size_t size)
     {
-        MarkedAllocator* allocator = vm()->heap.allocatorForObjectOfType<ClassType>(size);
+        MarkedAllocator* allocator = subspaceFor<ClassType>(*vm())->allocatorFor(size);
         if (!allocator) {
             slowPath.append(jump());
             return;
@@ -1559,7 +1559,7 @@ public:
     
     // allocationSize can be aliased with any of the other input GPRs. If it's not aliased then it
     // won't be clobbered.
-    void emitAllocateVariableSized(GPRReg resultGPR, MarkedSpace::Subspace& subspace, GPRReg allocationSize, GPRReg scratchGPR1, GPRReg scratchGPR2, JumpList& slowPath)
+    void emitAllocateVariableSized(GPRReg resultGPR, Subspace& subspace, GPRReg allocationSize, GPRReg scratchGPR1, GPRReg scratchGPR2, JumpList& slowPath)
     {
         static_assert(!(MarkedSpace::sizeStep & (MarkedSpace::sizeStep - 1)), "MarkedSpace::sizeStep must be a power of two.");
         
@@ -1568,7 +1568,7 @@ public:
         add32(TrustedImm32(MarkedSpace::sizeStep - 1), allocationSize, scratchGPR1);
         urshift32(TrustedImm32(stepShift), scratchGPR1);
         slowPath.append(branch32(Above, scratchGPR1, TrustedImm32(MarkedSpace::largeCutoff >> stepShift)));
-        move(TrustedImmPtr(&subspace.allocatorForSizeStep[0] - 1), scratchGPR2);
+        move(TrustedImmPtr(subspace.allocatorForSizeStep() - 1), scratchGPR2);
         loadPtr(BaseIndex(scratchGPR2, scratchGPR1, timesPtr()), scratchGPR1);
         
         emitAllocate(resultGPR, nullptr, scratchGPR1, scratchGPR2, slowPath);
@@ -1577,7 +1577,7 @@ public:
     template<typename ClassType, typename StructureType>
     void emitAllocateVariableSizedCell(GPRReg resultGPR, StructureType structure, GPRReg allocationSize, GPRReg scratchGPR1, GPRReg scratchGPR2, JumpList& slowPath)
     {
-        MarkedSpace::Subspace& subspace = vm()->heap.template subspaceForObjectOfType<ClassType>();
+        Subspace& subspace = *subspaceFor<ClassType>(*vm());
         emitAllocateVariableSized(resultGPR, subspace, allocationSize, scratchGPR1, scratchGPR2, slowPath);
         emitStoreStructureWithTypeInfo(structure, resultGPR, scratchGPR2);
     }
index 5672fa0..337e0b7 100644 (file)
@@ -83,7 +83,7 @@ void JIT::emit_op_new_object(Instruction* currentInstruction)
 {
     Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure();
     size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
-    MarkedAllocator* allocator = m_vm->heap.allocatorForObjectWithoutDestructor(allocationSize);
+    MarkedAllocator* allocator = subspaceFor<JSFinalObject>(*m_vm)->allocatorFor(allocationSize);
 
     RegisterID resultReg = regT0;
     RegisterID allocatorReg = regT1;
index 1d23593..b53b208 100644 (file)
@@ -164,7 +164,7 @@ void JIT::emit_op_new_object(Instruction* currentInstruction)
 {
     Structure* structure = currentInstruction[3].u.objectAllocationProfile->structure();
     size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
-    MarkedAllocator* allocator = m_vm->heap.allocatorForObjectWithoutDestructor(allocationSize);
+    MarkedAllocator* allocator = subspaceFor<JSFinalObject>(*m_vm)->allocatorFor(allocationSize);
 
     RegisterID resultReg = returnValueGPR;
     RegisterID allocatorReg = regT1;
index 651f448..a347de5 100644 (file)
@@ -1,6 +1,6 @@
 /*
  *  Copyright (C) 1999-2000 Harri Porten (porten@kde.org)
- *  Copyright (C) 2004-2008, 2012-2013, 2015-2016 Apple Inc. All rights reserved.
+ *  Copyright (C) 2004-2017 Apple Inc. All rights reserved.
  *  Copyright (C) 2006 Bjoern Graf (bjoern.graf@gmail.com)
  *
  *  This library is free software; you can redistribute it and/or
@@ -166,7 +166,6 @@ public:
     }
 
     typedef JSNonFinalObject Base;
-    static const bool needsDestruction = false;
 
     Root* root() const { return m_root.get(); }
     void setRoot(VM& vm, Root* root) { m_root.set(vm, this, root); }
@@ -267,7 +266,6 @@ public:
     typedef JSDestructibleObject Base;
 
     DECLARE_INFO;
-    static const bool needsDestruction = true;
 
     static Structure* createStructure(VM& vm, JSGlobalObject* globalObject, JSValue prototype)
     {
index 465a68a..db1b3cf 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -59,10 +59,10 @@ ALWAYS_INLINE unsigned Butterfly::optimalContiguousVectorLength(Structure* struc
     return optimalContiguousVectorLength(structure ? structure->outOfLineCapacity() : 0, vectorLength);
 }
 
-inline Butterfly* Butterfly::createUninitialized(VM& vm, JSCell* intendedOwner, size_t preCapacity, size_t propertyCapacity, bool hasIndexingHeader, size_t indexingPayloadSizeInBytes)
+inline Butterfly* Butterfly::createUninitialized(VM& vm, JSCell*, size_t preCapacity, size_t propertyCapacity, bool hasIndexingHeader, size_t indexingPayloadSizeInBytes)
 {
     size_t size = totalSize(preCapacity, propertyCapacity, hasIndexingHeader, indexingPayloadSizeInBytes);
-    void* base = vm.heap.allocateAuxiliary(intendedOwner, size);
+    void* base = vm.auxiliarySpace.allocate(size);
     Butterfly* result = fromBase(base, preCapacity, propertyCapacity);
     return result;
 }
@@ -134,14 +134,16 @@ inline Butterfly* Butterfly::growArrayRight(
     size_t newIndexingPayloadSizeInBytes)
 {
     ASSERT_UNUSED(oldStructure, !indexingHeader()->preCapacity(oldStructure));
-    ASSERT_UNUSED(oldStructure, hadIndexingHeader == oldStructure->hasIndexingHeader(intendedOwner));
+    ASSERT_UNUSED(intendedOwner, hadIndexingHeader == oldStructure->hasIndexingHeader(intendedOwner));
     void* theBase = base(0, propertyCapacity);
     size_t oldSize = totalSize(0, propertyCapacity, hadIndexingHeader, oldIndexingPayloadSizeInBytes);
     size_t newSize = totalSize(0, propertyCapacity, true, newIndexingPayloadSizeInBytes);
-    theBase = vm.heap.tryReallocateAuxiliary(intendedOwner, theBase, oldSize, newSize);
-    if (!theBase)
-        return 0;
-    return fromBase(theBase, 0, propertyCapacity);
+    void* newBase = vm.auxiliarySpace.tryAllocate(newSize);
+    if (!newBase)
+        return nullptr;
+    // FIXME: This probably shouldn't be a memcpy.
+    memcpy(newBase, theBase, oldSize);
+    return fromBase(newBase, 0, propertyCapacity);
 }
 
 inline Butterfly* Butterfly::growArrayRight(
index 9e4245d..2b2157c 100644 (file)
@@ -1,7 +1,7 @@
 /*
  *  Copyright (C) 1999-2001 Harri Porten (porten@kde.org)
  *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
- *  Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
  *
  *  This library is free software; you can redistribute it and/or
  *  modify it under the terms of the GNU Library General Public
@@ -38,7 +38,7 @@ struct MethodTable {
 
     typedef void (*VisitChildrenFunctionPtr)(JSCell*, SlotVisitor&);
     VisitChildrenFunctionPtr visitChildren;
-
+    
     typedef CallType (*GetCallDataFunctionPtr)(JSCell*, CallData&);
     GetCallDataFunctionPtr getCallData;
 
@@ -122,6 +122,9 @@ struct MethodTable {
 
     typedef size_t (*EstimatedSizeFunctionPtr)(JSCell*);
     EstimatedSizeFunctionPtr estimatedSize;
+    
+    typedef void (*VisitOutputConstraintsPtr)(JSCell*, SlotVisitor&);
+    VisitOutputConstraintsPtr visitOutputConstraints;
 };
 
 #define CREATE_MEMBER_CHECKER(member) \
@@ -174,7 +177,8 @@ struct MethodTable {
         &ClassName::getPrototype, \
         &ClassName::dumpToStream, \
         &ClassName::heapSnapshot, \
-        &ClassName::estimatedSize \
+        &ClassName::estimatedSize, \
+        &ClassName::visitOutputConstraints \
     }, \
     ClassName::TypedArrayStorageType
 
index 7bed2dc..b7bdfad 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -54,7 +54,7 @@ ClonedArguments* ClonedArguments::createEmpty(
         butterfly->arrayStorage()->m_numValuesInVector = vectorLength;
 
     } else {
-        void* temp = vm.heap.tryAllocateAuxiliary(nullptr, Butterfly::totalSize(0, structure->outOfLineCapacity(), true, vectorLength * sizeof(EncodedJSValue)));
+        void* temp = vm.auxiliarySpace.tryAllocate(Butterfly::totalSize(0, structure->outOfLineCapacity(), true, vectorLength * sizeof(EncodedJSValue)));
         if (!temp)
             return 0;
         butterfly = Butterfly::fromBase(temp, 0, structure->outOfLineCapacity());
index 5e31c31..aa06a8a 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -118,7 +118,7 @@ void DirectArguments::overrideThings(VM& vm)
     putDirect(vm, vm.propertyNames->callee, m_callee.get(), DontEnum);
     putDirect(vm, vm.propertyNames->iteratorSymbol, globalObject()->arrayProtoValuesFunction(), DontEnum);
     
-    void* backingStore = vm.heap.tryAllocateAuxiliary(this, mappedArgumentsSize());
+    void* backingStore = vm.auxiliarySpace.tryAllocate(mappedArgumentsSize());
     RELEASE_ASSERT(backingStore);
     bool* overrides = static_cast<bool*>(backingStore);
     m_mappedArguments.set(vm, this, overrides);
index db23a81..e9ec01f 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -259,7 +259,7 @@ void GenericArguments<Type>::initModifiedArgumentsDescriptor(VM& vm, unsigned ar
     RELEASE_ASSERT(!m_modifiedArgumentsDescriptor);
 
     if (argsLength) {
-        void* backingStore = vm.heap.tryAllocateAuxiliary(this, WTF::roundUpToMultipleOf<8>(argsLength));
+        void* backingStore = vm.auxiliarySpace.tryAllocate(WTF::roundUpToMultipleOf<8>(argsLength));
         RELEASE_ASSERT(backingStore);
         bool* modifiedArguments = static_cast<bool*>(backingStore);
         m_modifiedArgumentsDescriptor.set(vm, this, modifiedArguments);
index f88fa83..fbeb72f 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -171,11 +171,11 @@ public:
         return bitwise_cast<BucketType**>(this);
     }
 
-    static HashMapBuffer* create(ExecState* exec, VM& vm, JSCell* owner, uint32_t capacity)
+    static HashMapBuffer* create(ExecState* exec, VM& vm, JSCell*, uint32_t capacity)
     {
         auto scope = DECLARE_THROW_SCOPE(vm);
         size_t allocationSize = HashMapBuffer::allocationSize(capacity);
-        void* data = vm.heap.tryAllocateAuxiliary(owner, allocationSize);
+        void* data = vm.auxiliarySpace.tryAllocate(allocationSize);
         if (!data) {
             throwOutOfMemoryError(exec, scope);
             return nullptr;
index df69825..fa136fa 100644 (file)
@@ -75,7 +75,7 @@ JSArray* JSArray::tryCreateUninitialized(VM& vm, GCDeferralContext* deferralCont
             || hasContiguous(indexingType));
 
         unsigned vectorLength = Butterfly::optimalContiguousVectorLength(structure, initialLength);
-        void* temp = vm.heap.tryAllocateAuxiliary(deferralContext, nullptr, Butterfly::totalSize(0, outOfLineStorage, true, vectorLength * sizeof(EncodedJSValue)));
+        void* temp = vm.auxiliarySpace.tryAllocate(deferralContext, Butterfly::totalSize(0, outOfLineStorage, true, vectorLength * sizeof(EncodedJSValue)));
         if (!temp)
             return nullptr;
         butterfly = Butterfly::fromBase(temp, 0, outOfLineStorage);
@@ -90,7 +90,7 @@ JSArray* JSArray::tryCreateUninitialized(VM& vm, GCDeferralContext* deferralCont
         }
     } else {
         unsigned vectorLength = ArrayStorage::optimalVectorLength(0, structure, initialLength);
-        void* temp = vm.heap.tryAllocateAuxiliary(nullptr, Butterfly::totalSize(0, outOfLineStorage, true, ArrayStorage::sizeFor(vectorLength)));
+        void* temp = vm.auxiliarySpace.tryAllocate(deferralContext, Butterfly::totalSize(0, outOfLineStorage, true, ArrayStorage::sizeFor(vectorLength)));
         if (!temp)
             return nullptr;
         butterfly = Butterfly::fromBase(temp, 0, outOfLineStorage);
@@ -347,7 +347,7 @@ bool JSArray::unshiftCountSlowCase(const AbstractLocker&, VM& vm, DeferGC&, bool
         allocatedNewStorage = false;
     } else {
         size_t newSize = Butterfly::totalSize(0, propertyCapacity, true, ArrayStorage::sizeFor(desiredCapacity));
-        newAllocBase = vm.heap.tryAllocateAuxiliary(this, newSize);
+        newAllocBase = vm.auxiliarySpace.tryAllocate(newSize);
         if (!newAllocBase)
             return false;
         newStorageCapacity = desiredCapacity;
index 65f721e..a3b3888 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -65,7 +65,7 @@ JSArrayBufferView::ConstructionContext::ConstructionContext(
         void* temp;
         size_t size = sizeOf(length, elementSize);
         if (size) {
-            temp = vm.heap.tryAllocateAuxiliary(nullptr, size);
+            temp = vm.auxiliarySpace.tryAllocate(nullptr, size);
             if (!temp)
                 return;
         } else
index e63feaf..dd8b53e 100644 (file)
@@ -1,7 +1,7 @@
 /*
  *  Copyright (C) 1999-2001 Harri Porten (porten@kde.org)
  *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
- *  Copyright (C) 2003, 2004, 2005, 2007, 2008, 2009, 2015 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
  *
  *  This library is free software; you can redistribute it and/or
  *  modify it under the terms of the GNU Library General Public
@@ -81,6 +81,12 @@ public:
 
     static const bool needsDestruction = false;
 
+    // Don't call this directly. Call JSC::subspaceFor<Type>(vm) instead.
+    // FIXME: Refer to Subspace by reference.
+    // https://bugs.webkit.org/show_bug.cgi?id=166988
+    template<typename CellType>
+    static Subspace* subspaceFor(VM&);
+
     static JSCell* seenMultipleCalleeObjects() { return bitwise_cast<JSCell*>(static_cast<uintptr_t>(1)); }
 
     enum CreatingEarlyCellTag { CreatingEarlyCell };
@@ -154,6 +160,7 @@ public:
     JS_EXPORT_PRIVATE static size_t estimatedSize(JSCell*);
 
     static void visitChildren(JSCell*, SlotVisitor&);
+    static void visitOutputConstraints(JSCell*, SlotVisitor&);
 
     JS_EXPORT_PRIVATE static void heapSnapshot(JSCell*, HeapSnapshotBuilder&);
 
@@ -290,4 +297,12 @@ inline To jsDynamicCast(JSValue from)
     return nullptr;
 }
 
+// FIXME: Refer to Subspace by reference.
+// https://bugs.webkit.org/show_bug.cgi?id=166988
+template<typename Type>
+inline Subspace* subspaceFor(VM& vm)
+{
+    return Type::template subspaceFor<Type>(vm);
+}
+
 } // namespace JSC
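
The dispatch works as follows: JSC::subspaceFor<T>(vm) forwards to T::subspaceFor<T>(vm), so a cell class routes its instances into a dedicated subspace simply by shadowing the static template. A hedged sketch (MyCustomCell and vm.myCustomSpace are placeholders; only cellSpace and destructibleCellSpace, seen below, come from the patch):

    class MyCustomCell : public JSCell {
    public:
        typedef JSCell Base;
        static const bool needsDestruction = true;

        template<typename CellType>
        static Subspace* subspaceFor(VM& vm)
        {
            return &vm.myCustomSpace; // Hypothetical Subspace member on VM.
        }
    };

    // Callers never name the space directly:
    //     void* cell = subspaceFor<MyCustomCell>(vm)->allocate(size);
    //
    // Classes that do not shadow subspaceFor fall through to JSCell::subspaceFor<>,
    // which picks vm.destructibleCellSpace or vm.cellSpace based on needsDestruction.
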
index b7f181b..283c7ab 100644 (file)
@@ -120,6 +120,10 @@ inline void JSCell::visitChildren(JSCell* cell, SlotVisitor& visitor)
     visitor.appendUnbarriered(cell->structure(visitor.vm()));
 }
 
+inline void JSCell::visitOutputConstraints(JSCell*, SlotVisitor&)
+{
+}
+
 ALWAYS_INLINE VM& ExecState::vm() const
 {
     ASSERT(callee());
@@ -129,12 +133,20 @@ ALWAYS_INLINE VM& ExecState::vm() const
     return *callee()->markedBlock().vm();
 }
 
+template<typename CellType>
+Subspace* JSCell::subspaceFor(VM& vm)
+{
+    if (CellType::needsDestruction)
+        return &vm.destructibleCellSpace;
+    return &vm.cellSpace;
+}
+
 template<typename T>
 void* allocateCell(Heap& heap, size_t size)
 {
     ASSERT(!DisallowGC::isGCDisallowedOnCurrentThread());
     ASSERT(size >= sizeof(T));
-    JSCell* result = static_cast<JSCell*>(heap.allocateObjectOfType<T>(size));
+    JSCell* result = static_cast<JSCell*>(subspaceFor<T>(*heap.vm())->allocate(size));
 #if ENABLE(GC_VALIDATION)
     ASSERT(!heap.vm()->isInitializingObject());
     heap.vm()->setInitializingObjectClass(T::info());
@@ -153,7 +165,7 @@ template<typename T>
 void* allocateCell(Heap& heap, GCDeferralContext* deferralContext, size_t size)
 {
     ASSERT(size >= sizeof(T));
-    JSCell* result = static_cast<JSCell*>(heap.allocateObjectOfType<T>(deferralContext, size));
+    JSCell* result = static_cast<JSCell*>(subspaceFor<T>(*heap.vm())->allocate(deferralContext, size));
 #if ENABLE(GC_VALIDATION)
     ASSERT(!heap.vm()->isInitializingObject());
     heap.vm()->setInitializingObjectClass(T::info());
index 49b3884..0c10f2d 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -36,6 +36,12 @@ public:
     typedef JSNonFinalObject Base;
 
     static const bool needsDestruction = true;
+    
+    template<typename CellType>
+    static Subspace* subspaceFor(VM& vm)
+    {
+        return &vm.destructibleObjectSpace;
+    }
 
     const ClassInfo* classInfo() const { return m_classInfo; }
     
diff --git a/Source/JavaScriptCore/runtime/JSDestructibleObjectSubspace.cpp b/Source/JavaScriptCore/runtime/JSDestructibleObjectSubspace.cpp
new file mode 100644 (file)
index 0000000..544eff2
--- /dev/null
@@ -0,0 +1,66 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "JSDestructibleObjectSubspace.h"
+
+#include "MarkedBlockInlines.h"
+#include "JSCInlines.h"
+#include "SubspaceInlines.h"
+
+namespace JSC {
+
+namespace {
+
+struct DestroyFunc {
+    ALWAYS_INLINE void operator()(VM&, JSCell* cell) const
+    {
+        static_cast<JSDestructibleObject*>(cell)->classInfo()->methodTable.destroy(cell);
+    }
+};
+
+} // anonymous namespace
+
+JSDestructibleObjectSubspace::JSDestructibleObjectSubspace(CString name, Heap& heap)
+    : Subspace(name, heap, AllocatorAttributes(NeedsDestruction, HeapCell::JSCell))
+{
+}
+
+JSDestructibleObjectSubspace::~JSDestructibleObjectSubspace()
+{
+}
+
+FreeList JSDestructibleObjectSubspace::finishSweep(MarkedBlock::Handle& handle, MarkedBlock::Handle::SweepMode sweepMode)
+{
+    return handle.finishSweepKnowingSubspace(sweepMode, DestroyFunc());
+}
+
+void JSDestructibleObjectSubspace::destroy(VM& vm, JSCell* cell)
+{
+    DestroyFunc()(vm, cell);
+}
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/runtime/JSDestructibleObjectSubspace.h b/Source/JavaScriptCore/runtime/JSDestructibleObjectSubspace.h
new file mode 100644 (file)
index 0000000..0538a7a
--- /dev/null
@@ -0,0 +1,42 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "Subspace.h"
+
+namespace JSC {
+
+class JSDestructibleObjectSubspace : public Subspace {
+public:
+    JS_EXPORT_PRIVATE JSDestructibleObjectSubspace(CString name, Heap&);
+    JS_EXPORT_PRIVATE virtual ~JSDestructibleObjectSubspace();
+    
+    FreeList finishSweep(MarkedBlock::Handle&, MarkedBlock::Handle::SweepMode) override;
+    void destroy(VM&, JSCell*) override;
+};
+
+} // namespace JSC
+
index cf5974d..da5810a 100644 (file)
@@ -1,7 +1,7 @@
 /*
  *  Copyright (C) 1999-2001 Harri Porten (porten@kde.org)
  *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
- *  Copyright (C) 2003-2009, 2012-2016 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
  *
  *  This library is free software; you can redistribute it and/or
  *  modify it under the terms of the GNU Library General Public
@@ -1280,7 +1280,6 @@ inline JSObject::JSObject(VM& vm, Structure* structure, Butterfly* butterfly)
     : JSCell(vm, structure)
     , m_butterfly(vm, this, butterfly)
 {
-    vm.heap.ascribeOwner(this, butterfly);
 }
 
 inline JSValue JSObject::getPrototypeDirect() const
index a69dfbe..2cb9dde 100644 (file)
@@ -1,7 +1,7 @@
 /*
  *  Copyright (C) 1999-2001 Harri Porten (porten@kde.org)
  *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
- *  Copyright (C) 2003-2006, 2008, 2009, 2012-2016 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
  *  Copyright (C) 2007 Eric Seidel (eric@webkit.org)
  *
  *  This library is free software; you can redistribute it and/or
index d5c3195..212f957 100644 (file)
@@ -85,7 +85,7 @@ public:
     
     JS_EXPORT_PRIVATE static void visitChildren(JSCell*, SlotVisitor&);
     JS_EXPORT_PRIVATE static void heapSnapshot(JSCell*, HeapSnapshotBuilder&);
-
+    
 protected:
     JSSegmentedVariableObject(VM& vm, Structure* structure, JSScope* scope)
         : JSSymbolTableObject(vm, structure, scope)
@@ -99,6 +99,8 @@ protected:
     }
     
 private:
+    // FIXME: This needs a destructor, which can only be added using a custom subspace.
+    
     SegmentedVector<WriteBarrier<Unknown>, 16> m_variables;
     ConcurrentJSLock m_lock;
 };
index 1f77819..bcb5405 100644 (file)
@@ -1,7 +1,7 @@
 /*
  *  Copyright (C) 1999-2001 Harri Porten (porten@kde.org)
  *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
- *  Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2014, 2016 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
  *
  *  This library is free software; you can redistribute it and/or
  *  modify it under the terms of the GNU Library General Public
@@ -85,9 +85,17 @@ public:
 
     static const bool needsDestruction = true;
     static void destroy(JSCell*);
-
+    
+    // We specialize the string subspace to get the fastest possible sweep. This wouldn't be
+    // necessary if JSString didn't have a destructor.
+    template<typename>
+    static Subspace* subspaceFor(VM& vm)
+    {
+        return &vm.stringSpace;
+    }
+    
     static const unsigned MaxLength = std::numeric_limits<int32_t>::max();
-
+    
 private:
     JSString(VM& vm, PassRefPtr<StringImpl> value)
         : JSCell(vm, vm.stringStructure.get())
@@ -234,6 +242,8 @@ private:
     friend JSString* jsSubstring(ExecState*, JSString*, unsigned offset, unsigned length);
 };
 
+// NOTE: This class cannot override JSString's destructor. JSString's destructor is called directly
+// and non-virtually (as JSString::~JSString()) from JSStringSubspace's specialized sweep and from
+// JSStringSubspace::destroy(), so a subclass destructor would never run.
 class JSRopeString final : public JSString {
     friend class JSString;
 
diff --git a/Source/JavaScriptCore/runtime/JSStringSubspace.cpp b/Source/JavaScriptCore/runtime/JSStringSubspace.cpp
new file mode 100644 (file)
index 0000000..40130c4
--- /dev/null
@@ -0,0 +1,66 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "JSStringSubspace.h"
+
+#include "MarkedBlockInlines.h"
+#include "JSCInlines.h"
+#include "SubspaceInlines.h"
+
+namespace JSC {
+
+namespace {
+
+struct DestroyFunc {
+    ALWAYS_INLINE void operator()(VM&, JSCell* cell) const
+    {
+        static_cast<JSString*>(cell)->JSString::~JSString();
+    }
+};
+
+} // anonymous namespace
+
+JSStringSubspace::JSStringSubspace(CString name, Heap& heap)
+    : Subspace(name, heap, AllocatorAttributes(NeedsDestruction, HeapCell::JSCell))
+{
+}
+
+JSStringSubspace::~JSStringSubspace()
+{
+}
+
+FreeList JSStringSubspace::finishSweep(MarkedBlock::Handle& handle, MarkedBlock::Handle::SweepMode sweepMode)
+{
+    return handle.finishSweepKnowingSubspace(sweepMode, DestroyFunc());
+}
+
+void JSStringSubspace::destroy(VM& vm, JSCell* cell)
+{
+    DestroyFunc()(vm, cell);
+}
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/runtime/JSStringSubspace.h b/Source/JavaScriptCore/runtime/JSStringSubspace.h
new file mode 100644 (file)
index 0000000..6accea0
--- /dev/null
@@ -0,0 +1,42 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "Subspace.h"
+
+namespace JSC {
+
+class JSStringSubspace : public Subspace {
+public:
+    JS_EXPORT_PRIVATE JSStringSubspace(CString name, Heap&);
+    JS_EXPORT_PRIVATE virtual ~JSStringSubspace();
+    
+    FreeList finishSweep(MarkedBlock::Handle&, MarkedBlock::Handle::SweepMode) override;
+    void destroy(VM&, JSCell*) override;
+};
+
+} // namespace JSC
+
index 237e0df..8465fbf 100644 (file)
@@ -1,5 +1,5 @@
 /*
- *  Copyright (C) 2008, 2016 Apple Inc. All Rights Reserved.
+ *  Copyright (C) 2008-2017 Apple Inc. All Rights Reserved.
  *
  *  This library is free software; you can redistribute it and/or
  *  modify it under the terms of the GNU Lesser General Public
@@ -38,7 +38,7 @@ ALWAYS_INLINE JSArray* tryCreateUninitializedRegExpMatchesArray(VM& vm, GCDeferr
     if (vectorLength > MAX_STORAGE_VECTOR_LENGTH)
         return 0;
 
-    void* temp = vm.heap.tryAllocateAuxiliary(deferralContext, nullptr, Butterfly::totalSize(0, structure->outOfLineCapacity(), true, vectorLength * sizeof(EncodedJSValue)));
+    void* temp = vm.auxiliarySpace.tryAllocate(deferralContext, Butterfly::totalSize(0, structure->outOfLineCapacity(), true, vectorLength * sizeof(EncodedJSValue)));
     if (!temp)
         return nullptr;
     Butterfly* butterfly = Butterfly::fromBase(temp, 0, structure->outOfLineCapacity());
index 2abc8a2..2e14ee0 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2008, 2011, 2013-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2008-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -166,6 +166,11 @@ VM::VM(VMType vmType, HeapType heapType)
     , executableAllocator(*this)
 #endif
     , heap(this, heapType)
+    , auxiliarySpace("Auxiliary", heap, AllocatorAttributes(DoesNotNeedDestruction, HeapCell::Auxiliary))
+    , cellSpace("JSCell", heap, AllocatorAttributes(DoesNotNeedDestruction, HeapCell::JSCell))
+    , destructibleCellSpace("Destructible JSCell", heap, AllocatorAttributes(NeedsDestruction, HeapCell::JSCell))
+    , stringSpace("JSString", heap)
+    , destructibleObjectSpace("JSDestructibleObject", heap)
     , vmType(vmType)
     , clientData(0)
     , topVMEntryFrame(nullptr)
index b5f1499..0c6e134 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2008, 2009, 2013-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2008-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -40,7 +40,9 @@
 #include "Intrinsic.h"
 #include "JITThunks.h"
 #include "JSCJSValue.h"
+#include "JSDestructibleObjectSubspace.h"
 #include "JSLock.h"
+#include "JSStringSubspace.h"
 #include "MacroAssemblerCodeRef.h"
 #include "Microtask.h"
 #include "NumericStrings.h"
@@ -49,6 +51,7 @@
 #include "SmallStrings.h"
 #include "SourceCode.h"
 #include "Strong.h"
+#include "Subspace.h"
 #include "TemplateRegistryKeyTable.h"
 #include "ThunkGenerators.h"
 #include "VMEntryRecord.h"
@@ -287,6 +290,14 @@ public:
     // The heap should be just after executableAllocator and before other members to ensure that it's
     // destructed after all the objects that reference it.
     Heap heap;
+    
+    Subspace auxiliarySpace;
+    
+    // Whenever possible, use subspaceFor<CellType>(vm) to get one of these subspaces.
+    Subspace cellSpace;
+    Subspace destructibleCellSpace;
+    JSStringSubspace stringSpace;
+    JSDestructibleObjectSubspace destructibleObjectSpace;
 
 #if ENABLE(DFG_JIT)
     std::unique_ptr<DFG::LongLivedState> dfgState;
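
For orientation, here is a minimal sketch of how the subspaceFor<> dispatch used above presumably resolves; the forwarding template and its exact home are assumptions, but the pattern matches the JSString::subspaceFor override earlier in this patch and the per-type spaces added to VM here:

    // Sketch only: a free function that defers to the cell type's own subspaceFor<>.
    namespace JSC {

    template<typename CellType>
    Subspace* subspaceFor(VM& vm)
    {
        // Cell classes that need a dedicated space (e.g. JSString) override this static
        // template; everything else inherits a default from its base class.
        return CellType::template subspaceFor<CellType>(vm);
    }

    } // namespace JSC

    // So, for example:
    //   JSC::subspaceFor<JSString>(vm) resolves to &vm.stringSpace (per the JSString.h hunk above),
    //   while a destructor-bearing JSDestructibleObject subclass would land in
    //   vm.destructibleObjectSpace.
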
index 6aac796..c14ec43 100644 (file)
@@ -1189,6 +1189,7 @@ set(WebCore_SOURCES
     bindings/js/ScriptState.cpp
     bindings/js/StructuredClone.cpp
     bindings/js/SerializedScriptValue.cpp
+    bindings/js/WebCoreJSClientData.cpp
     bindings/js/WebCoreTypedArrayController.cpp
     bindings/js/WorkerScriptController.cpp
 
index 3fe13bb..232d561 100644 (file)
@@ -1,3 +1,58 @@
+2017-01-16  Filip Pizlo  <fpizlo@apple.com>
+
+        Make opaque root scanning truly constraint-based
+        https://bugs.webkit.org/show_bug.cgi?id=165760
+
+        Reviewed by Geoffrey Garen.
+
+        No new tests yet. I think that writing tests for this is a big investigation:
+        https://bugs.webkit.org/show_bug.cgi?id=165808
+        
+        Remove the previous advancing wavefront DOM write barrier. I don't think this will scale
+        very well. It's super confusing.
+        
+        This change makes it so that visitAdditionalChildren can become a GC constraint that
+        executes as part of the fixpoint. This changes all WebCore visitAdditionalChildren into
+        output constraints by using new JSC API for Subspaces and MarkingConstraints.
+
+        * ForwardingHeaders/heap/MarkedAllocatorInlines.h: Added.
+        * ForwardingHeaders/heap/MarkedBlockInlines.h: Added.
+        * ForwardingHeaders/heap/MarkingConstraint.h: Added.
+        * ForwardingHeaders/heap/SubspaceInlines.h: Added.
+        * ForwardingHeaders/heap/VisitingTimeout.h: Added.
+        * WebCore.xcodeproj/project.pbxproj:
+        * bindings/js/CommonVM.cpp:
+        (WebCore::commonVMSlow):
+        (WebCore::writeBarrierOpaqueRootSlow): Deleted.
+        * bindings/js/CommonVM.h:
+        (WebCore::writeBarrierOpaqueRoot): Deleted.
+        * bindings/js/JSDOMGlobalObject.cpp:
+        (WebCore::JSDOMGlobalObject::finishCreation):
+        (WebCore::JSDOMGlobalObject::scriptExecutionContext):
+        * bindings/js/JSDOMWrapper.cpp:
+        (WebCore::outputConstraintSubspaceFor):
+        (WebCore::globalObjectOutputConstraintSubspaceFor):
+        * bindings/js/JSDOMWrapper.h:
+        * bindings/js/WebCoreJSClientData.cpp: Added.
+        (WebCore::JSVMClientData::JSVMClientData):
+        (WebCore::JSVMClientData::~JSVMClientData):
+        (WebCore::JSVMClientData::getAllWorlds):
+        (WebCore::initNormalWorldClientData):
+        * bindings/js/WebCoreJSClientData.h:
+        (WebCore::JSVMClientData::outputConstraintSpace):
+        (WebCore::JSVMClientData::globalObjectOutputConstraintSpace):
+        (WebCore::JSVMClientData::forEachOutputConstraintSpace):
+        (WebCore::JSVMClientData::JSVMClientData): Deleted.
+        (WebCore::JSVMClientData::~JSVMClientData): Deleted.
+        (WebCore::JSVMClientData::getAllWorlds): Deleted.
+        (WebCore::initNormalWorldClientData): Deleted.
+        * bindings/scripts/CodeGeneratorJS.pm:
+        (GenerateHeader):
+        (GenerateImplementation):
+        * dom/ContainerNodeAlgorithms.cpp:
+        (WebCore::notifyChildNodeInserted):
+        (WebCore::notifyChildNodeRemoved):
+
 2017-01-17  Michael Catanzaro  <mcatanzaro@igalia.com>
 
         Unreviewed, rolling out r210834
diff --git a/Source/WebCore/ForwardingHeaders/heap/MarkedAllocatorInlines.h b/Source/WebCore/ForwardingHeaders/heap/MarkedAllocatorInlines.h
new file mode 100644 (file)
index 0000000..02abc96
--- /dev/null
@@ -0,0 +1,2 @@
+#pragma once
+#include <JavaScriptCore/MarkedAllocatorInlines.h>
diff --git a/Source/WebCore/ForwardingHeaders/heap/MarkedBlockInlines.h b/Source/WebCore/ForwardingHeaders/heap/MarkedBlockInlines.h
new file mode 100644 (file)
index 0000000..0dd0be8
--- /dev/null
@@ -0,0 +1,2 @@
+#pragma once
+#include <JavaScriptCore/MarkedBlockInlines.h>
diff --git a/Source/WebCore/ForwardingHeaders/heap/MarkingConstraint.h b/Source/WebCore/ForwardingHeaders/heap/MarkingConstraint.h
new file mode 100644 (file)
index 0000000..6a00f52
--- /dev/null
@@ -0,0 +1,2 @@
+#pragma once
+#include <JavaScriptCore/MarkingConstraint.h>
diff --git a/Source/WebCore/ForwardingHeaders/heap/SubspaceInlines.h b/Source/WebCore/ForwardingHeaders/heap/SubspaceInlines.h
new file mode 100644 (file)
index 0000000..35678cf
--- /dev/null
@@ -0,0 +1,2 @@
+#pragma once
+#include <JavaScriptCore/SubspaceInlines.h>
diff --git a/Source/WebCore/ForwardingHeaders/heap/VisitingTimeout.h b/Source/WebCore/ForwardingHeaders/heap/VisitingTimeout.h
new file mode 100644 (file)
index 0000000..d427a5a
--- /dev/null
@@ -0,0 +1,2 @@
+#pragma once
+#include <JavaScriptCore/VisitingTimeout.h>
index 4124703..cdeb636 100644 (file)
                0F6A12BD1A00923700C6DE72 /* DebugPageOverlays.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6A12BB1A00923700C6DE72 /* DebugPageOverlays.cpp */; };
                0F6A12BE1A00923700C6DE72 /* DebugPageOverlays.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6A12BC1A00923700C6DE72 /* DebugPageOverlays.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F7D07331884C56C00B4AF86 /* PlatformTextTrack.h in Headers */ = {isa = PBXBuildFile; fileRef = 072847E216EBC5B00043CFA4 /* PlatformTextTrack.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F7DF1481E2BF1B10095951B /* WebCoreJSClientData.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7DF1471E2BF1A60095951B /* WebCoreJSClientData.cpp */; };
                0F87166F1C869D83004FF0DE /* LengthPoint.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F87166D1C869D83004FF0DE /* LengthPoint.cpp */; };
                0F8716701C869D83004FF0DE /* LengthPoint.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F87166E1C869D83004FF0DE /* LengthPoint.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F8B45721DC3FBA300443C3F /* IntersectionObserverCallback.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F8B45711DC3FBA300443C3F /* IntersectionObserverCallback.h */; };
                0F6383DC18615B29003E5DB5 /* ThreadedScrollingTree.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ThreadedScrollingTree.h; sourceTree = "<group>"; };
                0F6A12BB1A00923700C6DE72 /* DebugPageOverlays.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = DebugPageOverlays.cpp; sourceTree = "<group>"; };
                0F6A12BC1A00923700C6DE72 /* DebugPageOverlays.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = DebugPageOverlays.h; sourceTree = "<group>"; };
+               0F7DF1471E2BF1A60095951B /* WebCoreJSClientData.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = WebCoreJSClientData.cpp; sourceTree = "<group>"; };
                0F87166D1C869D83004FF0DE /* LengthPoint.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = LengthPoint.cpp; sourceTree = "<group>"; };
                0F87166E1C869D83004FF0DE /* LengthPoint.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LengthPoint.h; sourceTree = "<group>"; };
                0F8B456F1DC3FB1000443C3F /* IntersectionObserverCallback.idl */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; path = IntersectionObserverCallback.idl; sourceTree = "<group>"; };
                                414B82031D6DF0D90077EBE3 /* StructuredClone.h */,
                                419BE7521BC7F3DB00E1C85B /* WebCoreBuiltinNames.h */,
                                BC53D910114310CC000D817E /* WebCoreJSClientData.h */,
+                               0F7DF1471E2BF1A60095951B /* WebCoreJSClientData.cpp */,
                                0F099D0617B968A100FF84B9 /* WebCoreTypedArrayController.cpp */,
                                0F099D0717B968A100FF84B9 /* WebCoreTypedArrayController.h */,
                                E1A643FC0EC097A000779668 /* WorkerScriptController.cpp */,
                                598365E61355F60D001B185D /* JSPositionErrorCallback.cpp in Sources */,
                                7C330A071DF9F95100D3395C /* JSPositionOptions.cpp in Sources */,
                                65DF31FF09D1CC60000BE325 /* JSProcessingInstruction.cpp in Sources */,
+                               0F7DF1481E2BF1B10095951B /* WebCoreJSClientData.cpp in Sources */,
                                E44613ED0CD681BA00FADA75 /* JSProgressEvent.cpp in Sources */,
                                077664FC183E6B5C00133B92 /* JSQuickTimePluginReplacement.cpp in Sources */,
                                B658FFA11522EF3A00DD5595 /* JSRadioNodeList.cpp in Sources */,
index 2116862..b1b06aa 100644 (file)
@@ -38,7 +38,6 @@ using namespace JSC;
 namespace WebCore {
 
 VM* g_commonVMOrNull;
-bool g_opaqueRootWriteBarrierEnabled;
 
 VM& commonVMSlow()
 {
@@ -60,18 +59,11 @@ VM& commonVMSlow()
 #endif
     
     g_commonVMOrNull->setGlobalConstRedeclarationShouldThrow(Settings::globalConstRedeclarationShouldThrow());
-    g_commonVMOrNull->heap.addMutatorShouldBeFencedCache(g_opaqueRootWriteBarrierEnabled);
     
-    initNormalWorldClientData(g_commonVMOrNull);
+    JSVMClientData::initNormalWorld(g_commonVMOrNull);
     
     return *g_commonVMOrNull;
 }
 
-void writeBarrierOpaqueRootSlow(void* root)
-{
-    if (VM* vm = g_commonVMOrNull)
-        vm->heap.writeBarrierOpaqueRoot(root);
-}
-
 } // namespace WebCore
 
index 244ccdd..9c69675 100644 (file)
@@ -32,10 +32,8 @@ class VM;
 namespace WebCore {
 
 WEBCORE_EXPORT extern JSC::VM* g_commonVMOrNull;
-WEBCORE_EXPORT extern bool g_opaqueRootWriteBarrierEnabled;
 
 WEBCORE_EXPORT JSC::VM& commonVMSlow();
-WEBCORE_EXPORT void writeBarrierOpaqueRootSlow(void*);
 
 inline JSC::VM& commonVM()
 {
@@ -44,12 +42,5 @@ inline JSC::VM& commonVM()
     return commonVMSlow();
 }
 
-template<typename Func>
-void writeBarrierOpaqueRoot(const Func& rootThunk)
-{
-    if (g_opaqueRootWriteBarrierEnabled)
-        writeBarrierOpaqueRootSlow(rootThunk());
-}
-
 } // namespace WebCore
 
index d1e0944..b07fb26 100644 (file)
@@ -161,6 +161,8 @@ void JSDOMGlobalObject::finishCreation(VM& vm)
     ASSERT(inherits(info()));
 
     addBuiltinGlobals(vm);
+    
+    RELEASE_ASSERT(classInfo());
 }
 
 void JSDOMGlobalObject::finishCreation(VM& vm, JSObject* thisValue)
@@ -169,6 +171,8 @@ void JSDOMGlobalObject::finishCreation(VM& vm, JSObject* thisValue)
     ASSERT(inherits(info()));
 
     addBuiltinGlobals(vm);
+    
+    RELEASE_ASSERT(classInfo());
 }
 
 ScriptExecutionContext* JSDOMGlobalObject::scriptExecutionContext() const
@@ -177,7 +181,8 @@ ScriptExecutionContext* JSDOMGlobalObject::scriptExecutionContext() const
         return jsCast<const JSDOMWindowBase*>(this)->scriptExecutionContext();
     if (inherits(JSWorkerGlobalScopeBase::info()))
         return jsCast<const JSWorkerGlobalScopeBase*>(this)->scriptExecutionContext();
-    ASSERT_NOT_REACHED();
+    dataLog("Unexpected global object: ", JSValue(this), "\n");
+    RELEASE_ASSERT_NOT_REACHED();
     return 0;
 }
 
index 325e72b..21da863 100644 (file)
@@ -28,6 +28,7 @@
 
 #include "DOMWrapperWorld.h"
 #include "JSDOMWindow.h"
+#include "WebCoreJSClientData.h"
 #include <runtime/Error.h>
 
 using namespace JSC;
@@ -43,4 +44,14 @@ JSDOMWindow& JSDOMObject::domWindow() const
     return *domWindow;
 }
 
+Subspace* outputConstraintSubspaceFor(VM& vm)
+{
+    return &static_cast<JSVMClientData*>(vm.clientData)->outputConstraintSpace();
+}
+
+Subspace* globalObjectOutputConstraintSubspaceFor(VM& vm)
+{
+    return &static_cast<JSVMClientData*>(vm.clientData)->globalObjectOutputConstraintSpace();
+}
+
 } // namespace WebCore
index 6f37462..11d439c 100644 (file)
@@ -73,12 +73,15 @@ protected:
     }
 };
 
+WEBCORE_EXPORT JSC::Subspace* outputConstraintSubspaceFor(JSC::VM&);
+WEBCORE_EXPORT JSC::Subspace* globalObjectOutputConstraintSubspaceFor(JSC::VM&);
+
 template<typename ImplementationClass> class JSDOMWrapper : public JSDOMObject {
 public:
     typedef JSDOMObject Base;
     typedef ImplementationClass DOMWrapped;
     static constexpr bool isDOMWrapper = true;
-
+    
     ImplementationClass& wrapped() const { return m_wrapped; }
     static ptrdiff_t offsetOfWrapped() { return OBJECT_OFFSETOF(JSDOMWrapper<ImplementationClass>, m_wrapped); }
 
diff --git a/Source/WebCore/bindings/js/WebCoreJSClientData.cpp b/Source/WebCore/bindings/js/WebCoreJSClientData.cpp
new file mode 100644 (file)
index 0000000..5464e91
--- /dev/null
@@ -0,0 +1,117 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "WebCoreJSClientData.h"
+
+#include "JSDOMBinding.h"
+#include <heap/HeapInlines.h>
+#include <heap/MarkingConstraint.h>
+#include <heap/MarkedAllocatorInlines.h>
+#include <heap/MarkedBlockInlines.h>
+#include <heap/SubspaceInlines.h>
+#include <heap/VisitingTimeout.h>
+#include <runtime/VM.h>
+#include <wtf/MainThread.h>
+
+using namespace JSC;
+
+namespace WebCore {
+
+JSVMClientData::JSVMClientData(VM& vm)
+    : m_builtinFunctions(vm)
+    , m_builtinNames(&vm)
+    , m_outputConstraintSpace("WebCore Wrapper w/ Output Constraint", vm.heap)
+    , m_globalObjectOutputConstraintSpace("WebCore Global Object w/ Output Constraint", vm.heap, AllocatorAttributes(DoesNotNeedDestruction, HeapCell::JSCell))
+{
+}
+
+JSVMClientData::~JSVMClientData()
+{
+    ASSERT(m_worldSet.contains(m_normalWorld.get()));
+    ASSERT(m_worldSet.size() == 1);
+    ASSERT(m_normalWorld->hasOneRef());
+    m_normalWorld = nullptr;
+    ASSERT(m_worldSet.isEmpty());
+}
+
+void JSVMClientData::getAllWorlds(Vector<Ref<DOMWrapperWorld>>& worlds)
+{
+    ASSERT(worlds.isEmpty());
+    
+    worlds.reserveInitialCapacity(m_worldSet.size());
+    for (auto it = m_worldSet.begin(), end = m_worldSet.end(); it != end; ++it)
+        worlds.uncheckedAppend(*(*it));
+}
+
+void JSVMClientData::initNormalWorld(VM* vm)
+{
+    JSVMClientData* clientData = new JSVMClientData(*vm);
+    vm->clientData = clientData; // ~VM deletes this pointer.
+    
+    auto constraint = std::make_unique<MarkingConstraint>(
+        "Wcoc", "WebCore Output Constraints",
+        [vm, clientData, lastExecutionVersion = vm->heap.mutatorExecutionVersion()]
+        (SlotVisitor& slotVisitor, const VisitingTimeout&) mutable {
+            Heap& heap = vm->heap;
+            
+            if (heap.mutatorExecutionVersion() == lastExecutionVersion)
+                return;
+            
+            lastExecutionVersion = heap.mutatorExecutionVersion();
+
+            // We have to manage the visit count here ourselves: if this pass adds opaque roots, the
+            // constraint solver must not declare termination yet, and the way we signal that is by
+            // adding to the visit count.
+            
+            size_t numOpaqueRootsBefore = heap.numOpaqueRoots();
+
+            // FIXME: Make this parallel!
+            unsigned numRevisited = 0;
+            clientData->forEachOutputConstraintSpace(
+                [&] (Subspace& subspace) {
+                    subspace.forEachMarkedCell(
+                        [&] (HeapCell* heapCell, HeapCell::Kind) {
+                            JSCell* cell = static_cast<JSCell*>(heapCell);
+                            cell->methodTable(*vm)->visitOutputConstraints(cell, slotVisitor);
+                            numRevisited++;
+                        });
+                });
+            if (Options::logGC())
+                dataLog("(", numRevisited, ")");
+            
+            slotVisitor.mergeIfNecessary();
+            
+            slotVisitor.addToVisitCount(heap.numOpaqueRoots() - numOpaqueRootsBefore);
+        },
+        ConstraintVolatility::SeldomGreyed);
+    vm->heap.addMarkingConstraint(WTFMove(constraint));
+        
+    clientData->m_normalWorld = DOMWrapperWorld::create(*vm, true);
+    vm->m_typedArrayController = adoptRef(new WebCoreTypedArrayController());
+}
+
+} // namespace WebCore
+
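
Stripped of the mutator-execution-version check and the opaque-root/visit-count accounting, the constraint registered above boils down to the following pattern, which any internal C++ API client could follow for its own output-constraint subspace (a sketch under assumed names: myConstraintSpace is a client-owned Subspace; a real constraint should keep the bookkeeping shown in initNormalWorld):

    // Sketch: re-run output constraints over every marked cell in a client-owned Subspace.
    auto constraint = std::make_unique<MarkingConstraint>(
        "Xoc", "Example Output Constraints",
        [&vm, &myConstraintSpace] (SlotVisitor& visitor, const VisitingTimeout&) {
            myConstraintSpace.forEachMarkedCell(
                [&] (HeapCell* heapCell, HeapCell::Kind) {
                    JSCell* cell = static_cast<JSCell*>(heapCell);
                    cell->methodTable(vm)->visitOutputConstraints(cell, visitor);
                });
            visitor.mergeIfNecessary();
        },
        ConstraintVolatility::SeldomGreyed);
    vm.heap.addMarkingConstraint(WTFMove(constraint));
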
index f797351..9ef6819 100644 (file)
@@ -1,6 +1,6 @@
 /*
  *  Copyright (C) 1999-2001 Harri Porten (porten@kde.org)
- *  Copyright (C) 2003, 2004, 2005, 2006, 2008, 2009 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
  *  Copyright (C) 2007 Samuel Weinig <sam@webkit.org>
  *  Copyright (C) 2009 Google, Inc. All rights reserved.
  *
@@ -33,34 +33,17 @@ namespace WebCore {
 class JSVMClientData : public JSC::VM::ClientData {
     WTF_MAKE_NONCOPYABLE(JSVMClientData); WTF_MAKE_FAST_ALLOCATED;
     friend class VMWorldIterator;
-    friend void initNormalWorldClientData(JSC::VM*);
 
 public:
-    explicit JSVMClientData(JSC::VM& vm)
-        : m_builtinFunctions(vm)
-        , m_builtinNames(&vm)
-    {
-    }
+    explicit JSVMClientData(JSC::VM&);
 
-    virtual ~JSVMClientData()
-    {
-        ASSERT(m_worldSet.contains(m_normalWorld.get()));
-        ASSERT(m_worldSet.size() == 1);
-        ASSERT(m_normalWorld->hasOneRef());
-        m_normalWorld = nullptr;
-        ASSERT(m_worldSet.isEmpty());
-    }
+    virtual ~JSVMClientData();
+    
+    WEBCORE_EXPORT static void initNormalWorld(JSC::VM*);
 
     DOMWrapperWorld& normalWorld() { return *m_normalWorld; }
 
-    void getAllWorlds(Vector<Ref<DOMWrapperWorld>>& worlds)
-    {
-        ASSERT(worlds.isEmpty());
-
-        worlds.reserveInitialCapacity(m_worldSet.size());
-        for (auto it = m_worldSet.begin(), end = m_worldSet.end(); it != end; ++it)
-            worlds.uncheckedAppend(*(*it));
-    }
+    void getAllWorlds(Vector<Ref<DOMWrapperWorld>>&);
 
     void rememberWorld(DOMWrapperWorld& world)
     {
@@ -76,6 +59,16 @@ public:
 
     WebCoreBuiltinNames& builtinNames() { return m_builtinNames; }
     JSBuiltinFunctions& builtinFunctions() { return m_builtinFunctions; }
+    
+    JSC::Subspace& outputConstraintSpace() { return m_outputConstraintSpace; }
+    JSC::Subspace& globalObjectOutputConstraintSpace() { return m_globalObjectOutputConstraintSpace; }
+    
+    template<typename Func>
+    void forEachOutputConstraintSpace(const Func& func)
+    {
+        func(m_outputConstraintSpace);
+        func(m_globalObjectOutputConstraintSpace);
+    }
 
 private:
     HashSet<DOMWrapperWorld*> m_worldSet;
@@ -83,14 +76,9 @@ private:
 
     JSBuiltinFunctions m_builtinFunctions;
     WebCoreBuiltinNames m_builtinNames;
+    
+    JSC::JSDestructibleObjectSubspace m_outputConstraintSpace;
+    JSC::Subspace m_globalObjectOutputConstraintSpace;
 };
 
-inline void initNormalWorldClientData(JSC::VM* vm)
-{
-    JSVMClientData* clientData = new JSVMClientData(*vm);
-    vm->clientData = clientData; // ~VM deletes this pointer.
-    clientData->m_normalWorld = DOMWrapperWorld::create(*vm, true);
-    vm->m_typedArrayController = adoptRef(new WebCoreTypedArrayController());
-}
-
 } // namespace WebCore
index c73449f..9bb1470 100644 (file)
@@ -53,7 +53,7 @@ WorkerScriptController::WorkerScriptController(WorkerGlobalScope* workerGlobalSc
 {
     m_vm->heap.acquireAccess(); // It's not clear that we have good discipline for heap access, so turn it on permanently.
     m_vm->ensureWatchdog();
-    initNormalWorldClientData(m_vm.get());
+    JSVMClientData::initNormalWorld(m_vm.get());
 }
 
 WorkerScriptController::~WorkerScriptController()
index 71b26d4..37fc740 100644 (file)
@@ -1965,6 +1965,23 @@ sub GenerateHeader
         push(@headerContent, "    static void visitChildren(JSCell*, JSC::SlotVisitor&);\n");
         push(@headerContent, "    void visitAdditionalChildren(JSC::SlotVisitor&);\n") if $interface->extendedAttributes->{JSCustomMarkFunction};
         push(@headerContent, "\n");
+
+        if ($interface->extendedAttributes->{JSCustomMarkFunction}) {
+            # We assume that the logic in visitAdditionalChildren is highly volatile, and during a
+            # concurrent GC or in between eden GCs something may happen that would lead to this
+            # logic behaving differently. Since this could mark objects or add opaque roots, this
+            # means that after any increment of mutator resumption in a concurrent GC and at least
+            # once during any eden GC we need to re-execute visitAdditionalChildren on any objects
+            # that we had executed it on before. We do this using the DOM's own MarkingConstraint,
+            # which will call visitOutputConstraints on all objects in the DOM's own
+            # outputConstraintSubspace. visitOutputConstraints is the name JSC uses for the method
+            # that the GC calls to ask an object if it would like to mark anything else after the
+            # program resumed since the last call to visitChildren or visitOutputConstraints. Since
+            # this just calls visitAdditionalChildren, you usually don't have to worry about this.
+            push(@headerContent, "    static void visitOutputConstraints(JSCell*, JSC::SlotVisitor&);\n");
+            my $subspaceFunc = IsDOMGlobalObject($interface) ? "globalObjectOutputConstraintSubspaceFor" : "outputConstraintSubspaceFor";
+            push(@headerContent, "    template<typename> static JSC::Subspace* subspaceFor(JSC::VM& vm) { return $subspaceFunc(vm); }\n");
+        }
     }
 
     if (InstanceNeedsEstimatedSize($interface)) {
@@ -4134,6 +4151,15 @@ END
             }
         }
         push(@implContent, "}\n\n");
+        if ($interface->extendedAttributes->{JSCustomMarkFunction}) {
+            push(@implContent, "void ${className}::visitOutputConstraints(JSCell* cell, SlotVisitor& visitor)\n");
+            push(@implContent, "{\n");
+            push(@implContent, "    auto* thisObject = jsCast<${className}*>(cell);\n");
+            push(@implContent, "    ASSERT_GC_OBJECT_INHERITS(thisObject, info());\n");
+            push(@implContent, "    Base::visitOutputConstraints(thisObject, visitor);\n");
+            push(@implContent, "    thisObject->visitAdditionalChildren(visitor);\n");
+            push(@implContent, "}\n\n");
+        }
     }
 
     if (InstanceNeedsEstimatedSize($interface)) {
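
Concretely, for a hypothetical non-global interface Foo annotated with [JSCustomMarkFunction], the generator changes above would emit roughly this (JSFoo is an illustrative name; a DOM global object would use globalObjectOutputConstraintSubspaceFor instead of outputConstraintSubspaceFor):

    // In the generated JSFoo.h:
    static void visitOutputConstraints(JSCell*, JSC::SlotVisitor&);
    template<typename> static JSC::Subspace* subspaceFor(JSC::VM& vm) { return outputConstraintSubspaceFor(vm); }

    // In the generated JSFoo.cpp:
    void JSFoo::visitOutputConstraints(JSCell* cell, SlotVisitor& visitor)
    {
        auto* thisObject = jsCast<JSFoo*>(cell);
        ASSERT_GC_OBJECT_INHERITS(thisObject, info());
        Base::visitOutputConstraints(thisObject, visitor);
        thisObject->visitAdditionalChildren(visitor);
    }
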
index 7f3fea6..75369d6 100644 (file)
@@ -26,7 +26,6 @@
 #include "config.h"
 #include "ContainerNodeAlgorithms.h"
 
-#include "CommonVM.h"
 #include "HTMLFrameOwnerElement.h"
 #include "InspectorInstrumentation.h"
 #include "NoEventDispatchAssertion.h"
@@ -102,8 +101,6 @@ void notifyChildNodeInserted(ContainerNode& insertionPoint, Node& node, NodeVect
         notifyNodeInsertedIntoDocument(insertionPoint, node, postInsertionNotificationTargets);
     else if (is<ContainerNode>(node))
         notifyNodeInsertedIntoTree(insertionPoint, downcast<ContainerNode>(node), postInsertionNotificationTargets);
-
-    writeBarrierOpaqueRoot([&insertionPoint] () -> void* { return insertionPoint.opaqueRoot(); });
 }
 
 void notifyNodeRemovedFromDocument(ContainerNode& insertionPoint, Node& node)
@@ -155,8 +152,6 @@ void notifyNodeRemovedFromTree(ContainerNode& insertionPoint, ContainerNode& nod
 
 void notifyChildNodeRemoved(ContainerNode& insertionPoint, Node& child)
 {
-    writeBarrierOpaqueRoot([&child] () -> void* { return &child; });
-
     if (!child.inDocument()) {
         if (is<ContainerNode>(child))
             notifyNodeRemovedFromTree(insertionPoint, downcast<ContainerNode>(child));