Butterflies should be allocated in Auxiliary MarkedSpace instead of CopiedSpace and we should rewrite as much of the GC as needed to make this not a regression
author    fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Tue, 23 Aug 2016 19:52:08 +0000 (19:52 +0000)
committer fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Tue, 23 Aug 2016 19:52:08 +0000 (19:52 +0000)
https://bugs.webkit.org/show_bug.cgi?id=160125

Reviewed by Geoffrey Garen.
JSTests:

Most of the things I did were properly covered by existing tests, but I found some simple cases
of unshifting that had sketchy coverage.

* stress/array-storage-array-unshift.js: Added.
* stress/contiguous-array-unshift.js: Added.
* stress/double-array-unshift.js: Added.
* stress/int32-array-unshift.js: Added.

Source/bmalloc:

I needed a tryMemalign, so I added one.
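Roughly how it is meant to be used, assuming tryMemalign() takes (alignment, size) like
memalign() and returns null on failure instead of crashing:

    #include <bmalloc/bmalloc.h>
    #include <cstddef>

    // Hedged sketch: a null return is an expected outcome the caller handles,
    // unlike memalign(), which is assumed to treat failure as fatal.
    void* tryAllocateAligned(size_t alignment, size_t size)
    {
        return bmalloc::api::tryMemalign(alignment, size);
    }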

* bmalloc/Allocator.cpp:
(bmalloc::Allocator::allocate):
(bmalloc::Allocator::tryAllocate):
(bmalloc::Allocator::allocateImpl):
(bmalloc::Allocator::reallocate):
* bmalloc/Allocator.h:
* bmalloc/Cache.h:
(bmalloc::Cache::allocate):
(bmalloc::Cache::tryAllocate):
* bmalloc/bmalloc.h:
(bmalloc::api::malloc):
(bmalloc::api::tryMemalign):
(bmalloc::api::memalign):

Source/JavaScriptCore:

In order to make the GC concurrent (bug 149432), we would either need to enable concurrent
copying or we would need to not copy. Concurrent copying carries a 1-2% throughput overhead
from the barriers alone. Considering that MarkedSpace does a decent job of avoiding
fragmentation, it's unlikely that it's worth paying 1-2% throughput for copying. So, we want
to get rid of copied space. This change moves copied space's biggest client over to marked
space.

Moving butterflies to marked space means having them use the new Auxiliary HeapCell
allocation path. This is a fairly mechanical change, but it caused performance regressions
everywhere, so this change also fixes MarkedSpace's performance issues.

At a high level the mechanical changes are:

- We use AuxiliaryBarrier instead of CopyBarrier.

- We use tryAllocateAuxiliary instead of tryAllocateStorage. I got rid of the silly
  CheckedBoolean stuff, since it's so much more trouble than it's worth.

- The JITs have to emit inlined marked space allocations instead of inline copy space
  allocations.

- Everyone has to get used to zeroing their butterflies after allocation instead of relying
  on them being pre-zeroed by the GC. Copied space would zero things for you, while marked
  space doesn't.
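  As a minimal sketch of that last point (malloc() stands in for the real
  Heap::tryAllocateAuxiliary() path and the helper names are made up), callers now receive
  uninitialized auxiliary memory and zero it themselves:

      #include <cstddef>
      #include <cstdlib>
      #include <cstring>

      // Stand-in for the auxiliary allocation entry point; like marked space,
      // it hands back uninitialized memory.
      static void* tryAllocateAuxiliaryStandIn(size_t size)
      {
          return malloc(size);
      }

      static void* allocateButterflyPayload(size_t size)
      {
          void* storage = tryAllocateAuxiliaryStandIn(size);
          if (!storage)
              return nullptr;       // callers handle failure; no CheckedBoolean
          memset(storage, 0, size); // copied space used to pre-zero this for us
          return storage;
      }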

That's about 1/3 of this change. But this led to performance problems, which I fixed with
optimizations that amounted to a major MarkedSpace rewrite:

- MarkedSpace always causes internal fragmentation for array allocations because the vector
  length we choose when we resize usually leads to a cell size that doesn't correspond to any
  size class. I got around this by making array allocations usually round up vectorLength to
  the maximum allowed by the size class that we would have allocated in. Also,
  ensureLengthSlow() and friends first make sure that the requested length can't just be
  fulfilled with the current allocation size. This safeguard means that not every array
  allocation has to do size class queries. For example, the fast path of new Array(length)
  never does any size class queries, under the assumption that (1) the speed gained from
  avoiding an ensureLengthSlow() call, which then just changes the vectorLength by doing the
  size class query, is too small to offset the speed lost by doing the query on every
  allocation and (2) new Array(length) is a pretty good hint that resizing is not very
  likely.
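  A standalone sketch of that rounding, using an illustrative size-class table and 8-byte
  header/element sizes rather than the real MarkedSpace query:

      #include <cstddef>

      static const size_t sizeClasses[] = { 16, 32, 48, 64, 80, 112, 160, 224, 320 };

      static size_t cellSizeFor(size_t bytes)
      {
          for (size_t sizeClass : sizeClasses) {
              if (bytes <= sizeClass)
                  return sizeClass;
          }
          return bytes; // would be a large allocation; no rounding to do
      }

      static unsigned optimalVectorLength(unsigned requestedLength)
      {
          const size_t headerSize = 8;  // illustrative indexing header size
          const size_t elementSize = 8; // one JSValue or double per element
          size_t requestedBytes = headerSize + requestedLength * elementSize;
          size_t cellSize = cellSizeFor(requestedBytes);
          // Use the whole cell we are going to get anyway, so a later
          // ensureLengthSlow() can often succeed without reallocating.
          return static_cast<unsigned>((cellSize - headerSize) / elementSize);
      }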

- Size classes in MarkedSpace were way too precise, which led to external fragmentation. This
  changes MarkedSpace size classes to use a linear progression for very small sizes followed
  by a geometric progression that naturally transitions to a hyperbolic progression. We want
  hyperbolic sizes when we get close to blockSize: for example the largest size we want is
  payloadSize / 2 rounded down, to ensure we get exactly two cells with minimal slop. The
  next size down should be payloadSize / 3 rounded down, and so on. After the last precise
  size (80 bytes), we proceed using a geometric progression, but round up each size to
  minimize slop at the end of the block. This naturally causes the geometric progression to
  turn hyperbolic for large sizes. The size class configuration happens at VM start-up, so it
  can be controlled with runtime options. I found that a base of 1.4 works pretty well.
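  The following self-contained sketch reproduces the progression under assumed constants
  (16-byte atoms, roughly 64KB of block payload, precise sizes up to 80 bytes, a 1.4 base, and
  a 400 byte large-allocation cutoff); it mirrors the description, not the exact option-driven
  code:

      #include <cmath>
      #include <cstddef>
      #include <cstdio>
      #include <vector>

      int main()
      {
          const size_t atomSize = 16;                  // assumed MarkedBlock atom size
          const size_t blockPayload = 64 * 1024 - 256; // assumed usable bytes per block
          const size_t preciseCutoff = 80;
          const size_t largeCutoff = 400;
          const double progression = 1.4;

          auto roundUpToAtom = [&] (size_t size) {
              return (size + atomSize - 1) & ~(atomSize - 1);
          };

          std::vector<size_t> sizeClasses;

          // Linear progression for very small sizes.
          for (size_t size = atomSize; size <= preciseCutoff; size += atomSize)
              sizeClasses.push_back(size);

          // Geometric progression, with each candidate grown to soak up the slop
          // at the end of the block; near blockPayload this turns hyperbolic
          // (payload / 2, payload / 3, ...).
          for (unsigned i = 1; ; ++i) {
              size_t approximateSize = static_cast<size_t>(preciseCutoff * std::pow(progression, i));
              if (approximateSize > largeCutoff)
                  break;
              size_t cellsPerBlock = blockPayload / approximateSize;
              size_t betterSize = roundUpToAtom(blockPayload / cellsPerBlock);
              if (betterSize != sizeClasses.back())
                  sizeClasses.push_back(betterSize);
          }

          for (size_t size : sizeClasses)
              std::printf("%zu ", size); // prints: 16 32 48 64 80 112 160 224 320
          std::printf("\n");
          return 0;
      }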

- Large allocations caused massive internal fragmentation, since the smallest large
  allocation had to use exactly blockSize, and the largest small allocation used
  blockSize / 2. The next size up - the first large allocation size to require two blocks -
  also had 50% internal fragmentation. This is because we required large allocations to be
  blockSize aligned, so that MarkedBlock::blockFor() would work. I decided to rewrite all of
  that. Cells no longer have to be owned by a MarkedBlock. They can now alternatively be
  owned by a LargeAllocation. These two things are abstracted as CellContainer. You know that
  a cell is owned by a LargeAllocation if the MarkedBlock::atomSize / 2 bit is set.
  Basically, large allocations are deliberately misaligned by 8 bytes. This actually works
  out great since (1) typed arrays won't use large allocations anyway since they have their
  own malloc fallback and (2) large array butterflies already have an 8 byte header, which
  means that the 8 byte base misalignment aligns the large array payload on a 16 byte
  boundary. I took extreme care to make sure that the isLargeAllocation bit checks are as
  rare as possible; for example, ExecState::vm() skips the check because we know that callees
  must be small allocations. It's also possible to use template tricks to do one check for
  cell container kind, and then invoke a function specialized for MarkedBlock or a function
  specialized for LargeAllocation. LargeAllocation includes stubs for all MarkedBlock methods
  that get used from functions that are template-specialized like this. That's mostly to
  speed up the GC marking code. Most other code can use CellContainer API or HeapCell API
  directly. That's another thing: HeapCell, the common base of JSCell and auxiliary
  allocations, is now smart enough to do a lot of things for you, like HeapCell::vm(),
  HeapCell::heap(), HeapCell::isLargeAllocation(), and HeapCell::cellContainer(). The size
  cutoff for large allocations is runtime-configurable, so long as you don't choose something
  so small that callees end up large. I found that 400 bytes is roughly optimal. This means
  that the MarkedBlock size classes end up being:

  16, 32, 48, 64, 80, 112, 160, 224, 320

  The next size class would have been 432, but that's above the 400 byte cutoff. All of this
  is configurable with --sizeClassProgression and --largeAllocationCutoff. You can see what
  size classes you end up with by doing --dumpSizeClasses=true.
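  A sketch of the container-kind test this relies on; the names are illustrative, but the
  check itself is just one bit of the cell pointer:

      #include <cstdint>

      static const uintptr_t atomSize = 16;                     // assumed MarkedBlock atom size
      static const uintptr_t largeAllocationBit = atomSize / 2; // large allocations are misaligned by 8

      static inline bool isLargeAllocationCell(const void* cell)
      {
          return reinterpret_cast<uintptr_t>(cell) & largeAllocationBit;
      }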

- Copied space uses 64KB blocks, while marked space used to use 16KB blocks. Allocating a lot
  of stuff in 16KB blocks is slower than allocating it in 64KB blocks. I got more speed from
  changing MarkedBlock::blockSize to 64KB. This would have been a space fail before, but now
  that we have LargeAllocation, it ends up being an overall win.

- Even after all of that, copying butterflies was still faster because it allowed us to skip
  sweeping dead space. A good GC allocates over dead bytes without explicitly freeing them,
  so the GC pause is O(size of live), not O(size of live + dead). O(dead) is usually much
  larger than O(live), especially in an eden collection. Copying satisfies this premise while
  mark+sweep does not. So, I invented a new kind of allocator: bump'n'pop. Previously, our
  MarkedSpace allocator was a freelist pop. That's simple and easy to inline but requires
  that we walk the block to build a free list. This means walking dead space. The new
  allocator allows totally free MarkedBlocks to simply set up a bump-pointer arena instead.
  The allocator is a hybrid of bump-pointer and freelist pop. It tries bump first. The bump
  pointer always bumps by cellSize, so the result of filling a block with bumping looks as if
  we had used freelist popping to fill it. Additionally, each MarkedBlock now has a bit to
  quickly tell if the block is entirely free. This makes sweeping O(1) whenever a MarkedBlock
  is completely empty, which is the common case because of the generational hypothesis: the
  number of objects that survive an eden collection is a tiny fraction of the number of
  objects that had been allocated, and this fraction is so small that there are typically
  fewer than one survivor per MarkedBlock. This was enough to make the whole change a net win
  over tip-of-tree.
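  A minimal sketch of the bump'n'pop fast path; this FreeList layout (a freelist head plus a
  bump arena described by an end pointer and remaining byte count) mirrors the prose, not the
  exact JSC definition:

      #include <cstddef>
      #include <cstdint>

      struct FreeCell {
          FreeCell* next;
      };

      struct FreeList {
          FreeCell* head { nullptr };   // freelist built by sweeping a partially live block
          char* payloadEnd { nullptr }; // end of a bump-allocatable arena
          uint32_t remaining { 0 };     // bytes left in the bump arena
      };

      // Returns null when both the bump arena and the freelist are exhausted;
      // the caller would then take the allocator's slow path.
      static inline void* tryBumpAndPop(FreeList& freeList, size_t cellSize)
      {
          // Bump first. Bumping by cellSize means a block filled this way looks
          // exactly as if it had been filled by freelist popping.
          if (freeList.remaining) {
              freeList.remaining -= static_cast<uint32_t>(cellSize);
              return freeList.payloadEnd - freeList.remaining - cellSize;
          }
          // Otherwise pop a cell from the freelist.
          if (FreeCell* cell = freeList.head) {
              freeList.head = cell->next;
              return cell;
          }
          return nullptr;
      }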

- FTL now shares the same allocation fast paths as everything else, which is great, because
  bump'n'pop has gnarly control flow. We don't really want B3 to have to think about that
  control flow, since it won't be able to improve the machine code we write ourselves. GC
  fast paths are best written in assembly. So, I've empowered B3 to have even better support
  for Patchpoint terminals. It's now totally fine for a Patchpoint terminal to be non-Void.
  So, the new FTL allocation fast paths are just Patchpoint terminals that call through to
  AssemblyHelpers::emitAllocate(). B3 still reasons about things like constant-folding the
  size class calculation and constant-hoisting the allocator. Also, I gave the FTL the
  ability to constant-fold some allocator logic (in case we first assume that we're doing a
  variable-length allocation but then realize that the length is known). I think it makes
  sense to have constant folding rules in FTL::Output, or whatever the B3 IR builder is,
  since this makes lowering easier (you can constant fold during lowering more easily) and it
  reduces the amount of malloc traffic. In the future, we could teach B3 how to better
  constant-fold this code. That would require allowing loads to be constant-folded, which is
  doable but hella tricky.
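  As a toy model of that kind of builder-level folding (none of these types are the real B3 or
  FTL classes), the builder simply hands back a constant when both operands are already
  constants instead of emitting a new node:

      #include <cstdint>
      #include <memory>
      #include <vector>

      struct Node {
          enum Kind { Const, Add };
          Kind kind;
          int64_t value;  // meaningful for Const
          Node* left;     // meaningful for Add
          Node* right;
      };

      struct Builder {
          std::vector<std::unique_ptr<Node>> nodes;

          Node* make(Node node)
          {
              nodes.push_back(std::make_unique<Node>(node));
              return nodes.back().get();
          }

          Node* constInt(int64_t value) { return make({ Node::Const, value, nullptr, nullptr }); }

          // Fold at IR-building time: if both operands are already constants,
          // return a constant instead of materializing an Add node.
          Node* add(Node* a, Node* b)
          {
              if (a->kind == Node::Const && b->kind == Node::Const)
                  return constInt(a->value + b->value);
              return make({ Node::Add, 0, a, b });
          }
      };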

All of this put together gives us neutral perf on JetStream, Speedometer, and PLT3. SunSpider
sometimes gets penalized depending on how you run it. By comparison, the alternative approach
of using a copy barrier would have cost us 1-2%. That's the real apples-to-apples comparison
if your premise is that we should have a concurrent GC. After we finish removing copied
space, we will be barrier-ready for concurrent GC: we already have a marking barrier and we
simply won't need a copying barrier. This change gets us there for the purposes of our
benchmarks, since the remaining clients of copied space are not very important. On the other
hand, if we keep copying, then getting barrier-ready would mean adding back the copy barrier,
which costs more perf.

We might get bigger speed-ups once we remove CopiedSpace altogether. That requires moving
typed arrays and a few other weird things over to Aux MarkedSpace.

This also includes some header sanitization. The introduction of AuxiliaryBarrier, HeapCell,
and CellContainer meant that I had to include those files from everywhere. Fortunately,
just including JSCInlines.h (instead of manually including the files that it includes) is
usually enough. So, I made most of JSC's cpp files include JSCInlines.h, which is something
that we were already basically doing. In places where JSCInlines.h would be too much, I just
included HeapInlines.h. This got weird, because we previously included HeapInlines.h from
JSObject.h. That's bad because it led to some circular dependencies, so I fixed it - but that
meant having to manually include HeapInlines.h from the places that previously got it
implicitly via JSObject.h. But that led to more problems for some reason: I started getting
build errors because non-JSC files were having trouble including Opcode.h. That's just silly,
since Opcode.h is meant to be an internal JSC header. So, I made it an internal header and
made it impossible to include it from outside JSC. This was a lot of work, but it was
necessary to get the patch to build on all ports. It's also a net win. There were many places
in WebCore that were transitively including a *ton* of JSC headers just because of the
JSObject.h->HeapInlines.h edge and a bunch of dependency edges that arose from some public
(for WebCore) JSC headers needing Interpreter.h or Opcode.h for bad reasons.

* API/JSTypedArray.cpp:
* API/ObjCCallbackFunction.mm:
* CMakeLists.txt:
* JavaScriptCore.xcodeproj/project.pbxproj:
* Scripts/builtins/builtins_generate_combined_implementation.py:
(BuiltinsCombinedImplementationGenerator.generate_secondary_header_includes):
* Scripts/builtins/builtins_generate_internals_wrapper_implementation.py:
(BuiltinsInternalsWrapperImplementationGenerator.generate_secondary_header_includes):
* Scripts/builtins/builtins_generate_separate_implementation.py:
(BuiltinsSeparateImplementationGenerator.generate_secondary_header_includes):
* assembler/AbstractMacroAssembler.h:
(JSC::AbstractMacroAssembler::JumpList::JumpList):
(JSC::AbstractMacroAssembler::JumpList::link):
(JSC::AbstractMacroAssembler::JumpList::linkTo):
(JSC::AbstractMacroAssembler::JumpList::append):
* assembler/MacroAssemblerARM64.h:
(JSC::MacroAssemblerARM64::add32):
* b3/B3BasicBlock.cpp:
(JSC::B3::BasicBlock::appendIntConstant):
(JSC::B3::BasicBlock::appendBoolConstant):
(JSC::B3::BasicBlock::clearSuccessors):
* b3/B3BasicBlock.h:
* b3/B3DuplicateTails.cpp:
* b3/B3StackmapGenerationParams.h:
* b3/testb3.cpp:
(JSC::B3::testBranchBitAndImmFusion):
(JSC::B3::testPatchpointTerminalReturnValue):
(JSC::B3::zero):
(JSC::B3::run):
* bindings/ScriptValue.cpp:
* bytecode/AdaptiveInferredPropertyValueWatchpointBase.cpp:
* bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp:
* bytecode/ObjectAllocationProfile.h:
(JSC::ObjectAllocationProfile::initialize):
* bytecode/PolymorphicAccess.cpp:
(JSC::AccessCase::generateImpl):
* bytecode/StructureStubInfo.cpp:
* dfg/DFGOperations.cpp:
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::emitAllocateRawObject):
(JSC::DFG::SpeculativeJIT::compileMakeRope):
(JSC::DFG::SpeculativeJIT::compileAllocatePropertyStorage):
(JSC::DFG::SpeculativeJIT::compileReallocatePropertyStorage):
* dfg/DFGSpeculativeJIT.h:
(JSC::DFG::SpeculativeJIT::emitAllocateJSCell):
(JSC::DFG::SpeculativeJIT::emitAllocateJSObject):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGStrengthReductionPhase.cpp:
(JSC::DFG::StrengthReductionPhase::handleNode):
* ftl/FTLAbstractHeapRepository.h:
* ftl/FTLCompile.cpp:
* ftl/FTLJITFinalizer.cpp:
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::compileCreateDirectArguments):
(JSC::FTL::DFG::LowerDFGToB3::compileNewArrayWithSize):
(JSC::FTL::DFG::LowerDFGToB3::compileMakeRope):
(JSC::FTL::DFG::LowerDFGToB3::compileMaterializeNewObject):
(JSC::FTL::DFG::LowerDFGToB3::initializeArrayElements):
(JSC::FTL::DFG::LowerDFGToB3::allocatePropertyStorageWithSizeImpl):
(JSC::FTL::DFG::LowerDFGToB3::emitRightShiftSnippet):
(JSC::FTL::DFG::LowerDFGToB3::allocateHeapCell):
(JSC::FTL::DFG::LowerDFGToB3::storeStructure):
(JSC::FTL::DFG::LowerDFGToB3::allocateCell):
(JSC::FTL::DFG::LowerDFGToB3::allocateObject):
(JSC::FTL::DFG::LowerDFGToB3::allocatorForSize):
(JSC::FTL::DFG::LowerDFGToB3::allocateVariableSizedObject):
(JSC::FTL::DFG::LowerDFGToB3::allocateJSArray):
* ftl/FTLOutput.cpp:
(JSC::FTL::Output::constBool):
(JSC::FTL::Output::constInt32):
(JSC::FTL::Output::add):
(JSC::FTL::Output::shl):
(JSC::FTL::Output::aShr):
(JSC::FTL::Output::lShr):
(JSC::FTL::Output::zeroExt):
(JSC::FTL::Output::equal):
(JSC::FTL::Output::notEqual):
(JSC::FTL::Output::above):
(JSC::FTL::Output::aboveOrEqual):
(JSC::FTL::Output::below):
(JSC::FTL::Output::belowOrEqual):
(JSC::FTL::Output::greaterThan):
(JSC::FTL::Output::greaterThanOrEqual):
(JSC::FTL::Output::lessThan):
(JSC::FTL::Output::lessThanOrEqual):
(JSC::FTL::Output::select):
(JSC::FTL::Output::unreachable):
(JSC::FTL::Output::appendSuccessor):
(JSC::FTL::Output::speculate):
(JSC::FTL::Output::addIncomingToPhi):
* ftl/FTLOutput.h:
* ftl/FTLValueFromBlock.h:
(JSC::FTL::ValueFromBlock::ValueFromBlock):
(JSC::FTL::ValueFromBlock::operator bool):
(JSC::FTL::ValueFromBlock::value):
(JSC::FTL::ValueFromBlock::block):
* ftl/FTLWeightedTarget.h:
(JSC::FTL::WeightedTarget::target):
(JSC::FTL::WeightedTarget::weight):
(JSC::FTL::WeightedTarget::frequentedBlock):
* heap/CellContainer.h: Added.
(JSC::CellContainer::CellContainer):
(JSC::CellContainer::operator bool):
(JSC::CellContainer::isMarkedBlock):
(JSC::CellContainer::isLargeAllocation):
(JSC::CellContainer::markedBlock):
(JSC::CellContainer::largeAllocation):
* heap/CellContainerInlines.h: Added.
(JSC::CellContainer::isMarkedOrRetired):
(JSC::CellContainer::isMarked):
(JSC::CellContainer::isMarkedOrNewlyAllocated):
(JSC::CellContainer::setHasAnyMarked):
(JSC::CellContainer::cellSize):
(JSC::CellContainer::weakSet):
* heap/ConservativeRoots.cpp:
(JSC::ConservativeRoots::ConservativeRoots):
(JSC::ConservativeRoots::~ConservativeRoots):
(JSC::ConservativeRoots::grow):
(JSC::ConservativeRoots::genericAddPointer):
(JSC::ConservativeRoots::genericAddSpan):
* heap/ConservativeRoots.h:
(JSC::ConservativeRoots::size):
(JSC::ConservativeRoots::roots):
* heap/CopyToken.h:
* heap/FreeList.cpp: Added.
(JSC::FreeList::dump):
* heap/FreeList.h: Added.
(JSC::FreeList::FreeList):
(JSC::FreeList::list):
(JSC::FreeList::bump):
(JSC::FreeList::operator==):
(JSC::FreeList::operator!=):
(JSC::FreeList::operator bool):
* heap/Heap.cpp:
(JSC::Heap::Heap):
(JSC::Heap::finalizeUnconditionalFinalizers):
(JSC::Heap::markRoots):
(JSC::Heap::copyBackingStores):
(JSC::Heap::gatherStackRoots):
(JSC::Heap::gatherJSStackRoots):
(JSC::Heap::gatherScratchBufferRoots):
(JSC::Heap::clearLivenessData):
(JSC::Heap::visitSmallStrings):
(JSC::Heap::visitConservativeRoots):
(JSC::Heap::removeDeadCompilerWorklistEntries):
(JSC::Heap::gatherExtraHeapSnapshotData):
(JSC::Heap::removeDeadHeapSnapshotNodes):
(JSC::Heap::visitProtectedObjects):
(JSC::Heap::visitArgumentBuffers):
(JSC::Heap::visitException):
(JSC::Heap::visitStrongHandles):
(JSC::Heap::visitHandleStack):
(JSC::Heap::visitSamplingProfiler):
(JSC::Heap::traceCodeBlocksAndJITStubRoutines):
(JSC::Heap::converge):
(JSC::Heap::visitWeakHandles):
(JSC::Heap::updateObjectCounts):
(JSC::Heap::clearUnmarkedExecutables):
(JSC::Heap::deleteUnmarkedCompiledCode):
(JSC::Heap::collectAllGarbage):
(JSC::Heap::collect):
(JSC::Heap::collectWithoutAnySweep):
(JSC::Heap::collectImpl):
(JSC::Heap::suspendCompilerThreads):
(JSC::Heap::willStartCollection):
(JSC::Heap::flushOldStructureIDTables):
(JSC::Heap::flushWriteBarrierBuffer):
(JSC::Heap::stopAllocation):
(JSC::Heap::reapWeakHandles):
(JSC::Heap::pruneStaleEntriesFromWeakGCMaps):
(JSC::Heap::sweepArrayBuffers):
(JSC::Heap::snapshotMarkedSpace):
(JSC::Heap::deleteSourceProviderCaches):
(JSC::Heap::notifyIncrementalSweeper):
(JSC::Heap::writeBarrierCurrentlyExecutingCodeBlocks):
(JSC::Heap::resetAllocators):
(JSC::Heap::updateAllocationLimits):
(JSC::Heap::didFinishCollection):
(JSC::Heap::resumeCompilerThreads):
(JSC::Zombify::visit):
* heap/Heap.h:
(JSC::Heap::subspaceForObjectDestructor):
(JSC::Heap::subspaceForAuxiliaryData):
(JSC::Heap::allocatorForObjectWithoutDestructor):
(JSC::Heap::allocatorForObjectWithDestructor):
(JSC::Heap::allocatorForAuxiliaryData):
(JSC::Heap::storageAllocator):
* heap/HeapCell.h:
(JSC::HeapCell::zap):
(JSC::HeapCell::isZapped):
* heap/HeapCellInlines.h: Added.
(JSC::HeapCell::isLargeAllocation):
(JSC::HeapCell::cellContainer):
(JSC::HeapCell::markedBlock):
(JSC::HeapCell::largeAllocation):
(JSC::HeapCell::heap):
(JSC::HeapCell::vm):
(JSC::HeapCell::cellSize):
(JSC::HeapCell::allocatorAttributes):
(JSC::HeapCell::destructionMode):
(JSC::HeapCell::cellKind):
* heap/HeapInlines.h:
(JSC::Heap::isCollecting):
(JSC::Heap::heap):
(JSC::Heap::isLive):
(JSC::Heap::isMarked):
(JSC::Heap::testAndSetMarked):
(JSC::Heap::setMarked):
(JSC::Heap::cellSize):
(JSC::Heap::writeBarrier):
(JSC::Heap::allocateWithoutDestructor):
(JSC::Heap::allocateObjectOfType):
(JSC::Heap::subspaceForObjectOfType):
(JSC::Heap::allocatorForObjectOfType):
(JSC::Heap::allocateAuxiliary):
(JSC::Heap::tryAllocateAuxiliary):
(JSC::Heap::tryReallocateAuxiliary):
(JSC::Heap::tryAllocateStorage):
(JSC::Heap::didFreeBlock):
(JSC::Heap::isPointerGCObject): Deleted.
(JSC::Heap::isValueGCObject): Deleted.
* heap/HeapUtil.h: Added.
(JSC::HeapUtil::findGCObjectPointersForMarking):
(JSC::HeapUtil::isPointerGCObjectJSCell):
(JSC::HeapUtil::isValueGCObject):
* heap/LargeAllocation.cpp: Added.
(JSC::LargeAllocation::tryCreate):
(JSC::LargeAllocation::LargeAllocation):
(JSC::LargeAllocation::lastChanceToFinalize):
(JSC::LargeAllocation::shrink):
(JSC::LargeAllocation::visitWeakSet):
(JSC::LargeAllocation::reapWeakSet):
(JSC::LargeAllocation::clearMarks):
(JSC::LargeAllocation::clearMarksWithCollectionType):
(JSC::LargeAllocation::isEmpty):
(JSC::LargeAllocation::sweep):
(JSC::LargeAllocation::destroy):
(JSC::LargeAllocation::dump):
* heap/LargeAllocation.h: Added.
(JSC::LargeAllocation::fromCell):
(JSC::LargeAllocation::cell):
(JSC::LargeAllocation::isLargeAllocation):
(JSC::LargeAllocation::heap):
(JSC::LargeAllocation::vm):
(JSC::LargeAllocation::weakSet):
(JSC::LargeAllocation::clearNewlyAllocated):
(JSC::LargeAllocation::isNewlyAllocated):
(JSC::LargeAllocation::isMarked):
(JSC::LargeAllocation::isMarkedOrNewlyAllocated):
(JSC::LargeAllocation::isLive):
(JSC::LargeAllocation::hasValidCell):
(JSC::LargeAllocation::cellSize):
(JSC::LargeAllocation::aboveLowerBound):
(JSC::LargeAllocation::belowUpperBound):
(JSC::LargeAllocation::contains):
(JSC::LargeAllocation::attributes):
(JSC::LargeAllocation::testAndSetMarked):
(JSC::LargeAllocation::setMarked):
(JSC::LargeAllocation::clearMarked):
(JSC::LargeAllocation::setHasAnyMarked):
(JSC::LargeAllocation::headerSize):
* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::MarkedAllocator):
(JSC::isListPagedOut):
(JSC::MarkedAllocator::isPagedOut):
(JSC::MarkedAllocator::retire):
(JSC::MarkedAllocator::tryAllocateWithoutCollectingImpl):
(JSC::MarkedAllocator::tryAllocateWithoutCollecting):
(JSC::MarkedAllocator::allocateSlowCase):
(JSC::MarkedAllocator::tryAllocateSlowCase):
(JSC::MarkedAllocator::allocateSlowCaseImpl):
(JSC::blockHeaderSize):
(JSC::MarkedAllocator::blockSizeForBytes):
(JSC::MarkedAllocator::tryAllocateBlock):
(JSC::MarkedAllocator::addBlock):
(JSC::MarkedAllocator::removeBlock):
(JSC::MarkedAllocator::reset):
(JSC::MarkedAllocator::lastChanceToFinalize):
(JSC::MarkedAllocator::setFreeList):
(JSC::MarkedAllocator::tryAllocateHelper): Deleted.
(JSC::MarkedAllocator::tryPopFreeList): Deleted.
(JSC::MarkedAllocator::tryAllocate): Deleted.
(JSC::MarkedAllocator::allocateBlock): Deleted.
* heap/MarkedAllocator.h:
(JSC::MarkedAllocator::destruction):
(JSC::MarkedAllocator::cellKind):
(JSC::MarkedAllocator::heap):
(JSC::MarkedAllocator::takeLastActiveBlock):
(JSC::MarkedAllocator::offsetOfFreeList):
(JSC::MarkedAllocator::offsetOfCellSize):
(JSC::MarkedAllocator::tryAllocate):
(JSC::MarkedAllocator::allocate):
(JSC::MarkedAllocator::stopAllocating):
(JSC::MarkedAllocator::resumeAllocating):
(JSC::MarkedAllocator::offsetOfFreeListHead): Deleted.
(JSC::MarkedAllocator::MarkedAllocator): Deleted.
(JSC::MarkedAllocator::init): Deleted.
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::tryCreate):
(JSC::MarkedBlock::MarkedBlock):
(JSC::MarkedBlock::specializedSweep):
(JSC::MarkedBlock::sweep):
(JSC::MarkedBlock::sweepHelperSelectResetMode):
(JSC::MarkedBlock::sweepHelperSelectStateAndSweepMode):
(JSC::MarkedBlock::stopAllocating):
(JSC::MarkedBlock::clearMarksWithCollectionType):
(JSC::MarkedBlock::lastChanceToFinalize):
(JSC::MarkedBlock::resumeAllocating):
(JSC::MarkedBlock::didRetireBlock):
(JSC::MarkedBlock::forEachFreeCell):
(JSC::MarkedBlock::create): Deleted.
(JSC::MarkedBlock::callDestructor): Deleted.
(JSC::MarkedBlock::sweepHelper): Deleted.
* heap/MarkedBlock.h:
(JSC::MarkedBlock::VoidFunctor::returnValue):
(JSC::MarkedBlock::setHasAnyMarked):
(JSC::MarkedBlock::hasAnyMarked):
(JSC::MarkedBlock::clearHasAnyMarked):
(JSC::MarkedBlock::firstAtom):
(JSC::MarkedBlock::isAtomAligned):
(JSC::MarkedBlock::cellAlign):
(JSC::MarkedBlock::blockFor):
(JSC::MarkedBlock::isEmpty):
(JSC::MarkedBlock::cellSize):
(JSC::MarkedBlock::isMarkedOrRetired):
(JSC::MarkedBlock::FreeList::FreeList): Deleted.
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::initializeSizeClassForStepSize):
(JSC::MarkedSpace::MarkedSpace):
(JSC::MarkedSpace::lastChanceToFinalize):
(JSC::MarkedSpace::allocateLarge):
(JSC::MarkedSpace::tryAllocateLarge):
(JSC::MarkedSpace::sweep):
(JSC::MarkedSpace::sweepABit):
(JSC::MarkedSpace::sweepLargeAllocations):
(JSC::MarkedSpace::zombifySweep):
(JSC::MarkedSpace::resetAllocators):
(JSC::MarkedSpace::visitWeakSets):
(JSC::MarkedSpace::reapWeakSets):
(JSC::MarkedSpace::stopAllocating):
(JSC::MarkedSpace::resumeAllocating):
(JSC::MarkedSpace::isPagedOut):
(JSC::MarkedSpace::shrink):
(JSC::MarkedSpace::clearNewlyAllocated):
(JSC::MarkedSpace::clearMarks):
(JSC::MarkedSpace::didFinishIterating):
(JSC::MarkedSpace::objectCount):
(JSC::MarkedSpace::size):
(JSC::MarkedSpace::capacity):
(JSC::MarkedSpace::forEachAllocator): Deleted.
* heap/MarkedSpace.h:
(JSC::MarkedSpace::sizeClassIndex):
(JSC::MarkedSpace::subspaceForObjectsWithDestructor):
(JSC::MarkedSpace::subspaceForObjectsWithoutDestructor):
(JSC::MarkedSpace::subspaceForAuxiliaryData):
(JSC::MarkedSpace::blocksWithNewObjects):
(JSC::MarkedSpace::largeAllocations):
(JSC::MarkedSpace::largeAllocationsNurseryOffset):
(JSC::MarkedSpace::largeAllocationsOffsetForThisCollection):
(JSC::MarkedSpace::largeAllocationsForThisCollectionBegin):
(JSC::MarkedSpace::largeAllocationsForThisCollectionEnd):
(JSC::MarkedSpace::largeAllocationsForThisCollectionSize):
(JSC::MarkedSpace::forEachLiveCell):
(JSC::MarkedSpace::forEachDeadCell):
(JSC::MarkedSpace::allocatorFor):
(JSC::MarkedSpace::destructorAllocatorFor):
(JSC::MarkedSpace::auxiliaryAllocatorFor):
(JSC::MarkedSpace::allocate):
(JSC::MarkedSpace::tryAllocate):
(JSC::MarkedSpace::allocateWithoutDestructor):
(JSC::MarkedSpace::allocateWithDestructor):
(JSC::MarkedSpace::allocateAuxiliary):
(JSC::MarkedSpace::tryAllocateAuxiliary):
(JSC::MarkedSpace::forEachBlock):
(JSC::MarkedSpace::didAllocateInBlock):
(JSC::MarkedSpace::forEachAllocator):
(JSC::MarkedSpace::forEachSubspace):
(JSC::MarkedSpace::optimalSizeFor):
(JSC::MarkedSpace::objectCount): Deleted.
(JSC::MarkedSpace::size): Deleted.
(JSC::MarkedSpace::capacity): Deleted.
* heap/SlotVisitor.cpp:
(JSC::SlotVisitor::didStartMarking):
(JSC::SlotVisitor::reset):
(JSC::SlotVisitor::clearMarkStack):
(JSC::SlotVisitor::append):
(JSC::SlotVisitor::appendJSCellOrAuxiliary):
(JSC::SlotVisitor::setMarkedAndAppendToMarkStack):
(JSC::SlotVisitor::appendToMarkStack):
(JSC::SlotVisitor::markAuxiliary):
(JSC::SlotVisitor::noteLiveAuxiliaryCell):
(JSC::SetCurrentCellScope::SetCurrentCellScope):
(JSC::SlotVisitor::visitChildren):
* heap/SlotVisitor.h:
* heap/WeakBlock.cpp:
(JSC::WeakBlock::create):
(JSC::WeakBlock::destroy):
(JSC::WeakBlock::WeakBlock):
(JSC::WeakBlock::visit):
(JSC::WeakBlock::reap):
* heap/WeakBlock.h:
(JSC::WeakBlock::disconnectContainer):
(JSC::WeakBlock::disconnectMarkedBlock): Deleted.
* heap/WeakSet.cpp:
(JSC::WeakSet::sweep):
(JSC::WeakSet::addAllocator):
* heap/WeakSet.h:
(JSC::WeakSet::WeakSet):
* heap/WeakSetInlines.h:
(JSC::WeakSet::allocate):
* inspector/InjectedScriptManager.cpp:
* inspector/JSGlobalObjectInspectorController.cpp:
* inspector/JSJavaScriptCallFrame.cpp:
* inspector/ScriptDebugServer.cpp:
* inspector/agents/InspectorDebuggerAgent.cpp:
* interpreter/CachedCall.h:
(JSC::CachedCall::CachedCall):
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::emitAllocate):
(JSC::AssemblyHelpers::emitAllocateJSCell):
(JSC::AssemblyHelpers::emitAllocateJSObject):
(JSC::AssemblyHelpers::emitAllocateJSObjectWithKnownSize):
(JSC::AssemblyHelpers::emitAllocateVariableSized):
* jit/JITOpcodes.cpp:
(JSC::JIT::emit_op_new_object):
(JSC::JIT::emit_op_create_this):
* jit/JITOpcodes32_64.cpp:
(JSC::JIT::emit_op_new_object):
(JSC::JIT::emit_op_create_this):
* jit/JITOperations.cpp:
* jit/JITOperations.h:
* jit/JITPropertyAccess.cpp:
(JSC::JIT::emitWriteBarrier):
* jsc.cpp:
(functionDescribeArray):
* llint/LLIntData.cpp:
(JSC::LLInt::Data::performAssertions):
* llint/LowLevelInterpreter.asm:
* llint/LowLevelInterpreter32_64.asm:
* llint/LowLevelInterpreter64.asm:
* parser/ModuleAnalyzer.cpp:
* runtime/ArrayConventions.h:
(JSC::indexIsSufficientlyBeyondLengthForSparseMap):
(JSC::indexingHeaderForArrayStorage):
(JSC::baseIndexingHeaderForArrayStorage):
(JSC::indexingHeaderForArray): Deleted.
(JSC::baseIndexingHeaderForArray): Deleted.
* runtime/ArrayStorage.h:
(JSC::ArrayStorage::length):
(JSC::ArrayStorage::setLength):
(JSC::ArrayStorage::vectorLength):
(JSC::ArrayStorage::setVectorLength):
(JSC::ArrayStorage::copyHeaderFromDuringGC):
(JSC::ArrayStorage::sizeFor):
(JSC::ArrayStorage::totalSizeFor):
(JSC::ArrayStorage::totalSize):
(JSC::ArrayStorage::availableVectorLength):
(JSC::ArrayStorage::optimalVectorLength):
* runtime/AuxiliaryBarrier.h: Added.
(JSC::AuxiliaryBarrier::AuxiliaryBarrier):
(JSC::AuxiliaryBarrier::clear):
(JSC::AuxiliaryBarrier::get):
(JSC::AuxiliaryBarrier::slot):
(JSC::AuxiliaryBarrier::operator bool):
(JSC::AuxiliaryBarrier::setWithoutBarrier):
* runtime/AuxiliaryBarrierInlines.h: Added.
(JSC::AuxiliaryBarrier<T>::AuxiliaryBarrier):
(JSC::AuxiliaryBarrier<T>::set):
* runtime/Butterfly.h:
(JSC::Butterfly::fromBase):
(JSC::Butterfly::fromPointer):
* runtime/ButterflyInlines.h:
(JSC::Butterfly::availableContiguousVectorLength):
(JSC::Butterfly::optimalContiguousVectorLength):
(JSC::Butterfly::createUninitialized):
(JSC::Butterfly::growArrayRight):
* runtime/ClonedArguments.cpp:
(JSC::ClonedArguments::createEmpty):
* runtime/DataView.cpp:
* runtime/DirectArguments.h:
* runtime/ECMAScriptSpecInternalFunctions.cpp:
* runtime/GeneratorFrame.cpp:
* runtime/GeneratorPrototype.cpp:
* runtime/IntlCollator.cpp:
* runtime/IntlCollatorConstructor.cpp:
* runtime/IntlCollatorPrototype.cpp:
* runtime/IntlDateTimeFormat.cpp:
* runtime/IntlDateTimeFormatConstructor.cpp:
* runtime/IntlDateTimeFormatPrototype.cpp:
* runtime/IntlNumberFormat.cpp:
* runtime/IntlNumberFormatConstructor.cpp:
* runtime/IntlNumberFormatPrototype.cpp:
* runtime/JSArray.cpp:
(JSC::createArrayButterflyInDictionaryIndexingMode):
(JSC::JSArray::tryCreateUninitialized):
(JSC::JSArray::setLengthWritable):
(JSC::JSArray::unshiftCountSlowCase):
(JSC::JSArray::setLengthWithArrayStorage):
(JSC::JSArray::appendMemcpy):
(JSC::JSArray::setLength):
(JSC::JSArray::pop):
(JSC::JSArray::push):
(JSC::JSArray::fastSlice):
(JSC::JSArray::shiftCountWithArrayStorage):
(JSC::JSArray::shiftCountWithAnyIndexingType):
(JSC::JSArray::unshiftCountWithArrayStorage):
(JSC::JSArray::fillArgList):
(JSC::JSArray::copyToArguments):
* runtime/JSArray.h:
(JSC::createContiguousArrayButterfly):
(JSC::createArrayButterfly):
(JSC::JSArray::create):
(JSC::JSArray::tryCreateUninitialized): Deleted.
* runtime/JSArrayBufferView.h:
* runtime/JSCInlines.h:
* runtime/JSCJSValue.cpp:
* runtime/JSCallee.cpp:
* runtime/JSCell.cpp:
(JSC::JSCell::estimatedSize):
(JSC::JSCell::copyBackingStore):
* runtime/JSCell.h:
(JSC::JSCell::cellStateOffset):
* runtime/JSCellInlines.h:
(JSC::JSCell::visitChildren):
(JSC::ExecState::vm):
(JSC::JSCell::canUseFastGetOwnProperty):
(JSC::JSCell::classInfo):
(JSC::JSCell::toBoolean):
(JSC::JSCell::pureToBoolean):
(JSC::JSCell::callDestructor):
(JSC::JSCell::vm): Deleted.
* runtime/JSFunction.cpp:
(JSC::JSFunction::create):
(JSC::JSFunction::allocateAndInitializeRareData):
(JSC::JSFunction::initializeRareData):
(JSC::JSFunction::getOwnPropertySlot):
(JSC::JSFunction::put):
(JSC::JSFunction::deleteProperty):
(JSC::JSFunction::defineOwnProperty):
(JSC::JSFunction::setFunctionName):
(JSC::JSFunction::reifyLength):
(JSC::JSFunction::reifyName):
(JSC::JSFunction::reifyLazyPropertyIfNeeded):
(JSC::JSFunction::reifyBoundNameIfNeeded):
* runtime/JSFunction.h:
* runtime/JSFunctionInlines.h:
(JSC::JSFunction::createWithInvalidatedReallocationWatchpoint):
(JSC::JSFunction::JSFunction):
* runtime/JSGenericTypedArrayViewInlines.h:
(JSC::JSGenericTypedArrayView<Adaptor>::slowDownAndWasteMemory):
* runtime/JSInternalPromise.cpp:
* runtime/JSInternalPromiseConstructor.cpp:
* runtime/JSInternalPromiseDeferred.cpp:
* runtime/JSInternalPromisePrototype.cpp:
* runtime/JSJob.cpp:
* runtime/JSMapIterator.cpp:
* runtime/JSModuleNamespaceObject.cpp:
* runtime/JSModuleRecord.cpp:
* runtime/JSObject.cpp:
(JSC::getClassPropertyNames):
(JSC::JSObject::visitButterfly):
(JSC::JSObject::visitChildren):
(JSC::JSObject::heapSnapshot):
(JSC::JSObject::notifyPresenceOfIndexedAccessors):
(JSC::JSObject::createInitialIndexedStorage):
(JSC::JSObject::createInitialUndecided):
(JSC::JSObject::createInitialInt32):
(JSC::JSObject::createInitialDouble):
(JSC::JSObject::createInitialContiguous):
(JSC::JSObject::createArrayStorage):
(JSC::JSObject::createInitialArrayStorage):
(JSC::JSObject::convertUndecidedToInt32):
(JSC::JSObject::convertUndecidedToContiguous):
(JSC::JSObject::convertUndecidedToArrayStorage):
(JSC::JSObject::convertInt32ToDouble):
(JSC::JSObject::convertInt32ToArrayStorage):
(JSC::JSObject::convertDoubleToArrayStorage):
(JSC::JSObject::convertContiguousToArrayStorage):
(JSC::JSObject::putByIndexBeyondVectorLength):
(JSC::JSObject::putDirectIndexBeyondVectorLength):
(JSC::JSObject::putDirectNativeFunctionWithoutTransition):
(JSC::JSObject::getNewVectorLength):
(JSC::JSObject::increaseVectorLength):
(JSC::JSObject::ensureLengthSlow):
(JSC::JSObject::growOutOfLineStorage):
(JSC::JSObject::copyButterfly): Deleted.
(JSC::JSObject::copyBackingStore): Deleted.
* runtime/JSObject.h:
(JSC::JSObject::initializeIndex):
(JSC::JSObject::globalObject):
(JSC::JSObject::putDirectInternal):
(JSC::JSObject::putOwnDataProperty):
(JSC::JSObject::setStructureAndReallocateStorageIfNecessary): Deleted.
* runtime/JSObjectInlines.h:
* runtime/JSPromise.cpp:
* runtime/JSPromiseConstructor.cpp:
* runtime/JSPromiseDeferred.cpp:
* runtime/JSPromisePrototype.cpp:
* runtime/JSPropertyNameIterator.cpp:
* runtime/JSScope.cpp:
(JSC::JSScope::resolve):
* runtime/JSScope.h:
(JSC::JSScope::globalObject):
(JSC::Register::operator=):
(JSC::JSScope::vm): Deleted.
* runtime/JSSetIterator.cpp:
* runtime/JSStringIterator.cpp:
* runtime/JSTemplateRegistryKey.cpp:
* runtime/JSTypedArrayViewConstructor.cpp:
* runtime/JSTypedArrayViewPrototype.cpp:
* runtime/JSWeakMap.cpp:
* runtime/JSWeakSet.cpp:
* runtime/MapConstructor.cpp:
* runtime/MapPrototype.cpp:
* runtime/NativeStdFunctionCell.cpp:
* runtime/Operations.h:
(JSC::jsAdd):
(JSC::resetFreeCellsBadly):
(JSC::resetBadly):
* runtime/Options.h:
* runtime/PropertyTable.cpp:
* runtime/ProxyConstructor.cpp:
* runtime/ProxyObject.cpp:
* runtime/ProxyRevoke.cpp:
* runtime/RegExpMatchesArray.h:
(JSC::tryCreateUninitializedRegExpMatchesArray):
(JSC::createRegExpMatchesArray):
* runtime/RuntimeType.cpp:
* runtime/SamplingProfiler.cpp:
(JSC::SamplingProfiler::processUnverifiedStackTraces):
* runtime/SetConstructor.cpp:
* runtime/SetPrototype.cpp:
* runtime/TemplateRegistry.cpp:
* runtime/TypeProfilerLog.cpp:
* runtime/TypeSet.cpp:
* runtime/WeakMapConstructor.cpp:
* runtime/WeakMapData.cpp:
* runtime/WeakMapPrototype.cpp:
* runtime/WeakSetConstructor.cpp:
* runtime/WeakSetPrototype.cpp:
* tools/JSDollarVMPrototype.cpp:
(JSC::JSDollarVMPrototype::isInObjectSpace):
(JSC::JSDollarVMPrototype::isInStorageSpace):

Source/WebCore:

No new tests because no new WebCore behavior.

Just rewiring #includes.

* ForwardingHeaders/heap/HeapInlines.h: Added.
* ForwardingHeaders/runtime/AuxiliaryBarrierInlines.h: Added.
* Modules/indexeddb/server/SQLiteIDBBackingStore.cpp:
* Modules/indexeddb/server/UniqueIDBDatabase.cpp:
* bindings/js/JSApplePayPaymentAuthorizedEventCustom.cpp:
* bindings/js/JSApplePayPaymentMethodSelectedEventCustom.cpp:
* bindings/js/JSApplePayShippingContactSelectedEventCustom.cpp:
* bindings/js/JSApplePayShippingMethodSelectedEventCustom.cpp:
* bindings/js/JSClientRectCustom.cpp:
* bindings/js/JSDOMBinding.h:
* bindings/js/JSDOMStringListCustom.cpp:
* bindings/js/JSErrorEventCustom.cpp:
* bindings/js/JSPopStateEventCustom.cpp:
* bindings/js/JSWebGL2RenderingContextCustom.cpp:
* contentextensions/ContentExtensionParser.cpp:
* dom/ErrorEvent.cpp:
* inspector/CommandLineAPIModule.cpp:
* testing/GCObservation.cpp:
(WebCore::GCObservation::GCObservation):

Source/WebKit2:

Just rewiring some #includes.

* WebProcess/InjectedBundle/DOM/InjectedBundleRangeHandle.cpp:
* WebProcess/Plugins/Netscape/JSNPObject.cpp:

Source/WTF:

I needed tryFastAlignedMalloc(), so I added it.
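A hedged usage sketch, assuming the new function takes (alignment, size) like
fastAlignedMalloc() and returns null on failure rather than crashing:

    #include <cstddef>
    #include <wtf/FastMalloc.h>

    void* tryAllocateAlignedBlock(size_t alignment, size_t bytes)
    {
        // Unlike fastAlignedMalloc(), a null return here is an expected
        // outcome that the caller is prepared to handle.
        return WTF::tryFastAlignedMalloc(alignment, bytes);
    }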

* wtf/FastMalloc.cpp:
(WTF::fastAlignedMalloc):
(WTF::tryFastAlignedMalloc):
(WTF::fastAlignedFree):
* wtf/FastMalloc.h:

Tools:

* DumpRenderTree/TestRunner.cpp: Rewire some #includes.
* Scripts/run-jsc-stress-tests: New test flag!

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@204854 268f45cc-cd09-0410-ab3c-d52691b4dbfc

288 files changed:
JSTests/ChangeLog
JSTests/stress/array-storage-array-unshift.js [new file with mode: 0644]
JSTests/stress/contiguous-array-unshift.js [new file with mode: 0644]
JSTests/stress/double-array-unshift.js [new file with mode: 0644]
JSTests/stress/int32-array-unshift.js [new file with mode: 0644]
Source/JavaScriptCore/API/JSTypedArray.cpp
Source/JavaScriptCore/API/ObjCCallbackFunction.mm
Source/JavaScriptCore/CMakeLists.txt
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/Scripts/builtins/builtins_generate_combined_implementation.py
Source/JavaScriptCore/Scripts/builtins/builtins_generate_internals_wrapper_implementation.py
Source/JavaScriptCore/Scripts/builtins/builtins_generate_separate_implementation.py
Source/JavaScriptCore/assembler/AbstractMacroAssembler.h
Source/JavaScriptCore/assembler/MacroAssembler.h
Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.cpp [new file with mode: 0644]
Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.h
Source/JavaScriptCore/b3/B3BasicBlock.cpp
Source/JavaScriptCore/b3/B3BasicBlock.h
Source/JavaScriptCore/b3/B3DuplicateTails.cpp
Source/JavaScriptCore/b3/B3StackmapGenerationParams.h
Source/JavaScriptCore/b3/testb3.cpp
Source/JavaScriptCore/bindings/ScriptValue.cpp
Source/JavaScriptCore/bytecode/AdaptiveInferredPropertyValueWatchpointBase.cpp
Source/JavaScriptCore/bytecode/BytecodeBasicBlock.cpp
Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.cpp
Source/JavaScriptCore/bytecode/BytecodeUseDef.h
Source/JavaScriptCore/bytecode/CallLinkInfo.cpp
Source/JavaScriptCore/bytecode/CallLinkInfo.h
Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/bytecode/CodeBlock.h
Source/JavaScriptCore/bytecode/Instruction.h
Source/JavaScriptCore/bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp
Source/JavaScriptCore/bytecode/ObjectAllocationProfile.h
Source/JavaScriptCore/bytecode/Opcode.h
Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp
Source/JavaScriptCore/bytecode/PolymorphicAccess.h
Source/JavaScriptCore/bytecode/PreciseJumpTargets.cpp
Source/JavaScriptCore/bytecode/StructureStubInfo.cpp
Source/JavaScriptCore/bytecode/StructureStubInfo.h
Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.cpp
Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.h
Source/JavaScriptCore/bytecode/UnlinkedInstructionStream.cpp
Source/JavaScriptCore/bytecode/UnlinkedInstructionStream.h
Source/JavaScriptCore/dfg/DFGOperations.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h
Source/JavaScriptCore/ftl/FTLCompile.cpp
Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp
Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
Source/JavaScriptCore/ftl/FTLOutput.cpp
Source/JavaScriptCore/ftl/FTLOutput.h
Source/JavaScriptCore/ftl/FTLValueFromBlock.h
Source/JavaScriptCore/ftl/FTLWeightedTarget.h
Source/JavaScriptCore/heap/CellContainer.h [new file with mode: 0644]
Source/JavaScriptCore/heap/CellContainerInlines.h [new file with mode: 0644]
Source/JavaScriptCore/heap/ConservativeRoots.cpp
Source/JavaScriptCore/heap/ConservativeRoots.h
Source/JavaScriptCore/heap/CopyToken.h
Source/JavaScriptCore/heap/FreeList.cpp [new file with mode: 0644]
Source/JavaScriptCore/heap/FreeList.h [new file with mode: 0644]
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/heap/Heap.h
Source/JavaScriptCore/heap/HeapCell.h
Source/JavaScriptCore/heap/HeapCellInlines.h [new file with mode: 0644]
Source/JavaScriptCore/heap/HeapInlines.h
Source/JavaScriptCore/heap/HeapUtil.h [new file with mode: 0644]
Source/JavaScriptCore/heap/LargeAllocation.cpp [new file with mode: 0644]
Source/JavaScriptCore/heap/LargeAllocation.h [new file with mode: 0644]
Source/JavaScriptCore/heap/MarkedAllocator.cpp
Source/JavaScriptCore/heap/MarkedAllocator.h
Source/JavaScriptCore/heap/MarkedBlock.cpp
Source/JavaScriptCore/heap/MarkedBlock.h
Source/JavaScriptCore/heap/MarkedSpace.cpp
Source/JavaScriptCore/heap/MarkedSpace.h
Source/JavaScriptCore/heap/SlotVisitor.cpp
Source/JavaScriptCore/heap/SlotVisitor.h
Source/JavaScriptCore/heap/WeakBlock.cpp
Source/JavaScriptCore/heap/WeakBlock.h
Source/JavaScriptCore/heap/WeakSet.cpp
Source/JavaScriptCore/heap/WeakSet.h
Source/JavaScriptCore/heap/WeakSetInlines.h
Source/JavaScriptCore/inspector/InjectedScriptManager.cpp
Source/JavaScriptCore/inspector/JSGlobalObjectInspectorController.cpp
Source/JavaScriptCore/inspector/JSJavaScriptCallFrame.cpp
Source/JavaScriptCore/inspector/ScriptDebugServer.cpp
Source/JavaScriptCore/inspector/agents/InspectorDebuggerAgent.cpp
Source/JavaScriptCore/interpreter/CachedCall.h
Source/JavaScriptCore/interpreter/Interpreter.cpp
Source/JavaScriptCore/interpreter/Interpreter.h
Source/JavaScriptCore/jit/AssemblyHelpers.h
Source/JavaScriptCore/jit/GCAwareJITStubRoutine.cpp
Source/JavaScriptCore/jit/JIT.cpp
Source/JavaScriptCore/jit/JIT.h
Source/JavaScriptCore/jit/JITExceptions.cpp
Source/JavaScriptCore/jit/JITExceptions.h
Source/JavaScriptCore/jit/JITOpcodes.cpp
Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
Source/JavaScriptCore/jit/JITOperations.cpp
Source/JavaScriptCore/jit/JITOperations.h
Source/JavaScriptCore/jit/JITPropertyAccess.cpp
Source/JavaScriptCore/jit/JITThunks.cpp
Source/JavaScriptCore/jit/JITThunks.h
Source/JavaScriptCore/jsc.cpp
Source/JavaScriptCore/llint/LLIntData.cpp
Source/JavaScriptCore/llint/LLIntExceptions.cpp
Source/JavaScriptCore/llint/LLIntThunks.cpp
Source/JavaScriptCore/llint/LLIntThunks.h
Source/JavaScriptCore/llint/LowLevelInterpreter.asm
Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
Source/JavaScriptCore/parser/ModuleAnalyzer.cpp
Source/JavaScriptCore/parser/NodeConstructors.h
Source/JavaScriptCore/parser/Nodes.h
Source/JavaScriptCore/profiler/ProfilerBytecode.cpp
Source/JavaScriptCore/profiler/ProfilerBytecode.h
Source/JavaScriptCore/profiler/ProfilerBytecodeSequence.cpp
Source/JavaScriptCore/runtime/ArrayConventions.h
Source/JavaScriptCore/runtime/ArrayPrototype.cpp
Source/JavaScriptCore/runtime/ArrayStorage.h
Source/JavaScriptCore/runtime/AuxiliaryBarrier.h [new file with mode: 0644]
Source/JavaScriptCore/runtime/AuxiliaryBarrierInlines.h [new file with mode: 0644]
Source/JavaScriptCore/runtime/Butterfly.h
Source/JavaScriptCore/runtime/ButterflyInlines.h
Source/JavaScriptCore/runtime/ClonedArguments.cpp
Source/JavaScriptCore/runtime/CommonSlowPathsExceptions.cpp
Source/JavaScriptCore/runtime/CommonSlowPathsExceptions.h
Source/JavaScriptCore/runtime/DataView.cpp
Source/JavaScriptCore/runtime/DirectArguments.h
Source/JavaScriptCore/runtime/ECMAScriptSpecInternalFunctions.cpp
Source/JavaScriptCore/runtime/Error.cpp
Source/JavaScriptCore/runtime/Error.h
Source/JavaScriptCore/runtime/ErrorInstance.cpp
Source/JavaScriptCore/runtime/ErrorInstance.h
Source/JavaScriptCore/runtime/Exception.cpp
Source/JavaScriptCore/runtime/Exception.h
Source/JavaScriptCore/runtime/GeneratorFrame.cpp
Source/JavaScriptCore/runtime/GeneratorPrototype.cpp
Source/JavaScriptCore/runtime/IntlCollator.cpp
Source/JavaScriptCore/runtime/IntlCollatorConstructor.cpp
Source/JavaScriptCore/runtime/IntlCollatorPrototype.cpp
Source/JavaScriptCore/runtime/IntlDateTimeFormat.cpp
Source/JavaScriptCore/runtime/IntlDateTimeFormatConstructor.cpp
Source/JavaScriptCore/runtime/IntlDateTimeFormatPrototype.cpp
Source/JavaScriptCore/runtime/IntlNumberFormat.cpp
Source/JavaScriptCore/runtime/IntlNumberFormatConstructor.cpp
Source/JavaScriptCore/runtime/IntlNumberFormatPrototype.cpp
Source/JavaScriptCore/runtime/IntlObject.cpp
Source/JavaScriptCore/runtime/IteratorPrototype.cpp
Source/JavaScriptCore/runtime/JSArray.cpp
Source/JavaScriptCore/runtime/JSArray.h
Source/JavaScriptCore/runtime/JSArrayBufferView.h
Source/JavaScriptCore/runtime/JSCInlines.h
Source/JavaScriptCore/runtime/JSCJSValue.cpp
Source/JavaScriptCore/runtime/JSCallee.cpp
Source/JavaScriptCore/runtime/JSCell.cpp
Source/JavaScriptCore/runtime/JSCell.h
Source/JavaScriptCore/runtime/JSCellInlines.h
Source/JavaScriptCore/runtime/JSFunction.cpp
Source/JavaScriptCore/runtime/JSFunction.h
Source/JavaScriptCore/runtime/JSFunctionInlines.h
Source/JavaScriptCore/runtime/JSGenericTypedArrayViewInlines.h
Source/JavaScriptCore/runtime/JSInternalPromise.cpp
Source/JavaScriptCore/runtime/JSInternalPromiseConstructor.cpp
Source/JavaScriptCore/runtime/JSInternalPromiseDeferred.cpp
Source/JavaScriptCore/runtime/JSInternalPromisePrototype.cpp
Source/JavaScriptCore/runtime/JSJob.cpp
Source/JavaScriptCore/runtime/JSMapIterator.cpp
Source/JavaScriptCore/runtime/JSModuleNamespaceObject.cpp
Source/JavaScriptCore/runtime/JSModuleRecord.cpp
Source/JavaScriptCore/runtime/JSObject.cpp
Source/JavaScriptCore/runtime/JSObject.h
Source/JavaScriptCore/runtime/JSObjectInlines.h
Source/JavaScriptCore/runtime/JSPromise.cpp
Source/JavaScriptCore/runtime/JSPromiseConstructor.cpp
Source/JavaScriptCore/runtime/JSPromiseDeferred.cpp
Source/JavaScriptCore/runtime/JSPromisePrototype.cpp
Source/JavaScriptCore/runtime/JSPropertyNameIterator.cpp
Source/JavaScriptCore/runtime/JSScope.cpp
Source/JavaScriptCore/runtime/JSScope.h
Source/JavaScriptCore/runtime/JSSetIterator.cpp
Source/JavaScriptCore/runtime/JSStringIterator.cpp
Source/JavaScriptCore/runtime/JSTemplateRegistryKey.cpp
Source/JavaScriptCore/runtime/JSTypedArrayViewConstructor.cpp
Source/JavaScriptCore/runtime/JSTypedArrayViewPrototype.cpp
Source/JavaScriptCore/runtime/JSWeakMap.cpp
Source/JavaScriptCore/runtime/JSWeakSet.cpp
Source/JavaScriptCore/runtime/MapConstructor.cpp
Source/JavaScriptCore/runtime/MapIteratorPrototype.cpp
Source/JavaScriptCore/runtime/MapPrototype.cpp
Source/JavaScriptCore/runtime/NativeErrorConstructor.cpp
Source/JavaScriptCore/runtime/NativeStdFunctionCell.cpp
Source/JavaScriptCore/runtime/Operations.h
Source/JavaScriptCore/runtime/Options.h
Source/JavaScriptCore/runtime/PropertyTable.cpp
Source/JavaScriptCore/runtime/ProxyConstructor.cpp
Source/JavaScriptCore/runtime/ProxyObject.cpp
Source/JavaScriptCore/runtime/ProxyRevoke.cpp
Source/JavaScriptCore/runtime/RegExpMatchesArray.h
Source/JavaScriptCore/runtime/RuntimeType.cpp
Source/JavaScriptCore/runtime/SamplingProfiler.cpp
Source/JavaScriptCore/runtime/SetConstructor.cpp
Source/JavaScriptCore/runtime/SetIteratorPrototype.cpp
Source/JavaScriptCore/runtime/SetPrototype.cpp
Source/JavaScriptCore/runtime/StackFrame.cpp [new file with mode: 0644]
Source/JavaScriptCore/runtime/StackFrame.h [new file with mode: 0644]
Source/JavaScriptCore/runtime/StringConstructor.cpp
Source/JavaScriptCore/runtime/StringIteratorPrototype.cpp
Source/JavaScriptCore/runtime/TemplateRegistry.cpp
Source/JavaScriptCore/runtime/TestRunnerUtils.cpp
Source/JavaScriptCore/runtime/TestRunnerUtils.h
Source/JavaScriptCore/runtime/TypeProfilerLog.cpp
Source/JavaScriptCore/runtime/TypeSet.cpp
Source/JavaScriptCore/runtime/VM.cpp
Source/JavaScriptCore/runtime/VM.h
Source/JavaScriptCore/runtime/VMEntryScope.h
Source/JavaScriptCore/runtime/WeakMapConstructor.cpp
Source/JavaScriptCore/runtime/WeakMapData.cpp
Source/JavaScriptCore/runtime/WeakMapPrototype.cpp
Source/JavaScriptCore/runtime/WeakSetConstructor.cpp
Source/JavaScriptCore/runtime/WeakSetPrototype.cpp
Source/JavaScriptCore/tools/JSDollarVM.cpp
Source/JavaScriptCore/tools/JSDollarVMPrototype.cpp
Source/WTF/ChangeLog
Source/WTF/wtf/FastMalloc.cpp
Source/WTF/wtf/FastMalloc.h
Source/WTF/wtf/ParkingLot.cpp
Source/WTF/wtf/ParkingLot.h
Source/WTF/wtf/ScopedLambda.h
Source/WebCore/ChangeLog
Source/WebCore/ForwardingHeaders/heap/HeapInlines.h [new file with mode: 0644]
Source/WebCore/ForwardingHeaders/interpreter/Interpreter.h [deleted file]
Source/WebCore/ForwardingHeaders/runtime/AuxiliaryBarrierInlines.h [new file with mode: 0644]
Source/WebCore/Modules/indexeddb/IDBCursorWithValue.cpp
Source/WebCore/Modules/indexeddb/client/TransactionOperation.cpp
Source/WebCore/Modules/indexeddb/server/SQLiteIDBBackingStore.cpp
Source/WebCore/Modules/indexeddb/server/UniqueIDBDatabase.cpp
Source/WebCore/bindings/js/JSApplePayPaymentAuthorizedEventCustom.cpp
Source/WebCore/bindings/js/JSApplePayPaymentMethodSelectedEventCustom.cpp
Source/WebCore/bindings/js/JSApplePayShippingContactSelectedEventCustom.cpp
Source/WebCore/bindings/js/JSApplePayShippingMethodSelectedEventCustom.cpp
Source/WebCore/bindings/js/JSClientRectCustom.cpp
Source/WebCore/bindings/js/JSDOMBinding.cpp
Source/WebCore/bindings/js/JSDOMBinding.h
Source/WebCore/bindings/js/JSDeviceMotionEventCustom.cpp
Source/WebCore/bindings/js/JSDeviceOrientationEventCustom.cpp
Source/WebCore/bindings/js/JSErrorEventCustom.cpp
Source/WebCore/bindings/js/JSIDBCursorWithValueCustom.cpp
Source/WebCore/bindings/js/JSIDBIndexCustom.cpp
Source/WebCore/bindings/js/JSPopStateEventCustom.cpp
Source/WebCore/bindings/js/JSWebGL2RenderingContextCustom.cpp
Source/WebCore/bindings/js/JSWorkerGlobalScopeCustom.cpp
Source/WebCore/bindings/js/WorkerScriptController.cpp
Source/WebCore/contentextensions/ContentExtensionParser.cpp
Source/WebCore/dom/ErrorEvent.cpp
Source/WebCore/html/HTMLCanvasElement.cpp
Source/WebCore/html/MediaDocument.cpp
Source/WebCore/inspector/CommandLineAPIModule.cpp
Source/WebCore/loader/EmptyClients.cpp
Source/WebCore/page/CaptionUserPreferences.cpp
Source/WebCore/page/Frame.cpp
Source/WebCore/page/PageGroup.cpp
Source/WebCore/page/UserContentController.cpp
Source/WebCore/platform/mock/mediasource/MockBox.cpp
Source/WebCore/testing/GCObservation.cpp
Source/WebKit2/ChangeLog
Source/WebKit2/UIProcess/ViewGestureController.cpp
Source/WebKit2/UIProcess/WebPageProxy.cpp
Source/WebKit2/UIProcess/WebProcessPool.cpp
Source/WebKit2/UIProcess/WebProcessProxy.cpp
Source/WebKit2/WebProcess/InjectedBundle/DOM/InjectedBundleRangeHandle.cpp
Source/WebKit2/WebProcess/Plugins/Netscape/JSNPObject.cpp
Source/bmalloc/ChangeLog
Source/bmalloc/bmalloc/Allocator.cpp
Source/bmalloc/bmalloc/Allocator.h
Source/bmalloc/bmalloc/Cache.h
Source/bmalloc/bmalloc/bmalloc.h
Tools/ChangeLog
Tools/DumpRenderTree/TestRunner.cpp
Tools/DumpRenderTree/mac/DumpRenderTree.mm
Tools/Scripts/run-jsc-stress-tests
Tools/TestWebKitAPI/Tests/WTF/Vector.cpp

index a265a64..5d28685 100644 (file)
@@ -1,3 +1,18 @@
+2016-08-22  Filip Pizlo  <fpizlo@apple.com>
+
+        Butterflies should be allocated in Auxiliary MarkedSpace instead of CopiedSpace and we should rewrite as much of the GC as needed to make this not a regression
+        https://bugs.webkit.org/show_bug.cgi?id=160125
+
+        Reviewed by Geoffrey Garen.
+        
+        Most of the things I did properly covered by existing tests, but I found some simple cases of
+        unshifting that had sketchy coverage.
+
+        * stress/array-storage-array-unshift.js: Added.
+        * stress/contiguous-array-unshift.js: Added.
+        * stress/double-array-unshift.js: Added.
+        * stress/int32-array-unshift.js: Added.
+
 2016-08-23  Keith Miller  <keith_miller@apple.com>
 
         Update/add new test262 tests
diff --git a/JSTests/stress/array-storage-array-unshift.js b/JSTests/stress/array-storage-array-unshift.js
new file mode 100644 (file)
index 0000000..a4ae62c
--- /dev/null
@@ -0,0 +1,8 @@
+//@ runDefault
+var x = [2.5, 1.5];
+x.length = 1000000000;
+x.length = 2;
+Array.prototype.unshift.call(x, 3.5);
+if (x.toString() != "3.5,2.5,1.5")
+    throw "Error: bad result: " + describe(x);
+
diff --git a/JSTests/stress/contiguous-array-unshift.js b/JSTests/stress/contiguous-array-unshift.js
new file mode 100644 (file)
index 0000000..2509d9c
--- /dev/null
@@ -0,0 +1,6 @@
+//@ runDefault
+var x = ['b', 'a'];
+Array.prototype.unshift.call(x, 'c');
+if (x.toString() != "c,b,a")
+    throw "Error: bad result: " + describe(x);
+
diff --git a/JSTests/stress/double-array-unshift.js b/JSTests/stress/double-array-unshift.js
new file mode 100644 (file)
index 0000000..fc9aeb8
--- /dev/null
@@ -0,0 +1,6 @@
+//@ runDefault
+var x = [2.5, 1.5];
+Array.prototype.unshift.call(x, 3.5);
+if (x.toString() != "3.5,2.5,1.5")
+    throw "Error: bad result: " + describe(x);
+
diff --git a/JSTests/stress/int32-array-unshift.js b/JSTests/stress/int32-array-unshift.js
new file mode 100644 (file)
index 0000000..4d4c69d
--- /dev/null
@@ -0,0 +1,6 @@
+//@ runDefault
+var x = [2, 1];
+Array.prototype.unshift.call(x, 3);
+if (x.toString() != "3,2,1")
+    throw "Error: bad result: " + describe(x);
+
index b509970..0b75a5b 100644 (file)
@@ -32,7 +32,7 @@
 #include "ClassInfo.h"
 #include "Error.h"
 #include "JSArrayBufferViewInlines.h"
-#include "JSCJSValueInlines.h"
+#include "JSCInlines.h"
 #include "JSDataView.h"
 #include "JSGenericTypedArrayViewInlines.h"
 #include "JSTypedArrays.h"
index 10b3097..ffdc0d4 100644 (file)
@@ -31,9 +31,8 @@
 #import "APICallbackFunction.h"
 #import "APICast.h"
 #import "Error.h"
-#import "JSCJSValueInlines.h"
 #import "JSCell.h"
-#import "JSCellInlines.h"
+#import "JSCInlines.h"
 #import "JSContextInternal.h"
 #import "JSWrapperMap.h"
 #import "JSValueInternal.h"
index 3ed54b8..2986a9d 100644 (file)
@@ -66,6 +66,7 @@ set(JavaScriptCore_SOURCES
     assembler/MacroAssembler.cpp
     assembler/MacroAssemblerARM.cpp
     assembler/MacroAssemblerARMv7.cpp
+    assembler/MacroAssemblerCodeRef.cpp
     assembler/MacroAssemblerPrinter.cpp
     assembler/MacroAssemblerX86Common.cpp
 
@@ -445,6 +446,7 @@ set(JavaScriptCore_SOURCES
     heap/DestructionMode.cpp
     heap/EdenGCActivityCallback.cpp
     heap/FullGCActivityCallback.cpp
+    heap/FreeList.cpp
     heap/GCActivityCallback.cpp
     heap/GCLogging.cpp
     heap/HandleSet.cpp
@@ -460,6 +462,7 @@ set(JavaScriptCore_SOURCES
     heap/HeapVerifier.cpp
     heap/IncrementalSweeper.cpp
     heap/JITStubRoutineSet.cpp
+    heap/LargeAllocation.cpp
     heap/LiveObjectList.cpp
     heap/MachineStackMarker.cpp
     heap/MarkStack.cpp
@@ -798,6 +801,7 @@ set(JavaScriptCore_SOURCES
     runtime/SimpleTypedArrayController.cpp
     runtime/SmallStrings.cpp
     runtime/SparseArrayValueMap.cpp
+    runtime/StackFrame.cpp
     runtime/StrictEvalActivation.cpp
     runtime/StringConstructor.cpp
     runtime/StringIteratorPrototype.cpp
index 5e4a0b7..c0f925d 100644 (file)
@@ -1,3 +1,810 @@
+2016-08-22  Filip Pizlo  <fpizlo@apple.com>
+
+        Butterflies should be allocated in Auxiliary MarkedSpace instead of CopiedSpace and we should rewrite as much of the GC as needed to make this not a regression
+        https://bugs.webkit.org/show_bug.cgi?id=160125
+
+        Reviewed by Geoffrey Garen.
+
+        In order to make the GC concurrent (bug 149432), we would either need to enable concurrent
+        copying or we would need to not copy. Concurrent copying carries a 1-2% throughput overhead
+        from the barriers alone. Considering that MarkedSpace does a decent job of avoiding
+        fragmentation, it's unlikely that it's worth paying 1-2% throughput for copying. So, we want
+        to get rid of copied space. This change moves copied space's biggest client over to marked
+        space.
+        
+        Moving butterflies to marked space means having them use the new Auxiliary HeapCell
+        allocation path. This is a fairly mechanical change, but it caused performance regressions
+        everywhere, so this change also fixes MarkedSpace's performance issues.
+        
+        At a high level the mechanical changes are:
+        
+        - We use AuxiliaryBarrier instead of CopyBarrier.
+        
+        - We use tryAllocateAuxiliary instead of tryAllocateStorage. I got rid of the silly
+          CheckedBoolean stuff, since it's so much more trouble than it's worth.
+        
+        - The JITs have to emit inlined marked space allocations instead of inline copy space
+          allocations.
+        
+        - Everyone has to get used to zeroing their butterflies after allocation instead of relying
+          on them being pre-zeroed by the GC. Copied space would zero things for you, while marked
+          space doesn't.
+        
+        That's about 1/3 of this change. But this led to performance problems, which I fixed with
+        optimizations that amounted to a major MarkedSpace rewrite:
+        
+        - MarkedSpace always causes internal fragmentation for array allocations because the vector
+          length we choose when we resize usually leads to a cell size that doesn't correspond to any
+          size class. I got around this by making array allocations usually round up vectorLength to
+          the maximum allowed by the size class that we would have allocated in. Also,
+          ensureLengthSlow() and friends first make sure that the requested length can't just be
+          fulfilled with the current allocation size. This safeguard means that not every array
+          allocation has to do size class queries. For example, the fast path of new Array(length)
+          never does any size class queries, under the assumption that (1) the speed gained from
+          avoiding an ensureLengthSlow() call, which then just changes the vectorLength by doing the
+          size class query, is too small to offset the speed lost by doing the query on every
+          allocation and (2) new Array(length) is a pretty good hint that resizing is not very
+          likely.
+        
+        - Size classes in MarkedSpace were way too precise, which led to external fragmentation. This
+          changes MarkedSpace size classes to use a linear progression for very small sizes followed
+          by a geometric progression that naturally transitions to a hyperbolic progression. We want
+          hyperbolic sizes when we get close to blockSize: for example the largest size we want is
+          payloadSize / 2 rounded down, to ensure we get exactly two cells with minimal slop. The
+          next size down should be payloadSize / 3 rounded down, and so on. After the last precise
+          size (80 bytes), we proceed using a geometric progression, but round up each size to
+          minimize slop at the end of the block. This naturally causes the geometric progression to
+          turn hyperbolic for large sizes. The size class configuration happens at VM start-up, so
+          it can be controlled with runtime options. I found that a base of 1.4 works pretty well.
+        
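The size class math above can be checked with a short standalone program. The following is a sketch under assumptions, not code from the patch: the 16 byte step, the 80 byte precise cutoff, the 1.4 base, and the 400 byte large-allocation cutoff are taken from this ChangeLog, and a plain round-up to a 16 byte boundary stands in for the patch's slop-minimizing rounding.

    // Sketch only: reproduce the size class progression described above.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main()
    {
        const double sizeStep = 16;               // atom size, and the linear step
        const double preciseCutoff = 80;          // last precise (linear) size
        const double largeAllocationCutoff = 400; // sizes at or above this go to LargeAllocation
        const double sizeClassProgression = 1.4;  // geometric base

        std::vector<unsigned> sizeClasses;

        // Linear progression for very small sizes: 16, 32, 48, 64, 80.
        for (double size = sizeStep; size <= preciseCutoff; size += sizeStep)
            sizeClasses.push_back(static_cast<unsigned>(size));

        // Geometric progression, each step rounded up to the atom size. The real
        // code additionally rounds sizes up to minimize slop at the end of the
        // block, which is what bends the progression hyperbolic near blockSize.
        for (double approximateSize = preciseCutoff * sizeClassProgression; ; approximateSize *= sizeClassProgression) {
            unsigned sizeClass = static_cast<unsigned>(std::ceil(approximateSize / sizeStep) * sizeStep);
            if (sizeClass >= largeAllocationCutoff)
                break; // the next step would round to 432, which is over the cutoff
            sizeClasses.push_back(sizeClass);
        }

        for (unsigned sizeClass : sizeClasses)
            std::printf("%u ", sizeClass); // prints: 16 32 48 64 80 112 160 224 320
        std::printf("\n");
        return 0;
    }

This reproduces the size class list given below and stops exactly where the text says the 432 byte class would have been.
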
+        - Large allocations caused massive internal fragmentation, since the smallest large
+          allocation had to use exactly blockSize, and the largest small allocation used
+          blockSize / 2. The next size up - the first large allocation size to require two blocks -
+          also had 50% internal fragmentation. This is because we required large allocations to be
+          blockSize aligned, so that MarkedBlock::blockFor() would work. I decided to rewrite all of
+          that. Cells no longer have to be owned by a MarkedBlock. They can now alternatively be
+          owned by a LargeAllocation. These two things are abstracted as CellContainer. You know that
+          a cell is owned by a LargeAllocation if the MarkedBlock::atomSize / 2 bit is set.
+          Basically, large allocations are deliberately misaligned by 8 bytes. This actually works
+          out great since (1) typed arrays won't use large allocations anyway since they have their
+          own malloc fallback and (2) large array butterflies already have an 8 byte header, which
+          means that the 8 byte base misalignment aligns the large array payload on a 16 byte
+          boundary. I took extreme care to make sure that the isLargeAllocation bit checks are as
+          rare as possible; for example, ExecState::vm() skips the check because we know that callees
+          must be small allocations. It's also possible to use template tricks to do one check for
+          cell container kind, and then invoke a function specialized for MarkedBlock or a function
+          specialized for LargeAllocation. LargeAllocation includes stubs for all MarkedBlock methods
+          that get used from functions that are template-specialized like this. That's mostly to
+          speed up the GC marking code. Most other code can use CellContainer API or HeapCell API
+          directly. That's another thing: HeapCell, the common base of JSCell and auxiliary
+          allocations, is now smart enough to do a lot of things for you, like HeapCell::vm(),
+          HeapCell::heap(), HeapCell::isLargeAllocation(), and HeapCell::cellContainer(). The size
+          cutoff for large allocations is runtime-configurable, so long as you don't choose something
+          so small that callees end up large. I found that 400 bytes is roughly optimal. This means
+          that the MarkedBlock size classes end up being:
+          
+          16, 32, 48, 64, 80, 112, 160, 224, 320
+          
+          The next size class would have been 432, but that's above the 400 byte cutoff. All of this
+          is configurable with --sizeClassProgression and --largeAllocationCutoff. You can see what
+          size classes you end up with by doing --dumpSizeClasses=true.
+        
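The container check that this misalignment enables can be sketched in a few lines. Everything below is illustrative, not the patch's code: HeapCell is a stand-in struct and the free function is hypothetical; only the 16 byte atom size and the atomSize / 2 bit come from the description above.

    // Sketch only: MarkedBlock cells are atomSize (16 byte) aligned, while a
    // LargeAllocation hands out its cell at an 8 byte offset, so a single bit
    // of the cell address says which kind of container owns it.
    #include <cstdint>
    #include <cstdio>

    static const uintptr_t atomSize = 16; // MarkedBlock::atomSize in the text

    struct HeapCell { }; // stand-in, not JSC's HeapCell

    inline bool isLargeAllocation(const HeapCell* cell)
    {
        return reinterpret_cast<uintptr_t>(cell) & (atomSize / 2);
    }

    int main()
    {
        alignas(16) static char memory[64];
        auto* blockCell = reinterpret_cast<HeapCell*>(memory);     // 16 byte aligned: MarkedBlock-owned
        auto* largeCell = reinterpret_cast<HeapCell*>(memory + 8); // misaligned by 8: LargeAllocation-owned
        std::printf("%d %d\n", isLargeAllocation(blockCell), isLargeAllocation(largeCell)); // prints: 0 1
        return 0;
    }
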
+        - Copied space uses 64KB blocks, while marked space used to use 16KB blocks. Allocating a lot
+          of stuff in 16KB blocks is slower than allocating it in 64KB blocks. I got more speed from
+          changing MarkedBlock::blockSize to 64KB. This would have been a space fail before, but now
+          that we have LargeAllocation, it ends up being an overall win.
+        
+        - Even after all of that, copying butterflies was still faster because it allowed us to skip
+          sweeping dead space. A good GC allocates over dead bytes without explicitly freeing them,
+          so the GC pause is O(size of live), not O(size of live + dead). O(dead) is usually much
+          larger than O(live), especially in an eden collection. Copying satisfies this premise while
+          mark+sweep does not. So, I invented a new kind of allocator: bump'n'pop. Previously, our
+          MarkedSpace allocator was a freelist pop. That's simple and easy to inline but requires
+          that we walk the block to build a free list. This means walking dead space. The new
+          allocator allows totally free MarkedBlocks to simply set up a bump-pointer arena instead.
+          The allocator is a hybrid of bump-pointer and freelist pop. It tries bump first. The bump
+          pointer always bumps by cellSize, so the result of filling a block with bumping looks as if
+          we had used freelist popping to fill it. Additionally, each MarkedBlock now has a bit to
+          quickly tell if the block is entirely free. This makes sweeping O(1) whenever a MarkedBlock
+          is completely empty, which is the common case because of the generational hypothesis: the
+          number of objects that survive an eden collection is a tiny fraction of the number of
+          objects that had been allocated, and this fraction is so small that there are typically
+          fewer than one survivor per MarkedBlock. This optimization was enough to make the whole
+          change a net win over tip-of-tree.
+        
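The bump'n'pop fast path described above can be sketched as follows. The FreeList layout and the allocate() helper are assumptions for illustration, not the patch's FreeList or MarkedAllocator; the point is the order of the checks: bump while the arena has space, otherwise pop the free list, otherwise fail over to the slow path.

    // Sketch only: a hybrid bump-pointer / freelist-pop allocator.
    #include <cstdio>

    struct FreeCell {
        FreeCell* next;
    };

    struct FreeList {
        FreeCell* head { nullptr };   // freelist-pop part
        char* payloadEnd { nullptr }; // bump part: the arena is [payloadEnd - remaining, payloadEnd)
        unsigned remaining { 0 };     // bytes left in the bump arena
    };

    // Every bump advances by exactly cellSize, so a block filled by bumping
    // ends up laid out the same way as a block filled by freelist popping.
    inline void* allocate(FreeList& freeList, unsigned cellSize)
    {
        if (unsigned remaining = freeList.remaining) {
            remaining -= cellSize;
            freeList.remaining = remaining;
            return freeList.payloadEnd - remaining - cellSize;
        }
        if (FreeCell* cell = freeList.head) {
            freeList.head = cell->next;
            return cell;
        }
        return nullptr; // caller takes the slow path: sweep a block or get a new one
    }

    int main()
    {
        static char arena[64 * 16];
        FreeList freeList;
        freeList.payloadEnd = arena + sizeof(arena);
        freeList.remaining = sizeof(arena);
        for (int i = 0; i < 3; ++i)
            std::printf("%p\n", allocate(freeList, 16)); // arena, arena + 16, arena + 32
        return 0;
    }

A completely empty MarkedBlock can be handed to the allocator as one big bump arena without walking it first, which is what makes sweeping such a block O(1) in the scheme described above.
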
+        - FTL now shares the same allocation fast paths as everything else, which is great, because
+          bump'n'pop has gnarly control flow. We don't really want B3 to have to think about that
+          control flow, since it won't be able to improve the machine code we write ourselves. GC
+          fast paths are best written in assembly. So, I've empowered B3 to have even better support
+          for Patchpoint terminals. It's now totally fine for a Patchpoint terminal to be non-Void.
+          So, the new FTL allocation fast paths are just Patchpoint terminals that call through to
+          AssemblyHelpers::emitAllocate(). B3 still reasons about things like constant-folding the
+          size class calculation and constant-hoisting the allocator. Also, I gave the FTL the
+          ability to constant-fold some allocator logic (in case we first assume that we're doing a
+          variable-length allocation but then realize that the length is known). I think it makes
+          sense to have constant folding rules in FTL::Output, or whatever the B3 IR builder is,
+          since this makes lowering easier (you can constant fold during lowering more easily) and it
+          reduces the amount of malloc traffic. In the future, we could teach B3 how to better
+          constant-fold this code. That would require allowing loads to be constant-folded, which is
+          doable but hella tricky.
+        
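Folding at IR-build time, as argued above, can be illustrated with a toy builder. None of the following is FTL::Output's or B3's real API; it only shows the shape of the idea: when both operands are already constants, add() hands back a constant instead of allocating an Add node, so lowering sees a constant and less IR gets malloc'd.

    // Sketch only: a toy IR builder that folds constants as values are built.
    #include <cstdint>
    #include <cstdio>
    #include <memory>
    #include <vector>

    struct Value {
        enum Kind { Const, Add } kind { Const };
        int64_t constant { 0 }; // meaningful when kind == Const
        Value* left { nullptr };
        Value* right { nullptr };
    };

    class Builder {
    public:
        Value* constInt64(int64_t value)
        {
            Value* result = newValue();
            result->kind = Value::Const;
            result->constant = value;
            return result;
        }

        Value* add(Value* a, Value* b)
        {
            if (a->kind == Value::Const && b->kind == Value::Const)
                return constInt64(a->constant + b->constant); // fold now; no Add node is ever allocated
            Value* result = newValue();
            result->kind = Value::Add;
            result->left = a;
            result->right = b;
            return result;
        }

    private:
        Value* newValue()
        {
            m_values.push_back(std::make_unique<Value>());
            return m_values.back().get();
        }

        std::vector<std::unique_ptr<Value>> m_values;
    };

    int main()
    {
        Builder out;
        // For example, a size class calculation whose inputs turn out to be known:
        Value* size = out.add(out.constInt64(16), out.constInt64(8 * 4));
        std::printf("%lld\n", static_cast<long long>(size->constant)); // prints: 48
        return 0;
    }
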
+        All of this put together gives us neutral perf on JetStream, Speedometer, and PLT3. SunSpider
+        sometimes gets penalized depending on how you run it. By comparison, the alternative approach
+        of using a copy barrier would have cost us 1-2%. That's the real apples-to-apples comparison
+        if your premise is that we should have a concurrent GC. After we finish removing copied
+        space, we will be barrier-ready for concurrent GC: we already have a marking barrier and we
+        simply won't need a copying barrier. This change gets us there for the purposes of our
+        benchmarks, since the remaining clients of copied space are not very important. On the other
+        hand, if we keep copying, then getting barrier-ready would mean adding back the copy barrier,
+        which costs more perf.
+        
+        We might get bigger speed-ups once we remove CopiedSpace altogether. That requires moving
+        typed arrays and a few other weird things over to Aux MarkedSpace.
+        
+        This also includes some header sanitization. The introduction of AuxiliaryBarrier, HeapCell,
+        and CellContainer meant that I had to include those files from everywhere. Fortunately,
+        just including JSCInlines.h (instead of manually including the files that it includes) is
+        usually enough. So, I made most of JSC's cpp files include JSCInlines.h, which is something
+        that we were already basically doing. In places where JSCInlines.h would be too much, I just
+        included HeapInlines.h. This got weird, because we previously included HeapInlines.h from
+        JSObject.h. That's bad because it led to some circular dependencies, so I fixed it - but that
+        meant having to manually include HeapInlines.h from the places that previously got it
+        implicitly via JSObject.h. But that led to more problems for some reason: I started getting
+        build errors because non-JSC files were having trouble including Opcode.h. That's just silly,
+        since Opcode.h is meant to be an internal JSC header. So, I made it an internal header and
+        made it impossible to include it from outside JSC. This was a lot of work, but it was
+        necessary to get the patch to build on all ports. It's also a net win. There were many places
+        in WebCore that were transitively including a *ton* of JSC headers just because of the
+        JSObject.h->HeapInlines.h edge and a bunch of dependency edges that arose from some public
+        (for WebCore) JSC headers needing Interpreter.h or Opcode.h for bad reasons.
+
+        * API/JSTypedArray.cpp:
+        * API/ObjCCallbackFunction.mm:
+        * CMakeLists.txt:
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * Scripts/builtins/builtins_generate_combined_implementation.py:
+        (BuiltinsCombinedImplementationGenerator.generate_secondary_header_includes):
+        * Scripts/builtins/builtins_generate_internals_wrapper_implementation.py:
+        (BuiltinsInternalsWrapperImplementationGenerator.generate_secondary_header_includes):
+        * Scripts/builtins/builtins_generate_separate_implementation.py:
+        (BuiltinsSeparateImplementationGenerator.generate_secondary_header_includes):
+        * assembler/AbstractMacroAssembler.h:
+        (JSC::AbstractMacroAssembler::JumpList::JumpList):
+        (JSC::AbstractMacroAssembler::JumpList::link):
+        (JSC::AbstractMacroAssembler::JumpList::linkTo):
+        (JSC::AbstractMacroAssembler::JumpList::append):
+        * assembler/MacroAssemblerARM64.h:
+        (JSC::MacroAssemblerARM64::add32):
+        * b3/B3BasicBlock.cpp:
+        (JSC::B3::BasicBlock::appendIntConstant):
+        (JSC::B3::BasicBlock::appendBoolConstant):
+        (JSC::B3::BasicBlock::clearSuccessors):
+        * b3/B3BasicBlock.h:
+        * b3/B3DuplicateTails.cpp:
+        * b3/B3StackmapGenerationParams.h:
+        * b3/testb3.cpp:
+        (JSC::B3::testBranchBitAndImmFusion):
+        (JSC::B3::testPatchpointTerminalReturnValue):
+        (JSC::B3::zero):
+        (JSC::B3::run):
+        * bindings/ScriptValue.cpp:
+        * bytecode/AdaptiveInferredPropertyValueWatchpointBase.cpp:
+        * bytecode/LLIntPrototypeLoadAdaptiveStructureWatchpoint.cpp:
+        * bytecode/ObjectAllocationProfile.h:
+        (JSC::ObjectAllocationProfile::initialize):
+        * bytecode/PolymorphicAccess.cpp:
+        (JSC::AccessCase::generateImpl):
+        * bytecode/StructureStubInfo.cpp:
+        * dfg/DFGOperations.cpp:
+        * dfg/DFGSpeculativeJIT.cpp:
+        (JSC::DFG::SpeculativeJIT::emitAllocateRawObject):
+        (JSC::DFG::SpeculativeJIT::compileMakeRope):
+        (JSC::DFG::SpeculativeJIT::compileAllocatePropertyStorage):
+        (JSC::DFG::SpeculativeJIT::compileReallocatePropertyStorage):
+        * dfg/DFGSpeculativeJIT.h:
+        (JSC::DFG::SpeculativeJIT::emitAllocateJSCell):
+        (JSC::DFG::SpeculativeJIT::emitAllocateJSObject):
+        * dfg/DFGSpeculativeJIT32_64.cpp:
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGSpeculativeJIT64.cpp:
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGStrengthReductionPhase.cpp:
+        (JSC::DFG::StrengthReductionPhase::handleNode):
+        * ftl/FTLAbstractHeapRepository.h:
+        * ftl/FTLCompile.cpp:
+        * ftl/FTLJITFinalizer.cpp:
+        * ftl/FTLLowerDFGToB3.cpp:
+        (JSC::FTL::DFG::LowerDFGToB3::compileCreateDirectArguments):
+        (JSC::FTL::DFG::LowerDFGToB3::compileNewArrayWithSize):
+        (JSC::FTL::DFG::LowerDFGToB3::compileMakeRope):
+        (JSC::FTL::DFG::LowerDFGToB3::compileMaterializeNewObject):
+        (JSC::FTL::DFG::LowerDFGToB3::initializeArrayElements):
+        (JSC::FTL::DFG::LowerDFGToB3::allocatePropertyStorageWithSizeImpl):
+        (JSC::FTL::DFG::LowerDFGToB3::emitRightShiftSnippet):
+        (JSC::FTL::DFG::LowerDFGToB3::allocateHeapCell):
+        (JSC::FTL::DFG::LowerDFGToB3::storeStructure):
+        (JSC::FTL::DFG::LowerDFGToB3::allocateCell):
+        (JSC::FTL::DFG::LowerDFGToB3::allocateObject):
+        (JSC::FTL::DFG::LowerDFGToB3::allocatorForSize):
+        (JSC::FTL::DFG::LowerDFGToB3::allocateVariableSizedObject):
+        (JSC::FTL::DFG::LowerDFGToB3::allocateJSArray):
+        * ftl/FTLOutput.cpp:
+        (JSC::FTL::Output::constBool):
+        (JSC::FTL::Output::constInt32):
+        (JSC::FTL::Output::add):
+        (JSC::FTL::Output::shl):
+        (JSC::FTL::Output::aShr):
+        (JSC::FTL::Output::lShr):
+        (JSC::FTL::Output::zeroExt):
+        (JSC::FTL::Output::equal):
+        (JSC::FTL::Output::notEqual):
+        (JSC::FTL::Output::above):
+        (JSC::FTL::Output::aboveOrEqual):
+        (JSC::FTL::Output::below):
+        (JSC::FTL::Output::belowOrEqual):
+        (JSC::FTL::Output::greaterThan):
+        (JSC::FTL::Output::greaterThanOrEqual):
+        (JSC::FTL::Output::lessThan):
+        (JSC::FTL::Output::lessThanOrEqual):
+        (JSC::FTL::Output::select):
+        (JSC::FTL::Output::unreachable):
+        (JSC::FTL::Output::appendSuccessor):
+        (JSC::FTL::Output::speculate):
+        (JSC::FTL::Output::addIncomingToPhi):
+        * ftl/FTLOutput.h:
+        * ftl/FTLValueFromBlock.h:
+        (JSC::FTL::ValueFromBlock::ValueFromBlock):
+        (JSC::FTL::ValueFromBlock::operator bool):
+        (JSC::FTL::ValueFromBlock::value):
+        (JSC::FTL::ValueFromBlock::block):
+        * ftl/FTLWeightedTarget.h:
+        (JSC::FTL::WeightedTarget::target):
+        (JSC::FTL::WeightedTarget::weight):
+        (JSC::FTL::WeightedTarget::frequentedBlock):
+        * heap/CellContainer.h: Added.
+        (JSC::CellContainer::CellContainer):
+        (JSC::CellContainer::operator bool):
+        (JSC::CellContainer::isMarkedBlock):
+        (JSC::CellContainer::isLargeAllocation):
+        (JSC::CellContainer::markedBlock):
+        (JSC::CellContainer::largeAllocation):
+        * heap/CellContainerInlines.h: Added.
+        (JSC::CellContainer::isMarkedOrRetired):
+        (JSC::CellContainer::isMarked):
+        (JSC::CellContainer::isMarkedOrNewlyAllocated):
+        (JSC::CellContainer::setHasAnyMarked):
+        (JSC::CellContainer::cellSize):
+        (JSC::CellContainer::weakSet):
+        * heap/ConservativeRoots.cpp:
+        (JSC::ConservativeRoots::ConservativeRoots):
+        (JSC::ConservativeRoots::~ConservativeRoots):
+        (JSC::ConservativeRoots::grow):
+        (JSC::ConservativeRoots::genericAddPointer):
+        (JSC::ConservativeRoots::genericAddSpan):
+        * heap/ConservativeRoots.h:
+        (JSC::ConservativeRoots::size):
+        (JSC::ConservativeRoots::roots):
+        * heap/CopyToken.h:
+        * heap/FreeList.cpp: Added.
+        (JSC::FreeList::dump):
+        * heap/FreeList.h: Added.
+        (JSC::FreeList::FreeList):
+        (JSC::FreeList::list):
+        (JSC::FreeList::bump):
+        (JSC::FreeList::operator==):
+        (JSC::FreeList::operator!=):
+        (JSC::FreeList::operator bool):
+        * heap/Heap.cpp:
+        (JSC::Heap::Heap):
+        (JSC::Heap::finalizeUnconditionalFinalizers):
+        (JSC::Heap::markRoots):
+        (JSC::Heap::copyBackingStores):
+        (JSC::Heap::gatherStackRoots):
+        (JSC::Heap::gatherJSStackRoots):
+        (JSC::Heap::gatherScratchBufferRoots):
+        (JSC::Heap::clearLivenessData):
+        (JSC::Heap::visitSmallStrings):
+        (JSC::Heap::visitConservativeRoots):
+        (JSC::Heap::removeDeadCompilerWorklistEntries):
+        (JSC::Heap::gatherExtraHeapSnapshotData):
+        (JSC::Heap::removeDeadHeapSnapshotNodes):
+        (JSC::Heap::visitProtectedObjects):
+        (JSC::Heap::visitArgumentBuffers):
+        (JSC::Heap::visitException):
+        (JSC::Heap::visitStrongHandles):
+        (JSC::Heap::visitHandleStack):
+        (JSC::Heap::visitSamplingProfiler):
+        (JSC::Heap::traceCodeBlocksAndJITStubRoutines):
+        (JSC::Heap::converge):
+        (JSC::Heap::visitWeakHandles):
+        (JSC::Heap::updateObjectCounts):
+        (JSC::Heap::clearUnmarkedExecutables):
+        (JSC::Heap::deleteUnmarkedCompiledCode):
+        (JSC::Heap::collectAllGarbage):
+        (JSC::Heap::collect):
+        (JSC::Heap::collectWithoutAnySweep):
+        (JSC::Heap::collectImpl):
+        (JSC::Heap::suspendCompilerThreads):
+        (JSC::Heap::willStartCollection):
+        (JSC::Heap::flushOldStructureIDTables):
+        (JSC::Heap::flushWriteBarrierBuffer):
+        (JSC::Heap::stopAllocation):
+        (JSC::Heap::reapWeakHandles):
+        (JSC::Heap::pruneStaleEntriesFromWeakGCMaps):
+        (JSC::Heap::sweepArrayBuffers):
+        (JSC::Heap::snapshotMarkedSpace):
+        (JSC::Heap::deleteSourceProviderCaches):
+        (JSC::Heap::notifyIncrementalSweeper):
+        (JSC::Heap::writeBarrierCurrentlyExecutingCodeBlocks):
+        (JSC::Heap::resetAllocators):
+        (JSC::Heap::updateAllocationLimits):
+        (JSC::Heap::didFinishCollection):
+        (JSC::Heap::resumeCompilerThreads):
+        (JSC::Zombify::visit):
+        * heap/Heap.h:
+        (JSC::Heap::subspaceForObjectDestructor):
+        (JSC::Heap::subspaceForAuxiliaryData):
+        (JSC::Heap::allocatorForObjectWithoutDestructor):
+        (JSC::Heap::allocatorForObjectWithDestructor):
+        (JSC::Heap::allocatorForAuxiliaryData):
+        (JSC::Heap::storageAllocator):
+        * heap/HeapCell.h:
+        (JSC::HeapCell::zap):
+        (JSC::HeapCell::isZapped):
+        * heap/HeapCellInlines.h: Added.
+        (JSC::HeapCell::isLargeAllocation):
+        (JSC::HeapCell::cellContainer):
+        (JSC::HeapCell::markedBlock):
+        (JSC::HeapCell::largeAllocation):
+        (JSC::HeapCell::heap):
+        (JSC::HeapCell::vm):
+        (JSC::HeapCell::cellSize):
+        (JSC::HeapCell::allocatorAttributes):
+        (JSC::HeapCell::destructionMode):
+        (JSC::HeapCell::cellKind):
+        * heap/HeapInlines.h:
+        (JSC::Heap::isCollecting):
+        (JSC::Heap::heap):
+        (JSC::Heap::isLive):
+        (JSC::Heap::isMarked):
+        (JSC::Heap::testAndSetMarked):
+        (JSC::Heap::setMarked):
+        (JSC::Heap::cellSize):
+        (JSC::Heap::writeBarrier):
+        (JSC::Heap::allocateWithoutDestructor):
+        (JSC::Heap::allocateObjectOfType):
+        (JSC::Heap::subspaceForObjectOfType):
+        (JSC::Heap::allocatorForObjectOfType):
+        (JSC::Heap::allocateAuxiliary):
+        (JSC::Heap::tryAllocateAuxiliary):
+        (JSC::Heap::tryReallocateAuxiliary):
+        (JSC::Heap::tryAllocateStorage):
+        (JSC::Heap::didFreeBlock):
+        (JSC::Heap::isPointerGCObject): Deleted.
+        (JSC::Heap::isValueGCObject): Deleted.
+        * heap/HeapUtil.h: Added.
+        (JSC::HeapUtil::findGCObjectPointersForMarking):
+        (JSC::HeapUtil::isPointerGCObjectJSCell):
+        (JSC::HeapUtil::isValueGCObject):
+        * heap/LargeAllocation.cpp: Added.
+        (JSC::LargeAllocation::tryCreate):
+        (JSC::LargeAllocation::LargeAllocation):
+        (JSC::LargeAllocation::lastChanceToFinalize):
+        (JSC::LargeAllocation::shrink):
+        (JSC::LargeAllocation::visitWeakSet):
+        (JSC::LargeAllocation::reapWeakSet):
+        (JSC::LargeAllocation::clearMarks):
+        (JSC::LargeAllocation::clearMarksWithCollectionType):
+        (JSC::LargeAllocation::isEmpty):
+        (JSC::LargeAllocation::sweep):
+        (JSC::LargeAllocation::destroy):
+        (JSC::LargeAllocation::dump):
+        * heap/LargeAllocation.h: Added.
+        (JSC::LargeAllocation::fromCell):
+        (JSC::LargeAllocation::cell):
+        (JSC::LargeAllocation::isLargeAllocation):
+        (JSC::LargeAllocation::heap):
+        (JSC::LargeAllocation::vm):
+        (JSC::LargeAllocation::weakSet):
+        (JSC::LargeAllocation::clearNewlyAllocated):
+        (JSC::LargeAllocation::isNewlyAllocated):
+        (JSC::LargeAllocation::isMarked):
+        (JSC::LargeAllocation::isMarkedOrNewlyAllocated):
+        (JSC::LargeAllocation::isLive):
+        (JSC::LargeAllocation::hasValidCell):
+        (JSC::LargeAllocation::cellSize):
+        (JSC::LargeAllocation::aboveLowerBound):
+        (JSC::LargeAllocation::belowUpperBound):
+        (JSC::LargeAllocation::contains):
+        (JSC::LargeAllocation::attributes):
+        (JSC::LargeAllocation::testAndSetMarked):
+        (JSC::LargeAllocation::setMarked):
+        (JSC::LargeAllocation::clearMarked):
+        (JSC::LargeAllocation::setHasAnyMarked):
+        (JSC::LargeAllocation::headerSize):
+        * heap/MarkedAllocator.cpp:
+        (JSC::MarkedAllocator::MarkedAllocator):
+        (JSC::isListPagedOut):
+        (JSC::MarkedAllocator::isPagedOut):
+        (JSC::MarkedAllocator::retire):
+        (JSC::MarkedAllocator::tryAllocateWithoutCollectingImpl):
+        (JSC::MarkedAllocator::tryAllocateWithoutCollecting):
+        (JSC::MarkedAllocator::allocateSlowCase):
+        (JSC::MarkedAllocator::tryAllocateSlowCase):
+        (JSC::MarkedAllocator::allocateSlowCaseImpl):
+        (JSC::blockHeaderSize):
+        (JSC::MarkedAllocator::blockSizeForBytes):
+        (JSC::MarkedAllocator::tryAllocateBlock):
+        (JSC::MarkedAllocator::addBlock):
+        (JSC::MarkedAllocator::removeBlock):
+        (JSC::MarkedAllocator::reset):
+        (JSC::MarkedAllocator::lastChanceToFinalize):
+        (JSC::MarkedAllocator::setFreeList):
+        (JSC::MarkedAllocator::tryAllocateHelper): Deleted.
+        (JSC::MarkedAllocator::tryPopFreeList): Deleted.
+        (JSC::MarkedAllocator::tryAllocate): Deleted.
+        (JSC::MarkedAllocator::allocateBlock): Deleted.
+        * heap/MarkedAllocator.h:
+        (JSC::MarkedAllocator::destruction):
+        (JSC::MarkedAllocator::cellKind):
+        (JSC::MarkedAllocator::heap):
+        (JSC::MarkedAllocator::takeLastActiveBlock):
+        (JSC::MarkedAllocator::offsetOfFreeList):
+        (JSC::MarkedAllocator::offsetOfCellSize):
+        (JSC::MarkedAllocator::tryAllocate):
+        (JSC::MarkedAllocator::allocate):
+        (JSC::MarkedAllocator::stopAllocating):
+        (JSC::MarkedAllocator::resumeAllocating):
+        (JSC::MarkedAllocator::offsetOfFreeListHead): Deleted.
+        (JSC::MarkedAllocator::MarkedAllocator): Deleted.
+        (JSC::MarkedAllocator::init): Deleted.
+        * heap/MarkedBlock.cpp:
+        (JSC::MarkedBlock::tryCreate):
+        (JSC::MarkedBlock::MarkedBlock):
+        (JSC::MarkedBlock::specializedSweep):
+        (JSC::MarkedBlock::sweep):
+        (JSC::MarkedBlock::sweepHelperSelectResetMode):
+        (JSC::MarkedBlock::sweepHelperSelectStateAndSweepMode):
+        (JSC::MarkedBlock::stopAllocating):
+        (JSC::MarkedBlock::clearMarksWithCollectionType):
+        (JSC::MarkedBlock::lastChanceToFinalize):
+        (JSC::MarkedBlock::resumeAllocating):
+        (JSC::MarkedBlock::didRetireBlock):
+        (JSC::MarkedBlock::forEachFreeCell):
+        (JSC::MarkedBlock::create): Deleted.
+        (JSC::MarkedBlock::callDestructor): Deleted.
+        (JSC::MarkedBlock::sweepHelper): Deleted.
+        * heap/MarkedBlock.h:
+        (JSC::MarkedBlock::VoidFunctor::returnValue):
+        (JSC::MarkedBlock::setHasAnyMarked):
+        (JSC::MarkedBlock::hasAnyMarked):
+        (JSC::MarkedBlock::clearHasAnyMarked):
+        (JSC::MarkedBlock::firstAtom):
+        (JSC::MarkedBlock::isAtomAligned):
+        (JSC::MarkedBlock::cellAlign):
+        (JSC::MarkedBlock::blockFor):
+        (JSC::MarkedBlock::isEmpty):
+        (JSC::MarkedBlock::cellSize):
+        (JSC::MarkedBlock::isMarkedOrRetired):
+        (JSC::MarkedBlock::FreeList::FreeList): Deleted.
+        * heap/MarkedSpace.cpp:
+        (JSC::MarkedSpace::initializeSizeClassForStepSize):
+        (JSC::MarkedSpace::MarkedSpace):
+        (JSC::MarkedSpace::lastChanceToFinalize):
+        (JSC::MarkedSpace::allocateLarge):
+        (JSC::MarkedSpace::tryAllocateLarge):
+        (JSC::MarkedSpace::sweep):
+        (JSC::MarkedSpace::sweepABit):
+        (JSC::MarkedSpace::sweepLargeAllocations):
+        (JSC::MarkedSpace::zombifySweep):
+        (JSC::MarkedSpace::resetAllocators):
+        (JSC::MarkedSpace::visitWeakSets):
+        (JSC::MarkedSpace::reapWeakSets):
+        (JSC::MarkedSpace::stopAllocating):
+        (JSC::MarkedSpace::resumeAllocating):
+        (JSC::MarkedSpace::isPagedOut):
+        (JSC::MarkedSpace::shrink):
+        (JSC::MarkedSpace::clearNewlyAllocated):
+        (JSC::MarkedSpace::clearMarks):
+        (JSC::MarkedSpace::didFinishIterating):
+        (JSC::MarkedSpace::objectCount):
+        (JSC::MarkedSpace::size):
+        (JSC::MarkedSpace::capacity):
+        (JSC::MarkedSpace::forEachAllocator): Deleted.
+        * heap/MarkedSpace.h:
+        (JSC::MarkedSpace::sizeClassIndex):
+        (JSC::MarkedSpace::subspaceForObjectsWithDestructor):
+        (JSC::MarkedSpace::subspaceForObjectsWithoutDestructor):
+        (JSC::MarkedSpace::subspaceForAuxiliaryData):
+        (JSC::MarkedSpace::blocksWithNewObjects):
+        (JSC::MarkedSpace::largeAllocations):
+        (JSC::MarkedSpace::largeAllocationsNurseryOffset):
+        (JSC::MarkedSpace::largeAllocationsOffsetForThisCollection):
+        (JSC::MarkedSpace::largeAllocationsForThisCollectionBegin):
+        (JSC::MarkedSpace::largeAllocationsForThisCollectionEnd):
+        (JSC::MarkedSpace::largeAllocationsForThisCollectionSize):
+        (JSC::MarkedSpace::forEachLiveCell):
+        (JSC::MarkedSpace::forEachDeadCell):
+        (JSC::MarkedSpace::allocatorFor):
+        (JSC::MarkedSpace::destructorAllocatorFor):
+        (JSC::MarkedSpace::auxiliaryAllocatorFor):
+        (JSC::MarkedSpace::allocate):
+        (JSC::MarkedSpace::tryAllocate):
+        (JSC::MarkedSpace::allocateWithoutDestructor):
+        (JSC::MarkedSpace::allocateWithDestructor):
+        (JSC::MarkedSpace::allocateAuxiliary):
+        (JSC::MarkedSpace::tryAllocateAuxiliary):
+        (JSC::MarkedSpace::forEachBlock):
+        (JSC::MarkedSpace::didAllocateInBlock):
+        (JSC::MarkedSpace::forEachAllocator):
+        (JSC::MarkedSpace::forEachSubspace):
+        (JSC::MarkedSpace::optimalSizeFor):
+        (JSC::MarkedSpace::objectCount): Deleted.
+        (JSC::MarkedSpace::size): Deleted.
+        (JSC::MarkedSpace::capacity): Deleted.
+        * heap/SlotVisitor.cpp:
+        (JSC::SlotVisitor::didStartMarking):
+        (JSC::SlotVisitor::reset):
+        (JSC::SlotVisitor::clearMarkStack):
+        (JSC::SlotVisitor::append):
+        (JSC::SlotVisitor::appendJSCellOrAuxiliary):
+        (JSC::SlotVisitor::setMarkedAndAppendToMarkStack):
+        (JSC::SlotVisitor::appendToMarkStack):
+        (JSC::SlotVisitor::markAuxiliary):
+        (JSC::SlotVisitor::noteLiveAuxiliaryCell):
+        (JSC::SetCurrentCellScope::SetCurrentCellScope):
+        (JSC::SlotVisitor::visitChildren):
+        * heap/SlotVisitor.h:
+        * heap/WeakBlock.cpp:
+        (JSC::WeakBlock::create):
+        (JSC::WeakBlock::destroy):
+        (JSC::WeakBlock::WeakBlock):
+        (JSC::WeakBlock::visit):
+        (JSC::WeakBlock::reap):
+        * heap/WeakBlock.h:
+        (JSC::WeakBlock::disconnectContainer):
+        (JSC::WeakBlock::disconnectMarkedBlock): Deleted.
+        * heap/WeakSet.cpp:
+        (JSC::WeakSet::sweep):
+        (JSC::WeakSet::addAllocator):
+        * heap/WeakSet.h:
+        (JSC::WeakSet::WeakSet):
+        * heap/WeakSetInlines.h:
+        (JSC::WeakSet::allocate):
+        * inspector/InjectedScriptManager.cpp:
+        * inspector/JSGlobalObjectInspectorController.cpp:
+        * inspector/JSJavaScriptCallFrame.cpp:
+        * inspector/ScriptDebugServer.cpp:
+        * inspector/agents/InspectorDebuggerAgent.cpp:
+        * interpreter/CachedCall.h:
+        (JSC::CachedCall::CachedCall):
+        * jit/AssemblyHelpers.h:
+        (JSC::AssemblyHelpers::emitAllocate):
+        (JSC::AssemblyHelpers::emitAllocateJSCell):
+        (JSC::AssemblyHelpers::emitAllocateJSObject):
+        (JSC::AssemblyHelpers::emitAllocateJSObjectWithKnownSize):
+        (JSC::AssemblyHelpers::emitAllocateVariableSized):
+        * jit/JITOpcodes.cpp:
+        (JSC::JIT::emit_op_new_object):
+        (JSC::JIT::emit_op_create_this):
+        * jit/JITOpcodes32_64.cpp:
+        (JSC::JIT::emit_op_new_object):
+        (JSC::JIT::emit_op_create_this):
+        * jit/JITOperations.cpp:
+        * jit/JITOperations.h:
+        * jit/JITPropertyAccess.cpp:
+        (JSC::JIT::emitWriteBarrier):
+        * jsc.cpp:
+        (functionDescribeArray):
+        * llint/LLIntData.cpp:
+        (JSC::LLInt::Data::performAssertions):
+        * llint/LowLevelInterpreter.asm:
+        * llint/LowLevelInterpreter32_64.asm:
+        * llint/LowLevelInterpreter64.asm:
+        * parser/ModuleAnalyzer.cpp:
+        * runtime/ArrayConventions.h:
+        (JSC::indexIsSufficientlyBeyondLengthForSparseMap):
+        (JSC::indexingHeaderForArrayStorage):
+        (JSC::baseIndexingHeaderForArrayStorage):
+        (JSC::indexingHeaderForArray): Deleted.
+        (JSC::baseIndexingHeaderForArray): Deleted.
+        * runtime/ArrayStorage.h:
+        (JSC::ArrayStorage::length):
+        (JSC::ArrayStorage::setLength):
+        (JSC::ArrayStorage::vectorLength):
+        (JSC::ArrayStorage::setVectorLength):
+        (JSC::ArrayStorage::copyHeaderFromDuringGC):
+        (JSC::ArrayStorage::sizeFor):
+        (JSC::ArrayStorage::totalSizeFor):
+        (JSC::ArrayStorage::totalSize):
+        (JSC::ArrayStorage::availableVectorLength):
+        (JSC::ArrayStorage::optimalVectorLength):
+        * runtime/AuxiliaryBarrier.h: Added.
+        (JSC::AuxiliaryBarrier::AuxiliaryBarrier):
+        (JSC::AuxiliaryBarrier::clear):
+        (JSC::AuxiliaryBarrier::get):
+        (JSC::AuxiliaryBarrier::slot):
+        (JSC::AuxiliaryBarrier::operator bool):
+        (JSC::AuxiliaryBarrier::setWithoutBarrier):
+        * runtime/AuxiliaryBarrierInlines.h: Added.
+        (JSC::AuxiliaryBarrier<T>::AuxiliaryBarrier):
+        (JSC::AuxiliaryBarrier<T>::set):
+        * runtime/Butterfly.h:
+        (JSC::Butterfly::fromBase):
+        (JSC::Butterfly::fromPointer):
+        * runtime/ButterflyInlines.h:
+        (JSC::Butterfly::availableContiguousVectorLength):
+        (JSC::Butterfly::optimalContiguousVectorLength):
+        (JSC::Butterfly::createUninitialized):
+        (JSC::Butterfly::growArrayRight):
+        * runtime/ClonedArguments.cpp:
+        (JSC::ClonedArguments::createEmpty):
+        * runtime/DataView.cpp:
+        * runtime/DirectArguments.h:
+        * runtime/ECMAScriptSpecInternalFunctions.cpp:
+        * runtime/GeneratorFrame.cpp:
+        * runtime/GeneratorPrototype.cpp:
+        * runtime/IntlCollator.cpp:
+        * runtime/IntlCollatorConstructor.cpp:
+        * runtime/IntlCollatorPrototype.cpp:
+        * runtime/IntlDateTimeFormat.cpp:
+        * runtime/IntlDateTimeFormatConstructor.cpp:
+        * runtime/IntlDateTimeFormatPrototype.cpp:
+        * runtime/IntlNumberFormat.cpp:
+        * runtime/IntlNumberFormatConstructor.cpp:
+        * runtime/IntlNumberFormatPrototype.cpp:
+        * runtime/JSArray.cpp:
+        (JSC::createArrayButterflyInDictionaryIndexingMode):
+        (JSC::JSArray::tryCreateUninitialized):
+        (JSC::JSArray::setLengthWritable):
+        (JSC::JSArray::unshiftCountSlowCase):
+        (JSC::JSArray::setLengthWithArrayStorage):
+        (JSC::JSArray::appendMemcpy):
+        (JSC::JSArray::setLength):
+        (JSC::JSArray::pop):
+        (JSC::JSArray::push):
+        (JSC::JSArray::fastSlice):
+        (JSC::JSArray::shiftCountWithArrayStorage):
+        (JSC::JSArray::shiftCountWithAnyIndexingType):
+        (JSC::JSArray::unshiftCountWithArrayStorage):
+        (JSC::JSArray::fillArgList):
+        (JSC::JSArray::copyToArguments):
+        * runtime/JSArray.h:
+        (JSC::createContiguousArrayButterfly):
+        (JSC::createArrayButterfly):
+        (JSC::JSArray::create):
+        (JSC::JSArray::tryCreateUninitialized): Deleted.
+        * runtime/JSArrayBufferView.h:
+        * runtime/JSCInlines.h:
+        * runtime/JSCJSValue.cpp:
+        * runtime/JSCallee.cpp:
+        * runtime/JSCell.cpp:
+        (JSC::JSCell::estimatedSize):
+        (JSC::JSCell::copyBackingStore):
+        * runtime/JSCell.h:
+        (JSC::JSCell::cellStateOffset):
+        * runtime/JSCellInlines.h:
+        (JSC::JSCell::visitChildren):
+        (JSC::ExecState::vm):
+        (JSC::JSCell::canUseFastGetOwnProperty):
+        (JSC::JSCell::classInfo):
+        (JSC::JSCell::toBoolean):
+        (JSC::JSCell::pureToBoolean):
+        (JSC::JSCell::callDestructor):
+        (JSC::JSCell::vm): Deleted.
+        * runtime/JSFunction.cpp:
+        (JSC::JSFunction::create):
+        (JSC::JSFunction::allocateAndInitializeRareData):
+        (JSC::JSFunction::initializeRareData):
+        (JSC::JSFunction::getOwnPropertySlot):
+        (JSC::JSFunction::put):
+        (JSC::JSFunction::deleteProperty):
+        (JSC::JSFunction::defineOwnProperty):
+        (JSC::JSFunction::setFunctionName):
+        (JSC::JSFunction::reifyLength):
+        (JSC::JSFunction::reifyName):
+        (JSC::JSFunction::reifyLazyPropertyIfNeeded):
+        (JSC::JSFunction::reifyBoundNameIfNeeded):
+        * runtime/JSFunction.h:
+        * runtime/JSFunctionInlines.h:
+        (JSC::JSFunction::createWithInvalidatedReallocationWatchpoint):
+        (JSC::JSFunction::JSFunction):
+        * runtime/JSGenericTypedArrayViewInlines.h:
+        (JSC::JSGenericTypedArrayView<Adaptor>::slowDownAndWasteMemory):
+        * runtime/JSInternalPromise.cpp:
+        * runtime/JSInternalPromiseConstructor.cpp:
+        * runtime/JSInternalPromiseDeferred.cpp:
+        * runtime/JSInternalPromisePrototype.cpp:
+        * runtime/JSJob.cpp:
+        * runtime/JSMapIterator.cpp:
+        * runtime/JSModuleNamespaceObject.cpp:
+        * runtime/JSModuleRecord.cpp:
+        * runtime/JSObject.cpp:
+        (JSC::getClassPropertyNames):
+        (JSC::JSObject::visitButterfly):
+        (JSC::JSObject::visitChildren):
+        (JSC::JSObject::heapSnapshot):
+        (JSC::JSObject::notifyPresenceOfIndexedAccessors):
+        (JSC::JSObject::createInitialIndexedStorage):
+        (JSC::JSObject::createInitialUndecided):
+        (JSC::JSObject::createInitialInt32):
+        (JSC::JSObject::createInitialDouble):
+        (JSC::JSObject::createInitialContiguous):
+        (JSC::JSObject::createArrayStorage):
+        (JSC::JSObject::createInitialArrayStorage):
+        (JSC::JSObject::convertUndecidedToInt32):
+        (JSC::JSObject::convertUndecidedToContiguous):
+        (JSC::JSObject::convertUndecidedToArrayStorage):
+        (JSC::JSObject::convertInt32ToDouble):
+        (JSC::JSObject::convertInt32ToArrayStorage):
+        (JSC::JSObject::convertDoubleToArrayStorage):
+        (JSC::JSObject::convertContiguousToArrayStorage):
+        (JSC::JSObject::putByIndexBeyondVectorLength):
+        (JSC::JSObject::putDirectIndexBeyondVectorLength):
+        (JSC::JSObject::putDirectNativeFunctionWithoutTransition):
+        (JSC::JSObject::getNewVectorLength):
+        (JSC::JSObject::increaseVectorLength):
+        (JSC::JSObject::ensureLengthSlow):
+        (JSC::JSObject::growOutOfLineStorage):
+        (JSC::JSObject::copyButterfly): Deleted.
+        (JSC::JSObject::copyBackingStore): Deleted.
+        * runtime/JSObject.h:
+        (JSC::JSObject::initializeIndex):
+        (JSC::JSObject::globalObject):
+        (JSC::JSObject::putDirectInternal):
+        (JSC::JSObject::putOwnDataProperty):
+        (JSC::JSObject::setStructureAndReallocateStorageIfNecessary): Deleted.
+        * runtime/JSObjectInlines.h:
+        * runtime/JSPromise.cpp:
+        * runtime/JSPromiseConstructor.cpp:
+        * runtime/JSPromiseDeferred.cpp:
+        * runtime/JSPromisePrototype.cpp:
+        * runtime/JSPropertyNameIterator.cpp:
+        * runtime/JSScope.cpp:
+        (JSC::JSScope::resolve):
+        * runtime/JSScope.h:
+        (JSC::JSScope::globalObject):
+        (JSC::Register::operator=):
+        (JSC::JSScope::vm): Deleted.
+        * runtime/JSSetIterator.cpp:
+        * runtime/JSStringIterator.cpp:
+        * runtime/JSTemplateRegistryKey.cpp:
+        * runtime/JSTypedArrayViewConstructor.cpp:
+        * runtime/JSTypedArrayViewPrototype.cpp:
+        * runtime/JSWeakMap.cpp:
+        * runtime/JSWeakSet.cpp:
+        * runtime/MapConstructor.cpp:
+        * runtime/MapPrototype.cpp:
+        * runtime/NativeStdFunctionCell.cpp:
+        * runtime/Operations.h:
+        (JSC::jsAdd):
+        (JSC::resetFreeCellsBadly):
+        (JSC::resetBadly):
+        * runtime/Options.h:
+        * runtime/PropertyTable.cpp:
+        * runtime/ProxyConstructor.cpp:
+        * runtime/ProxyObject.cpp:
+        * runtime/ProxyRevoke.cpp:
+        * runtime/RegExpMatchesArray.h:
+        (JSC::tryCreateUninitializedRegExpMatchesArray):
+        (JSC::createRegExpMatchesArray):
+        * runtime/RuntimeType.cpp:
+        * runtime/SamplingProfiler.cpp:
+        (JSC::SamplingProfiler::processUnverifiedStackTraces):
+        * runtime/SetConstructor.cpp:
+        * runtime/SetPrototype.cpp:
+        * runtime/TemplateRegistry.cpp:
+        * runtime/TypeProfilerLog.cpp:
+        * runtime/TypeSet.cpp:
+        * runtime/WeakMapConstructor.cpp:
+        * runtime/WeakMapData.cpp:
+        * runtime/WeakMapPrototype.cpp:
+        * runtime/WeakSetConstructor.cpp:
+        * runtime/WeakSetPrototype.cpp:
+        * tools/JSDollarVMPrototype.cpp:
+        (JSC::JSDollarVMPrototype::isInObjectSpace):
+        (JSC::JSDollarVMPrototype::isInStorageSpace):
+
 2016-08-23  Benjamin Poulain  <bpoulain@apple.com>
 
         [JSC] Make Math.cos() and Math.sin() work with any argument type
diff --git a/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj b/Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
index ea03cb6..a603cfc 100644 (file)
                0F04396D1B03DC0B009598B7 /* DFGCombinedLiveness.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F04396B1B03DC0B009598B7 /* DFGCombinedLiveness.cpp */; };
                0F04396E1B03DC0B009598B7 /* DFGCombinedLiveness.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F04396C1B03DC0B009598B7 /* DFGCombinedLiveness.h */; };
                0F05C3B41683CF9200BAF45B /* DFGArrayifySlowPathGenerator.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F05C3B21683CF8F00BAF45B /* DFGArrayifySlowPathGenerator.h */; };
+               0F070A471D543A8B006E7232 /* CellContainer.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F070A421D543A89006E7232 /* CellContainer.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F070A481D543A90006E7232 /* CellContainerInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F070A431D543A89006E7232 /* CellContainerInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F070A491D543A93006E7232 /* HeapCellInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F070A441D543A89006E7232 /* HeapCellInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F070A4A1D543A95006E7232 /* LargeAllocation.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F070A451D543A89006E7232 /* LargeAllocation.cpp */; };
+               0F070A4B1D543A98006E7232 /* LargeAllocation.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F070A461D543A89006E7232 /* LargeAllocation.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F0776BF14FF002B00102332 /* JITCompilationEffort.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0776BD14FF002800102332 /* JITCompilationEffort.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F0A75221B94BFA900110660 /* InferredType.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F0A75201B94BFA900110660 /* InferredType.cpp */; };
                0F0A75231B94BFA900110660 /* InferredType.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0A75211B94BFA900110660 /* InferredType.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F38B01817CFE75500B144D3 /* DFGCompilationKey.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F38B01417CFE75500B144D3 /* DFGCompilationKey.h */; };
                0F38B01917CFE75500B144D3 /* DFGCompilationMode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F38B01517CFE75500B144D3 /* DFGCompilationMode.cpp */; };
                0F38B01A17CFE75500B144D3 /* DFGCompilationMode.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F38B01617CFE75500B144D3 /* DFGCompilationMode.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F38D2A21D44196800680499 /* AuxiliaryBarrier.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F38D2A01D44196600680499 /* AuxiliaryBarrier.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F38D2A31D44196D00680499 /* AuxiliaryBarrierInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F38D2A11D44196600680499 /* AuxiliaryBarrierInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F392C891B46188400844728 /* DFGOSRExitFuzz.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F392C871B46188400844728 /* DFGOSRExitFuzz.cpp */; };
                0F392C8A1B46188400844728 /* DFGOSRExitFuzz.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F392C881B46188400844728 /* DFGOSRExitFuzz.h */; };
                0F3A1BF91A9ECB7D000DE01A /* DFGPutStackSinkingPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F3A1BF71A9ECB7D000DE01A /* DFGPutStackSinkingPhase.cpp */; };
                0F4680CA14BBB16C00BFE272 /* LLIntCommon.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680C514BBB16900BFE272 /* LLIntCommon.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F4680CB14BBB17200BFE272 /* LLIntOfflineAsmConfig.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680C614BBB16900BFE272 /* LLIntOfflineAsmConfig.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F4680CC14BBB17A00BFE272 /* LowLevelInterpreter.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F4680C714BBB16900BFE272 /* LowLevelInterpreter.cpp */; };
-               0F4680CD14BBB17D00BFE272 /* LowLevelInterpreter.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680C814BBB16900BFE272 /* LowLevelInterpreter.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F4680CD14BBB17D00BFE272 /* LowLevelInterpreter.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680C814BBB16900BFE272 /* LowLevelInterpreter.h */; };
                0F4680D214BBD16500BFE272 /* LLIntData.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F4680CE14BBB3D100BFE272 /* LLIntData.cpp */; };
-               0F4680D314BBD16700BFE272 /* LLIntData.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680CF14BBB3D100BFE272 /* LLIntData.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F4680D314BBD16700BFE272 /* LLIntData.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680CF14BBB3D100BFE272 /* LLIntData.h */; };
                0F4680D414BBD24900BFE272 /* HostCallReturnValue.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F4680D014BBC5F800BFE272 /* HostCallReturnValue.cpp */; };
                0F4680D514BBD24B00BFE272 /* HostCallReturnValue.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4680D114BBC5F800BFE272 /* HostCallReturnValue.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F485321187750560083B687 /* DFGArithMode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F48531F187750560083B687 /* DFGArithMode.cpp */; };
                0F4F29DF18B6AD1C0057BC15 /* DFGStaticExecutionCountEstimationPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F4F29DD18B6AD1C0057BC15 /* DFGStaticExecutionCountEstimationPhase.cpp */; };
                0F4F29E018B6AD1C0057BC15 /* DFGStaticExecutionCountEstimationPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F4F29DE18B6AD1C0057BC15 /* DFGStaticExecutionCountEstimationPhase.h */; };
                0F50AF3C193E8B3900674EE8 /* DFGStructureClobberState.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F50AF3B193E8B3900674EE8 /* DFGStructureClobberState.h */; };
+               0F5513A61D5A682C00C32BD8 /* FreeList.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5513A51D5A682A00C32BD8 /* FreeList.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F5513A81D5A68CD00C32BD8 /* FreeList.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5513A71D5A68CB00C32BD8 /* FreeList.cpp */; };
                0F5541B11613C1FB00CE3E25 /* SpecialPointer.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5541AF1613C1FB00CE3E25 /* SpecialPointer.cpp */; };
                0F5541B21613C1FB00CE3E25 /* SpecialPointer.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5541B01613C1FB00CE3E25 /* SpecialPointer.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F55989817C86C5800A1E543 /* ToNativeFromValue.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F55989717C86C5600A1E543 /* ToNativeFromValue.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F6B8AE51C4EFE1700969052 /* B3FixSSA.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6B8AE11C4EFE1700969052 /* B3FixSSA.h */; };
                0F6C73501AC9F99F00BE1682 /* VariableWriteFireDetail.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6C734E1AC9F99F00BE1682 /* VariableWriteFireDetail.cpp */; };
                0F6C73511AC9F99F00BE1682 /* VariableWriteFireDetail.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6C734F1AC9F99F00BE1682 /* VariableWriteFireDetail.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F6DB7E91D6124B500CDBF8E /* StackFrame.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6DB7E81D6124B200CDBF8E /* StackFrame.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F6DB7EA1D6124B800CDBF8E /* StackFrame.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6DB7E71D6124B200CDBF8E /* StackFrame.cpp */; };
+               0F6DB7EC1D617D1100CDBF8E /* MacroAssemblerCodeRef.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6DB7EB1D617D0F00CDBF8E /* MacroAssemblerCodeRef.cpp */; };
                0F6E845A19030BEF00562741 /* DFGVariableAccessData.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6E845919030BEF00562741 /* DFGVariableAccessData.cpp */; };
                0F6FC750196110A800E1D02D /* ComplexGetStatus.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F6FC74E196110A800E1D02D /* ComplexGetStatus.cpp */; };
                0F6FC751196110A800E1D02D /* ComplexGetStatus.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F6FC74F196110A800E1D02D /* ComplexGetStatus.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FA7A8EB18B413C80052371D /* Reg.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FA7A8E918B413C80052371D /* Reg.cpp */; };
                0FA7A8EC18B413C80052371D /* Reg.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FA7A8EA18B413C80052371D /* Reg.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FA7A8EE18CE4FD80052371D /* ScratchRegisterAllocator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FA7A8ED18CE4FD80052371D /* ScratchRegisterAllocator.cpp */; };
+               0FADE6731D4D23BE00768457 /* HeapUtil.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FADE6721D4D23BC00768457 /* HeapUtil.h */; };
                0FAF7EFD165BA91B000C8455 /* JITDisassembler.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FAF7EFA165BA919000C8455 /* JITDisassembler.cpp */; };
                0FAF7EFE165BA91F000C8455 /* JITDisassembler.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FAF7EFB165BA919000C8455 /* JITDisassembler.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FB105851675480F00F8AB6E /* ExitKind.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FB105821675480C00F8AB6E /* ExitKind.cpp */; };
                14280865107EC11A0013E7B2 /* BooleanPrototype.cpp in Sources */ = {isa = PBXBuildFile; fileRef = BC7952340E15EB5600A898AB /* BooleanPrototype.cpp */; };
                14280870107EC1340013E7B2 /* JSWrapperObject.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65C7A1710A8EAACB00FA37EA /* JSWrapperObject.cpp */; };
                14280875107EC13E0013E7B2 /* JSLock.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65EA4C99092AF9E20093D800 /* JSLock.cpp */; };
-               1429D77C0ED20D7300B89619 /* Interpreter.h in Headers */ = {isa = PBXBuildFile; fileRef = 1429D77B0ED20D7300B89619 /* Interpreter.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               1429D77C0ED20D7300B89619 /* Interpreter.h in Headers */ = {isa = PBXBuildFile; fileRef = 1429D77B0ED20D7300B89619 /* Interpreter.h */; };
                1429D7D40ED2128200B89619 /* Interpreter.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 1429D7D30ED2128200B89619 /* Interpreter.cpp */; };
                1429D8780ED21ACD00B89619 /* ExceptionHelpers.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 1429D8770ED21ACD00B89619 /* ExceptionHelpers.cpp */; };
                1429D8DD0ED2205B00B89619 /* CallFrame.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 1429D8DB0ED2205B00B89619 /* CallFrame.cpp */; };
                969A07980ED1D3AE00F1F681 /* EvalCodeCache.h in Headers */ = {isa = PBXBuildFile; fileRef = 969A07920ED1D3AE00F1F681 /* EvalCodeCache.h */; settings = {ATTRIBUTES = (Private, ); }; };
                969A07990ED1D3AE00F1F681 /* Instruction.h in Headers */ = {isa = PBXBuildFile; fileRef = 969A07930ED1D3AE00F1F681 /* Instruction.h */; settings = {ATTRIBUTES = (Private, ); }; };
                969A079A0ED1D3AE00F1F681 /* Opcode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 969A07940ED1D3AE00F1F681 /* Opcode.cpp */; };
-               969A079B0ED1D3AE00F1F681 /* Opcode.h in Headers */ = {isa = PBXBuildFile; fileRef = 969A07950ED1D3AE00F1F681 /* Opcode.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               969A079B0ED1D3AE00F1F681 /* Opcode.h in Headers */ = {isa = PBXBuildFile; fileRef = 969A07950ED1D3AE00F1F681 /* Opcode.h */; };
                978801401471AD920041B016 /* JSDateMath.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 9788FC221471AD0C0068CE2D /* JSDateMath.cpp */; };
                978801411471AD920041B016 /* JSDateMath.h in Headers */ = {isa = PBXBuildFile; fileRef = 9788FC231471AD0C0068CE2D /* JSDateMath.h */; settings = {ATTRIBUTES = (Private, ); }; };
                990DA67F1C8E316A00295159 /* generate_objc_protocol_type_conversions_implementation.py in Headers */ = {isa = PBXBuildFile; fileRef = 990DA67E1C8E311D00295159 /* generate_objc_protocol_type_conversions_implementation.py */; settings = {ATTRIBUTES = (Private, ); }; };
                FED94F2E171E3E2300BE77A4 /* Watchdog.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FED94F2B171E3E2300BE77A4 /* Watchdog.cpp */; };
                FED94F2F171E3E2300BE77A4 /* Watchdog.h in Headers */ = {isa = PBXBuildFile; fileRef = FED94F2C171E3E2300BE77A4 /* Watchdog.h */; settings = {ATTRIBUTES = (Private, ); }; };
                FEF040511AAE662D00BD28B0 /* CompareAndSwapTest.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FEF040501AAE662D00BD28B0 /* CompareAndSwapTest.cpp */; };
+               D9722752DC54459B9125B539 /* JSModuleLoader.h in Headers */ = {isa = PBXBuildFile; fileRef = 77B25CB2C3094A92A38E1DB3 /* JSModuleLoader.h */; };
+               13FECE06D3B445FCB6C93461 /* JSModuleLoader.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 1879510614C540FFB561C124 /* JSModuleLoader.cpp */; };
                FEFD6FC61D5E7992008F2F0B /* JSStringInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = FEFD6FC51D5E7970008F2F0B /* JSStringInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
 /* End PBXBuildFile section */
 
                0F04396B1B03DC0B009598B7 /* DFGCombinedLiveness.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGCombinedLiveness.cpp; path = dfg/DFGCombinedLiveness.cpp; sourceTree = "<group>"; };
                0F04396C1B03DC0B009598B7 /* DFGCombinedLiveness.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGCombinedLiveness.h; path = dfg/DFGCombinedLiveness.h; sourceTree = "<group>"; };
                0F05C3B21683CF8F00BAF45B /* DFGArrayifySlowPathGenerator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGArrayifySlowPathGenerator.h; path = dfg/DFGArrayifySlowPathGenerator.h; sourceTree = "<group>"; };
+               0F070A421D543A89006E7232 /* CellContainer.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CellContainer.h; sourceTree = "<group>"; };
+               0F070A431D543A89006E7232 /* CellContainerInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CellContainerInlines.h; sourceTree = "<group>"; };
+               0F070A441D543A89006E7232 /* HeapCellInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HeapCellInlines.h; sourceTree = "<group>"; };
+               0F070A451D543A89006E7232 /* LargeAllocation.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = LargeAllocation.cpp; sourceTree = "<group>"; };
+               0F070A461D543A89006E7232 /* LargeAllocation.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LargeAllocation.h; sourceTree = "<group>"; };
                0F0776BD14FF002800102332 /* JITCompilationEffort.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITCompilationEffort.h; sourceTree = "<group>"; };
                0F0A75201B94BFA900110660 /* InferredType.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = InferredType.cpp; sourceTree = "<group>"; };
                0F0A75211B94BFA900110660 /* InferredType.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = InferredType.h; sourceTree = "<group>"; };
                0F38B01417CFE75500B144D3 /* DFGCompilationKey.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGCompilationKey.h; path = dfg/DFGCompilationKey.h; sourceTree = "<group>"; };
                0F38B01517CFE75500B144D3 /* DFGCompilationMode.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGCompilationMode.cpp; path = dfg/DFGCompilationMode.cpp; sourceTree = "<group>"; };
                0F38B01617CFE75500B144D3 /* DFGCompilationMode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGCompilationMode.h; path = dfg/DFGCompilationMode.h; sourceTree = "<group>"; };
+               0F38D2A01D44196600680499 /* AuxiliaryBarrier.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AuxiliaryBarrier.h; sourceTree = "<group>"; };
+               0F38D2A11D44196600680499 /* AuxiliaryBarrierInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AuxiliaryBarrierInlines.h; sourceTree = "<group>"; };
                0F392C871B46188400844728 /* DFGOSRExitFuzz.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGOSRExitFuzz.cpp; path = dfg/DFGOSRExitFuzz.cpp; sourceTree = "<group>"; };
                0F392C881B46188400844728 /* DFGOSRExitFuzz.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGOSRExitFuzz.h; path = dfg/DFGOSRExitFuzz.h; sourceTree = "<group>"; };
                0F3A1BF71A9ECB7D000DE01A /* DFGPutStackSinkingPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGPutStackSinkingPhase.cpp; path = dfg/DFGPutStackSinkingPhase.cpp; sourceTree = "<group>"; };
                0F4F29DD18B6AD1C0057BC15 /* DFGStaticExecutionCountEstimationPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGStaticExecutionCountEstimationPhase.cpp; path = dfg/DFGStaticExecutionCountEstimationPhase.cpp; sourceTree = "<group>"; };
                0F4F29DE18B6AD1C0057BC15 /* DFGStaticExecutionCountEstimationPhase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGStaticExecutionCountEstimationPhase.h; path = dfg/DFGStaticExecutionCountEstimationPhase.h; sourceTree = "<group>"; };
                0F50AF3B193E8B3900674EE8 /* DFGStructureClobberState.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGStructureClobberState.h; path = dfg/DFGStructureClobberState.h; sourceTree = "<group>"; };
+               0F5513A51D5A682A00C32BD8 /* FreeList.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = FreeList.h; sourceTree = "<group>"; };
+               0F5513A71D5A68CB00C32BD8 /* FreeList.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = FreeList.cpp; sourceTree = "<group>"; };
                0F5541AF1613C1FB00CE3E25 /* SpecialPointer.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = SpecialPointer.cpp; sourceTree = "<group>"; };
                0F5541B01613C1FB00CE3E25 /* SpecialPointer.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SpecialPointer.h; sourceTree = "<group>"; };
                0F55989717C86C5600A1E543 /* ToNativeFromValue.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = ToNativeFromValue.h; sourceTree = "<group>"; };
                0F6B8AE11C4EFE1700969052 /* B3FixSSA.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = B3FixSSA.h; path = b3/B3FixSSA.h; sourceTree = "<group>"; };
                0F6C734E1AC9F99F00BE1682 /* VariableWriteFireDetail.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = VariableWriteFireDetail.cpp; sourceTree = "<group>"; };
                0F6C734F1AC9F99F00BE1682 /* VariableWriteFireDetail.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = VariableWriteFireDetail.h; sourceTree = "<group>"; };
+               0F6DB7E71D6124B200CDBF8E /* StackFrame.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = StackFrame.cpp; sourceTree = "<group>"; };
+               0F6DB7E81D6124B200CDBF8E /* StackFrame.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StackFrame.h; sourceTree = "<group>"; };
+               0F6DB7EB1D617D0F00CDBF8E /* MacroAssemblerCodeRef.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = MacroAssemblerCodeRef.cpp; sourceTree = "<group>"; };
                0F6E845919030BEF00562741 /* DFGVariableAccessData.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGVariableAccessData.cpp; path = dfg/DFGVariableAccessData.cpp; sourceTree = "<group>"; };
                0F6FC74E196110A800E1D02D /* ComplexGetStatus.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ComplexGetStatus.cpp; sourceTree = "<group>"; };
                0F6FC74F196110A800E1D02D /* ComplexGetStatus.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ComplexGetStatus.h; sourceTree = "<group>"; };
                0FA7A8E918B413C80052371D /* Reg.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = Reg.cpp; sourceTree = "<group>"; };
                0FA7A8EA18B413C80052371D /* Reg.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Reg.h; sourceTree = "<group>"; };
                0FA7A8ED18CE4FD80052371D /* ScratchRegisterAllocator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ScratchRegisterAllocator.cpp; sourceTree = "<group>"; };
+               0FADE6721D4D23BC00768457 /* HeapUtil.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = HeapUtil.h; sourceTree = "<group>"; };
                0FAF7EFA165BA919000C8455 /* JITDisassembler.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITDisassembler.cpp; sourceTree = "<group>"; };
                0FAF7EFB165BA919000C8455 /* JITDisassembler.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITDisassembler.h; sourceTree = "<group>"; };
                0FB105821675480C00F8AB6E /* ExitKind.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ExitKind.cpp; sourceTree = "<group>"; };
                FEDA50D51B97F4D9009A3B4F /* PingPongStackOverflowTest.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = PingPongStackOverflowTest.h; path = API/tests/PingPongStackOverflowTest.h; sourceTree = "<group>"; };
                FEF040501AAE662D00BD28B0 /* CompareAndSwapTest.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = CompareAndSwapTest.cpp; path = API/tests/CompareAndSwapTest.cpp; sourceTree = "<group>"; };
                FEF040521AAEC4ED00BD28B0 /* CompareAndSwapTest.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = CompareAndSwapTest.h; path = API/tests/CompareAndSwapTest.h; sourceTree = "<group>"; };
+               77B25CB2C3094A92A38E1DB3 /* JSModuleLoader.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = JSModuleLoader.h; path = JSModuleLoader.h; sourceTree = "<group>"; };
+               1879510614C540FFB561C124 /* JSModuleLoader.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = JSModuleLoader.cpp; path = JSModuleLoader.cpp; sourceTree = "<group>"; };
                FEFD6FC51D5E7970008F2F0B /* JSStringInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSStringInlines.h; sourceTree = "<group>"; };
 /* End PBXFileReference section */
 
                        children = (
                                0F9630351D4192C3005609D9 /* AllocatorAttributes.cpp */,
                                0F9630361D4192C3005609D9 /* AllocatorAttributes.h */,
+                               0F070A421D543A89006E7232 /* CellContainer.h */,
+                               0F070A431D543A89006E7232 /* CellContainerInlines.h */,
                                0F1C3DD91BBCE09E00E523E4 /* CellState.h */,
                                0FD8A31117D4326C00CA2C40 /* CodeBlockSet.cpp */,
                                0FD8A31217D4326C00CA2C40 /* CodeBlockSet.h */,
                                0F9630381D4192C3005609D9 /* DestructionMode.h */,
                                2A83638318D7D0EE0000EBCC /* EdenGCActivityCallback.cpp */,
                                2A83638418D7D0EE0000EBCC /* EdenGCActivityCallback.h */,
+                               0F5513A71D5A68CB00C32BD8 /* FreeList.cpp */,
+                               0F5513A51D5A682A00C32BD8 /* FreeList.h */,
                                2A83638718D7D0FE0000EBCC /* FullGCActivityCallback.cpp */,
                                2A83638818D7D0FE0000EBCC /* FullGCActivityCallback.h */,
                                2AACE63A18CA5A0300ED0191 /* GCActivityCallback.cpp */,
                                14BA7A9613AADFF8005B7C2C /* Heap.h */,
                                DC3D2B0B1D34376E00BA918C /* HeapCell.cpp */,
                                DC3D2B091D34316100BA918C /* HeapCell.h */,
+                               0F070A441D543A89006E7232 /* HeapCellInlines.h */,
                                0F32BD0E1BB34F190093A57F /* HeapHelperPool.cpp */,
                                0F32BD0F1BB34F190093A57F /* HeapHelperPool.h */,
                                C2DA778218E259990066FCB6 /* HeapInlines.h */,
                                C24D31E1161CD695002AA4DB /* HeapStatistics.h */,
                                C2E526BB1590EF000054E48D /* HeapTimer.cpp */,
                                C2E526BC1590EF000054E48D /* HeapTimer.h */,
+                               0FADE6721D4D23BC00768457 /* HeapUtil.h */,
                                FE7BA60D1A1A7CEC00F1F7B4 /* HeapVerifier.cpp */,
                                FE7BA60E1A1A7CEC00F1F7B4 /* HeapVerifier.h */,
                                C25F8BCB157544A900245B71 /* IncrementalSweeper.cpp */,
                                C25F8BCC157544A900245B71 /* IncrementalSweeper.h */,
                                0F766D2915A8CC34008F363E /* JITStubRoutineSet.cpp */,
                                0F766D2A15A8CC34008F363E /* JITStubRoutineSet.h */,
+                               0F070A451D543A89006E7232 /* LargeAllocation.cpp */,
+                               0F070A461D543A89006E7232 /* LargeAllocation.h */,
                                0F431736146BAC65007E3890 /* ListableHandler.h */,
                                FE3913511B794AC900EDAF71 /* LiveObjectData.h */,
                                FE3913521B794AC900EDAF71 /* LiveObjectList.cpp */,
                                F692A84D0255597D01FF60F7 /* ArrayPrototype.cpp */,
                                F692A84E0255597D01FF60F7 /* ArrayPrototype.h */,
                                0FB7F38A15ED8E3800F167B2 /* ArrayStorage.h */,
+                               0F38D2A01D44196600680499 /* AuxiliaryBarrier.h */,
+                               0F38D2A11D44196600680499 /* AuxiliaryBarrierInlines.h */,
                                52678F8C1A031009006A306D /* BasicBlockLocation.cpp */,
                                52678F8D1A031009006A306D /* BasicBlockLocation.h */,
                                147B83AA0E6DB8C9004775A4 /* BatchedTransitionOptimizer.h */,
                                70DC3E081B2DF2C700054299 /* IteratorPrototype.h */,
                                93ADFCE60CCBD7AC00D30B08 /* JSArray.cpp */,
                                938772E5038BFE19008635CE /* JSArray.h */,
-                               539FB8B91C99DA7C00940FA1 /* JSArrayInlines.h */,
                                0F2B66B417B6B5AB00A7AE3F /* JSArrayBuffer.cpp */,
                                0F2B66B517B6B5AB00A7AE3F /* JSArrayBuffer.h */,
                                0F2B66B617B6B5AB00A7AE3F /* JSArrayBufferConstructor.cpp */,
                                0F2B66BA17B6B5AB00A7AE3F /* JSArrayBufferView.cpp */,
                                0F2B66BB17B6B5AB00A7AE3F /* JSArrayBufferView.h */,
                                0F2B66BC17B6B5AB00A7AE3F /* JSArrayBufferViewInlines.h */,
+                               539FB8B91C99DA7C00940FA1 /* JSArrayInlines.h */,
                                86FA9E8F142BBB2D001773B7 /* JSBoundFunction.cpp */,
                                86FA9E90142BBB2E001773B7 /* JSBoundFunction.h */,
                                657CF45619BF6662004ACBF2 /* JSCallee.cpp */,
                                A74DEF90182D991400522C22 /* JSMapIterator.h */,
                                E3D239C61B829C1C00BBEF67 /* JSModuleEnvironment.cpp */,
                                E3D239C71B829C1C00BBEF67 /* JSModuleEnvironment.h */,
+                               1879510614C540FFB561C124 /* JSModuleLoader.cpp */,
+                               77B25CB2C3094A92A38E1DB3 /* JSModuleLoader.h */,
                                E318CBBE1B8AEF5100A2929D /* JSModuleNamespaceObject.cpp */,
                                E318CBBF1B8AEF5100A2929D /* JSModuleNamespaceObject.h */,
                                E39DA4A41B7E8B7C0084F33A /* JSModuleRecord.cpp */,
                                0F0CD4C315F6B6B50032F1C0 /* SparseArrayValueMap.cpp */,
                                0FB7F39215ED8E3800F167B2 /* SparseArrayValueMap.h */,
                                0F3AC751183EA1040032029F /* StackAlignment.h */,
+                               0F6DB7E71D6124B200CDBF8E /* StackFrame.cpp */,
+                               0F6DB7E81D6124B200CDBF8E /* StackFrame.h */,
                                A730B6111250068F009D25B1 /* StrictEvalActivation.cpp */,
                                A730B6101250068F009D25B1 /* StrictEvalActivation.h */,
                                BC18C3C00E16EE3300B34460 /* StringConstructor.cpp */,
                                709FB8661AE335C60039D069 /* WeakSetPrototype.h */,
                                A7DCB77912E3D90500911940 /* WriteBarrier.h */,
                                C2B6D75218A33793004A9301 /* WriteBarrierInlines.h */,
-                               77B25CB2C3094A92A38E1DB3 /* JSModuleLoader.h */,
-                               1879510614C540FFB561C124 /* JSModuleLoader.cpp */,
                        );
                        path = runtime;
                        sourceTree = "<group>";
                                8640923C156EED3B00566CB2 /* MacroAssemblerARM64.h */,
                                A729009B17976C6000317298 /* MacroAssemblerARMv7.cpp */,
                                86ADD1440FDDEA980006EEC2 /* MacroAssemblerARMv7.h */,
+                               0F6DB7EB1D617D0F00CDBF8E /* MacroAssemblerCodeRef.cpp */,
                                863B23DF0FC60E6200703AA4 /* MacroAssemblerCodeRef.h */,
                                86C568DE11A213EE0007F7F0 /* MacroAssemblerMIPS.h */,
                                FE68C6351B90DDD90042BCB3 /* MacroAssemblerPrinter.cpp */,
                                99DA00A91BD5993100F4575C /* builtins_generate_separate_header.py in Headers */,
                                0F338E111BF0276C0013C88F /* B3OpaqueByproduct.h in Headers */,
                                FEA0C4031CDD7D1D00481991 /* FunctionWhitelist.h in Headers */,
+                               0F6DB7E91D6124B500CDBF8E /* StackFrame.h in Headers */,
                                99DA00AA1BD5993100F4575C /* builtins_generate_separate_implementation.py in Headers */,
                                99DA00A31BD5993100F4575C /* builtins_generator.py in Headers */,
                                412952781D2CF6BC00E78B89 /* builtins_generate_internals_wrapper_implementation.py in Headers */,
                                0F338E1C1BF286EA0013C88F /* B3BlockInsertionSet.h in Headers */,
                                0F9495881C57F47500413A48 /* B3StackSlot.h in Headers */,
                                C4F4B6F31A05C944005CAB76 /* cpp_generator_templates.py in Headers */,
+                               0F38D2A21D44196800680499 /* AuxiliaryBarrier.h in Headers */,
                                5DE6E5B30E1728EC00180407 /* create_hash_table in Headers */,
                                9959E92B1BD17FA4001AA413 /* cssmin.py in Headers */,
                                2A111246192FCE79005EE18D /* CustomGetterSetter.h in Headers */,
                                0FFFC96014EF90BD00C72532 /* DFGVirtualRegisterAllocationPhase.h in Headers */,
                                0FC97F4218202119002C9B26 /* DFGWatchpointCollectionPhase.h in Headers */,
                                0FDB2CE8174830A2007B3C1B /* DFGWorklist.h in Headers */,
+                               0F070A491D543A93006E7232 /* HeapCellInlines.h in Headers */,
                                0FE050181AA9091100D33B33 /* DirectArguments.h in Headers */,
                                0FE050161AA9091100D33B33 /* DirectArgumentsOffset.h in Headers */,
                                0FF42731158EBD54004CB9FF /* Disassembler.h in Headers */,
                                0FE0501A1AA9091100D33B33 /* GenericArgumentsInlines.h in Headers */,
                                FE3A06C01C11041A00390FDD /* JITRightShiftGenerator.h in Headers */,
                                708EBE241CE8F35800453146 /* IntlObjectInlines.h in Headers */,
+                               0F070A481D543A90006E7232 /* CellContainerInlines.h in Headers */,
                                0FE0501B1AA9091100D33B33 /* GenericOffset.h in Headers */,
                                0F2B66E017B6B5AB00A7AE3F /* GenericTypedArrayView.h in Headers */,
                                0F2B66E117B6B5AB00A7AE3F /* GenericTypedArrayViewInlines.h in Headers */,
                                A1587D6E1B4DC14100D69849 /* IntlDateTimeFormat.h in Headers */,
                                FE187A0F1C030D6C0038BBCA /* SnippetOperand.h in Headers */,
                                A1587D701B4DC14100D69849 /* IntlDateTimeFormatConstructor.h in Headers */,
+                               0FADE6731D4D23BE00768457 /* HeapUtil.h in Headers */,
                                A1587D751B4DC1C600D69849 /* IntlDateTimeFormatConstructor.lut.h in Headers */,
                                A5398FAB1C750DA40060A963 /* HeapProfiler.h in Headers */,
                                A1587D721B4DC14100D69849 /* IntlDateTimeFormatPrototype.h in Headers */,
                                C25D709C16DE99F400FCA6BC /* JSManagedValue.h in Headers */,
                                2A4BB7F318A41179008A0FCD /* JSManagedValueInternal.h in Headers */,
                                A700874217CBE8EB00C3E643 /* JSMap.h in Headers */,
+                               0F38D2A31D44196D00680499 /* AuxiliaryBarrierInlines.h in Headers */,
                                A74DEF96182D991400522C22 /* JSMapIterator.h in Headers */,
                                9959E92D1BD17FA4001AA413 /* jsmin.py in Headers */,
                                E3D239C91B829C1C00BBEF67 /* JSModuleEnvironment.h in Headers */,
                                BC18C4270E16F5CD00B34460 /* JSString.h in Headers */,
                                86E85539111B9968001AF51E /* JSStringBuilder.h in Headers */,
                                70EC0EC31AA0D7DA00B6AAFA /* JSStringIterator.h in Headers */,
+                               0F070A471D543A8B006E7232 /* CellContainer.h in Headers */,
                                2600B5A7152BAAA70091EE5F /* JSStringJoiner.h in Headers */,
                                BC18C4280E16F5CD00B34460 /* JSStringRef.h in Headers */,
                                43AB26C61C1A535900D82AE6 /* B3MathExtras.h in Headers */,
                                14B723B812D7DA6F003BD5ED /* MachineStackMarker.h in Headers */,
                                86C36EEA0EE1289D00B3DF59 /* MacroAssembler.h in Headers */,
                                43422A671C16267800E2EB98 /* B3ReduceDoubleToFloat.h in Headers */,
+                               0F070A4B1D543A98006E7232 /* LargeAllocation.h in Headers */,
                                86D3B2C610156BDE002865E7 /* MacroAssemblerARM.h in Headers */,
                                A1A009C01831A22D00CF8711 /* MacroAssemblerARM64.h in Headers */,
                                86ADD1460FDDEA980006EEC2 /* MacroAssemblerARMv7.h in Headers */,
                                BC18C4440E16F5CD00B34460 /* NumberPrototype.h in Headers */,
                                996B73211BDA08EF00331B84 /* NumberPrototype.lut.h in Headers */,
                                142D3939103E4560007DCB52 /* NumericStrings.h in Headers */,
+                               0F5513A61D5A682C00C32BD8 /* FreeList.h in Headers */,
                                A5EA710C19F6DE820098F5EC /* objc_generator.py in Headers */,
                                C4F4B6F61A05C984005CAB76 /* objc_generator_templates.py in Headers */,
                                86F3EEBD168CDE930077B92A /* ObjCCallbackFunction.h in Headers */,
                                0FEC856F1BDACDC70080FF74 /* AirArg.cpp in Sources */,
                                0F4DE1CE1C4C1B54004D6C11 /* AirFixObviousSpills.cpp in Sources */,
                                0FEC85711BDACDC70080FF74 /* AirBasicBlock.cpp in Sources */,
+                               0F070A4A1D543A95006E7232 /* LargeAllocation.cpp in Sources */,
                                0FEC85731BDACDC70080FF74 /* AirCCallSpecial.cpp in Sources */,
                                0FEC85751BDACDC70080FF74 /* AirCode.cpp in Sources */,
                                0F4570381BE44C910062A629 /* AirEliminateDeadCode.cpp in Sources */,
                                0FE34C191C4B39AE0003A512 /* AirLogRegisterPressure.cpp in Sources */,
                                A1B9E2391B4E0D6700BC7FED /* IntlCollator.cpp in Sources */,
                                A1B9E23B1B4E0D6700BC7FED /* IntlCollatorConstructor.cpp in Sources */,
+                               0F6DB7EA1D6124B800CDBF8E /* StackFrame.cpp in Sources */,
                                A1B9E23D1B4E0D6700BC7FED /* IntlCollatorPrototype.cpp in Sources */,
                                A1587D6D1B4DC14100D69849 /* IntlDateTimeFormat.cpp in Sources */,
                                A1587D6F1B4DC14100D69849 /* IntlDateTimeFormatConstructor.cpp in Sources */,
                                146FE51211A710430087AE66 /* JITCall32_64.cpp in Sources */,
                                0F8F94441667635400D61971 /* JITCode.cpp in Sources */,
                                0FAF7EFD165BA91B000C8455 /* JITDisassembler.cpp in Sources */,
+                               0F6DB7EC1D617D1100CDBF8E /* MacroAssemblerCodeRef.cpp in Sources */,
                                0F46808314BA573100BFE272 /* JITExceptions.cpp in Sources */,
                                0FB14E1E18124ACE009B6B4D /* JITInlineCacheGenerator.cpp in Sources */,
                                BCDD51EB0FB8DF74004A8BDC /* JITOpcodes.cpp in Sources */,
                                A71236E51195F33C00BD2174 /* JITOpcodes32_64.cpp in Sources */,
                                0F24E54C17EE274900ABB217 /* JITOperations.cpp in Sources */,
+                               0F5513A81D5A68CD00C32BD8 /* FreeList.cpp in Sources */,
                                FE99B24A1C24C3D700C82159 /* JITNegGenerator.cpp in Sources */,
                                86CC85C40EE7A89400288682 /* JITPropertyAccess.cpp in Sources */,
                                A7C1E8E4112E72EF00A37F98 /* JITPropertyAccess32_64.cpp in Sources */,
index 5331031..c60ee4d 100644
@@ -72,6 +72,9 @@ class BuiltinsCombinedImplementationGenerator(BuiltinsGenerator):
                 ("JavaScriptCore", "builtins/BuiltinExecutables.h"),
             ),
             (["JavaScriptCore", "WebCore"],
+                ("JavaScriptCore", "heap/HeapInlines.h"),
+            ),
+            (["JavaScriptCore", "WebCore"],
                 ("JavaScriptCore", "runtime/Executable.h"),
             ),
             (["JavaScriptCore", "WebCore"],
index 9f23f49..2597f28 100644
@@ -68,6 +68,9 @@ class BuiltinsInternalsWrapperImplementationGenerator(BuiltinsGenerator):
                 ("WebCore", "WebCoreJSClientData.h"),
             ),
             (["WebCore"],
+                ("JavaScriptCore", "heap/HeapInlines.h"),
+            ),
+            (["WebCore"],
                 ("JavaScriptCore", "heap/SlotVisitorInlines.h"),
             ),
             (["WebCore"],
index 7789fe3..5c7b554 100644
@@ -84,6 +84,9 @@ class BuiltinsSeparateImplementationGenerator(BuiltinsGenerator):
                 ("JavaScriptCore", "builtins/BuiltinExecutables.h"),
             ),
             (["JavaScriptCore", "WebCore"],
+                ("JavaScriptCore", "heap/HeapInlines.h"),
+            ),
+            (["JavaScriptCore", "WebCore"],
                 ("JavaScriptCore", "runtime/Executable.h"),
             ),
             (["JavaScriptCore", "WebCore"],
index 50bc264..d05c913 100644
@@ -725,20 +725,18 @@ public:
                 append(jump);
         }
 
-        void link(AbstractMacroAssemblerType* masm)
+        void link(AbstractMacroAssemblerType* masm) const
         {
             size_t size = m_jumps.size();
             for (size_t i = 0; i < size; ++i)
                 m_jumps[i].link(masm);
-            m_jumps.clear();
         }
         
-        void linkTo(Label label, AbstractMacroAssemblerType* masm)
+        void linkTo(Label label, AbstractMacroAssemblerType* masm) const
         {
             size_t size = m_jumps.size();
             for (size_t i = 0; i < size; ++i)
                 m_jumps[i].linkTo(label, masm);
-            m_jumps.clear();
         }
         
         void append(Jump jump)
index 3e2bb18..bfd483e 100644
@@ -28,6 +28,8 @@
 
 #if ENABLE(ASSEMBLER)
 
+#include "JSCJSValue.h"
+
 #if CPU(ARM_THUMB2)
 #include "MacroAssemblerARMv7.h"
 namespace JSC { typedef MacroAssemblerARMv7 MacroAssemblerBase; };
index 277c052..05d5803 100644
@@ -166,7 +166,10 @@ public:
             m_assembler.add<32>(dest, src, UInt12(imm.m_value));
         else if (isUInt12(-imm.m_value))
             m_assembler.sub<32>(dest, src, UInt12(-imm.m_value));
-        else {
+        else if (src != dest) {
+            move(imm, dest);
+            add32(src, dest);
+        } else {
             move(imm, getCachedDataTempRegisterIDAndInvalidate());
             m_assembler.add<32>(dest, src, dataTempRegister);
         }
diff --git a/Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.cpp b/Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.cpp
new file mode 100644
index 0000000..168b328
--- /dev/null
@@ -0,0 +1,68 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "MacroAssemblerCodeRef.h"
+
+#include "JSCInlines.h"
+#include "LLIntData.h"
+
+namespace JSC {
+
+MacroAssemblerCodePtr MacroAssemblerCodePtr::createLLIntCodePtr(OpcodeID codeId)
+{
+    return createFromExecutableAddress(LLInt::getCodePtr(codeId));
+}
+
+void MacroAssemblerCodePtr::dumpWithName(const char* name, PrintStream& out) const
+{
+    if (!m_value) {
+        out.print(name, "(null)");
+        return;
+    }
+    if (executableAddress() == dataLocation()) {
+        out.print(name, "(", RawPointer(executableAddress()), ")");
+        return;
+    }
+    out.print(name, "(executable = ", RawPointer(executableAddress()), ", dataLocation = ", RawPointer(dataLocation()), ")");
+}
+
+void MacroAssemblerCodePtr::dump(PrintStream& out) const
+{
+    dumpWithName("CodePtr", out);
+}
+
+MacroAssemblerCodeRef MacroAssemblerCodeRef::createLLIntCodeRef(OpcodeID codeId)
+{
+    return createSelfManagedCodeRef(MacroAssemblerCodePtr::createFromExecutableAddress(LLInt::getCodePtr(codeId)));
+}
+
+void MacroAssemblerCodeRef::dump(PrintStream& out) const
+{
+    m_codePtr.dumpWithName("CodeRef", out);
+}
+
+} // namespace JSC
+
index 94cabc7..f155f7d 100644
@@ -28,7 +28,6 @@
 
 #include "Disassembler.h"
 #include "ExecutableAllocator.h"
-#include "LLIntData.h"
 #include <wtf/DataLog.h>
 #include <wtf/PassRefPtr.h>
 #include <wtf/PrintStream.h>
@@ -53,6 +52,8 @@
 
 namespace JSC {
 
+enum OpcodeID : unsigned;
+
 // FunctionPtr:
 //
 // FunctionPtr should be used to wrap pointers to C/C++ functions in JSC
@@ -273,10 +274,7 @@ public:
         return result;
     }
 
-    static MacroAssemblerCodePtr createLLIntCodePtr(OpcodeID codeId)
-    {
-        return createFromExecutableAddress(LLInt::getCodePtr(codeId));
-    }
+    static MacroAssemblerCodePtr createLLIntCodePtr(OpcodeID codeId);
 
     explicit MacroAssemblerCodePtr(ReturnAddressPtr ra)
         : m_value(ra.value())
@@ -299,23 +297,9 @@ public:
         return m_value == other.m_value;
     }
 
-    void dumpWithName(const char* name, PrintStream& out) const
-    {
-        if (!m_value) {
-            out.print(name, "(null)");
-            return;
-        }
-        if (executableAddress() == dataLocation()) {
-            out.print(name, "(", RawPointer(executableAddress()), ")");
-            return;
-        }
-        out.print(name, "(executable = ", RawPointer(executableAddress()), ", dataLocation = ", RawPointer(dataLocation()), ")");
-    }
+    void dumpWithName(const char* name, PrintStream& out) const;
     
-    void dump(PrintStream& out) const
-    {
-        dumpWithName("CodePtr", out);
-    }
+    void dump(PrintStream& out) const;
     
     enum EmptyValueTag { EmptyValue };
     enum DeletedValueTag { DeletedValue };
@@ -389,10 +373,7 @@ public:
     }
     
     // Helper for creating self-managed code refs from LLInt.
-    static MacroAssemblerCodeRef createLLIntCodeRef(OpcodeID codeId)
-    {
-        return createSelfManagedCodeRef(MacroAssemblerCodePtr::createFromExecutableAddress(LLInt::getCodePtr(codeId)));
-    }
+    static MacroAssemblerCodeRef createLLIntCodeRef(OpcodeID codeId);
 
     ExecutableMemoryHandle* executableMemory() const
     {
@@ -418,10 +399,7 @@ public:
     
     explicit operator bool() const { return !!m_codePtr; }
     
-    void dump(PrintStream& out) const
-    {
-        m_codePtr.dumpWithName("CodeRef", out);
-    }
+    void dump(PrintStream& out) const;
 
 private:
     MacroAssemblerCodePtr m_codePtr;
index 0317c81..63a4e58 100644
@@ -85,6 +85,11 @@ Value* BasicBlock::appendIntConstant(Procedure& proc, Value* likeValue, int64_t
     return appendIntConstant(proc, likeValue->origin(), likeValue->type(), value);
 }
 
+Value* BasicBlock::appendBoolConstant(Procedure& proc, Origin origin, bool value)
+{
+    return appendIntConstant(proc, origin, Int32, value ? 1 : 0);
+}
+
 void BasicBlock::clearSuccessors()
 {
     m_successors.clear();
index 7d46bb9..9fb666a 100644
@@ -82,6 +82,7 @@ public:
 
     JS_EXPORT_PRIVATE Value* appendIntConstant(Procedure&, Origin, Type, int64_t value);
     Value* appendIntConstant(Procedure&, Value* likeValue, int64_t value);
+    Value* appendBoolConstant(Procedure&, Origin, bool);
 
     void removeLast(Procedure&);
     
index 8a8434b..eeccf57 100644
@@ -71,7 +71,11 @@ public:
         IndexSet<BasicBlock> candidates;
 
         for (BasicBlock* block : m_proc) {
-            if (block->size() > m_maxSize || block->numSuccessors() > m_maxSuccessors)
+            if (block->size() > m_maxSize)
+                continue;
+            if (block->numSuccessors() > m_maxSuccessors)
+                continue;
+            if (block->last()->type() != Void) // Demoting doesn't handle terminals with values.
                 continue;
 
             candidates.add(block);
index af8c3fb..77e17b8 100644
@@ -92,7 +92,7 @@ public:
     // This is computed lazily, so it won't work if you capture StackmapGenerationParams by value.
     // Returns true if the successor at the given index is going to be emitted right after the
     // patchpoint.
-    bool fallsThroughToSuccessor(unsigned successorIndex) const;
+    JS_EXPORT_PRIVATE bool fallsThroughToSuccessor(unsigned successorIndex) const;
 
     // This is provided for convenience; it means that you don't have to capture it if you don't want to.
     JS_EXPORT_PRIVATE Procedure& proc() const;
index 6f134dc..e7b8ca8 100644
@@ -12920,6 +12920,80 @@ void testBranchBitAndImmFusion(
     CHECK(terminal.args[2].kind() == Air::Arg::BitImm || terminal.args[2].kind() == Air::Arg::BitImm64);
 }
 
+void testPatchpointTerminalReturnValue(bool successIsRare)
+{
+    // This is a unit test for how FTL's heap allocation fast paths behave.
+    Procedure proc;
+    
+    BasicBlock* root = proc.addBlock();
+    BasicBlock* success = proc.addBlock();
+    BasicBlock* slowPath = proc.addBlock();
+    BasicBlock* continuation = proc.addBlock();
+    
+    Value* arg = root->appendNew<Value>(
+        proc, Trunc, Origin(),
+        root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR0));
+    
+    PatchpointValue* patchpoint = root->appendNew<PatchpointValue>(proc, Int32, Origin());
+    patchpoint->effects.terminal = true;
+    patchpoint->clobber(RegisterSet::macroScratchRegisters());
+    
+    if (successIsRare) {
+        root->appendSuccessor(FrequentedBlock(success, FrequencyClass::Rare));
+        root->appendSuccessor(slowPath);
+    } else {
+        root->appendSuccessor(success);
+        root->appendSuccessor(FrequentedBlock(slowPath, FrequencyClass::Rare));
+    }
+    
+    patchpoint->appendSomeRegister(arg);
+    
+    patchpoint->setGenerator(
+        [&] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+            AllowMacroScratchRegisterUsage allowScratch(jit);
+            
+            CCallHelpers::Jump jumpToSlow =
+                jit.branch32(CCallHelpers::Above, params[1].gpr(), CCallHelpers::TrustedImm32(42));
+            
+            jit.add32(CCallHelpers::TrustedImm32(31), params[1].gpr(), params[0].gpr());
+            
+            CCallHelpers::Jump jumpToSuccess;
+            if (!params.fallsThroughToSuccessor(0))
+                jumpToSuccess = jit.jump();
+            
+            Vector<Box<CCallHelpers::Label>> labels = params.successorLabels();
+            
+            params.addLatePath(
+                [=] (CCallHelpers& jit) {
+                    jumpToSlow.linkTo(*labels[1], &jit);
+                    if (jumpToSuccess.isSet())
+                        jumpToSuccess.linkTo(*labels[0], &jit);
+                });
+        });
+    
+    UpsilonValue* successUpsilon = success->appendNew<UpsilonValue>(proc, Origin(), patchpoint);
+    success->appendNew<Value>(proc, Jump, Origin());
+    success->setSuccessors(continuation);
+    
+    UpsilonValue* slowPathUpsilon = slowPath->appendNew<UpsilonValue>(
+        proc, Origin(), slowPath->appendNew<Const32Value>(proc, Origin(), 666));
+    slowPath->appendNew<Value>(proc, Jump, Origin());
+    slowPath->setSuccessors(continuation);
+    
+    Value* phi = continuation->appendNew<Value>(proc, Phi, Int32, Origin());
+    successUpsilon->setPhi(phi);
+    slowPathUpsilon->setPhi(phi);
+    continuation->appendNew<Value>(proc, Return, Origin(), phi);
+    
+    auto code = compile(proc);
+    CHECK_EQ(invoke<int>(*code, 0), 31);
+    CHECK_EQ(invoke<int>(*code, 1), 32);
+    CHECK_EQ(invoke<int>(*code, 41), 72);
+    CHECK_EQ(invoke<int>(*code, 42), 73);
+    CHECK_EQ(invoke<int>(*code, 43), 666);
+    CHECK_EQ(invoke<int>(*code, -1), 666);
+}
+
 // Make sure the compiler does not try to optimize anything out.
 NEVER_INLINE double zero()
 {
@@ -14337,6 +14411,8 @@ void run(const char* filter)
     RUN(testEntrySwitchLoop());
 
     RUN(testSomeEarlyRegister());
+    RUN(testPatchpointTerminalReturnValue(true));
+    RUN(testPatchpointTerminalReturnValue(false));
     
     if (isX86()) {
         RUN(testBranchBitAndImmFusion(Identity, Int64, 1, Air::BranchTest32, Air::Arg::Tmp));
index 126a468..7f6ea16 100644
@@ -32,8 +32,8 @@
 
 #include "APICast.h"
 #include "InspectorValues.h"
+#include "JSCInlines.h"
 #include "JSLock.h"
-#include "StructureInlines.h"
 
 using namespace JSC;
 using namespace Inspector;
index 04d98f6..2fd7031 100644
@@ -26,8 +26,7 @@
 #include "config.h"
 #include "AdaptiveInferredPropertyValueWatchpointBase.h"
 
-#include "JSCellInlines.h"
-#include "StructureInlines.h"
+#include "JSCInlines.h"
 
 namespace JSC {
 
index 7f17c0e..fe56d78 100644
@@ -27,6 +27,7 @@
 #include "BytecodeBasicBlock.h"
 
 #include "CodeBlock.h"
+#include "Interpreter.h"
 #include "JSCInlines.h"
 #include "PreciseJumpTargets.h"
 
index 17e82a2..d4cb01a 100644
@@ -31,6 +31,7 @@
 #include "BytecodeUseDef.h"
 #include "CodeBlock.h"
 #include "FullBytecodeLiveness.h"
+#include "Interpreter.h"
 #include "PreciseJumpTargets.h"
 
 namespace JSC {
index 6057ef6..435f610 100644
@@ -27,6 +27,7 @@
 #define BytecodeUseDef_h
 
 #include "CodeBlock.h"
+#include "Interpreter.h"
 
 namespace JSC {
 
index c004d8a..67248fe 100644
 #include "DFGOperations.h"
 #include "DFGThunks.h"
 #include "JSCInlines.h"
+#include "Opcode.h"
 #include "Repatch.h"
 #include <wtf/ListDump.h>
 
 #if ENABLE(JIT)
 namespace JSC {
 
+CallLinkInfo::CallType CallLinkInfo::callTypeFor(OpcodeID opcodeID)
+{
+    if (opcodeID == op_call || opcodeID == op_call_eval)
+        return Call;
+    if (opcodeID == op_call_varargs)
+        return CallVarargs;
+    if (opcodeID == op_construct)
+        return Construct;
+    if (opcodeID == op_construct_varargs)
+        return ConstructVarargs;
+    if (opcodeID == op_tail_call)
+        return TailCall;
+    ASSERT(opcodeID == op_tail_call_varargs || opcodeID == op_tail_call_forward_arguments);
+    return TailCallVarargs;
+}
+
 CallLinkInfo::CallLinkInfo()
     : m_hasSeenShouldRepatch(false)
     , m_hasSeenClosure(false)
index f7e6e73..d5402eb 100644
@@ -31,7 +31,6 @@
 #include "CodeSpecializationKind.h"
 #include "JITWriteBarrier.h"
 #include "JSFunction.h"
-#include "Opcode.h"
 #include "PolymorphicCallStubRoutine.h"
 #include "WriteBarrier.h"
 #include <wtf/SentinelLinkedList.h>
@@ -40,26 +39,13 @@ namespace JSC {
 
 #if ENABLE(JIT)
 
+enum OpcodeID : unsigned;
 struct CallFrameShuffleData;
 
 class CallLinkInfo : public BasicRawSentinelNode<CallLinkInfo> {
 public:
     enum CallType { None, Call, CallVarargs, Construct, ConstructVarargs, TailCall, TailCallVarargs };
-    static CallType callTypeFor(OpcodeID opcodeID)
-    {
-        if (opcodeID == op_call || opcodeID == op_call_eval)
-            return Call;
-        if (opcodeID == op_call_varargs)
-            return CallVarargs;
-        if (opcodeID == op_construct)
-            return Construct;
-        if (opcodeID == op_construct_varargs)
-            return ConstructVarargs;
-        if (opcodeID == op_tail_call)
-            return TailCall;
-        ASSERT(opcodeID == op_tail_call_varargs || op_tail_call_forward_arguments);
-        return TailCallVarargs;
-    }
+    static CallType callTypeFor(OpcodeID opcodeID);
 
     static bool isVarargsCallType(CallType callType)
     {
index 7ce79ab..d909508 100644
@@ -30,6 +30,7 @@
 #include "CodeBlock.h"
 #include "DFGJITCode.h"
 #include "InlineCallFrame.h"
+#include "Interpreter.h"
 #include "LLIntCallLinkInfo.h"
 #include "JSCInlines.h"
 #include <wtf/CommaPrinter.h>
index 1d9135a..32be631 100644
@@ -51,6 +51,7 @@
 #include "JSFunction.h"
 #include "JSLexicalEnvironment.h"
 #include "JSModuleEnvironment.h"
+#include "LLIntData.h"
 #include "LLIntEntrypoint.h"
 #include "LLIntPrototypeLoadAdaptiveStructureWatchpoint.h"
 #include "LowLevelInterpreter.h"
@@ -1910,7 +1911,7 @@ void CodeBlock::finishCreation(VM& vm, CopyParsedBlockTag, CodeBlock& other)
         m_rareData->m_liveCalleeLocalsAtYield = other.m_rareData->m_liveCalleeLocalsAtYield;
     }
     
-    heap()->m_codeBlocks.add(this);
+    heap()->m_codeBlocks->add(this);
 }
 
 CodeBlock::CodeBlock(VM* vm, Structure* structure, ScriptExecutable* ownerExecutable, UnlinkedCodeBlock* unlinkedCodeBlock,
@@ -2402,7 +2403,7 @@ void CodeBlock::finishCreation(VM& vm, ScriptExecutable* ownerExecutable, Unlink
     if (Options::dumpGeneratedBytecodes())
         dumpBytecode();
     
-    heap()->m_codeBlocks.add(this);
+    heap()->m_codeBlocks->add(this);
     heap()->reportExtraMemoryAllocated(m_instructions.size() * sizeof(Instruction));
 }
 
@@ -2439,7 +2440,7 @@ void CodeBlock::finishCreation(VM& vm, WebAssemblyExecutable*, JSGlobalObject*)
 {
     Base::finishCreation(vm);
 
-    heap()->m_codeBlocks.add(this);
+    heap()->m_codeBlocks->add(this);
 }
 #endif
 
@@ -2843,6 +2844,14 @@ void CodeBlock::WeakReferenceHarvester::visitWeakReferences(SlotVisitor& visitor
     codeBlock->determineLiveness(visitor);
 }
 
+void CodeBlock::clearLLIntGetByIdCache(Instruction* instruction)
+{
+    instruction[0].u.opcode = LLInt::getOpcode(op_get_by_id);
+    instruction[4].u.pointer = nullptr;
+    instruction[5].u.pointer = nullptr;
+    instruction[6].u.pointer = nullptr;
+}
+
 void CodeBlock::finalizeLLIntInlineCaches()
 {
 #if ENABLE(WEBASSEMBLY)
index ab92804..b1682c7 100644
@@ -297,6 +297,8 @@ public:
     {
         return m_jitCodeMap.get();
     }
+    
+    static void clearLLIntGetByIdCache(Instruction*);
 
     unsigned bytecodeOffset(Instruction* returnAddress)
     {
@@ -1303,14 +1305,6 @@ private:
 };
 #endif
 
-inline void clearLLIntGetByIdCache(Instruction* instruction)
-{
-    instruction[0].u.opcode = LLInt::getOpcode(op_get_by_id);
-    instruction[4].u.pointer = nullptr;
-    instruction[5].u.pointer = nullptr;
-    instruction[6].u.pointer = nullptr;
-}
-
 inline Register& ExecState::r(int index)
 {
     CodeBlock* codeBlock = this->codeBlock();
index 494b000..4f74320 100644
@@ -31,7 +31,6 @@
 
 #include "BasicBlockLocation.h"
 #include "MacroAssembler.h"
-#include "Opcode.h"
 #include "PutByIdFlags.h"
 #include "SymbolTable.h"
 #include "TypeLocation.h"
@@ -52,6 +51,12 @@ class WatchpointSet;
 struct LLIntCallLinkInfo;
 struct ValueProfile;
 
+#if ENABLE(COMPUTED_GOTO_OPCODES)
+typedef void* Opcode;
+#else
+typedef OpcodeID Opcode;
+#endif
+
 struct Instruction {
     Instruction()
     {
index 7ae2c0d..9a5ac01 100644
@@ -28,7 +28,7 @@
 
 #include "CodeBlock.h"
 #include "Instruction.h"
-#include "StructureInlines.h"
+#include "JSCInlines.h"
 
 namespace JSC {
 
@@ -59,7 +59,7 @@ void LLIntPrototypeLoadAdaptiveStructureWatchpoint::fireInternal(const FireDetai
 
     StringFireDetail stringDetail(out.toCString().data());
 
-    clearLLIntGetByIdCache(m_getByIdInstruction);
+    CodeBlock::clearLLIntGetByIdCache(m_getByIdInstruction);
 }
 
 } // namespace JSC
index 5fa706d..74267ff 100644
@@ -80,14 +80,15 @@ public:
         ASSERT(inlineCapacity <= JSFinalObject::maxInlineCapacity());
 
         size_t allocationSize = JSFinalObject::allocationSize(inlineCapacity);
-        MarkedAllocator* allocator = &vm.heap.allocatorForObjectWithoutDestructor(allocationSize);
-        ASSERT(allocator->cellSize());
-
+        MarkedAllocator* allocator = vm.heap.allocatorForObjectWithoutDestructor(allocationSize);
+        
         // Take advantage of extra inline capacity available in the size class.
-        size_t slop = (allocator->cellSize() - allocationSize) / sizeof(WriteBarrier<Unknown>);
-        inlineCapacity += slop;
-        if (inlineCapacity > JSFinalObject::maxInlineCapacity())
-            inlineCapacity = JSFinalObject::maxInlineCapacity();
+        if (allocator) {
+            size_t slop = (allocator->cellSize() - allocationSize) / sizeof(WriteBarrier<Unknown>);
+            inlineCapacity += slop;
+            if (inlineCapacity > JSFinalObject::maxInlineCapacity())
+                inlineCapacity = JSFinalObject::maxInlineCapacity();
+        }
 
         Structure* structure = vm.prototypeMap.emptyObjectStructureForPrototype(prototype, inlineCapacity);
 
index ee667c8..201d96c 100644
@@ -55,7 +55,7 @@ namespace JSC {
 
 
 #define OPCODE_ID_ENUM(opcode, length) opcode,
-    typedef enum { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) } OpcodeID;
+    enum OpcodeID : unsigned { FOR_EACH_OPCODE_ID(OPCODE_ID_ENUM) };
 #undef OPCODE_ID_ENUM
 
 const int maxOpcodeLength = 9;
index 5f057ee..5caca06 100644
@@ -1206,34 +1206,25 @@ void AccessCase::generateImpl(AccessGenerationState& state)
             size_t newSize = newStructure()->outOfLineCapacity() * sizeof(JSValue);
             
             if (allocatingInline) {
-                CopiedAllocator* copiedAllocator = &vm.heap.storageAllocator();
-
-                if (!reallocating) {
-                    jit.loadPtr(&copiedAllocator->m_currentRemaining, scratchGPR);
-                    slowPath.append(
-                        jit.branchSubPtr(
-                            CCallHelpers::Signed, CCallHelpers::TrustedImm32(newSize), scratchGPR));
-                    jit.storePtr(scratchGPR, &copiedAllocator->m_currentRemaining);
-                    jit.negPtr(scratchGPR);
-                    jit.addPtr(
-                        CCallHelpers::AbsoluteAddress(&copiedAllocator->m_currentPayloadEnd), scratchGPR);
-                    jit.addPtr(CCallHelpers::TrustedImm32(sizeof(JSValue)), scratchGPR);
-                } else {
+                MarkedAllocator* allocator = vm.heap.allocatorForAuxiliaryData(newSize);
+                
+                if (!allocator) {
+                    // Yuck, this case would suck!
+                    slowPath.append(jit.jump());
+                }
+                
+                jit.move(CCallHelpers::TrustedImmPtr(allocator), scratchGPR2);
+                jit.emitAllocate(scratchGPR, allocator, scratchGPR2, scratchGPR3, slowPath);
+                jit.addPtr(CCallHelpers::TrustedImm32(newSize + sizeof(IndexingHeader)), scratchGPR);
+                
+                if (reallocating) {
                     // Handle the case where we are reallocating (i.e. the old structure/butterfly
                     // already had out-of-line property storage).
                     size_t oldSize = structure()->outOfLineCapacity() * sizeof(JSValue);
                     ASSERT(newSize > oldSize);
             
                     jit.loadPtr(CCallHelpers::Address(baseGPR, JSObject::butterflyOffset()), scratchGPR3);
-                    jit.loadPtr(&copiedAllocator->m_currentRemaining, scratchGPR);
-                    slowPath.append(
-                        jit.branchSubPtr(
-                            CCallHelpers::Signed, CCallHelpers::TrustedImm32(newSize), scratchGPR));
-                    jit.storePtr(scratchGPR, &copiedAllocator->m_currentRemaining);
-                    jit.negPtr(scratchGPR);
-                    jit.addPtr(
-                        CCallHelpers::AbsoluteAddress(&copiedAllocator->m_currentPayloadEnd), scratchGPR);
-                    jit.addPtr(CCallHelpers::TrustedImm32(sizeof(JSValue)), scratchGPR);
+                    
                     // We have scratchGPR = new storage, scratchGPR3 = old storage,
                     // scratchGPR2 = available
                     for (size_t offset = 0; offset < oldSize; offset += sizeof(void*)) {
@@ -1659,6 +1650,7 @@ AccessGenerationResult PolymorphicAccess::regenerate(
         // Cascade through the list, preferring newer entries.
         for (unsigned i = cases.size(); i--;) {
             fallThrough.link(&jit);
+            fallThrough.clear();
             cases[i]->generateWithGuard(state, fallThrough);
         }
         state.failAndRepatch.append(fallThrough);
index b602c00..2ca4adc 100644
 #if ENABLE(JIT)
 
 #include "CodeOrigin.h"
+#include "JITStubRoutine.h"
 #include "JSFunctionInlines.h"
 #include "MacroAssembler.h"
 #include "ObjectPropertyConditionSet.h"
-#include "Opcode.h"
 #include "ScratchRegisterAllocator.h"
 #include "Structure.h"
 #include <wtf/Vector.h>
index e2cb000..dfcb713 100644
@@ -26,6 +26,7 @@
 #include "config.h"
 #include "PreciseJumpTargets.h"
 
+#include "Interpreter.h"
 #include "JSCInlines.h"
 
 namespace JSC {
index 30dff72..38e5cef 100644
@@ -27,6 +27,7 @@
 #include "StructureStubInfo.h"
 
 #include "JSObject.h"
+#include "JSCInlines.h"
 #include "PolymorphicAccess.h"
 #include "Repatch.h"
 
index 0138466..3dcc51a 100644
@@ -31,7 +31,6 @@
 #include "JITStubRoutine.h"
 #include "MacroAssembler.h"
 #include "ObjectPropertyConditionSet.h"
-#include "Opcode.h"
 #include "Options.h"
 #include "RegisterSet.h"
 #include "Structure.h"
index 005dff8..195cf60 100644
@@ -88,11 +88,6 @@ UnlinkedCodeBlock::UnlinkedCodeBlock(VM* vm, Structure* structure, CodeType code
     ASSERT(m_constructorKind == static_cast<unsigned>(info.constructorKind()));
 }
 
-VM* UnlinkedCodeBlock::vm() const
-{
-    return MarkedBlock::blockFor(this)->vm();
-}
-
 void UnlinkedCodeBlock::visitChildren(JSCell* cell, SlotVisitor& visitor)
 {
     UnlinkedCodeBlock* thisObject = jsCast<UnlinkedCodeBlock*>(cell);
index b4d2cf4..f2f301a 100644
@@ -268,8 +268,6 @@ public:
     void addExceptionHandler(const UnlinkedHandlerInfo& handler) { createRareDataIfNecessary(); return m_rareData->m_exceptionHandlers.append(handler); }
     UnlinkedHandlerInfo& exceptionHandler(int index) { ASSERT(m_rareData); return m_rareData->m_exceptionHandlers[index]; }
 
-    VM* vm() const;
-
     UnlinkedArrayProfile addArrayProfile() { return m_arrayProfileCount++; }
     unsigned numberOfArrayProfiles() { return m_arrayProfileCount; }
     UnlinkedArrayAllocationProfile addArrayAllocationProfile() { return m_arrayAllocationProfileCount++; }
index 6a300d3..e8762ff 100644
@@ -26,6 +26,8 @@
 #include "config.h"
 #include "UnlinkedInstructionStream.h"
 
+#include "Opcode.h"
+
 namespace JSC {
 
 static void append8(unsigned char*& ptr, unsigned char value)
index a875e49..07dc678 100644
@@ -27,6 +27,7 @@
 #ifndef UnlinkedInstructionStream_h
 #define UnlinkedInstructionStream_h
 
+#include "Opcode.h"
 #include "UnlinkedCodeBlock.h"
 #include <wtf/RefCountedArray.h>
 
index 00b186a..a717d16 100644
@@ -902,7 +902,6 @@ char* JIT_OPERATION operationNewArrayWithSize(ExecState* exec, Structure* arrayS
         return bitwise_cast<char*>(exec->vm().throwException(exec, createRangeError(exec, ASCIILiteral("Array size is not a small enough positive integer."))));
 
     JSArray* result = JSArray::create(*vm, arrayStructure, size);
-    result->butterfly(); // Ensure that the backing store is in to-space.
     return bitwise_cast<char*>(result);
 }
 
index 853034a..fced1ec 100644
@@ -94,7 +94,7 @@ void SpeculativeJIT::emitAllocateRawObject(GPRReg resultGPR, Structure* structur
     GPRReg scratch2GPR = scratch2.gpr();
 
     ASSERT(vectorLength >= numElements);
-    vectorLength = std::max(BASE_VECTOR_LEN, vectorLength);
+    vectorLength = Butterfly::optimalContiguousVectorLength(structure, vectorLength);
     
     JITCompiler::JumpList slowCases;
 
@@ -104,22 +104,28 @@ void SpeculativeJIT::emitAllocateRawObject(GPRReg resultGPR, Structure* structur
     size += outOfLineCapacity * sizeof(JSValue);
 
     if (size) {
-        slowCases.append(
-            emitAllocateBasicStorage(TrustedImm32(size), storageGPR));
-        if (hasIndexingHeader)
-            m_jit.subPtr(TrustedImm32(vectorLength * sizeof(JSValue)), storageGPR);
-        else
-            m_jit.addPtr(TrustedImm32(sizeof(IndexingHeader)), storageGPR);
+        if (MarkedAllocator* allocator = m_jit.vm()->heap.allocatorForAuxiliaryData(size)) {
+            m_jit.move(TrustedImmPtr(allocator), scratchGPR);
+            m_jit.emitAllocate(storageGPR, allocator, scratchGPR, scratch2GPR, slowCases);
+            
+            m_jit.addPtr(
+                TrustedImm32(outOfLineCapacity * sizeof(JSValue) + sizeof(IndexingHeader)),
+                storageGPR);
+        } else
+            slowCases.append(m_jit.jump());
     } else
         m_jit.move(TrustedImmPtr(0), storageGPR);
 
     size_t allocationSize = JSFinalObject::allocationSize(inlineCapacity);
-    MarkedAllocator* allocatorPtr = &m_jit.vm()->heap.allocatorForObjectWithoutDestructor(allocationSize);
-    m_jit.move(TrustedImmPtr(allocatorPtr), scratchGPR);
-    emitAllocateJSObject(resultGPR, scratchGPR, TrustedImmPtr(structure), storageGPR, scratch2GPR, slowCases);
-
-    if (hasIndexingHeader)
-        m_jit.store32(TrustedImm32(vectorLength), MacroAssembler::Address(storageGPR, Butterfly::offsetOfVectorLength()));
+    MarkedAllocator* allocatorPtr = m_jit.vm()->heap.allocatorForObjectWithoutDestructor(allocationSize);
+    if (allocatorPtr) {
+        m_jit.move(TrustedImmPtr(allocatorPtr), scratchGPR);
+        emitAllocateJSObject(resultGPR, allocatorPtr, scratchGPR, TrustedImmPtr(structure), storageGPR, scratch2GPR, slowCases);
+        
+        if (hasIndexingHeader)
+            m_jit.store32(TrustedImm32(vectorLength), MacroAssembler::Address(storageGPR, Butterfly::offsetOfVectorLength()));
+    } else
+        slowCases.append(m_jit.jump());
 
     // I want a slow path that also loads out the storage pointer, and that's
     // what this custom CallArrayAllocatorSlowPathGenerator gives me. It's a lot
@@ -128,14 +134,20 @@ void SpeculativeJIT::emitAllocateRawObject(GPRReg resultGPR, Structure* structur
         slowCases, this, operationNewRawObject, resultGPR, storageGPR,
         structure, vectorLength));
 
-    if (hasDouble(structure->indexingType()) && numElements < vectorLength) {
+    if (numElements < vectorLength) {
 #if USE(JSVALUE64)
-        m_jit.move(TrustedImm64(bitwise_cast<int64_t>(PNaN)), scratchGPR);
+        if (hasDouble(structure->indexingType()))
+            m_jit.move(TrustedImm64(bitwise_cast<int64_t>(PNaN)), scratchGPR);
+        else
+            m_jit.move(TrustedImm64(JSValue::encode(JSValue())), scratchGPR);
         for (unsigned i = numElements; i < vectorLength; ++i)
             m_jit.store64(scratchGPR, MacroAssembler::Address(storageGPR, sizeof(double) * i));
 #else
         EncodedValueDescriptor value;
-        value.asInt64 = JSValue::encode(JSValue(JSValue::EncodeAsDouble, PNaN));
+        if (hasDouble(structure->indexingType()))
+            value.asInt64 = JSValue::encode(JSValue(JSValue::EncodeAsDouble, PNaN));
+        else
+            value.asInt64 = JSValue::encode(JSValue());
         for (unsigned i = numElements; i < vectorLength; ++i) {
             m_jit.store32(TrustedImm32(value.asBits.tag), MacroAssembler::Address(storageGPR, sizeof(double) * i + OBJECT_OFFSETOF(JSValue, u.asBits.tag)));
             m_jit.store32(TrustedImm32(value.asBits.payload), MacroAssembler::Address(storageGPR, sizeof(double) * i + OBJECT_OFFSETOF(JSValue, u.asBits.payload)));
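Because auxiliary MarkedSpace memory is not pre-zeroed the way copied storage was, the inline path above now fills every unused slot of the new vector, not only the double case. A minimal sketch of the two hole bit patterns, assuming a quiet NaN as a stand-in for PNaN and the 64-bit value encoding where the empty JSValue encodes to zero:

#include <cstdint>
#include <cstring>
#include <limits>

// Sketch only; not JSC code. Returns the 64-bit pattern stored into each unused slot.
inline uint64_t holeBitsFor(bool isDoubleArray)
{
    if (isDoubleArray) {
        double pureNaN = std::numeric_limits<double>::quiet_NaN(); // stand-in for PNaN
        uint64_t bits;
        std::memcpy(&bits, &pureNaN, sizeof(bits));
        return bits; // double arrays use NaN holes
    }
    return 0; // other indexing types use JSValue(), which encodes to zero bits on 64-bit
}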
@@ -3822,9 +3834,10 @@ void SpeculativeJIT::compileMakeRope(Node* node)
     GPRReg scratchGPR = scratch.gpr();
     
     JITCompiler::JumpList slowPath;
-    MarkedAllocator& markedAllocator = m_jit.vm()->heap.allocatorForObjectWithDestructor(sizeof(JSRopeString));
-    m_jit.move(TrustedImmPtr(&markedAllocator), allocatorGPR);
-    emitAllocateJSCell(resultGPR, allocatorGPR, TrustedImmPtr(m_jit.vm()->stringStructure.get()), scratchGPR, slowPath);
+    MarkedAllocator* markedAllocator = m_jit.vm()->heap.allocatorForObjectWithDestructor(sizeof(JSRopeString));
+    RELEASE_ASSERT(markedAllocator);
+    m_jit.move(TrustedImmPtr(markedAllocator), allocatorGPR);
+    emitAllocateJSCell(resultGPR, markedAllocator, allocatorGPR, TrustedImmPtr(m_jit.vm()->stringStructure.get()), scratchGPR, slowPath);
         
     m_jit.storePtr(TrustedImmPtr(0), JITCompiler::Address(resultGPR, JSString::offsetOfValue()));
     for (unsigned i = 0; i < numOpGPRs; ++i)
@@ -6835,7 +6848,14 @@ void SpeculativeJIT::compileCheckStructure(Node* node)
 
 void SpeculativeJIT::compileAllocatePropertyStorage(Node* node)
 {
-    if (node->transition()->previous->couldHaveIndexingHeader()) {
+    ASSERT(!node->transition()->previous->outOfLineCapacity());
+    ASSERT(initialOutOfLineCapacity == node->transition()->next->outOfLineCapacity());
+    
+    size_t size = initialOutOfLineCapacity * sizeof(JSValue);
+
+    MarkedAllocator* allocator = m_jit.vm()->heap.allocatorForAuxiliaryData(size);
+
+    if (!allocator || node->transition()->previous->couldHaveIndexingHeader()) {
         SpeculateCellOperand base(this, node->child1());
         
         GPRReg baseGPR = base.gpr();
@@ -6852,18 +6872,18 @@ void SpeculativeJIT::compileAllocatePropertyStorage(Node* node)
     
     SpeculateCellOperand base(this, node->child1());
     GPRTemporary scratch1(this);
+    GPRTemporary scratch2(this);
+    GPRTemporary scratch3(this);
         
     GPRReg baseGPR = base.gpr();
     GPRReg scratchGPR1 = scratch1.gpr();
+    GPRReg scratchGPR2 = scratch2.gpr();
+    GPRReg scratchGPR3 = scratch3.gpr();
         
-    ASSERT(!node->transition()->previous->outOfLineCapacity());
-    ASSERT(initialOutOfLineCapacity == node->transition()->next->outOfLineCapacity());
-    
-    JITCompiler::Jump slowPath =
-        emitAllocateBasicStorage(
-            TrustedImm32(initialOutOfLineCapacity * sizeof(JSValue)), scratchGPR1);
-
-    m_jit.addPtr(JITCompiler::TrustedImm32(sizeof(IndexingHeader)), scratchGPR1);
+    m_jit.move(JITCompiler::TrustedImmPtr(allocator), scratchGPR2);
+    JITCompiler::JumpList slowPath;
+    m_jit.emitAllocate(scratchGPR1, allocator, scratchGPR2, scratchGPR3, slowPath);
+    m_jit.addPtr(JITCompiler::TrustedImm32(size + sizeof(IndexingHeader)), scratchGPR1);
         
     addSlowPathGenerator(
         slowPathCall(slowPath, this, operationAllocatePropertyStorageWithInitialCapacity, scratchGPR1));
@@ -6878,8 +6898,10 @@ void SpeculativeJIT::compileReallocatePropertyStorage(Node* node)
     size_t oldSize = node->transition()->previous->outOfLineCapacity() * sizeof(JSValue);
     size_t newSize = oldSize * outOfLineGrowthFactor;
     ASSERT(newSize == node->transition()->next->outOfLineCapacity() * sizeof(JSValue));
+    
+    MarkedAllocator* allocator = m_jit.vm()->heap.allocatorForAuxiliaryData(newSize);
 
-    if (node->transition()->previous->couldHaveIndexingHeader()) {
+    if (!allocator || node->transition()->previous->couldHaveIndexingHeader()) {
         SpeculateCellOperand base(this, node->child1());
         
         GPRReg baseGPR = base.gpr();
@@ -6898,16 +6920,19 @@ void SpeculativeJIT::compileReallocatePropertyStorage(Node* node)
     StorageOperand oldStorage(this, node->child2());
     GPRTemporary scratch1(this);
     GPRTemporary scratch2(this);
+    GPRTemporary scratch3(this);
         
     GPRReg baseGPR = base.gpr();
     GPRReg oldStorageGPR = oldStorage.gpr();
     GPRReg scratchGPR1 = scratch1.gpr();
     GPRReg scratchGPR2 = scratch2.gpr();
-        
-    JITCompiler::Jump slowPath =
-        emitAllocateBasicStorage(TrustedImm32(newSize), scratchGPR1);
-
-    m_jit.addPtr(JITCompiler::TrustedImm32(sizeof(IndexingHeader)), scratchGPR1);
+    GPRReg scratchGPR3 = scratch3.gpr();
+    
+    JITCompiler::JumpList slowPath;
+    m_jit.move(JITCompiler::TrustedImmPtr(allocator), scratchGPR2);
+    m_jit.emitAllocate(scratchGPR1, allocator, scratchGPR2, scratchGPR3, slowPath);
+    
+    m_jit.addPtr(JITCompiler::TrustedImm32(newSize + sizeof(IndexingHeader)), scratchGPR1);
         
     addSlowPathGenerator(
         slowPathCall(slowPath, this, operationAllocatePropertyStorage, scratchGPR1, newSize / sizeof(JSValue)));
index 5db0fd8..827b4e7 100644
@@ -2555,18 +2555,21 @@ public:
 
     // Allocator for a cell of a specific size.
     template <typename StructureType> // StructureType can be GPR or ImmPtr.
-    void emitAllocateJSCell(GPRReg resultGPR, GPRReg allocatorGPR, StructureType structure,
+    void emitAllocateJSCell(
+        GPRReg resultGPR, MarkedAllocator* allocator, GPRReg allocatorGPR, StructureType structure,
         GPRReg scratchGPR, MacroAssembler::JumpList& slowPath)
     {
-        m_jit.emitAllocateJSCell(resultGPR, allocatorGPR, structure, scratchGPR, slowPath);
+        m_jit.emitAllocateJSCell(resultGPR, allocator, allocatorGPR, structure, scratchGPR, slowPath);
     }
 
     // Allocator for an object of a specific size.
     template <typename StructureType, typename StorageType> // StructureType and StorageType can be GPR or ImmPtr.
-    void emitAllocateJSObject(GPRReg resultGPR, GPRReg allocatorGPR, StructureType structure,
+    void emitAllocateJSObject(
+        GPRReg resultGPR, MarkedAllocator* allocator, GPRReg allocatorGPR, StructureType structure,
         StorageType storage, GPRReg scratchGPR, MacroAssembler::JumpList& slowPath)
     {
-        m_jit.emitAllocateJSObject(resultGPR, allocatorGPR, structure, storage, scratchGPR, slowPath);
+        m_jit.emitAllocateJSObject(
+            resultGPR, allocator, allocatorGPR, structure, storage, scratchGPR, slowPath);
     }
 
     template <typename ClassType, typename StructureType, typename StorageType> // StructureType and StorageType can be GPR or ImmPtr.
index f2bb673..2381f52 100644
@@ -4068,7 +4068,7 @@ void SpeculativeJIT::compile(Node* node)
         m_jit.loadPtr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorGPR);
         m_jit.loadPtr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfStructure()), structureGPR);
         slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, allocatorGPR));
-        emitAllocateJSObject(resultGPR, allocatorGPR, structureGPR, TrustedImmPtr(0), scratchGPR, slowPath);
+        emitAllocateJSObject(resultGPR, nullptr, allocatorGPR, structureGPR, TrustedImmPtr(0), scratchGPR, slowPath);
 
         addSlowPathGenerator(slowPathCall(slowPath, this, operationCreateThis, resultGPR, calleeGPR, node->inlineCapacity()));
         
@@ -4089,10 +4089,10 @@ void SpeculativeJIT::compile(Node* node)
         
         Structure* structure = node->structure();
         size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
-        MarkedAllocator* allocatorPtr = &m_jit.vm()->heap.allocatorForObjectWithoutDestructor(allocationSize);
+        MarkedAllocator* allocatorPtr = m_jit.vm()->heap.allocatorForObjectWithoutDestructor(allocationSize);
 
         m_jit.move(TrustedImmPtr(allocatorPtr), allocatorGPR);
-        emitAllocateJSObject(resultGPR, allocatorGPR, TrustedImmPtr(structure), TrustedImmPtr(0), scratchGPR, slowPath);
+        emitAllocateJSObject(resultGPR, allocatorPtr, allocatorGPR, TrustedImmPtr(structure), TrustedImmPtr(0), scratchGPR, slowPath);
 
         addSlowPathGenerator(slowPathCall(slowPath, this, operationNewObject, resultGPR, structure));
         
@@ -5396,36 +5396,39 @@ void SpeculativeJIT::compileAllocateNewArrayWithSize(JSGlobalObject* globalObjec
     GPRReg storageGPR = storage.gpr();
     GPRReg scratchGPR = scratch.gpr();
     GPRReg scratch2GPR = scratch2.gpr();
-    
+            
     MacroAssembler::JumpList slowCases;
     if (shouldConvertLargeSizeToArrayStorage)
         slowCases.append(m_jit.branch32(MacroAssembler::AboveOrEqual, sizeGPR, TrustedImm32(MIN_ARRAY_STORAGE_CONSTRUCTION_LENGTH)));
-    
+            
     ASSERT((1 << 3) == sizeof(JSValue));
     m_jit.move(sizeGPR, scratchGPR);
     m_jit.lshift32(TrustedImm32(3), scratchGPR);
     m_jit.add32(TrustedImm32(sizeof(IndexingHeader)), scratchGPR, resultGPR);
-    slowCases.append(
-        emitAllocateBasicStorage(resultGPR, storageGPR));
-    m_jit.subPtr(scratchGPR, storageGPR);
+    m_jit.emitAllocateVariableSized(
+        storageGPR, m_jit.vm()->heap.subspaceForAuxiliaryData(), resultGPR, scratchGPR,
+        scratch2GPR, slowCases);
+    m_jit.addPtr(TrustedImm32(sizeof(IndexingHeader)), storageGPR);
     Structure* structure = globalObject->arrayStructureForIndexingTypeDuringAllocation(indexingType);
     emitAllocateJSObject<JSArray>(resultGPR, TrustedImmPtr(structure), storageGPR, scratchGPR, scratch2GPR, slowCases);
-    
+            
     m_jit.store32(sizeGPR, MacroAssembler::Address(storageGPR, Butterfly::offsetOfPublicLength()));
     m_jit.store32(sizeGPR, MacroAssembler::Address(storageGPR, Butterfly::offsetOfVectorLength()));
-    
-    if (hasDouble(indexingType)) {
-        JSValue nan = JSValue(JSValue::EncodeAsDouble, PNaN);
-        
-        m_jit.move(sizeGPR, scratchGPR);
-        MacroAssembler::Jump done = m_jit.branchTest32(MacroAssembler::Zero, scratchGPR);
-        MacroAssembler::Label loop = m_jit.label();
-        m_jit.sub32(TrustedImm32(1), scratchGPR);
-        m_jit.store32(TrustedImm32(nan.u.asBits.tag), MacroAssembler::BaseIndex(storageGPR, scratchGPR, MacroAssembler::TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag)));
-        m_jit.store32(TrustedImm32(nan.u.asBits.payload), MacroAssembler::BaseIndex(storageGPR, scratchGPR, MacroAssembler::TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload)));
-        m_jit.branchTest32(MacroAssembler::NonZero, scratchGPR).linkTo(loop, &m_jit);
-        done.link(&m_jit);
-    }
+            
+    JSValue hole;
+    if (hasDouble(indexingType))
+        hole = JSValue(JSValue::EncodeAsDouble, PNaN);
+    else
+        hole = JSValue();
+            
+    m_jit.move(sizeGPR, scratchGPR);
+    MacroAssembler::Jump done = m_jit.branchTest32(MacroAssembler::Zero, scratchGPR);
+    MacroAssembler::Label loop = m_jit.label();
+    m_jit.sub32(TrustedImm32(1), scratchGPR);
+    m_jit.store32(TrustedImm32(hole.u.asBits.tag), MacroAssembler::BaseIndex(storageGPR, scratchGPR, MacroAssembler::TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag)));
+    m_jit.store32(TrustedImm32(hole.u.asBits.payload), MacroAssembler::BaseIndex(storageGPR, scratchGPR, MacroAssembler::TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload)));
+    m_jit.branchTest32(MacroAssembler::NonZero, scratchGPR).linkTo(loop, &m_jit);
+    done.link(&m_jit);
     
     addSlowPathGenerator(std::make_unique<CallArrayAllocatorWithVariableSizeSlowPathGenerator>(
         slowCases, this, operationNewArrayWithSize, resultGPR,
index b6c2d01..bc681df 100644
@@ -4013,7 +4013,7 @@ void SpeculativeJIT::compile(Node* node)
         m_jit.loadPtr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfAllocator()), allocatorGPR);
         m_jit.loadPtr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfObjectAllocationProfile() + ObjectAllocationProfile::offsetOfStructure()), structureGPR);
         slowPath.append(m_jit.branchTestPtr(MacroAssembler::Zero, allocatorGPR));
-        emitAllocateJSObject(resultGPR, allocatorGPR, structureGPR, TrustedImmPtr(0), scratchGPR, slowPath);
+        emitAllocateJSObject(resultGPR, nullptr, allocatorGPR, structureGPR, TrustedImmPtr(0), scratchGPR, slowPath);
 
         addSlowPathGenerator(slowPathCall(slowPath, this, operationCreateThis, resultGPR, calleeGPR, node->inlineCapacity()));
         
@@ -4034,10 +4034,10 @@ void SpeculativeJIT::compile(Node* node)
 
         Structure* structure = node->structure();
         size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
-        MarkedAllocator* allocatorPtr = &m_jit.vm()->heap.allocatorForObjectWithoutDestructor(allocationSize);
+        MarkedAllocator* allocatorPtr = m_jit.vm()->heap.allocatorForObjectWithoutDestructor(allocationSize);
 
         m_jit.move(TrustedImmPtr(allocatorPtr), allocatorGPR);
-        emitAllocateJSObject(resultGPR, allocatorGPR, TrustedImmPtr(structure), TrustedImmPtr(0), scratchGPR, slowPath);
+        emitAllocateJSObject(resultGPR, allocatorPtr, allocatorGPR, TrustedImmPtr(structure), TrustedImmPtr(0), scratchGPR, slowPath);
 
         addSlowPathGenerator(slowPathCall(slowPath, this, operationNewObject, resultGPR, structure));
         
@@ -5272,7 +5272,7 @@ void SpeculativeJIT::compile(Node* node)
 
         unsigned bytecodeIndex = node->origin.semantic.bytecodeIndex;
         auto triggerIterator = m_jit.jitCode()->tierUpEntryTriggers.find(bytecodeIndex);
-        RELEASE_ASSERT(triggerIterator != m_jit.jitCode()->tierUpEntryTriggers.end());
+        DFG_ASSERT(m_jit.graph(), node, triggerIterator != m_jit.jitCode()->tierUpEntryTriggers.end());
         uint8_t* forceEntryTrigger = &(m_jit.jitCode()->tierUpEntryTriggers.find(bytecodeIndex)->value);
 
         MacroAssembler::Jump forceOSREntry = m_jit.branchTest8(MacroAssembler::NonZero, MacroAssembler::AbsoluteAddress(forceEntryTrigger));
@@ -5459,31 +5459,34 @@ void SpeculativeJIT::compileAllocateNewArrayWithSize(JSGlobalObject* globalObjec
     MacroAssembler::JumpList slowCases;
     if (shouldConvertLargeSizeToArrayStorage)
         slowCases.append(m_jit.branch32(MacroAssembler::AboveOrEqual, sizeGPR, TrustedImm32(MIN_ARRAY_STORAGE_CONSTRUCTION_LENGTH)));
-    
+            
     ASSERT((1 << 3) == sizeof(JSValue));
     m_jit.move(sizeGPR, scratchGPR);
     m_jit.lshift32(TrustedImm32(3), scratchGPR);
     m_jit.add32(TrustedImm32(sizeof(IndexingHeader)), scratchGPR, resultGPR);
-    slowCases.append(
-        emitAllocateBasicStorage(resultGPR, storageGPR));
-    m_jit.subPtr(scratchGPR, storageGPR);
+    m_jit.emitAllocateVariableSized(
+        storageGPR, m_jit.vm()->heap.subspaceForAuxiliaryData(), resultGPR, scratchGPR,
+        scratch2GPR, slowCases);
+    m_jit.addPtr(TrustedImm32(sizeof(IndexingHeader)), storageGPR);
     Structure* structure = globalObject->arrayStructureForIndexingTypeDuringAllocation(indexingType);
+            
     emitAllocateJSObject<JSArray>(resultGPR, TrustedImmPtr(structure), storageGPR, scratchGPR, scratch2GPR, slowCases);
     
     m_jit.store32(sizeGPR, MacroAssembler::Address(storageGPR, Butterfly::offsetOfPublicLength()));
     m_jit.store32(sizeGPR, MacroAssembler::Address(storageGPR, Butterfly::offsetOfVectorLength()));
-    
-    if (hasDouble(indexingType)) {
+            
+    if (hasDouble(indexingType))
         m_jit.move(TrustedImm64(bitwise_cast<int64_t>(PNaN)), scratchGPR);
-        m_jit.move(sizeGPR, scratch2GPR);
-        MacroAssembler::Jump done = m_jit.branchTest32(MacroAssembler::Zero, scratch2GPR);
-        MacroAssembler::Label loop = m_jit.label();
-        m_jit.sub32(TrustedImm32(1), scratch2GPR);
-        m_jit.store64(scratchGPR, MacroAssembler::BaseIndex(storageGPR, scratch2GPR, MacroAssembler::TimesEight));
-        m_jit.branchTest32(MacroAssembler::NonZero, scratch2GPR).linkTo(loop, &m_jit);
-        done.link(&m_jit);
-    }
-    
+    else
+        m_jit.move(TrustedImm64(JSValue::encode(JSValue())), scratchGPR);
+    m_jit.move(sizeGPR, scratch2GPR);
+    MacroAssembler::Jump done = m_jit.branchTest32(MacroAssembler::Zero, scratch2GPR);
+    MacroAssembler::Label loop = m_jit.label();
+    m_jit.sub32(TrustedImm32(1), scratch2GPR);
+    m_jit.store64(scratchGPR, MacroAssembler::BaseIndex(storageGPR, scratch2GPR, MacroAssembler::TimesEight));
+    m_jit.branchTest32(MacroAssembler::NonZero, scratch2GPR).linkTo(loop, &m_jit);
+    done.link(&m_jit);
+            
     addSlowPathGenerator(std::make_unique<CallArrayAllocatorWithVariableSizeSlowPathGenerator>(
         slowCases, this, operationNewArrayWithSize, resultGPR,
         structure,
index 2b2ea9e..0e5163e 100644
@@ -40,6 +40,7 @@
 #include "RegExpConstructor.h"
 #include "StringPrototype.h"
 #include <cstdlib>
+#include <wtf/text/StringBuilder.h>
 
 namespace JSC { namespace DFG {
 
@@ -420,7 +421,7 @@ private:
                     dataLog("Giving up because of pattern limit.\n");
                 break;
             }
-
+            
             unsigned lastIndex;
             if (regExp->globalOrSticky()) {
                 // This will only work if we can prove what the value of lastIndex is. To do this
@@ -514,7 +515,8 @@ private:
                     }
 
                     unsigned publicLength = resultArray.size();
-                    unsigned vectorLength = std::max(BASE_VECTOR_LEN, publicLength);
+                    unsigned vectorLength =
+                        Butterfly::optimalContiguousVectorLength(structure, publicLength);
 
                     UniquedStringImpl* indexUID = vm().propertyNames->index.impl();
                     UniquedStringImpl* inputUID = vm().propertyNames->input.impl();
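Butterfly::optimalContiguousVectorLength, judging by its name and its use here and in the FTL below, picks a vector length that fills out the allocation rather than the bare minimum that BASE_VECTOR_LEN gave. A rough sketch of that idea, with illustrative constants rather than JSC's real ones:

#include <algorithm>
#include <cstddef>

constexpr size_t indexingHeaderBytes = 8; // assumption: sizeof(IndexingHeader)
constexpr size_t valueBytes = 8;          // assumption: sizeof(JSValue) on 64-bit

// Given the byte size of the size class the requested length lands in (header included),
// return the vector length that uses the whole cell.
inline unsigned fillSizeClass(size_t sizeClassBytes, unsigned requestedLength)
{
    unsigned maxLength =
        static_cast<unsigned>((sizeClassBytes - indexingHeaderBytes) / valueBytes);
    return std::max(requestedLength, maxLength);
}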
index cff05a8..d2e16f3 100644
@@ -76,7 +76,6 @@ namespace JSC { namespace FTL {
     macro(JSString_value, JSString::offsetOfValue()) \
     macro(JSSymbolTableObject_symbolTable, JSSymbolTableObject::offsetOfSymbolTable()) \
     macro(JSWrapperObject_internalValue, JSWrapperObject::internalValueOffset()) \
-    macro(MarkedAllocator_freeListHead, MarkedAllocator::offsetOfFreeListHead()) \
     macro(RegExpConstructor_cachedResult_lastRegExp, RegExpConstructor::offsetOfCachedResult() + RegExpCachedResult::offsetOfLastRegExp()) \
     macro(RegExpConstructor_cachedResult_lastInput, RegExpConstructor::offsetOfCachedResult() + RegExpCachedResult::offsetOfLastInput()) \
     macro(RegExpConstructor_cachedResult_result_start, RegExpConstructor::offsetOfCachedResult() + RegExpCachedResult::offsetOfResult() + OBJECT_OFFSETOF(MatchResult, start)) \
@@ -109,8 +108,7 @@ namespace JSC { namespace FTL {
     macro(JSEnvironmentRecord_variables, JSEnvironmentRecord::offsetOfVariables(), sizeof(EncodedJSValue)) \
     macro(JSPropertyNameEnumerator_cachedPropertyNamesVectorContents, 0, sizeof(WriteBarrier<JSString>)) \
     macro(JSRopeString_fibers, JSRopeString::offsetOfFibers(), sizeof(WriteBarrier<JSString>)) \
-    macro(MarkedSpace_Subspace_impreciseAllocators, OBJECT_OFFSETOF(MarkedSpace::Subspace, impreciseAllocators), sizeof(MarkedAllocator)) \
-    macro(MarkedSpace_Subspace_preciseAllocators, OBJECT_OFFSETOF(MarkedSpace::Subspace, preciseAllocators), sizeof(MarkedAllocator)) \
+    macro(MarkedSpace_Subspace_allocatorForSizeStep, OBJECT_OFFSETOF(MarkedSpace::Subspace, allocatorForSizeStep), sizeof(MarkedAllocator*)) \
     macro(ScopedArguments_overflowStorage, ScopedArguments::overflowStorageOffset(), sizeof(EncodedJSValue)) \
     macro(WriteBarrierBuffer_bufferContents, 0, sizeof(JSCell*)) \
     macro(characters8, 0, sizeof(LChar)) \
index 5f8ae85..39eb666 100644
@@ -42,6 +42,7 @@
 #include "FTLJITCode.h"
 #include "FTLThunks.h"
 #include "JITSubGenerator.h"
+#include "JSCInlines.h"
 #include "LinkBuffer.h"
 #include "PCToCodeOriginMap.h"
 #include "ScratchRegisterAllocator.h"
index 00a88f0..dcf3dad 100644
@@ -32,6 +32,7 @@
 #include "DFGPlan.h"
 #include "FTLState.h"
 #include "FTLThunks.h"
+#include "JSCInlines.h"
 #include "ProfilerDatabase.h"
 
 namespace JSC { namespace FTL {
index add851a..dfef7d9 100644
@@ -3797,7 +3797,7 @@ private:
                 size, m_out.constInt32(DirectArguments::allocationSize(minCapacity)));
             
             fastObject = allocateVariableSizedObject<DirectArguments>(
-                size, structure, m_out.intPtrZero, slowPath);
+                m_out.zeroExtPtr(size), structure, m_out.intPtrZero, slowPath);
         }
         
         m_out.store32(length.value, fastObject, m_heaps.DirectArguments_length);
@@ -3898,7 +3898,7 @@ private:
             LValue arrayLength = lowInt32(m_node->child1());
             LBasicBlock loopStart = m_out.newBlock();
             bool shouldLargeArraySizeCreateArrayStorage = false;
-            LValue array = compileAllocateArrayWithSize(arrayLength, ArrayWithContiguous, shouldLargeArraySizeCreateArrayStorage);
+            LValue array = allocateArrayWithSize(arrayLength, ArrayWithContiguous, shouldLargeArraySizeCreateArrayStorage);
 
             LValue butterfly = m_out.loadPtr(array, m_heaps.JSObject_butterfly);
             ValueFromBlock startLength = m_out.anchor(arrayLength);
@@ -4071,7 +4071,7 @@ private:
             m_out.constIntPtr(m_node->numConstants())));
     }
 
-    LValue compileAllocateArrayWithSize(LValue publicLength, IndexingType indexingType, bool shouldLargeArraySizeCreateArrayStorage = true)
+    LValue allocateArrayWithSize(LValue publicLength, IndexingType indexingType, bool shouldLargeArraySizeCreateArrayStorage = true)
     {
         JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node->origin.semantic);
         Structure* structure = globalObject->arrayStructureForIndexingTypeDuringAllocation(indexingType);
@@ -4082,33 +4082,40 @@ private:
             || hasContiguous(structure->indexingType()));
 
         LBasicBlock fastCase = m_out.newBlock();
-        LBasicBlock largeCase = shouldLargeArraySizeCreateArrayStorage ? m_out.newBlock() : nullptr;
+        LBasicBlock largeCase = m_out.newBlock();
         LBasicBlock failCase = m_out.newBlock();
         LBasicBlock continuation = m_out.newBlock();
-        LBasicBlock lastNext = nullptr;
-        if (shouldLargeArraySizeCreateArrayStorage) {
-            m_out.branch(
-                m_out.aboveOrEqual(publicLength, m_out.constInt32(MIN_ARRAY_STORAGE_CONSTRUCTION_LENGTH)),
-                rarely(largeCase), usually(fastCase));
-            lastNext = m_out.appendTo(fastCase, largeCase);
-        }
-
+        LBasicBlock slowCase = m_out.newBlock();
+        
+        LBasicBlock lastNext = m_out.insertNewBlocksBefore(fastCase);
+        
+        LValue predicate;
+        if (shouldLargeArraySizeCreateArrayStorage)
+            predicate = m_out.aboveOrEqual(publicLength, m_out.constInt32(MIN_ARRAY_STORAGE_CONSTRUCTION_LENGTH));
+        else
+            predicate = m_out.booleanFalse;
+        
+        m_out.branch(predicate, rarely(largeCase), usually(fastCase));
         
+        m_out.appendTo(fastCase, largeCase);
+
         // We don't round up to BASE_VECTOR_LEN for new Array(blah).
         LValue vectorLength = publicLength;
-        
+            
         LValue payloadSize =
             m_out.shl(m_out.zeroExt(vectorLength, pointerType()), m_out.constIntPtr(3));
-        
+            
         LValue butterflySize = m_out.add(
             payloadSize, m_out.constIntPtr(sizeof(IndexingHeader)));
-        
-        LValue endOfStorage = allocateBasicStorageAndGetEnd(butterflySize, failCase);
-        
-        LValue butterfly = m_out.sub(endOfStorage, payloadSize);
-        
+            
+        LValue allocator = allocatorForSize(
+            vm().heap.subspaceForAuxiliaryData(), butterflySize, failCase);
+        LValue startOfStorage = allocateHeapCell(allocator, failCase);
+            
+        LValue butterfly = m_out.add(startOfStorage, m_out.constIntPtr(sizeof(IndexingHeader)));
+            
         LValue object = allocateObject<JSArray>(structure, butterfly, failCase);
-        
+            
         m_out.store32(publicLength, butterfly, m_heaps.Butterfly_publicLength);
         m_out.store32(vectorLength, butterfly, m_heaps.Butterfly_vectorLength);
 
@@ -4118,26 +4125,20 @@ private:
         m_out.jump(continuation);
         
         LValue structureValue;
-        if (shouldLargeArraySizeCreateArrayStorage) {
-            LBasicBlock slowCase = m_out.newBlock();
-
-            m_out.appendTo(largeCase, failCase);
-            ValueFromBlock largeStructure = m_out.anchor(m_out.constIntPtr(
+        
+        m_out.appendTo(largeCase, failCase);
+        ValueFromBlock largeStructure = m_out.anchor(
+            m_out.constIntPtr(
                 globalObject->arrayStructureForIndexingTypeDuringAllocation(ArrayWithArrayStorage)));
-            m_out.jump(slowCase);
-
-            m_out.appendTo(failCase, slowCase);
-            ValueFromBlock failStructure = m_out.anchor(m_out.constIntPtr(structure));
-            m_out.jump(slowCase);
-
-            m_out.appendTo(slowCase, continuation);
-            structureValue = m_out.phi(
-                pointerType(), largeStructure, failStructure);
-        } else {
-            ASSERT(!lastNext);
-            lastNext = m_out.appendTo(failCase, continuation);
-            structureValue = m_out.constIntPtr(structure);
-        }
+        m_out.jump(slowCase);
+        
+        m_out.appendTo(failCase, slowCase);
+        ValueFromBlock failStructure = m_out.anchor(m_out.constIntPtr(structure));
+        m_out.jump(slowCase);
+        
+        m_out.appendTo(slowCase, continuation);
+        structureValue = m_out.phi(
+            pointerType(), largeStructure, failStructure);
 
         LValue slowResultValue = lazySlowPath(
             [=] (const Vector<Location>& locations) -> RefPtr<LazySlowPath::Generator> {
@@ -4162,7 +4163,7 @@ private:
             m_node->indexingType());
         
         if (!globalObject->isHavingABadTime() && !hasAnyArrayStorage(m_node->indexingType())) {
-            setJSValue(compileAllocateArrayWithSize(publicLength, m_node->indexingType()));
+            setJSValue(allocateArrayWithSize(publicLength, m_node->indexingType()));
             return;
         }
         
@@ -4431,13 +4432,12 @@ private:
         
         LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
         
-        MarkedAllocator& allocator =
+        MarkedAllocator* allocator =
             vm().heap.allocatorForObjectWithDestructor(sizeof(JSRopeString));
+        DFG_ASSERT(m_graph, m_node, allocator);
         
         LValue result = allocateCell(
-            m_out.constIntPtr(&allocator),
-            vm().stringStructure.get(),
-            slowPath);
+            m_out.constIntPtr(allocator), vm().stringStructure.get(), slowPath);
         
         m_out.storePtr(m_out.intPtrZero, result, m_heaps.JSString_value);
         for (unsigned i = 0; i < numKids; ++i)
@@ -6944,7 +6944,8 @@ private:
             
             if (structure->outOfLineCapacity() || hasIndexedProperties(structure->indexingType())) {
                 size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
-                MarkedAllocator* allocator = &vm().heap.allocatorForObjectWithoutDestructor(allocationSize);
+                MarkedAllocator* cellAllocator = vm().heap.allocatorForObjectWithoutDestructor(allocationSize);
+                DFG_ASSERT(m_graph, m_node, cellAllocator);
 
                 bool hasIndexingHeader = hasIndexedProperties(structure->indexingType());
                 unsigned indexingHeaderSize = 0;
@@ -6969,7 +6970,7 @@ private:
                     indexingPayloadSizeInBytes =
                         m_out.mul(m_out.zeroExtPtr(vectorLength), m_out.intPtrEight);
                 }
-
+                
                 LValue butterflySize = m_out.add(
                     m_out.constIntPtr(
                         structure->outOfLineCapacity() * sizeof(JSValue) + indexingHeaderSize),
@@ -6980,16 +6981,19 @@ private:
                 
                 LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
                 
-                LValue endOfStorage = allocateBasicStorageAndGetEnd(butterflySize, slowPath);
+                LValue startOfStorage = allocateHeapCell(
+                    allocatorForSize(vm().heap.subspaceForAuxiliaryData(), butterflySize, slowPath),
+                    slowPath);
 
                 LValue fastButterflyValue = m_out.add(
-                    m_out.sub(endOfStorage, indexingPayloadSizeInBytes),
-                    m_out.constIntPtr(sizeof(IndexingHeader) - indexingHeaderSize));
+                    startOfStorage,
+                    m_out.constIntPtr(
+                        structure->outOfLineCapacity() * sizeof(JSValue) + sizeof(IndexingHeader)));
 
                 m_out.store32(vectorLength, fastButterflyValue, m_heaps.Butterfly_vectorLength);
                 
                 LValue fastObjectValue = allocateObject(
-                    m_out.constIntPtr(allocator), structure, fastButterflyValue, slowPath);
+                    m_out.constIntPtr(cellAllocator), structure, fastButterflyValue, slowPath);
 
                 ValueFromBlock fastObject = m_out.anchor(fastObjectValue);
                 ValueFromBlock fastButterfly = m_out.anchor(fastButterflyValue);
@@ -7778,10 +7782,11 @@ private:
 
     void initializeArrayElements(IndexingType indexingType, LValue vectorLength, LValue butterfly)
     {
-        if (!hasDouble(indexingType)) {
-            // The GC already initialized everything to JSValue() for us.
-            return;
-        }
+        LValue hole;
+        if (hasDouble(indexingType))
+            hole = m_out.constInt64(bitwise_cast<int64_t>(PNaN));
+        else
+            hole = m_out.constInt64(JSValue::encode(JSValue()));
 
-        // Doubles must be initialized to PNaN.
+        // Unused slots must be initialized: PNaN for double arrays, the empty JSValue otherwise.
         LBasicBlock initLoop = m_out.newBlock();
@@ -7796,9 +7801,7 @@ private:
         LValue index = m_out.phi(Int32, originalIndex);
         LValue pointer = m_out.phi(pointerType(), originalPointer);
         
-        m_out.store64(
-            m_out.constInt64(bitwise_cast<int64_t>(PNaN)),
-            TypedPointer(m_heaps.indexedDoubleProperties.atAnyIndex(), pointer));
+        m_out.store64(hole, TypedPointer(m_heaps.indexedDoubleProperties.atAnyIndex(), pointer));
         
         LValue nextIndex = m_out.sub(index, m_out.int32One);
         m_out.addIncomingToPhi(index, m_out.anchor(nextIndex));
@@ -7859,13 +7862,12 @@ private:
         LBasicBlock continuation = m_out.newBlock();
         
         LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
-        
-        LValue endOfStorage = allocateBasicStorageAndGetEnd(
-            m_out.constIntPtr(sizeInValues * sizeof(JSValue)), slowPath);
-        
+
+        size_t sizeInBytes = sizeInValues * sizeof(JSValue);
+        MarkedAllocator* allocator = vm().heap.allocatorForAuxiliaryData(sizeInBytes);
+        LValue startOfStorage = allocateHeapCell(m_out.constIntPtr(allocator), slowPath);
         ValueFromBlock fastButterfly = m_out.anchor(
-            m_out.add(m_out.constIntPtr(sizeof(IndexingHeader)), endOfStorage));
-        
+            m_out.add(m_out.constIntPtr(sizeInBytes + sizeof(IndexingHeader)), startOfStorage));
         m_out.jump(continuation);
         
         m_out.appendTo(slowPath, continuation);
@@ -8434,29 +8436,64 @@ private:
         setJSValue(patchpoint);
     }
 
-    LValue allocateCell(LValue allocator, LBasicBlock slowPath)
+    LValue allocateHeapCell(LValue allocator, LBasicBlock slowPath)
     {
-        LBasicBlock success = m_out.newBlock();
-    
-        LValue result;
-        LValue condition;
-        if (Options::forceGCSlowPaths()) {
-            result = m_out.intPtrZero;
-            condition = m_out.booleanFalse;
-        } else {
-            result = m_out.loadPtr(
-                allocator, m_heaps.MarkedAllocator_freeListHead);
-            condition = m_out.notNull(result);
+        MarkedAllocator* actualAllocator = nullptr;
+        if (allocator->hasIntPtr())
+            actualAllocator = bitwise_cast<MarkedAllocator*>(allocator->asIntPtr());
+        
+        if (!actualAllocator) {
+            // This means that either we know that the allocator is null or we don't know what the
+            // allocator is. In either case, we need the null check.
+            LBasicBlock haveAllocator = m_out.newBlock();
+            LBasicBlock lastNext = m_out.insertNewBlocksBefore(haveAllocator);
+            m_out.branch(allocator, usually(haveAllocator), rarely(slowPath));
+            m_out.appendTo(haveAllocator, lastNext);
         }
-        m_out.branch(condition, usually(success), rarely(slowPath));
         
-        m_out.appendTo(success);
+        LBasicBlock continuation = m_out.newBlock();
         
-        m_out.storePtr(
-            m_out.loadPtr(result, m_heaps.JSCell_freeListNext),
-            allocator, m_heaps.MarkedAllocator_freeListHead);
-
-        return result;
+        LBasicBlock lastNext = m_out.insertNewBlocksBefore(continuation);
+        
+        PatchpointValue* patchpoint = m_out.patchpoint(pointerType());
+        patchpoint->effects.terminal = true;
+        patchpoint->appendSomeRegister(allocator);
+        patchpoint->numGPScratchRegisters++;
+        patchpoint->resultConstraint = ValueRep::SomeEarlyRegister;
+        
+        m_out.appendSuccessor(usually(continuation));
+        m_out.appendSuccessor(rarely(slowPath));
+        
+        patchpoint->setGenerator(
+            [=] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+                CCallHelpers::JumpList jumpToSlowPath;
+                
+                // We use a patchpoint to emit the allocation path because whenever we mess with
+                // allocation paths, we already reason about them at the machine code level. We know
+                // exactly what instruction sequence we want. We're confident that no compiler
+                // optimization could make this code better. So, it's best to have the code in
+                // AssemblyHelpers::emitAllocate(). That way, the same optimized path is shared by
+                // all of the compiler tiers.
+                jit.emitAllocateWithNonNullAllocator(
+                    params[0].gpr(), actualAllocator, params[1].gpr(), params.gpScratch(0),
+                    jumpToSlowPath);
+                
+                CCallHelpers::Jump jumpToSuccess;
+                if (!params.fallsThroughToSuccessor(0))
+                    jumpToSuccess = jit.jump();
+                
+                Vector<Box<CCallHelpers::Label>> labels = params.successorLabels();
+                
+                params.addLatePath(
+                    [=] (CCallHelpers& jit) {
+                        jumpToSlowPath.linkTo(*labels[1], &jit);
+                        if (jumpToSuccess.isSet())
+                            jumpToSuccess.linkTo(*labels[0], &jit);
+                    });
+            });
+        
+        m_out.appendTo(continuation, lastNext);
+        return patchpoint;
     }
     
     void storeStructure(LValue object, Structure* structure)
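The patchpoint above replaces IR that open-coded the allocator fast path: the removed lines loaded MarkedAllocator_freeListHead, null-checked it, and stored JSCell_freeListNext back as the new head. In plain C++ terms, with illustrative types, that old fast path was the classic free-list pop sketched below; the patch instead hands allocation to AssemblyHelpers::emitAllocateWithNonNullAllocator so every tier shares one emitter.

// Illustrative types only; not JSC's declarations.
struct FreeCell { FreeCell* next; };
struct FreeListAllocator { FreeCell* head; };

// Returns nullptr when the free list is empty, which is where the compiled code
// branches to its slow path instead.
inline void* tryPopFreeList(FreeListAllocator& allocator)
{
    FreeCell* result = allocator.head;
    if (!result)
        return nullptr;
    allocator.head = result->next; // the next free cell becomes the new head
    return result;
}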
@@ -8469,7 +8506,7 @@ private:
 
     LValue allocateCell(LValue allocator, Structure* structure, LBasicBlock slowPath)
     {
-        LValue result = allocateCell(allocator, slowPath);
+        LValue result = allocateHeapCell(allocator, slowPath);
         storeStructure(result, structure);
         return result;
     }
@@ -8486,7 +8523,7 @@ private:
     LValue allocateObject(
         size_t size, Structure* structure, LValue butterfly, LBasicBlock slowPath)
     {
-        MarkedAllocator* allocator = &vm().heap.allocatorForObjectOfType<ClassType>(size);
+        MarkedAllocator* allocator = vm().heap.allocatorForObjectOfType<ClassType>(size);
         return allocateObject(m_out.constIntPtr(allocator), structure, butterfly, slowPath);
     }
     
@@ -8497,46 +8534,60 @@ private:
             ClassType::allocationSize(0), structure, butterfly, slowPath);
     }
     
-    template<typename ClassType>
-    LValue allocateVariableSizedObject(
-        LValue size, Structure* structure, LValue butterfly, LBasicBlock slowPath)
+    LValue allocatorForSize(LValue subspace, LValue size, LBasicBlock slowPath)
     {
-        static_assert(!(MarkedSpace::preciseStep & (MarkedSpace::preciseStep - 1)), "MarkedSpace::preciseStep must be a power of two.");
-        static_assert(!(MarkedSpace::impreciseStep & (MarkedSpace::impreciseStep - 1)), "MarkedSpace::impreciseStep must be a power of two.");
-
-        LValue subspace = m_out.constIntPtr(&vm().heap.subspaceForObjectOfType<ClassType>());
+        static_assert(!(MarkedSpace::sizeStep & (MarkedSpace::sizeStep - 1)), "MarkedSpace::sizeStep must be a power of two.");
+        
+        // Try to do some constant-folding here.
+        if (subspace->hasIntPtr() && size->hasIntPtr()) {
+            MarkedSpace::Subspace* actualSubspace = bitwise_cast<MarkedSpace::Subspace*>(subspace->asIntPtr());
+            size_t actualSize = size->asIntPtr();
+            
+            MarkedAllocator* actualAllocator = MarkedSpace::allocatorFor(*actualSubspace, actualSize);
+            if (!actualAllocator) {
+                LBasicBlock continuation = m_out.newBlock();
+                LBasicBlock lastNext = m_out.insertNewBlocksBefore(continuation);
+                m_out.jump(slowPath);
+                m_out.appendTo(continuation, lastNext);
+                return m_out.intPtrZero;
+            }
+            
+            return m_out.constIntPtr(actualAllocator);
+        }
+        
+        unsigned stepShift = getLSBSet(MarkedSpace::sizeStep);
         
-        LBasicBlock smallCaseBlock = m_out.newBlock();
-        LBasicBlock largeOrOversizeCaseBlock = m_out.newBlock();
-        LBasicBlock largeCaseBlock = m_out.newBlock();
         LBasicBlock continuation = m_out.newBlock();
         
-        LValue uproundedSize = m_out.add(size, m_out.constInt32(MarkedSpace::preciseStep - 1));
-        LValue isSmall = m_out.below(uproundedSize, m_out.constInt32(MarkedSpace::preciseCutoff));
-        m_out.branch(isSmall, unsure(smallCaseBlock), unsure(largeOrOversizeCaseBlock));
+        LBasicBlock lastNext = m_out.insertNewBlocksBefore(continuation);
         
-        LBasicBlock lastNext = m_out.appendTo(smallCaseBlock, largeOrOversizeCaseBlock);
-        TypedPointer address = m_out.baseIndex(
-            m_heaps.MarkedSpace_Subspace_preciseAllocators, subspace,
-            m_out.zeroExtPtr(m_out.lShr(uproundedSize, m_out.constInt32(getLSBSet(MarkedSpace::preciseStep)))));
-        ValueFromBlock smallAllocator = m_out.anchor(address.value());
-        m_out.jump(continuation);
+        LValue sizeClassIndex = m_out.lShr(
+            m_out.add(size, m_out.constIntPtr(MarkedSpace::sizeStep - 1)),
+            m_out.constInt32(stepShift));
         
-        m_out.appendTo(largeOrOversizeCaseBlock, largeCaseBlock);
         m_out.branch(
-            m_out.below(uproundedSize, m_out.constInt32(MarkedSpace::impreciseCutoff)),
-            usually(largeCaseBlock), rarely(slowPath));
-        
-        m_out.appendTo(largeCaseBlock, continuation);
-        address = m_out.baseIndex(
-            m_heaps.MarkedSpace_Subspace_impreciseAllocators, subspace,
-            m_out.zeroExtPtr(m_out.lShr(uproundedSize, m_out.constInt32(getLSBSet(MarkedSpace::impreciseStep)))));
-        ValueFromBlock largeAllocator = m_out.anchor(address.value());
-        m_out.jump(continuation);
+            m_out.above(sizeClassIndex, m_out.constIntPtr(MarkedSpace::largeCutoff >> stepShift)),
+            rarely(slowPath), usually(continuation));
         
         m_out.appendTo(continuation, lastNext);
-        LValue allocator = m_out.phi(pointerType(), smallAllocator, largeAllocator);
         
+        return m_out.loadPtr(
+            m_out.baseIndex(
+                m_heaps.MarkedSpace_Subspace_allocatorForSizeStep,
+                subspace, m_out.sub(sizeClassIndex, m_out.intPtrOne)));
+    }
+    
+    LValue allocatorForSize(MarkedSpace::Subspace& subspace, LValue size, LBasicBlock slowPath)
+    {
+        return allocatorForSize(m_out.constIntPtr(&subspace), size, slowPath);
+    }
+    
+    template<typename ClassType>
+    LValue allocateVariableSizedObject(
+        LValue size, Structure* structure, LValue butterfly, LBasicBlock slowPath)
+    {
+        LValue allocator = allocatorForSize(
+            vm().heap.subspaceForObjectOfType<ClassType>(), size, slowPath);
         return allocateObject(allocator, structure, butterfly, slowPath);
     }
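allocatorForSize boils down to a simple mapping from byte size to size-class index, constant-folded at IR-build time when both operands are known and otherwise computed in emitted code. A minimal sketch of that mapping, with assumed constants standing in for MarkedSpace::sizeStep and MarkedSpace::largeCutoff:

#include <cstddef>

constexpr size_t sizeStep = 16;      // assumption: MarkedSpace::sizeStep, a power of two
constexpr size_t largeCutoff = 1024; // assumption: MarkedSpace::largeCutoff

// Returns the 1-based size-class index for `bytes`, or 0 when the request is too large
// and must take the slow path. The emitted IR then loads
// Subspace::allocatorForSizeStep[index - 1].
inline size_t sizeClassIndexOrZero(size_t bytes)
{
    size_t index = (bytes + sizeStep - 1) / sizeStep; // round up to the next size step
    if (index > largeCutoff / sizeStep)
        return 0;
    return index;
}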
     
@@ -8569,7 +8620,11 @@ private:
     LValue allocateObject(Structure* structure)
     {
         size_t allocationSize = JSFinalObject::allocationSize(structure->inlineCapacity());
-        MarkedAllocator* allocator = &vm().heap.allocatorForObjectWithoutDestructor(allocationSize);
+        MarkedAllocator* allocator = vm().heap.allocatorForObjectWithoutDestructor(allocationSize);
+        
+        // FIXME: If the allocator is null, we could simply emit a normal C call to the allocator
+        // instead of putting it on the slow path.
+        // https://bugs.webkit.org/show_bug.cgi?id=161062
         
         LBasicBlock slowPath = m_out.newBlock();
         LBasicBlock continuation = m_out.newBlock();
@@ -8615,34 +8670,34 @@ private:
     ArrayValues allocateJSArray(
         Structure* structure, unsigned numElements, LBasicBlock slowPath)
     {
-        ASSERT(
+        DFG_ASSERT(
+            m_graph, m_node,
             hasUndecided(structure->indexingType())
             || hasInt32(structure->indexingType())
             || hasDouble(structure->indexingType())
             || hasContiguous(structure->indexingType()));
+        DFG_ASSERT(m_graph, m_node, !structure->outOfLineCapacity());
         
-        unsigned vectorLength = std::max(BASE_VECTOR_LEN, numElements);
+        unsigned vectorLength = Butterfly::optimalContiguousVectorLength(0lu, numElements);
         
-        LValue endOfStorage = allocateBasicStorageAndGetEnd(
-            m_out.constIntPtr(sizeof(JSValue) * vectorLength + sizeof(IndexingHeader)),
-            slowPath);
+        MarkedAllocator* allocator = vm().heap.allocatorForAuxiliaryData(
+            sizeof(JSValue) * vectorLength + sizeof(IndexingHeader));
+        LValue startOfStorage = allocateHeapCell(m_out.constIntPtr(allocator), slowPath);
         
-        LValue butterfly = m_out.sub(
-            endOfStorage, m_out.constIntPtr(sizeof(JSValue) * vectorLength));
+        LValue butterfly = m_out.add(startOfStorage, m_out.constIntPtr(sizeof(IndexingHeader)));
         
-        LValue object = allocateObject<JSArray>(
-            structure, butterfly, slowPath);
+        LValue object = allocateObject<JSArray>(structure, butterfly, slowPath);
         
         m_out.store32(m_out.constInt32(numElements), butterfly, m_heaps.Butterfly_publicLength);
         m_out.store32(m_out.constInt32(vectorLength), butterfly, m_heaps.Butterfly_vectorLength);
-        
-        if (hasDouble(structure->indexingType())) {
-            for (unsigned i = numElements; i < vectorLength; ++i) {
-                m_out.store64(
-                    m_out.constInt64(bitwise_cast<int64_t>(PNaN)),
-                    butterfly, m_heaps.indexedDoubleProperties[i]);
-            }
-        }
+
+        LValue hole;
+        if (hasDouble(structure->indexingType()))
+            hole = m_out.constInt64(bitwise_cast<int64_t>(PNaN));
+        else
+            hole = m_out.constInt64(JSValue::encode(JSValue()));
+        for (unsigned i = numElements; i < vectorLength; ++i)
+            m_out.store64(hole, butterfly, m_heaps.indexedDoubleProperties[i]);
         
         return ArrayValues(object, butterfly);
     }
index fea12dc..cab3050 100644
@@ -100,7 +100,9 @@ SlotBaseValue* Output::lockedStackSlot(size_t bytes)
 
 LValue Output::constBool(bool value)
 {
-    return m_block->appendNew<B3::Const32Value>(m_proc, origin(), value);
+    if (value)
+        return booleanTrue;
+    return booleanFalse;
 }
 
 LValue Output::constInt32(int32_t value)
@@ -125,6 +127,10 @@ LValue Output::phi(LType type)
 
 LValue Output::add(LValue left, LValue right)
 {
+    if (Value* result = left->addConstant(m_proc, right)) {
+        m_block->append(result);
+        return result;
+    }
     return m_block->appendNew<B3::Value>(m_proc, B3::Add, origin(), left, right);
 }
 
@@ -205,16 +211,28 @@ LValue Output::bitXor(LValue left, LValue right)
 
 LValue Output::shl(LValue left, LValue right)
 {
+    if (Value* result = left->shlConstant(m_proc, right)) {
+        m_block->append(result);
+        return result;
+    }
     return m_block->appendNew<B3::Value>(m_proc, B3::Shl, origin(), left, castToInt32(right));
 }
 
 LValue Output::aShr(LValue left, LValue right)
 {
+    if (Value* result = left->sShrConstant(m_proc, right)) {
+        m_block->append(result);
+        return result;
+    }
     return m_block->appendNew<B3::Value>(m_proc, B3::SShr, origin(), left, castToInt32(right));
 }
 
 LValue Output::lShr(LValue left, LValue right)
 {
+    if (Value* result = left->zShrConstant(m_proc, right)) {
+        m_block->append(result);
+        return result;
+    }
     return m_block->appendNew<B3::Value>(m_proc, B3::ZShr, origin(), left, castToInt32(right));
 }
 
@@ -343,6 +361,8 @@ LValue Output::zeroExt(LValue value, LType type)
 {
     if (value->type() == type)
         return value;
+    if (value->hasInt32())
+        return m_block->appendIntConstant(m_proc, origin(), Int64, static_cast<uint64_t>(static_cast<uint32_t>(value->asInt32())));
     return m_block->appendNew<B3::Value>(m_proc, B3::ZExt32, origin(), value);
 }
 
@@ -453,51 +473,81 @@ LValue Output::baseIndex(LValue base, LValue index, Scale scale, ptrdiff_t offse
 
 LValue Output::equal(LValue left, LValue right)
 {
+    TriState result = left->equalConstant(right);
+    if (result != MixedTriState)
+        return constBool(result == TrueTriState);
     return m_block->appendNew<B3::Value>(m_proc, B3::Equal, origin(), left, right);
 }
 
 LValue Output::notEqual(LValue left, LValue right)
 {
+    TriState result = left->notEqualConstant(right);
+    if (result != MixedTriState)
+        return constBool(result == TrueTriState);
     return m_block->appendNew<B3::Value>(m_proc, B3::NotEqual, origin(), left, right);
 }
 
 LValue Output::above(LValue left, LValue right)
 {
+    TriState result = left->aboveConstant(right);
+    if (result != MixedTriState)
+        return constBool(result == TrueTriState);
     return m_block->appendNew<B3::Value>(m_proc, B3::Above, origin(), left, right);
 }
 
 LValue Output::aboveOrEqual(LValue left, LValue right)
 {
+    TriState result = left->aboveEqualConstant(right);
+    if (result != MixedTriState)
+        return constBool(result == TrueTriState);
     return m_block->appendNew<B3::Value>(m_proc, B3::AboveEqual, origin(), left, right);
 }
 
 LValue Output::below(LValue left, LValue right)
 {
+    TriState result = left->belowConstant(right);
+    if (result != MixedTriState)
+        return constBool(result == TrueTriState);
     return m_block->appendNew<B3::Value>(m_proc, B3::Below, origin(), left, right);
 }
 
 LValue Output::belowOrEqual(LValue left, LValue right)
 {
+    TriState result = left->belowEqualConstant(right);
+    if (result != MixedTriState)
+        return constBool(result == TrueTriState);
     return m_block->appendNew<B3::Value>(m_proc, B3::BelowEqual, origin(), left, right);
 }
 
 LValue Output::greaterThan(LValue left, LValue right)
 {
+    TriState result = left->greaterThanConstant(right);
+    if (result != MixedTriState)
+        return constBool(result == TrueTriState);
     return m_block->appendNew<B3::Value>(m_proc, B3::GreaterThan, origin(), left, right);
 }
 
 LValue Output::greaterThanOrEqual(LValue left, LValue right)
 {
+    TriState result = left->greaterEqualConstant(right);
+    if (result != MixedTriState)
+        return constBool(result == TrueTriState);
     return m_block->appendNew<B3::Value>(m_proc, B3::GreaterEqual, origin(), left, right);
 }
 
 LValue Output::lessThan(LValue left, LValue right)
 {
+    TriState result = left->lessThanConstant(right);
+    if (result != MixedTriState)
+        return constBool(result == TrueTriState);
     return m_block->appendNew<B3::Value>(m_proc, B3::LessThan, origin(), left, right);
 }
 
 LValue Output::lessThanOrEqual(LValue left, LValue right)
 {
+    TriState result = left->lessEqualConstant(right);
+    if (result != MixedTriState)
+        return constBool(result == TrueTriState);
     return m_block->appendNew<B3::Value>(m_proc, B3::LessEqual, origin(), left, right);
 }
 
@@ -583,6 +633,12 @@ LValue Output::notZero64(LValue value)
 
 LValue Output::select(LValue value, LValue taken, LValue notTaken)
 {
+    if (value->hasInt32()) {
+        if (value->asInt32())
+            return taken;
+        else
+            return notTaken;
+    }
     return m_block->appendNew<B3::Value>(m_proc, B3::Select, origin(), value, taken, notTaken);
 }
 
@@ -621,6 +677,11 @@ void Output::unreachable()
     m_block->appendNewControlValue(m_proc, B3::Oops, origin());
 }
 
+void Output::appendSuccessor(WeightedTarget target)
+{
+    m_block->appendSuccessor(target.frequentedBlock());
+}
+
 CheckValue* Output::speculate(LValue value)
 {
     return m_block->appendNew<B3::CheckValue>(m_proc, B3::Check, origin(), value);
@@ -741,7 +802,8 @@ void Output::decrementSuperSamplerCount()
 
 void Output::addIncomingToPhi(LValue phi, ValueFromBlock value)
 {
-    value.value()->as<B3::UpsilonValue>()->setPhi(phi);
+    if (value)
+        value.value()->as<B3::UpsilonValue>()->setPhi(phi);
 }
 
 } } // namespace JSC::FTL
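Most of the Output changes in this file follow one pattern: when the operands are already compile-time constants, fold the operation while building the IR instead of emitting a node, presumably so the constant size and offset arithmetic in the new allocation paths can collapse at build time. A schematic of that fold-or-emit shape, using stand-in types rather than B3's:

#include <cstdint>
#include <optional>

// If both operands are known constants, decide the (unsigned) comparison now; otherwise
// signal the caller to emit the IR node as before.
inline std::optional<bool> foldBelow(std::optional<uint64_t> left, std::optional<uint64_t> right)
{
    if (left && right)
        return *left < *right;
    return std::nullopt;
}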
index 39d3edf..d0c867b 100644
@@ -398,6 +398,8 @@ public:
     void ret(LValue);
 
     void unreachable();
+    
+    void appendSuccessor(WeightedTarget);
 
     B3::CheckValue* speculate(LValue);
     B3::CheckValue* speculateAdd(LValue, LValue);
index 31246fd..bf39dd4 100644
@@ -45,6 +45,8 @@ public:
         , m_block(block)
     {
     }
+    
+    explicit operator bool() const { return m_value || m_block; }
 
     LValue value() const { return m_value; }
     LBasicBlock block() const { return m_block; }
index dab5a7e..a57fffe 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -55,6 +55,11 @@ public:
     LBasicBlock target() const { return m_target; }
     Weight weight() const { return m_weight; }
     
+    B3::FrequentedBlock frequentedBlock() const
+    {
+        return B3::FrequentedBlock(target(), weight().frequencyClass());
+    }
+    
 private:
     LBasicBlock m_target;
     Weight m_weight;
diff --git a/Source/JavaScriptCore/heap/CellContainer.h b/Source/JavaScriptCore/heap/CellContainer.h
new file mode 100644
index 0000000..f913a28
--- /dev/null
@@ -0,0 +1,90 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include <wtf/StdLibExtras.h>
+
+namespace JSC {
+
+class HeapCell;
+class LargeAllocation;
+class MarkedBlock;
+class WeakSet;
+
+// This is how we abstract over either MarkedBlock& or LargeAllocation&. Put things in here as you
+// find need for them.
+
+class CellContainer {
+public:
+    CellContainer()
+        : m_encodedPointer(0)
+    {
+    }
+    
+    CellContainer(MarkedBlock& markedBlock)
+        : m_encodedPointer(bitwise_cast<uintptr_t>(&markedBlock))
+    {
+    }
+    
+    CellContainer(LargeAllocation& largeAllocation)
+        : m_encodedPointer(bitwise_cast<uintptr_t>(&largeAllocation) | isLargeAllocationBit)
+    {
+    }
+    
+    explicit operator bool() const { return !!m_encodedPointer; }
+    
+    bool isMarkedBlock() const { return m_encodedPointer && !(m_encodedPointer & isLargeAllocationBit); }
+    bool isLargeAllocation() const { return m_encodedPointer & isLargeAllocationBit; }
+    
+    MarkedBlock& markedBlock() const
+    {
+        ASSERT(isMarkedBlock());
+        return *bitwise_cast<MarkedBlock*>(m_encodedPointer);
+    }
+    
+    LargeAllocation& largeAllocation() const
+    {
+        ASSERT(isLargeAllocation());
+        return *bitwise_cast<LargeAllocation*>(m_encodedPointer - isLargeAllocationBit);
+    }
+    
+    bool isMarkedOrRetired() const;
+    bool isMarked(HeapCell*) const;
+    bool isMarkedOrNewlyAllocated(HeapCell*) const;
+    
+    void setHasAnyMarked();
+    
+    size_t cellSize() const;
+    
+    WeakSet& weakSet() const;
+    
+private:
+    static const uintptr_t isLargeAllocationBit = 1;
+    uintptr_t m_encodedPointer;
+};
+
+} // namespace JSC
+
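A short usage sketch (illustrative only, not from this patch) of the dispatch CellContainer enables, using the members declared above:

#include "CellContainerInlines.h" // provides the inline definitions, added below

size_t liveCellSize(JSC::CellContainer container, JSC::HeapCell* cell)
{
    if (!container || !container.isMarked(cell))
        return 0;                // empty container, or the cell is not marked
    return container.cellSize(); // forwards to MarkedBlock or LargeAllocation
}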
diff --git a/Source/JavaScriptCore/heap/CellContainerInlines.h b/Source/JavaScriptCore/heap/CellContainerInlines.h
new file mode 100644
index 0000000..79392fa
--- /dev/null
@@ -0,0 +1,77 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "CellContainer.h"
+#include "JSCell.h"
+#include "LargeAllocation.h"
+#include "MarkedBlock.h"
+
+namespace JSC {
+
+inline bool CellContainer::isMarkedOrRetired() const
+{
+    if (isLargeAllocation())
+        return true;
+    return markedBlock().isMarkedOrRetired();
+}
+
+inline bool CellContainer::isMarked(HeapCell* cell) const
+{
+    if (isLargeAllocation())
+        return largeAllocation().isMarked();
+    return markedBlock().isMarked(cell);
+}
+
+inline bool CellContainer::isMarkedOrNewlyAllocated(HeapCell* cell) const
+{
+    if (isLargeAllocation())
+        return largeAllocation().isMarkedOrNewlyAllocated();
+    return markedBlock().isMarkedOrNewlyAllocated(cell);
+}
+
+inline void CellContainer::setHasAnyMarked()
+{
+    if (!isLargeAllocation())
+        markedBlock().setHasAnyMarked();
+}
+
+inline size_t CellContainer::cellSize() const
+{
+    if (isLargeAllocation())
+        return largeAllocation().cellSize();
+    return markedBlock().cellSize();
+}
+
+inline WeakSet& CellContainer::weakSet() const
+{
+    if (isLargeAllocation())
+        return largeAllocation().weakSet();
+    return markedBlock().weakSet();
+}
+
+} // namespace JSC
+
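These inlines let callers ask about marking state, cell size, or weak sets without knowing whether a cell sits in a MarkedBlock or a LargeAllocation. A hedged usage sketch; the cell pointer is a placeholder, and cellContainer() is the accessor declared later in this patch on HeapCell:

    // Sketch only: works uniformly for block-backed and large-allocation cells.
    bool isLiveForMarkingExample(JSC::HeapCell* cell)
    {
        JSC::CellContainer container = cell->cellContainer();
        return container.isMarkedOrNewlyAllocated(cell);
    }
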
index e1e1200..87708ac 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -31,6 +31,8 @@
 #include "CopiedSpace.h"
 #include "CopiedSpaceInlines.h"
 #include "HeapInlines.h"
+#include "HeapUtil.h"
+#include "JITStubRoutineSet.h"
 #include "JSCell.h"
 #include "JSObject.h"
 #include "JSCInlines.h"
 
 namespace JSC {
 
-ConservativeRoots::ConservativeRoots(MarkedBlockSet* blocks, CopiedSpace* copiedSpace)
+ConservativeRoots::ConservativeRoots(Heap& heap)
     : m_roots(m_inlineRoots)
     , m_size(0)
     , m_capacity(inlineCapacity)
-    , m_blocks(blocks)
-    , m_copiedSpace(copiedSpace)
+    , m_heap(heap)
 {
 }
 
 ConservativeRoots::~ConservativeRoots()
 {
     if (m_roots != m_inlineRoots)
-        OSAllocator::decommitAndRelease(m_roots, m_capacity * sizeof(JSCell*));
+        OSAllocator::decommitAndRelease(m_roots, m_capacity * sizeof(HeapCell*));
 }
 
 void ConservativeRoots::grow()
 {
     size_t newCapacity = m_capacity == inlineCapacity ? nonInlineCapacity : m_capacity * 2;
-    JSCell** newRoots = static_cast<JSCell**>(OSAllocator::reserveAndCommit(newCapacity * sizeof(JSCell*)));
-    memcpy(newRoots, m_roots, m_size * sizeof(JSCell*));
+    HeapCell** newRoots = static_cast<HeapCell**>(OSAllocator::reserveAndCommit(newCapacity * sizeof(HeapCell*)));
+    memcpy(newRoots, m_roots, m_size * sizeof(HeapCell*));
     if (m_roots != m_inlineRoots)
-        OSAllocator::decommitAndRelease(m_roots, m_capacity * sizeof(JSCell*));
+        OSAllocator::decommitAndRelease(m_roots, m_capacity * sizeof(HeapCell*));
     m_capacity = newCapacity;
     m_roots = newRoots;
 }
@@ -70,15 +71,16 @@ inline void ConservativeRoots::genericAddPointer(void* p, TinyBloomFilter filter
 {
     markHook.mark(p);
 
-    m_copiedSpace->pinIfNecessary(p);
+    m_heap.storageSpace().pinIfNecessary(p);
 
-    if (!Heap::isPointerGCObject(filter, *m_blocks, p))
-        return;
-
-    if (m_size == m_capacity)
-        grow();
-
-    m_roots[m_size++] = static_cast<JSCell*>(p);
+    HeapUtil::findGCObjectPointersForMarking(
+        m_heap, filter, p,
+        [&] (void* p) {
+            if (m_size == m_capacity)
+                grow();
+            
+            m_roots[m_size++] = bitwise_cast<HeapCell*>(p);
+        });
 }
 
 template<typename MarkHook>
@@ -94,7 +96,7 @@ void ConservativeRoots::genericAddSpan(void* begin, void* end, MarkHook& markHoo
     RELEASE_ASSERT(isPointerAligned(begin));
     RELEASE_ASSERT(isPointerAligned(end));
 
-    TinyBloomFilter filter = m_blocks->filter(); // Make a local copy of filter to show the compiler it won't alias, and can be register-allocated.
+    TinyBloomFilter filter = m_heap.objectSpace().blocks().filter(); // Make a local copy of filter to show the compiler it won't alias, and can be register-allocated.
     for (char** it = static_cast<char**>(begin); it != static_cast<char**>(end); ++it)
         genericAddPointer(*it, filter, markHook);
 }
index 6343548..0e516e3 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2009 Apple Inc. All rights reserved.
+ * Copyright (C) 2009, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 namespace JSC {
 
 class CodeBlockSet;
+class HeapCell;
 class JITStubRoutineSet;
-class JSCell;
 
 class ConservativeRoots {
 public:
-    ConservativeRoots(MarkedBlockSet*, CopiedSpace*);
+    ConservativeRoots(Heap&);
     ~ConservativeRoots();
 
     void add(void* begin, void* end);
@@ -44,11 +44,11 @@ public:
     void add(void* begin, void* end, JITStubRoutineSet&, CodeBlockSet&);
     
     size_t size();
-    JSCell** roots();
+    HeapCell** roots();
 
 private:
     static const size_t inlineCapacity = 128;
-    static const size_t nonInlineCapacity = 8192 / sizeof(JSCell*);
+    static const size_t nonInlineCapacity = 8192 / sizeof(HeapCell*);
     
     template<typename MarkHook>
     void genericAddPointer(void*, TinyBloomFilter, MarkHook&);
@@ -58,12 +58,11 @@ private:
     
     void grow();
 
-    JSCell** m_roots;
+    HeapCell** m_roots;
     size_t m_size;
     size_t m_capacity;
-    MarkedBlockSet* m_blocks;
-    CopiedSpace* m_copiedSpace;
-    JSCell* m_inlineRoots[inlineCapacity];
+    Heap& m_heap;
+    HeapCell* m_inlineRoots[inlineCapacity];
 };
 
 inline size_t ConservativeRoots::size()
@@ -71,7 +70,7 @@ inline size_t ConservativeRoots::size()
     return m_size;
 }
 
-inline JSCell** ConservativeRoots::roots()
+inline HeapCell** ConservativeRoots::roots()
 {
     return m_roots;
 }
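After this change the whole ConservativeRoots surface is: construct from a Heap, feed it pointer-aligned spans, then walk roots()/size(), which now hand back HeapCell* rather than JSCell*. A hedged usage sketch; the heap reference and span bounds are placeholders standing in for what markRoots() passes in:

    // Sketch only; the real callers are Heap::gatherStackRoots() and friends.
    void gatherRootsExample(JSC::Heap& heap, void* spanBegin, void* spanEnd)
    {
        JSC::ConservativeRoots roots(heap);   // previously took (MarkedBlockSet*, CopiedSpace*)
        roots.add(spanBegin, spanEnd);        // conservatively scans every pointer-sized word
        for (size_t i = 0; i < roots.size(); ++i) {
            JSC::HeapCell* candidate = roots.roots()[i];
            (void)candidate;                  // visit/mark the candidate here
        }
    }
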
index e8f8109..927b24a 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2015-2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -29,7 +29,6 @@
 namespace JSC {
 
 enum CopyToken {
-    ButterflyCopyToken,
     TypedArrayVectorCopyToken,
     MapBackingStoreCopyToken,
     DirectArgumentsOverridesCopyToken
diff --git a/Source/JavaScriptCore/heap/FreeList.cpp b/Source/JavaScriptCore/heap/FreeList.cpp
new file mode 100644 (file)
index 0000000..43bc7ae
--- /dev/null
@@ -0,0 +1,37 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "FreeList.h"
+
+namespace JSC {
+
+void FreeList::dump(PrintStream& out) const
+{
+    out.print("{head = ", RawPointer(head), ", payloadEnd = ", RawPointer(payloadEnd), ", remaining = ", remaining, ", originalSize = ", originalSize, "}");
+}
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/heap/FreeList.h b/Source/JavaScriptCore/heap/FreeList.h
new file mode 100644 (file)
index 0000000..842caa6
--- /dev/null
@@ -0,0 +1,91 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include <wtf/PrintStream.h>
+
+namespace JSC {
+
+struct FreeCell {
+    FreeCell* next;
+};
+        
+// This representation of a FreeList is convenient for the MarkedAllocator.
+
+struct FreeList {
+    FreeCell* head { nullptr };
+    char* payloadEnd { nullptr };
+    unsigned remaining { 0 };
+    unsigned originalSize { 0 };
+    
+    FreeList()
+    {
+    }
+    
+    static FreeList list(FreeCell* head, unsigned bytes)
+    {
+        FreeList result;
+        result.head = head;
+        result.remaining = 0;
+        result.originalSize = bytes;
+        return result;
+    }
+    
+    static FreeList bump(char* payloadEnd, unsigned remaining)
+    {
+        FreeList result;
+        result.payloadEnd = payloadEnd;
+        result.remaining = remaining;
+        result.originalSize = remaining;
+        return result;
+    }
+    
+    bool operator==(const FreeList& other) const
+    {
+        return head == other.head
+            && payloadEnd == other.payloadEnd
+            && remaining == other.remaining
+            && originalSize == other.originalSize;
+    }
+    
+    bool operator!=(const FreeList& other) const
+    {
+        return !(*this == other);
+    }
+    
+    explicit operator bool() const
+    {
+        return *this != FreeList();
+    }
+    
+    bool allocationWillFail() const { return !head && !remaining; }
+    bool allocationWillSucceed() const { return !allocationWillFail(); }
+    
+    void dump(PrintStream&) const;
+};
+
+} // namespace JSC
+
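A FreeList is either a bump range (payloadEnd/remaining) or an intrusive list of FreeCells hanging off head; allocationWillFail() is true only when both are exhausted. The consuming logic lives in MarkedAllocator, not here, so the following is only a sketch of how such a consumer might pop cells, under the assumptions that remaining stays a multiple of cellSize and that bump allocation walks up toward payloadEnd:

    // Sketch of a FreeList consumer; cellSize is supplied by the allocator.
    void* allocateFromFreeListExample(JSC::FreeList& freeList, unsigned cellSize)
    {
        if (freeList.remaining) {
            // Bump form: carve the next cell out of [payloadEnd - remaining, payloadEnd).
            unsigned remaining = freeList.remaining - cellSize;
            freeList.remaining = remaining;
            return freeList.payloadEnd - remaining - cellSize;
        }
        // List form: pop the head of the intrusive free list.
        JSC::FreeCell* cell = freeList.head;
        if (!cell)
            return nullptr; // allocationWillFail() is true here; take the slow path.
        freeList.head = cell->next;
        return cell;
    }
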
index 09611aa..3a736ea 100644 (file)
@@ -40,6 +40,7 @@
 #include "HeapVerifier.h"
 #include "IncrementalSweeper.h"
 #include "Interpreter.h"
+#include "JITStubRoutineSet.h"
 #include "JITWorklist.h"
 #include "JSCInlines.h"
 #include "JSGlobalObject.h"
@@ -47,6 +48,7 @@
 #include "JSVirtualMachineInternal.h"
 #include "SamplingProfiler.h"
 #include "ShadowChicken.h"
+#include "SuperSampler.h"
 #include "TypeProfilerLog.h"
 #include "UnlinkedCodeBlock.h"
 #include "VM.h"
@@ -74,159 +76,16 @@ namespace JSC {
 namespace {
 
 static const size_t largeHeapSize = 32 * MB; // About 1.5X the average webpage.
-static const size_t smallHeapSize = 1 * MB; // Matches the FastMalloc per-thread cache.
-
-#define ENABLE_GC_LOGGING 0
-
-#if ENABLE(GC_LOGGING)
-#if COMPILER(CLANG)
-#define DEFINE_GC_LOGGING_GLOBAL(type, name, arguments) \
-_Pragma("clang diagnostic push") \
-_Pragma("clang diagnostic ignored \"-Wglobal-constructors\"") \
-_Pragma("clang diagnostic ignored \"-Wexit-time-destructors\"") \
-static type name arguments; \
-_Pragma("clang diagnostic pop")
-#else
-#define DEFINE_GC_LOGGING_GLOBAL(type, name, arguments) \
-static type name arguments;
-#endif // COMPILER(CLANG)
-
-struct GCTimer {
-    GCTimer(const char* name)
-        : name(name)
-    {
-    }
-    ~GCTimer()
-    {
-        logData(allCollectionData, "(All)");
-        logData(edenCollectionData, "(Eden)");
-        logData(fullCollectionData, "(Full)");
-    }
-
-    struct TimeRecord {
-        TimeRecord()
-            : time(0)
-            , min(std::numeric_limits<double>::infinity())
-            , max(0)
-            , count(0)
-        {
-        }
-
-        double time;
-        double min;
-        double max;
-        size_t count;
-    };
-
-    void logData(const TimeRecord& data, const char* extra)
-    {
-        dataLogF("[%d] %s (Parent: %s) %s: %.2lfms (avg. %.2lf, min. %.2lf, max. %.2lf, count %lu)\n", 
-            getCurrentProcessID(),
-            name,
-            parent ? parent->name : "nullptr",
-            extra, 
-            data.time * 1000, 
-            data.time * 1000 / data.count, 
-            data.min * 1000, 
-            data.max * 1000,
-            data.count);
-    }
+const size_t smallHeapSize = 1 * MB; // Matches the FastMalloc per-thread cache.
 
-    void updateData(TimeRecord& data, double duration)
-    {
-        if (duration < data.min)
-            data.min = duration;
-        if (duration > data.max)
-            data.max = duration;
-        data.count++;
-        data.time += duration;
-    }
-
-    void didFinishPhase(HeapOperation collectionType, double duration)
-    {
-        TimeRecord& data = collectionType == EdenCollection ? edenCollectionData : fullCollectionData;
-        updateData(data, duration);
-        updateData(allCollectionData, duration);
-    }
-
-    static GCTimer* s_currentGlobalTimer;
-
-    TimeRecord allCollectionData;
-    TimeRecord fullCollectionData;
-    TimeRecord edenCollectionData;
-    const char* name;
-    GCTimer* parent { nullptr };
-};
-
-GCTimer* GCTimer::s_currentGlobalTimer = nullptr;
-
-struct GCTimerScope {
-    GCTimerScope(GCTimer& timer, HeapOperation collectionType)
-        : timer(timer)
-        , start(WTF::monotonicallyIncreasingTime())
-        , collectionType(collectionType)
-    {
-        timer.parent = GCTimer::s_currentGlobalTimer;
-        GCTimer::s_currentGlobalTimer = &timer;
-    }
-    ~GCTimerScope()
-    {
-        double delta = WTF::monotonicallyIncreasingTime() - start;
-        timer.didFinishPhase(collectionType, delta);
-        GCTimer::s_currentGlobalTimer = timer.parent;
-    }
-    GCTimer& timer;
-    double start;
-    HeapOperation collectionType;
-};
-
-struct GCCounter {
-    GCCounter(const char* name)
-        : name(name)
-        , count(0)
-        , total(0)
-        , min(10000000)
-        , max(0)
-    {
-    }
-    
-    void add(size_t amount)
-    {
-        count++;
-        total += amount;
-        if (amount < min)
-            min = amount;
-        if (amount > max)
-            max = amount;
-    }
-    ~GCCounter()
-    {
-        dataLogF("[%d] %s: %zu values (avg. %zu, min. %zu, max. %zu)\n", getCurrentProcessID(), name, total, total / count, min, max);
-    }
-    const char* name;
-    size_t count;
-    size_t total;
-    size_t min;
-    size_t max;
-};
-
-#define GCPHASE(name) DEFINE_GC_LOGGING_GLOBAL(GCTimer, name##Timer, (#name)); GCTimerScope name##TimerScope(name##Timer, m_operationInProgress)
-#define GCCOUNTER(name, value) do { DEFINE_GC_LOGGING_GLOBAL(GCCounter, name##Counter, (#name)); name##Counter.add(value); } while (false)
-    
-#else
-
-#define GCPHASE(name) do { } while (false)
-#define GCCOUNTER(name, value) do { } while (false)
-#endif
-
-static inline size_t minHeapSize(HeapType heapType, size_t ramSize)
+size_t minHeapSize(HeapType heapType, size_t ramSize)
 {
     if (heapType == LargeHeap)
         return min(largeHeapSize, ramSize / 4);
     return smallHeapSize;
 }
 
-static inline size_t proportionalHeapSize(size_t heapSize, size_t ramSize)
+size_t proportionalHeapSize(size_t heapSize, size_t ramSize)
 {
     // Try to stay under 1/2 RAM size to leave room for the DOM, rendering, networking, etc.
     if (heapSize < ramSize / 4)
@@ -236,12 +95,12 @@ static inline size_t proportionalHeapSize(size_t heapSize, size_t ramSize)
     return 1.25 * heapSize;
 }
 
-static inline bool isValidSharedInstanceThreadState(VM* vm)
+bool isValidSharedInstanceThreadState(VM* vm)
 {
     return vm->currentThreadIsHoldingAPILock();
 }
 
-static inline bool isValidThreadState(VM* vm)
+bool isValidThreadState(VM* vm)
 {
     if (vm->atomicStringTable() != wtfThreadData().atomicStringTable())
         return false;
@@ -252,7 +111,7 @@ static inline bool isValidThreadState(VM* vm)
     return true;
 }
 
-static inline void recordType(TypeCountSet& set, JSCell* cell)
+void recordType(TypeCountSet& set, JSCell* cell)
 {
     const char* typeName = "[unknown]";
     const ClassInfo* info = cell->classInfo();
@@ -261,6 +120,32 @@ static inline void recordType(TypeCountSet& set, JSCell* cell)
     set.add(typeName);
 }
 
+bool measurePhaseTiming()
+{
+    return false;
+}
+
+class TimingScope {
+public:
+    TimingScope(const char* name)
+        : m_name(name)
+    {
+        if (measurePhaseTiming())
+            m_before = monotonicallyIncreasingTimeMS();
+    }
+    
+    ~TimingScope()
+    {
+        if (measurePhaseTiming()) {
+            double after = monotonicallyIncreasingTimeMS();
+            dataLog("[GC] ", m_name, " took: ", after - m_before, " ms.\n");
+        }
+    }
+private:
+    double m_before;
+    const char* m_name;
+};
+
 } // anonymous namespace
 
 Heap::Heap(VM* vm, HeapType heapType)
@@ -287,6 +172,8 @@ Heap::Heap(VM* vm, HeapType heapType)
     , m_machineThreads(this)
     , m_slotVisitor(*this)
     , m_handleSet(vm)
+    , m_codeBlocks(std::make_unique<CodeBlockSet>())
+    , m_jitStubRoutines(std::make_unique<JITStubRoutineSet>())
     , m_isSafeToCollect(false)
     , m_writeBarrierBuffer(256)
     , m_vm(vm)
@@ -331,7 +218,7 @@ void Heap::lastChanceToFinalize()
     RELEASE_ASSERT(m_operationInProgress == NoOperation);
 
     m_arrayBuffers.lastChanceToFinalize();
-    m_codeBlocks.lastChanceToFinalize();
+    m_codeBlocks->lastChanceToFinalize();
     m_objectSpace.lastChanceToFinalize();
     releaseDelayedReleasedObjects();
 
@@ -434,7 +321,6 @@ void Heap::harvestWeakReferences()
 
 void Heap::finalizeUnconditionalFinalizers()
 {
-    GCPHASE(FinalizeUnconditionalFinalizers);
     m_slotVisitor.finalizeUnconditionalFinalizers();
 }
 
@@ -460,15 +346,19 @@ void Heap::completeAllJITPlans()
 
 void Heap::markRoots(double gcStartTime, void* stackOrigin, void* stackTop, MachineThreads::RegisterState& calleeSavedRegisters)
 {
-    GCPHASE(MarkRoots);
+    TimingScope markRootsTimingScope("Heap::markRoots");
+    
     ASSERT(isValidThreadState(m_vm));
 
     // We gather conservative roots before clearing mark bits because conservative
     // gathering uses the mark bits to determine whether a reference is valid.
-    ConservativeRoots conservativeRoots(&m_objectSpace.blocks(), &m_storageSpace);
-    gatherStackRoots(conservativeRoots, stackOrigin, stackTop, calleeSavedRegisters);
-    gatherJSStackRoots(conservativeRoots);
-    gatherScratchBufferRoots(conservativeRoots);
+    ConservativeRoots conservativeRoots(*this);
+    {
+        SuperSamplerScope superSamplerScope(false);
+        gatherStackRoots(conservativeRoots, stackOrigin, stackTop, calleeSavedRegisters);
+        gatherJSStackRoots(conservativeRoots);
+        gatherScratchBufferRoots(conservativeRoots);
+    }
 
 #if ENABLE(DFG_JIT)
     DFG::rememberCodeBlocks(*m_vm);
@@ -528,6 +418,7 @@ void Heap::markRoots(double gcStartTime, void* stackOrigin, void* stackTop, Mach
     HeapRootVisitor heapRootVisitor(m_slotVisitor);
 
     {
+        SuperSamplerScope superSamplerScope(false);
         ParallelModeEnabler enabler(m_slotVisitor);
 
         m_slotVisitor.donateAndDrain();
@@ -561,7 +452,7 @@ void Heap::markRoots(double gcStartTime, void* stackOrigin, void* stackTop, Mach
 
 void Heap::copyBackingStores()
 {
-    GCPHASE(CopyBackingStores);
+    SuperSamplerScope superSamplerScope(true);
     if (m_operationInProgress == EdenCollection)
         m_storageSpace.startedCopying<EdenCollection>();
     else {
@@ -598,12 +489,6 @@ void Heap::copyBackingStores()
                         
                         CopyWorkList& workList = block->workList();
                         for (CopyWorklistItem item : workList) {
-                            if (item.token() == ButterflyCopyToken) {
-                                JSObject::copyBackingStore(
-                                    item.cell(), copyVisitor, ButterflyCopyToken);
-                                continue;
-                            }
-                            
                             item.cell()->methodTable()->copyBackingStore(
                                 item.cell(), copyVisitor, item.token());
                         }
@@ -619,16 +504,14 @@ void Heap::copyBackingStores()
 
 void Heap::gatherStackRoots(ConservativeRoots& roots, void* stackOrigin, void* stackTop, MachineThreads::RegisterState& calleeSavedRegisters)
 {
-    GCPHASE(GatherStackRoots);
-    m_jitStubRoutines.clearMarks();
-    m_machineThreads.gatherConservativeRoots(roots, m_jitStubRoutines, m_codeBlocks, stackOrigin, stackTop, calleeSavedRegisters);
+    m_jitStubRoutines->clearMarks();
+    m_machineThreads.gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks, stackOrigin, stackTop, calleeSavedRegisters);
 }
 
 void Heap::gatherJSStackRoots(ConservativeRoots& roots)
 {
 #if !ENABLE(JIT)
-    GCPHASE(GatherJSStackRoots);
-    m_vm->interpreter->cloopStack().gatherConservativeRoots(roots, m_jitStubRoutines, m_codeBlocks);
+    m_vm->interpreter->cloopStack().gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks);
 #else
     UNUSED_PARAM(roots);
 #endif
@@ -637,7 +520,6 @@ void Heap::gatherJSStackRoots(ConservativeRoots& roots)
 void Heap::gatherScratchBufferRoots(ConservativeRoots& roots)
 {
 #if ENABLE(DFG_JIT)
-    GCPHASE(GatherScratchBufferRoots);
     m_vm->gatherConservativeRoots(roots);
 #else
     UNUSED_PARAM(roots);
@@ -646,9 +528,8 @@ void Heap::gatherScratchBufferRoots(ConservativeRoots& roots)
 
 void Heap::clearLivenessData()
 {
-    GCPHASE(ClearLivenessData);
     if (m_operationInProgress == FullCollection)
-        m_codeBlocks.clearMarksForFullCollection();
+        m_codeBlocks->clearMarksForFullCollection();
 
     m_objectSpace.clearNewlyAllocated();
     m_objectSpace.clearMarks();
@@ -663,7 +544,6 @@ void Heap::visitExternalRememberedSet()
 
 void Heap::visitSmallStrings()
 {
-    GCPHASE(VisitSmallStrings);
     if (!m_vm->smallStrings.needsToBeVisited(m_operationInProgress))
         return;
 
@@ -675,7 +555,6 @@ void Heap::visitSmallStrings()
 
 void Heap::visitConservativeRoots(ConservativeRoots& roots)
 {
-    GCPHASE(VisitConservativeRoots);
     m_slotVisitor.append(roots);
 
     if (Options::logGC() == GCLogging::Verbose)
@@ -698,7 +577,6 @@ void Heap::visitCompilerWorklistWeakReferences()
 void Heap::removeDeadCompilerWorklistEntries()
 {
 #if ENABLE(DFG_JIT)
-    GCPHASE(FinalizeDFGWorklists);
     for (auto worklist : m_suspendedCompilerWorklists)
         worklist->removeDeadPlans(*m_vm);
 #endif
@@ -732,7 +610,6 @@ struct GatherHeapSnapshotData : MarkedBlock::CountFunctor {
 
 void Heap::gatherExtraHeapSnapshotData(HeapProfiler& heapProfiler)
 {
-    GCPHASE(GatherExtraHeapSnapshotData);
     if (HeapSnapshotBuilder* builder = heapProfiler.activeSnapshotBuilder()) {
         HeapIterationScope heapIterationScope(*this);
         GatherHeapSnapshotData functor(*builder);
@@ -758,7 +635,6 @@ struct RemoveDeadHeapSnapshotNodes : MarkedBlock::CountFunctor {
 
 void Heap::removeDeadHeapSnapshotNodes(HeapProfiler& heapProfiler)
 {
-    GCPHASE(RemoveDeadHeapSnapshotNodes);
     if (HeapSnapshot* snapshot = heapProfiler.mostRecentSnapshot()) {
         HeapIterationScope heapIterationScope(*this);
         RemoveDeadHeapSnapshotNodes functor(*snapshot);
@@ -769,8 +645,6 @@ void Heap::removeDeadHeapSnapshotNodes(HeapProfiler& heapProfiler)
 
 void Heap::visitProtectedObjects(HeapRootVisitor& heapRootVisitor)
 {
-    GCPHASE(VisitProtectedObjects);
-
     for (auto& pair : m_protectedValues)
         heapRootVisitor.visit(&pair.key);
 
@@ -782,7 +656,6 @@ void Heap::visitProtectedObjects(HeapRootVisitor& heapRootVisitor)
 
 void Heap::visitArgumentBuffers(HeapRootVisitor& visitor)
 {
-    GCPHASE(MarkingArgumentBuffers);
     if (!m_markListSet || !m_markListSet->size())
         return;
 
@@ -796,7 +669,6 @@ void Heap::visitArgumentBuffers(HeapRootVisitor& visitor)
 
 void Heap::visitException(HeapRootVisitor& visitor)
 {
-    GCPHASE(MarkingException);
     if (!m_vm->exception() && !m_vm->lastException())
         return;
 
@@ -811,7 +683,6 @@ void Heap::visitException(HeapRootVisitor& visitor)
 
 void Heap::visitStrongHandles(HeapRootVisitor& visitor)
 {
-    GCPHASE(VisitStrongHandles);
     m_handleSet.visitStrongHandles(visitor);
 
     if (Options::logGC() == GCLogging::Verbose)
@@ -822,7 +693,6 @@ void Heap::visitStrongHandles(HeapRootVisitor& visitor)
 
 void Heap::visitHandleStack(HeapRootVisitor& visitor)
 {
-    GCPHASE(VisitHandleStack);
     m_handleStack.visit(visitor);
 
     if (Options::logGC() == GCLogging::Verbose)
@@ -836,7 +706,6 @@ void Heap::visitSamplingProfiler()
 #if ENABLE(SAMPLING_PROFILER)
     if (SamplingProfiler* samplingProfiler = m_vm->samplingProfiler()) {
         ASSERT(samplingProfiler->getLock().isLocked());
-        GCPHASE(VisitSamplingProfiler);
         samplingProfiler->visit(m_slotVisitor);
         if (Options::logGC() == GCLogging::Verbose)
             dataLog("Sampling Profiler data:\n", m_slotVisitor);
@@ -854,8 +723,7 @@ void Heap::visitShadowChicken()
 
 void Heap::traceCodeBlocksAndJITStubRoutines()
 {
-    GCPHASE(TraceCodeBlocksAndJITStubRoutines);
-    m_jitStubRoutines.traceMarkedStubRoutines(m_slotVisitor);
+    m_jitStubRoutines->traceMarkedStubRoutines(m_slotVisitor);
 
     if (Options::logGC() == GCLogging::Verbose)
         dataLog("Code Blocks and JIT Stub Routines:\n", m_slotVisitor);
@@ -865,13 +733,11 @@ void Heap::traceCodeBlocksAndJITStubRoutines()
 
 void Heap::converge()
 {
-    GCPHASE(Convergence);
     m_slotVisitor.drainFromShared(SlotVisitor::MasterDrain);
 }
 
 void Heap::visitWeakHandles(HeapRootVisitor& visitor)
 {
-    GCPHASE(VisitingLiveWeakHandles);
     while (true) {
         m_objectSpace.visitWeakSets(visitor);
         harvestWeakReferences();
@@ -892,8 +758,6 @@ void Heap::visitWeakHandles(HeapRootVisitor& visitor)
 
 void Heap::updateObjectCounts(double gcStartTime)
 {
-    GCCOUNTER(VisitedValueCount, m_slotVisitor.visitCount() + threadVisitCount());
-
     if (Options::logGC() == GCLogging::Verbose) {
         size_t visitCount = m_slotVisitor.visitCount();
         visitCount += threadVisitCount();
@@ -1033,7 +897,6 @@ void Heap::deleteAllUnlinkedCodeBlocks()
 
 void Heap::clearUnmarkedExecutables()
 {
-    GCPHASE(ClearUnmarkedExecutables);
     for (unsigned i = m_executables.size(); i--;) {
         ExecutableBase* current = m_executables[i];
         if (isMarked(current))
@@ -1051,10 +914,9 @@ void Heap::clearUnmarkedExecutables()
 
 void Heap::deleteUnmarkedCompiledCode()
 {
-    GCPHASE(DeleteCodeBlocks);
     clearUnmarkedExecutables();
-    m_codeBlocks.deleteUnmarkedAndUnreferenced(m_operationInProgress);
-    m_jitStubRoutines.deleteUnmarkedJettisonedStubRoutines();
+    m_codeBlocks->deleteUnmarkedAndUnreferenced(m_operationInProgress);
+    m_jitStubRoutines->deleteUnmarkedJettisonedStubRoutines();
 }
 
 void Heap::addToRememberedSet(const JSCell* cell)
@@ -1073,10 +935,11 @@ void Heap::addToRememberedSet(const JSCell* cell)
 
 void Heap::collectAllGarbage()
 {
+    SuperSamplerScope superSamplerScope(false);
     if (!m_isSafeToCollect)
         return;
 
-    collect(FullCollection);
+    collectWithoutAnySweep(FullCollection);
 
     DeferGCForAWhile deferGC(*this);
     if (UNLIKELY(Options::useImmortalObjects()))
@@ -1090,7 +953,19 @@ void Heap::collectAllGarbage()
     sweepAllLogicallyEmptyWeakBlocks();
 }
 
-NEVER_INLINE void Heap::collect(HeapOperation collectionType)
+void Heap::collect(HeapOperation collectionType)
+{
+    SuperSamplerScope superSamplerScope(false);
+    if (!m_isSafeToCollect)
+        return;
+
+    collectWithoutAnySweep(collectionType);
+
+    DeferGCForAWhile deferGC(*this);
+    m_objectSpace.sweepLargeAllocations();
+}
+
+NEVER_INLINE void Heap::collectWithoutAnySweep(HeapOperation collectionType)
 {
     void* stackTop;
     ALLOCATE_AND_GET_REGISTER_STATE(registers);
@@ -1102,6 +977,8 @@ NEVER_INLINE void Heap::collect(HeapOperation collectionType)
 
 NEVER_INLINE void Heap::collectImpl(HeapOperation collectionType, void* stackOrigin, void* stackTop, MachineThreads::RegisterState& calleeSavedRegisters)
 {
+    TimingScope collectImplTimingScope("Heap::collectImpl");
+    
 #if ENABLE(ALLOCATION_LOGGING)
     dataLogF("JSC GC starting collection.\n");
 #endif
@@ -1134,7 +1011,6 @@ NEVER_INLINE void Heap::collectImpl(HeapOperation collectionType, void* stackOri
 
     suspendCompilerThreads();
     willStartCollection(collectionType);
-    GCPHASE(Collect);
 
     double gcStartTime = WTF::monotonicallyIncreasingTime();
     if (m_verifier) {
@@ -1148,9 +1024,12 @@ NEVER_INLINE void Heap::collectImpl(HeapOperation collectionType, void* stackOri
 
     flushOldStructureIDTables();
     stopAllocation();
+    prepareForMarking();
     flushWriteBarrierBuffer();
 
     markRoots(gcStartTime, stackOrigin, stackTop, calleeSavedRegisters);
+    
+    TimingScope lateTimingScope("Heap::collectImpl after markRoots");
 
     if (m_verifier) {
         m_verifier->gatherLiveObjects(HeapVerifier::Phase::AfterMarking);
@@ -1164,9 +1043,7 @@ NEVER_INLINE void Heap::collectImpl(HeapOperation collectionType, void* stackOri
     pruneStaleEntriesFromWeakGCMaps();
     sweepArrayBuffers();
     snapshotMarkedSpace();
-
     copyBackingStores();
-
     finalizeUnconditionalFinalizers();
     removeDeadCompilerWorklistEntries();
     deleteUnmarkedCompiledCode();
@@ -1179,7 +1056,7 @@ NEVER_INLINE void Heap::collectImpl(HeapOperation collectionType, void* stackOri
     updateAllocationLimits();
     didFinishCollection(gcStartTime);
     resumeCompilerThreads();
-
+    
     if (m_verifier) {
         m_verifier->trimDeadObjects();
         m_verifier->verify(HeapVerifier::Phase::AfterGC);
@@ -1194,7 +1071,6 @@ NEVER_INLINE void Heap::collectImpl(HeapOperation collectionType, void* stackOri
 void Heap::suspendCompilerThreads()
 {
 #if ENABLE(DFG_JIT)
-    GCPHASE(SuspendCompilerThreads);
     ASSERT(m_suspendedCompilerWorklists.isEmpty());
     for (unsigned i = DFG::numberOfWorklists(); i--;) {
         if (DFG::Worklist* worklist = DFG::worklistForIndexOrNull(i)) {
@@ -1207,8 +1083,6 @@ void Heap::suspendCompilerThreads()
 
 void Heap::willStartCollection(HeapOperation collectionType)
 {
-    GCPHASE(StartingCollection);
-    
     if (Options::logGC())
         dataLog("=> ");
     
@@ -1246,13 +1120,11 @@ void Heap::willStartCollection(HeapOperation collectionType)
 
 void Heap::flushOldStructureIDTables()
 {
-    GCPHASE(FlushOldStructureIDTables);
     m_structureIDTable.flushOldTables();
 }
 
 void Heap::flushWriteBarrierBuffer()
 {
-    GCPHASE(FlushWriteBarrierBuffer);
     if (m_operationInProgress == EdenCollection) {
         m_writeBarrierBuffer.flush(*this);
         return;
@@ -1262,21 +1134,23 @@ void Heap::flushWriteBarrierBuffer()
 
 void Heap::stopAllocation()
 {
-    GCPHASE(StopAllocation);
     m_objectSpace.stopAllocating();
     if (m_operationInProgress == FullCollection)
         m_storageSpace.didStartFullCollection();
 }
 
+void Heap::prepareForMarking()
+{
+    m_objectSpace.prepareForMarking();
+}
+
 void Heap::reapWeakHandles()
 {
-    GCPHASE(ReapingWeakHandles);
     m_objectSpace.reapWeakSets();
 }
 
 void Heap::pruneStaleEntriesFromWeakGCMaps()
 {
-    GCPHASE(PruningStaleEntriesFromWeakGCMaps);
     if (m_operationInProgress != FullCollection)
         return;
     for (auto& pruneCallback : m_weakGCMaps.values())
@@ -1285,7 +1159,6 @@ void Heap::pruneStaleEntriesFromWeakGCMaps()
 
 void Heap::sweepArrayBuffers()
 {
-    GCPHASE(SweepingArrayBuffers);
     m_arrayBuffers.sweep();
 }
 
@@ -1306,8 +1179,10 @@ struct MarkedBlockSnapshotFunctor : public MarkedBlock::VoidFunctor {
 
 void Heap::snapshotMarkedSpace()
 {
-    GCPHASE(SnapshotMarkedSpace);
-
+    // FIXME: This should probably be renamed. It's not actually snapshotting all of MarkedSpace.
+    // This is used by IncrementalSweeper, so it only needs to snapshot blocks. However, if we ever
+    // wanted to add other snapshotting logic, we'd probably put it here.
+    
     if (m_operationInProgress == EdenCollection) {
         m_blockSnapshot.appendVector(m_objectSpace.blocksWithNewObjects());
         // Sort and deduplicate the block snapshot since we might be appending to an unfinished work list.
@@ -1322,14 +1197,11 @@ void Heap::snapshotMarkedSpace()
 
 void Heap::deleteSourceProviderCaches()
 {
-    GCPHASE(DeleteSourceProviderCaches);
     m_vm->clearSourceProviderCaches();
 }
 
 void Heap::notifyIncrementalSweeper()
 {
-    GCPHASE(NotifyIncrementalSweeper);
-
     if (m_operationInProgress == FullCollection) {
         if (!m_logicallyEmptyWeakBlocks.isEmpty())
             m_indexOfNextLogicallyEmptyWeakBlockToSweep = 0;
@@ -1340,19 +1212,22 @@ void Heap::notifyIncrementalSweeper()
 
 void Heap::writeBarrierCurrentlyExecutingCodeBlocks()
 {
-    GCPHASE(WriteBarrierCurrentlyExecutingCodeBlocks);
-    m_codeBlocks.writeBarrierCurrentlyExecutingCodeBlocks(this);
+    m_codeBlocks->writeBarrierCurrentlyExecutingCodeBlocks(this);
 }
 
 void Heap::resetAllocators()
 {
-    GCPHASE(ResetAllocators);
     m_objectSpace.resetAllocators();
 }
 
 void Heap::updateAllocationLimits()
 {
-    GCPHASE(UpdateAllocationLimits);
+    static const bool verbose = false;
+    
+    if (verbose) {
+        dataLog("\n");
+        dataLog("bytesAllocatedThisCycle = ", m_bytesAllocatedThisCycle, "\n");
+    }
     
     // Calculate our current heap size threshold for the purpose of figuring out when we should
     // run another collection. This isn't the same as either size() or capacity(), though it should
@@ -1369,6 +1244,8 @@ void Heap::updateAllocationLimits()
     // of fragmentation, this may be substantial. Fortunately, marked space rarely fragments because
     // cells usually have a narrow range of sizes. So, the underestimation is probably OK.
     currentHeapSize += m_totalBytesVisited;
+    if (verbose)
+        dataLog("totalBytesVisited = ", m_totalBytesVisited, ", currentHeapSize = ", currentHeapSize, "\n");
 
     // For copied space, we use the capacity of storage space. This is because copied space may get
     // badly fragmented between full collections. This arises when each eden collection evacuates
@@ -1384,10 +1261,15 @@ void Heap::updateAllocationLimits()
     // https://bugs.webkit.org/show_bug.cgi?id=150268
     ASSERT(m_totalBytesCopied <= m_storageSpace.size());
     currentHeapSize += m_storageSpace.capacity();
+    if (verbose)
+        dataLog("storageSpace.capacity() = ", m_storageSpace.capacity(), ", currentHeapSize = ", currentHeapSize, "\n");
 
     // It's up to the user to ensure that extraMemorySize() ends up corresponding to allocation-time
     // extra memory reporting.
     currentHeapSize += extraMemorySize();
+
+    if (verbose)
+        dataLog("extraMemorySize() = ", extraMemorySize(), ", currentHeapSize = ", currentHeapSize, "\n");
     
     if (Options::gcMaxHeapSize() && currentHeapSize > Options::gcMaxHeapSize())
         HeapStatistics::exitWithFailure();
@@ -1397,29 +1279,38 @@ void Heap::updateAllocationLimits()
         // the new allocation limit based on the current size of the heap, with a
         // fixed minimum.
         m_maxHeapSize = max(minHeapSize(m_heapType, m_ramSize), proportionalHeapSize(currentHeapSize, m_ramSize));
+        if (verbose)
+            dataLog("Full: maxHeapSize = ", m_maxHeapSize, "\n");
         m_maxEdenSize = m_maxHeapSize - currentHeapSize;
+        if (verbose)
+            dataLog("Full: maxEdenSize = ", m_maxEdenSize, "\n");
         m_sizeAfterLastFullCollect = currentHeapSize;
+        if (verbose)
+            dataLog("Full: sizeAfterLastFullCollect = ", currentHeapSize, "\n");
         m_bytesAbandonedSinceLastFullCollect = 0;
+        if (verbose)
+            dataLog("Full: bytesAbandonedSinceLastFullCollect = ", 0, "\n");
     } else {
-        static const bool verbose = false;
-        
         ASSERT(currentHeapSize >= m_sizeAfterLastCollect);
-        m_maxEdenSize = m_maxHeapSize - currentHeapSize;
+        // Theoretically, we shouldn't ever scan more memory than the heap size we planned to have.
+        // But we are sloppy, so we have to defend against the overflow.
+        m_maxEdenSize = currentHeapSize > m_maxHeapSize ? 0 : m_maxHeapSize - currentHeapSize;
+        if (verbose)
+            dataLog("Eden: maxEdenSize = ", m_maxEdenSize, "\n");
         m_sizeAfterLastEdenCollect = currentHeapSize;
-        if (verbose) {
-            dataLog("Max heap size: ", m_maxHeapSize, "\n");
-            dataLog("Current heap size: ", currentHeapSize, "\n");
-            dataLog("Size after last eden collection: ", m_sizeAfterLastEdenCollect, "\n");
-        }
-        double edenToOldGenerationRatio = (double)m_maxEdenSize / (double)m_maxHeapSize;
         if (verbose)
-            dataLog("Eden to old generation ratio: ", edenToOldGenerationRatio, "\n");
+            dataLog("Eden: sizeAfterLastEdenCollect = ", currentHeapSize, "\n");
+        double edenToOldGenerationRatio = (double)m_maxEdenSize / (double)m_maxHeapSize;
         double minEdenToOldGenerationRatio = 1.0 / 3.0;
         if (edenToOldGenerationRatio < minEdenToOldGenerationRatio)
             m_shouldDoFullCollection = true;
         // This seems suspect at first, but what it does is ensure that the nursery size is fixed.
         m_maxHeapSize += currentHeapSize - m_sizeAfterLastCollect;
+        if (verbose)
+            dataLog("Eden: maxHeapSize = ", m_maxHeapSize, "\n");
         m_maxEdenSize = m_maxHeapSize - currentHeapSize;
+        if (verbose)
+            dataLog("Eden: maxEdenSize = ", m_maxEdenSize, "\n");
         if (m_fullActivityCallback) {
             ASSERT(currentHeapSize >= m_sizeAfterLastFullCollect);
             m_fullActivityCallback->didAllocate(currentHeapSize - m_sizeAfterLastFullCollect);
@@ -1427,6 +1318,8 @@ void Heap::updateAllocationLimits()
     }
 
     m_sizeAfterLastCollect = currentHeapSize;
+    if (verbose)
+        dataLog("sizeAfterLastCollect = ", m_sizeAfterLastCollect, "\n");
     m_bytesAllocatedThisCycle = 0;
 
     if (Options::logGC())
@@ -1435,7 +1328,6 @@ void Heap::updateAllocationLimits()
 
 void Heap::didFinishCollection(double gcStartTime)
 {
-    GCPHASE(FinishingCollection);
     double gcEndTime = WTF::monotonicallyIncreasingTime();
     HeapOperation operation = m_operationInProgress;
     if (m_operationInProgress == FullCollection)
@@ -1471,7 +1363,6 @@ void Heap::didFinishCollection(double gcStartTime)
 void Heap::resumeCompilerThreads()
 {
 #if ENABLE(DFG_JIT)
-    GCPHASE(ResumeCompilerThreads);
     for (auto worklist : m_suspendedCompilerWorklists)
         worklist->resumeAllThreads();
     m_suspendedCompilerWorklists.clear();
@@ -1580,7 +1471,7 @@ public:
         if (cell->isZapped())
             current++;
 
-        void* limit = static_cast<void*>(reinterpret_cast<char*>(cell) + MarkedBlock::blockFor(cell)->cellSize());
+        void* limit = static_cast<void*>(reinterpret_cast<char*>(cell) + cell->cellSize());
         for (; current < limit; current++)
             *current = zombifiedBits;
     }
@@ -1686,4 +1577,12 @@ size_t Heap::threadBytesCopied()
     return result;
 }
 
+void Heap::forEachCodeBlockImpl(const ScopedLambda<bool(CodeBlock*)>& func)
+{
+    // We don't know the full set of CodeBlocks until compilation has terminated.
+    completeAllJITPlans();
+
+    return m_codeBlocks->iterate(func);
+}
+
 } // namespace JSC
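The eden branch of updateAllocationLimits() above reads more easily with concrete numbers. The sizes below are made up purely for illustration; the snippet replays the same arithmetic and checks the "nursery size is fixed" property the comment claims:

    #include <cassert>
    #include <cstddef>

    int main()
    {
        // Hypothetical sizes in bytes; none of these values come from the patch.
        size_t maxHeapSize = 32u << 20;           // limit left over from the last full collection
        size_t sizeAfterLastCollect = 24u << 20;  // heap size right after that collection
        size_t currentHeapSize = 28u << 20;       // heap size measured by this eden collection

        size_t maxEdenSize = currentHeapSize > maxHeapSize ? 0 : maxHeapSize - currentHeapSize;
        bool shouldDoFullCollection = (double)maxEdenSize / (double)maxHeapSize < 1.0 / 3.0;

        // The limit grows by exactly what the heap grew, so the nursery stays the same size.
        maxHeapSize += currentHeapSize - sizeAfterLastCollect;
        maxEdenSize = maxHeapSize - currentHeapSize;

        assert(shouldDoFullCollection);      // 4MB of eden against a 32MB limit is under 1/3
        assert(maxEdenSize == (8u << 20));   // same 8MB nursery as before this collection
        return 0;
    }
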
index 5554a1f..dc79a69 100644 (file)
 #define Heap_h
 
 #include "ArrayBuffer.h"
-#include "CodeBlockSet.h"
 #include "CopyVisitor.h"
 #include "GCIncomingRefCountedSet.h"
 #include "HandleSet.h"
 #include "HandleStack.h"
 #include "HeapObserver.h"
 #include "HeapOperation.h"
-#include "JITStubRoutineSet.h"
 #include "ListableHandler.h"
 #include "MachineStackMarker.h"
 #include "MarkedAllocator.h"
@@ -53,6 +51,7 @@
 namespace JSC {
 
 class CodeBlock;
+class CodeBlockSet;
 class CopiedSpace;
 class EdenGCActivityCallback;
 class ExecutableBase;
@@ -65,6 +64,7 @@ class HeapRootVisitor;
 class HeapVerifier;
 class IncrementalSweeper;
 class JITStubRoutine;
+class JITStubRoutineSet;
 class JSCell;
 class JSValue;
 class LLIntOffsetsExtractor;
@@ -83,13 +83,15 @@ typedef HashCountedSet<const char*> TypeCountSet;
 
 enum HeapType { SmallHeap, LargeHeap };
 
+class HeapUtil;
+
 class Heap {
     WTF_MAKE_NONCOPYABLE(Heap);
 public:
     friend class JIT;
     friend class DFG::SpeculativeJIT;
     static Heap* heap(const JSValue); // 0 for immediate values
-    static Heap* heap(const JSCell*);
+    static Heap* heap(const HeapCell*);
 
     // This constant determines how many blocks we iterate between checks of our 
     // deadline when calling Heap::isPagedOut. Decreasing it will cause us to detect 
@@ -101,11 +103,8 @@ public:
     static bool isMarked(const void*);
     static bool testAndSetMarked(const void*);
     static void setMarked(const void*);
-
-    // This function must be run after stopAllocation() is called and 
-    // before liveness data is cleared to be accurate.
-    static bool isPointerGCObject(TinyBloomFilter, MarkedBlockSet&, void* pointer);
-    static bool isValueGCObject(TinyBloomFilter, MarkedBlockSet&, JSValue);
+    
+    static size_t cellSize(const void*);
 
     void writeBarrier(const JSCell*);
     void writeBarrier(const JSCell*, JSValue);
@@ -147,10 +146,14 @@ public:
     MarkedSpace::Subspace& subspaceForObjectDestructor() { return m_objectSpace.subspaceForObjectsWithDestructor(); }
     MarkedSpace::Subspace& subspaceForAuxiliaryData() { return m_objectSpace.subspaceForAuxiliaryData(); }
     template<typename ClassType> MarkedSpace::Subspace& subspaceForObjectOfType();
-    MarkedAllocator& allocatorForObjectWithoutDestructor(size_t bytes) { return m_objectSpace.allocatorFor(bytes); }
-    MarkedAllocator& allocatorForObjectWithDestructor(size_t bytes) { return m_objectSpace.destructorAllocatorFor(bytes); }
-    template<typename ClassType> MarkedAllocator& allocatorForObjectOfType(size_t bytes);
+    MarkedAllocator* allocatorForObjectWithoutDestructor(size_t bytes) { return m_objectSpace.allocatorFor(bytes); }
+    MarkedAllocator* allocatorForObjectWithDestructor(size_t bytes) { return m_objectSpace.destructorAllocatorFor(bytes); }
+    template<typename ClassType> MarkedAllocator* allocatorForObjectOfType(size_t bytes);
+    MarkedAllocator* allocatorForAuxiliaryData(size_t bytes) { return m_objectSpace.auxiliaryAllocatorFor(bytes); }
     CopiedAllocator& storageAllocator() { return m_storageSpace.allocator(); }
+    void* allocateAuxiliary(JSCell* intendedOwner, size_t);
+    void* tryAllocateAuxiliary(JSCell* intendedOwner, size_t);
+    void* tryReallocateAuxiliary(JSCell* intendedOwner, void* oldBase, size_t oldSize, size_t newSize);
     CheckedBoolean tryAllocateStorage(JSCell* intendedOwner, size_t, void**);
     CheckedBoolean tryReallocateStorage(JSCell* intendedOwner, void**, size_t, size_t);
     void ascribeOwner(JSCell* intendedOwner, void*);
@@ -230,7 +233,7 @@ public:
     void didAllocate(size_t);
     bool isPagedOut(double deadline);
     
-    const JITStubRoutineSet& jitStubRoutines() { return m_jitStubRoutines; }
+    const JITStubRoutineSet& jitStubRoutines() { return *m_jitStubRoutines; }
     
     void addReference(JSCell*, ArrayBuffer*);
     
@@ -238,7 +241,7 @@ public:
 
     StructureIDTable& structureIDTable() { return m_structureIDTable; }
 
-    CodeBlockSet& codeBlockSet() { return m_codeBlocks; }
+    CodeBlockSet& codeBlockSet() { return *m_codeBlocks; }
 
 #if USE(FOUNDATION)
     template<typename T> void releaseSoon(RetainPtr<T>&&);
@@ -267,6 +270,7 @@ private:
     friend class GCLogging;
     friend class GCThread;
     friend class HandleSet;
+    friend class HeapUtil;
     friend class HeapVerifier;
     friend class JITStubRoutine;
     friend class LLIntOffsetsExtractor;
@@ -283,6 +287,8 @@ private:
     template<typename T> friend void* allocateCell(Heap&);
     template<typename T> friend void* allocateCell(Heap&, size_t);
 
+    void collectWithoutAnySweep(HeapOperation collectionType = AnyCollection);
+
     void* allocateWithDestructor(size_t); // For use with objects with destructors.
     void* allocateWithoutDestructor(size_t); // For use with objects without destructors.
     template<typename ClassType> void* allocateObjectOfType(size_t); // Chooses one of the methods above based on type.
@@ -304,6 +310,7 @@ private:
     void flushOldStructureIDTables();
     void flushWriteBarrierBuffer();
     void stopAllocation();
+    void prepareForMarking();
     
     void markRoots(double gcStartTime, void* stackOrigin, void* stackTop, MachineThreads::RegisterState&);
     void gatherStackRoots(ConservativeRoots&, void* stackOrigin, void* stackTop, MachineThreads::RegisterState&);
@@ -362,6 +369,8 @@ private:
     size_t threadBytesVisited();
     size_t threadBytesCopied();
 
+    void forEachCodeBlockImpl(const ScopedLambda<bool(CodeBlock*)>&);
+
     const HeapType m_heapType;
     const size_t m_ramSize;
     const size_t m_minBytesPerCycle;
@@ -408,8 +417,8 @@ private:
 
     HandleSet m_handleSet;
     HandleStack m_handleStack;
-    CodeBlockSet m_codeBlocks;
-    JITStubRoutineSet m_jitStubRoutines;
+    std::unique_ptr<CodeBlockSet> m_codeBlocks;
+    std::unique_ptr<JITStubRoutineSet> m_jitStubRoutines;
     FinalizerOwner m_finalizerOwner;
     
     bool m_isSafeToCollect;
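Heap.h now exposes the auxiliary allocation entry points that back butterfly-style storage: allocateAuxiliary, tryAllocateAuxiliary, and tryReallocateAuxiliary, all taking the intended owner cell plus byte sizes. A hedged usage sketch; the owner and byte count are placeholders, and the explicit zeroing is only an assumption about what a typical caller wants, since these allocations are not guaranteed to come back zero-filled:

    #include <cstring>

    // Sketch only; the real callers are the butterfly/array allocation paths.
    void* allocateZeroedAuxiliaryExample(JSC::Heap& heap, JSC::JSCell* owner, size_t byteSize)
    {
        void* base = heap.tryAllocateAuxiliary(owner, byteSize);
        if (!base)
            return nullptr;               // allocation failed; the caller decides how to recover
        std::memset(base, 0, byteSize);   // zero explicitly rather than assuming pre-zeroed memory
        return base;
    }
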
index 242cf45..73feeb1 100644 (file)
 
 #pragma once
 
+#include "DestructionMode.h"
+
 namespace JSC {
 
+class CellContainer;
+class Heap;
+class LargeAllocation;
+class MarkedBlock;
+class VM;
+struct AllocatorAttributes;
+
 class HeapCell {
 public:
     enum Kind : int8_t {
@@ -38,6 +47,25 @@ public:
     
     void zap() { *reinterpret_cast<uintptr_t**>(this) = 0; }
     bool isZapped() const { return !*reinterpret_cast<uintptr_t* const*>(this); }
+    
+    bool isLargeAllocation() const;
+    CellContainer cellContainer() const;
+    MarkedBlock& markedBlock() const;
+    LargeAllocation& largeAllocation() const;
+
+    // If you want performance and you know that your cell is small, you can do this instead:
+    // ASSERT(!cell->isLargeAllocation());
+    // cell->markedBlock().vm()
+    // We currently only use this hack for callees to make ExecState::vm() fast. It's not
+    // recommended to use it for too many other things, since the large allocation cutoff is
+    // a runtime option and its default value is small (400 bytes).
+    Heap* heap() const;
+    VM* vm() const;
+    
+    size_t cellSize() const;
+    AllocatorAttributes allocatorAttributes() const;
+    DestructionMode destructionMode() const;
+    Kind cellKind() const;
 };
 
 } // namespace JSC
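The accessors declared above are presumably implemented in HeapCellInlines.h, which is added next but truncated in this excerpt. The fast path versus the generic path from the comment can be sketched as follows; this assumes MarkedBlock::vm() exists and returns a VM*, which is only an inference from that comment, not something this excerpt shows:

    // Sketch of the two lookup paths described in the HeapCell comment.
    JSC::VM* vmForCellExample(JSC::HeapCell* cell)
    {
        if (!cell->isLargeAllocation()) {
            // Fast path: a small cell always lives inside a MarkedBlock.
            return cell->markedBlock().vm();
        }
        // Generic path: correct for both block-backed and large-allocation cells.
        return cell->vm();
    }
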
diff --git a/Source/JavaScriptCore/heap/HeapCellInlines.h b/Source/JavaScriptCore/heap/HeapCellInlines.h
new file mode 100644 (file)
index 0000000..