DFG/FTL should be able to exit to the middle of a bytecode
author    keith_miller@apple.com <keith_miller@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Tue, 24 Dec 2019 01:49:45 +0000 (01:49 +0000)
committer keith_miller@apple.com <keith_miller@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Tue, 24 Dec 2019 01:49:45 +0000 (01:49 +0000)
https://bugs.webkit.org/show_bug.cgi?id=205232

Reviewed by Saam Barati.

JSTests:

* stress/apply-osr-exit-should-get-length-once-exceptions-occasionally.js: Added.
(expectedArgCount):
(callee):
(test):
(let.array.get length):
* stress/apply-osr-exit-should-get-length-once.js: Added.
(expectedArgCount):
(callee):
(test):
(let.array.get length):
* stress/load-varargs-then-inlined-call-and-exit-strict.js:
(checkEqual):
* stress/recursive-tail-call-with-different-argument-count.js:
* stress/rest-varargs-osr-exit-to-checkpoint.js: Added.
(foo):
(bar):

Source/JavaScriptCore:

It can be valuable to exit to the middle of a bytecode for a couple of reasons:
1) It can be used to combine bytecodes that share a majority of their operands, reducing bytecode stream size.
2) It enables creating bytecodes from which it is easier to reconstruct useful optimization information.

To make exiting to the middle of a bytecode possible, this patch
introduces the concept of a temporary operand. A temporary operand
is one that holds the result of an effectful operation that occurs
partway through executing a bytecode. tmp operands have no meaning
when executing in the LLInt or Baseline and are only used in the
DFG to preserve information for OSR exit. We use the term checkpoint
to refer to any point where an effectful component of a bytecode
executes. For example, op_call_varargs has two checkpoints: the
first is before we have determined the number of variable arguments,
and the second is the actual call.
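The two checkpoints are observable from JavaScript. This is a hypothetical illustration (not one of the added tests): `Function.prototype.apply` with an array-like object first reads its `length` (an effectful step, checkpoint 1) and then performs the call (checkpoint 2).

```javascript
// Hypothetical illustration of op_call_varargs' two checkpoints: reading the
// argument count (an effectful length read), then performing the call.
function callee(a, b) { return a + b; }

let lengthReads = 0;
const args = {
    0: 1,
    1: 2,
    // Checkpoint 1 corresponds to determining the argument count; this getter
    // makes that step observable.
    get length() { lengthReads++; return 2; }
};

// Checkpoint 2 corresponds to the call itself.
const result = callee.apply(null, args);
```

If the engine OSR exits after determining the argument count but before making the call, it must resume at the second checkpoint rather than re-running the whole bytecode, or the getter would run twice.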

When the DFG OSR exits, if there are any active checkpoints in the
inline call stack, we emit a JIT probe that allocates a side state
object keyed off the frame pointer of the frame whose bytecode's
checkpoint needs to be finished. We need side state because we may
recursively inline several copies of the same function; also, after
OSR we could call back into ourselves and exit again from optimized
code before finishing the checkpoint of our caller.
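A rough sketch of that bookkeeping (names and shapes are illustrative, not JSC's actual API): one entry per exiting frame, keyed by frame pointer, so that recursive inlines of the same function, or a re-entry that exits again, each get their own state.

```javascript
// Illustrative sketch only: side state for unfinished checkpoints, keyed by
// the frame pointer of the frame whose checkpoint still needs to run.
class CheckpointSideStateTable {
    constructor() {
        this.entries = new Map(); // framePointer -> checkpoint state
    }
    add(framePointer, state) {
        this.entries.set(framePointer, state);
    }
    // Finishing a checkpoint consumes the entry for exactly that frame, so
    // two live frames of the same recursively-inlined function never collide.
    take(framePointer) {
        const state = this.entries.get(framePointer);
        this.entries.delete(framePointer);
        return state;
    }
    // Mirrors the JSLock assertion below: no pending side state may remain
    // on the VM when the lock is released.
    get isEmpty() {
        return this.entries.size === 0;
    }
}
```

Keying by frame pointer is what disambiguates multiple live frames of the same function; a key based on the CodeBlock or bytecode index alone could not.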

Another thing we need to be careful of is making sure we remove
side state as we unwind for an exception. To verify we do this
correctly, I've added an assertion to JSLock that there are no
pending checkpoint side states on the VM when releasing the lock.

A large amount of this patch removes code that refers to virtual
registers as raw ints, replacing them with the VirtualRegister
class. There are also a couple of new classes/enums added to JSC:

1) There is now a class, Operand, that represents the combination
of a VirtualRegister and a temporary. This is handy in the DFG to
model OSR exit values all together. Additionally, Operands<T> has
been updated to work with respect to Operand values.

2) CallFrameSlot is now an enum class instead of a struct of
constexpr values. This lets us implicitly convert CallFrameSlots
to VirtualRegisters without allowing all ints to implicitly
convert.

3) FTL::SelectPredictability is a new enum that tells the FTL
whether we think a select is going to be predictable. It has four
options: Unpredictable, Predictable, LeftLikely, and RightLikely.
Unpredictable means we think a branch predictor won't do a good
job guessing this value, so we should compile the select to a
cmov. The other options mean we either expect to pick the same
value every time (Predictable) or there's a reasonable chance the
branch predictor will be able to guess the value (LeftLikely,
RightLikely).
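The four options might map to a lowering decision roughly as follows. The enum names come from the patch; the mapping function is an illustrative guess at the intent described above, not the FTL's actual code.

```javascript
// Illustrative sketch: only Unpredictable forces a cmov; the other three
// values let the FTL emit branches and rely on the branch predictor.
const SelectPredictability = Object.freeze({
    Unpredictable: "Unpredictable", // predictor likely to miss: compile to cmov
    Predictable: "Predictable",     // same value picked nearly every time
    LeftLikely: "LeftLikely",       // predictor should guess the left value
    RightLikely: "RightLikely",     // predictor should guess the right value
});

function shouldCompileSelectToCmov(predictability) {
    return predictability === SelectPredictability.Unpredictable;
}
```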

To validate the correctness of this patch, the various varargs
call opcodes have been reworked to use checkpoints. This also
fixed a long-standing issue where we could call length getters
twice if we OSR exited during LoadVarargs but before the actual
call.
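That fixed behavior is what the added stress tests check. A minimal sketch in the same spirit (simplified, not the actual test file):

```javascript
// Simplified sketch of the added stress tests: even when this loop tiers up
// to the DFG/FTL and may OSR exit mid-LoadVarargs, the length getter must
// run exactly once per apply.
let getterCalls = 0;
const arrayLike = {
    0: 1, 1: 2, 2: 3,
    get length() { getterCalls++; return 3; }
};

function callee(a, b, c) { return a + b + c; }

const iterations = 10000; // enough iterations for tier-up in a JSC stress run
for (let i = 0; i < iterations; i++) {
    if (callee.apply(null, arrayLike) !== 6)
        throw new Error("bad call result");
}
```

Before this patch, an OSR exit landing between the length read and the call would re-execute the whole varargs bytecode, running the getter a second time for that iteration.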

Lastly, probe-based OSR exit has not been enabled in production
for a long time, so this patch removes that code, since it would
be a non-trivial amount of work to get checkpoints working with
probe OSR.

* CMakeLists.txt:
* DerivedSources-input.xcfilelist:
* JavaScriptCore.xcodeproj/project.pbxproj:
* assembler/MacroAssemblerCodeRef.h:
* assembler/ProbeFrame.h:
(JSC::Probe::Frame::operand):
(JSC::Probe::Frame::setOperand):
* b3/testb3.h:
(populateWithInterestingValues):
(floatingPointOperands):
* bytecode/AccessCase.cpp:
(JSC::AccessCase::generateImpl):
* bytecode/AccessCaseSnippetParams.cpp:
(JSC::SlowPathCallGeneratorWithArguments::generateImpl):
* bytecode/BytecodeDumper.cpp:
(JSC::BytecodeDumperBase::dumpValue):
(JSC::BytecodeDumper<Block>::registerName const):
(JSC::BytecodeDumper<Block>::constantName const):
(JSC::Wasm::BytecodeDumper::constantName const):
* bytecode/BytecodeDumper.h:
* bytecode/BytecodeIndex.cpp:
(JSC::BytecodeIndex::dump const):
* bytecode/BytecodeIndex.h:
(JSC::BytecodeIndex::BytecodeIndex):
(JSC::BytecodeIndex::offset const):
(JSC::BytecodeIndex::checkpoint const):
(JSC::BytecodeIndex::asBits const):
(JSC::BytecodeIndex::hash const):
(JSC::BytecodeIndex::operator bool const):
(JSC::BytecodeIndex::pack):
(JSC::BytecodeIndex::fromBits):
* bytecode/BytecodeList.rb:
* bytecode/BytecodeLivenessAnalysis.cpp:
(JSC::enumValuesEqualAsIntegral):
(JSC::tmpLivenessForCheckpoint):
* bytecode/BytecodeLivenessAnalysis.h:
* bytecode/BytecodeLivenessAnalysisInlines.h:
(JSC::virtualRegisterIsAlwaysLive):
(JSC::virtualRegisterThatIsNotAlwaysLiveIsLive):
(JSC::virtualRegisterIsLive):
(JSC::operandIsAlwaysLive): Deleted.
(JSC::operandThatIsNotAlwaysLiveIsLive): Deleted.
(JSC::operandIsLive): Deleted.
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::finishCreation):
(JSC::CodeBlock::bytecodeIndexForExit const):
(JSC::CodeBlock::ensureCatchLivenessIsComputedForBytecodeIndexSlow):
(JSC::CodeBlock::updateAllValueProfilePredictionsAndCountLiveness):
* bytecode/CodeBlock.h:
(JSC::CodeBlock::numTmps const):
(JSC::CodeBlock::isKnownNotImmediate):
(JSC::CodeBlock::isTemporaryRegister):
(JSC::CodeBlock::constantRegister):
(JSC::CodeBlock::getConstant const):
(JSC::CodeBlock::constantSourceCodeRepresentation const):
(JSC::CodeBlock::replaceConstant):
(JSC::CodeBlock::isTemporaryRegisterIndex): Deleted.
(JSC::CodeBlock::isConstantRegisterIndex): Deleted.
* bytecode/CodeOrigin.h:
* bytecode/FullBytecodeLiveness.h:
(JSC::FullBytecodeLiveness::virtualRegisterIsLive const):
(JSC::FullBytecodeLiveness::operandIsLive const): Deleted.
* bytecode/InlineCallFrame.h:
(JSC::InlineCallFrame::InlineCallFrame):
(JSC::InlineCallFrame::setTmpOffset):
(JSC::CodeOrigin::walkUpInlineStack const):
(JSC::CodeOrigin::inlineStackContainsActiveCheckpoint const):
(JSC::remapOperand):
(JSC::unmapOperand):
(JSC::CodeOrigin::walkUpInlineStack): Deleted.
* bytecode/LazyOperandValueProfile.h:
(JSC::LazyOperandValueProfileKey::LazyOperandValueProfileKey):
(JSC::LazyOperandValueProfileKey::hash const):
(JSC::LazyOperandValueProfileKey::operand const):
* bytecode/MethodOfGettingAValueProfile.cpp:
(JSC::MethodOfGettingAValueProfile::fromLazyOperand):
(JSC::MethodOfGettingAValueProfile::emitReportValue const):
(JSC::MethodOfGettingAValueProfile::reportValue):
* bytecode/MethodOfGettingAValueProfile.h:
* bytecode/Operands.h:
(JSC::Operand::Operand):
(JSC::Operand::tmp):
(JSC::Operand::kind const):
(JSC::Operand::value const):
(JSC::Operand::virtualRegister const):
(JSC::Operand::asBits const):
(JSC::Operand::isTmp const):
(JSC::Operand::isArgument const):
(JSC::Operand::isLocal const):
(JSC::Operand::isHeader const):
(JSC::Operand::isConstant const):
(JSC::Operand::toArgument const):
(JSC::Operand::toLocal const):
(JSC::Operand::operator== const):
(JSC::Operand::isValid const):
(JSC::Operand::fromBits):
(JSC::Operands::Operands):
(JSC::Operands::numberOfLocals const):
(JSC::Operands::numberOfTmps const):
(JSC::Operands::tmpIndex const):
(JSC::Operands::argumentIndex const):
(JSC::Operands::localIndex const):
(JSC::Operands::tmp):
(JSC::Operands::tmp const):
(JSC::Operands::argument):
(JSC::Operands::argument const):
(JSC::Operands::local):
(JSC::Operands::local const):
(JSC::Operands::sizeFor const):
(JSC::Operands::atFor):
(JSC::Operands::atFor const):
(JSC::Operands::ensureLocals):
(JSC::Operands::ensureTmps):
(JSC::Operands::getForOperandIndex):
(JSC::Operands::getForOperandIndex const):
(JSC::Operands::operandIndex const):
(JSC::Operands::operand):
(JSC::Operands::operand const):
(JSC::Operands::hasOperand const):
(JSC::Operands::setOperand):
(JSC::Operands::at const):
(JSC::Operands::at):
(JSC::Operands::operator[] const):
(JSC::Operands::operator[]):
(JSC::Operands::operandForIndex const):
(JSC::Operands::operator== const):
(JSC::Operands::isArgument const): Deleted.
(JSC::Operands::isLocal const): Deleted.
(JSC::Operands::virtualRegisterForIndex const): Deleted.
(JSC::Operands::setOperandFirstTime): Deleted.
* bytecode/OperandsInlines.h:
(JSC::Operand::dump const):
(JSC::Operands<T>::dumpInContext const):
(JSC::Operands<T>::dump const):
* bytecode/UnlinkedCodeBlock.cpp:
(JSC::UnlinkedCodeBlock::UnlinkedCodeBlock):
* bytecode/UnlinkedCodeBlock.h:
(JSC::UnlinkedCodeBlock::hasCheckpoints const):
(JSC::UnlinkedCodeBlock::setHasCheckpoints):
(JSC::UnlinkedCodeBlock::constantRegister const):
(JSC::UnlinkedCodeBlock::getConstant const):
(JSC::UnlinkedCodeBlock::isConstantRegisterIndex const): Deleted.
* bytecode/ValueProfile.h:
(JSC::ValueProfileAndVirtualRegisterBuffer::ValueProfileAndVirtualRegisterBuffer):
(JSC::ValueProfileAndVirtualRegisterBuffer::~ValueProfileAndVirtualRegisterBuffer):
(JSC::ValueProfileAndOperandBuffer::ValueProfileAndOperandBuffer): Deleted.
(JSC::ValueProfileAndOperandBuffer::~ValueProfileAndOperandBuffer): Deleted.
(JSC::ValueProfileAndOperandBuffer::forEach): Deleted.
* bytecode/ValueRecovery.cpp:
(JSC::ValueRecovery::recover const):
* bytecode/ValueRecovery.h:
* bytecode/VirtualRegister.h:
(JSC::virtualRegisterIsLocal):
(JSC::virtualRegisterIsArgument):
(JSC::VirtualRegister::VirtualRegister):
(JSC::VirtualRegister::isValid const):
(JSC::VirtualRegister::isLocal const):
(JSC::VirtualRegister::isArgument const):
(JSC::VirtualRegister::isConstant const):
(JSC::VirtualRegister::toConstantIndex const):
(JSC::operandIsLocal): Deleted.
(JSC::operandIsArgument): Deleted.
* bytecompiler/BytecodeGenerator.cpp:
(JSC::BytecodeGenerator::initializeNextParameter):
(JSC::BytecodeGenerator::initializeParameters):
(JSC::BytecodeGenerator::emitEqualityOpImpl):
(JSC::BytecodeGenerator::emitCallVarargs):
* bytecompiler/BytecodeGenerator.h:
(JSC::BytecodeGenerator::setUsesCheckpoints):
* bytecompiler/RegisterID.h:
(JSC::RegisterID::setIndex):
* dfg/DFGAbstractHeap.cpp:
(JSC::DFG::AbstractHeap::Payload::dumpAsOperand const):
(JSC::DFG::AbstractHeap::dump const):
* dfg/DFGAbstractHeap.h:
(JSC::DFG::AbstractHeap::Payload::Payload):
(JSC::DFG::AbstractHeap::AbstractHeap):
(JSC::DFG::AbstractHeap::operand const):
* dfg/DFGAbstractInterpreterInlines.h:
(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
* dfg/DFGArgumentPosition.h:
(JSC::DFG::ArgumentPosition::dump):
* dfg/DFGArgumentsEliminationPhase.cpp:
* dfg/DFGArgumentsUtilities.cpp:
(JSC::DFG::argumentsInvolveStackSlot):
(JSC::DFG::emitCodeToGetArgumentsArrayLength):
* dfg/DFGArgumentsUtilities.h:
* dfg/DFGAtTailAbstractState.h:
(JSC::DFG::AtTailAbstractState::operand):
* dfg/DFGAvailabilityMap.cpp:
(JSC::DFG::AvailabilityMap::pruneByLiveness):
* dfg/DFGAvailabilityMap.h:
(JSC::DFG::AvailabilityMap::closeStartingWithLocal):
* dfg/DFGBasicBlock.cpp:
(JSC::DFG::BasicBlock::BasicBlock):
(JSC::DFG::BasicBlock::ensureTmps):
* dfg/DFGBasicBlock.h:
* dfg/DFGBlockInsertionSet.cpp:
(JSC::DFG::BlockInsertionSet::insert):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::ByteCodeParser):
(JSC::DFG::ByteCodeParser::ensureTmps):
(JSC::DFG::ByteCodeParser::progressToNextCheckpoint):
(JSC::DFG::ByteCodeParser::newVariableAccessData):
(JSC::DFG::ByteCodeParser::getDirect):
(JSC::DFG::ByteCodeParser::get):
(JSC::DFG::ByteCodeParser::setDirect):
(JSC::DFG::ByteCodeParser::injectLazyOperandSpeculation):
(JSC::DFG::ByteCodeParser::getLocalOrTmp):
(JSC::DFG::ByteCodeParser::setLocalOrTmp):
(JSC::DFG::ByteCodeParser::setArgument):
(JSC::DFG::ByteCodeParser::findArgumentPositionForLocal):
(JSC::DFG::ByteCodeParser::findArgumentPosition):
(JSC::DFG::ByteCodeParser::flushImpl):
(JSC::DFG::ByteCodeParser::flushForTerminalImpl):
(JSC::DFG::ByteCodeParser::flush):
(JSC::DFG::ByteCodeParser::flushDirect):
(JSC::DFG::ByteCodeParser::addFlushOrPhantomLocal):
(JSC::DFG::ByteCodeParser::phantomLocalDirect):
(JSC::DFG::ByteCodeParser::flushForTerminal):
(JSC::DFG::ByteCodeParser::addToGraph):
(JSC::DFG::ByteCodeParser::InlineStackEntry::remapOperand const):
(JSC::DFG::ByteCodeParser::DelayedSetLocal::DelayedSetLocal):
(JSC::DFG::ByteCodeParser::DelayedSetLocal::execute):
(JSC::DFG::ByteCodeParser::allocateTargetableBlock):
(JSC::DFG::ByteCodeParser::allocateUntargetableBlock):
(JSC::DFG::ByteCodeParser::handleRecursiveTailCall):
(JSC::DFG::ByteCodeParser::inlineCall):
(JSC::DFG::ByteCodeParser::handleVarargsInlining):
(JSC::DFG::ByteCodeParser::handleInlining):
(JSC::DFG::ByteCodeParser::parseBlock):
(JSC::DFG::ByteCodeParser::InlineStackEntry::InlineStackEntry):
(JSC::DFG::ByteCodeParser::parse):
(JSC::DFG::ByteCodeParser::getLocal): Deleted.
(JSC::DFG::ByteCodeParser::setLocal): Deleted.
* dfg/DFGCFAPhase.cpp:
(JSC::DFG::CFAPhase::injectOSR):
* dfg/DFGCPSRethreadingPhase.cpp:
(JSC::DFG::CPSRethreadingPhase::run):
(JSC::DFG::CPSRethreadingPhase::canonicalizeGetLocal):
(JSC::DFG::CPSRethreadingPhase::canonicalizeFlushOrPhantomLocalFor):
(JSC::DFG::CPSRethreadingPhase::canonicalizeFlushOrPhantomLocal):
(JSC::DFG::CPSRethreadingPhase::canonicalizeSet):
(JSC::DFG::CPSRethreadingPhase::canonicalizeLocalsInBlock):
(JSC::DFG::CPSRethreadingPhase::propagatePhis):
(JSC::DFG::CPSRethreadingPhase::phiStackFor):
* dfg/DFGCSEPhase.cpp:
* dfg/DFGCapabilities.cpp:
(JSC::DFG::capabilityLevel):
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGCombinedLiveness.cpp:
(JSC::DFG::addBytecodeLiveness):
* dfg/DFGCommonData.cpp:
(JSC::DFG::CommonData::addCodeOrigin):
(JSC::DFG::CommonData::addUniqueCallSiteIndex):
(JSC::DFG::CommonData::lastCallSite const):
* dfg/DFGConstantFoldingPhase.cpp:
(JSC::DFG::ConstantFoldingPhase::foldConstants):
* dfg/DFGDoesGC.cpp:
(JSC::DFG::doesGC):
* dfg/DFGDriver.cpp:
(JSC::DFG::compileImpl):
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::fixupNode):
* dfg/DFGForAllKills.h:
(JSC::DFG::forAllKilledOperands):
(JSC::DFG::forAllKilledNodesAtNodeIndex):
(JSC::DFG::forAllKillsInBlock):
* dfg/DFGGraph.cpp:
(JSC::DFG::Graph::dump):
(JSC::DFG::Graph::dumpBlockHeader):
(JSC::DFG::Graph::substituteGetLocal):
(JSC::DFG::Graph::isLiveInBytecode):
(JSC::DFG::Graph::localsAndTmpsLiveInBytecode):
(JSC::DFG::Graph::methodOfGettingAValueProfileFor):
(JSC::DFG::Graph::localsLiveInBytecode): Deleted.
* dfg/DFGGraph.h:
(JSC::DFG::Graph::forAllLocalsAndTmpsLiveInBytecode):
(JSC::DFG::Graph::forAllLiveInBytecode):
(JSC::DFG::Graph::forAllLocalsLiveInBytecode): Deleted.
* dfg/DFGInPlaceAbstractState.cpp:
(JSC::DFG::InPlaceAbstractState::InPlaceAbstractState):
* dfg/DFGInPlaceAbstractState.h:
(JSC::DFG::InPlaceAbstractState::operand):
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::linkOSRExits):
(JSC::DFG::JITCompiler::noticeOSREntry):
* dfg/DFGJITCompiler.h:
(JSC::DFG::JITCompiler::emitStoreCallSiteIndex):
* dfg/DFGLiveCatchVariablePreservationPhase.cpp:
(JSC::DFG::LiveCatchVariablePreservationPhase::isValidFlushLocation):
(JSC::DFG::LiveCatchVariablePreservationPhase::handleBlockForTryCatch):
(JSC::DFG::LiveCatchVariablePreservationPhase::newVariableAccessData):
* dfg/DFGMovHintRemovalPhase.cpp:
* dfg/DFGNode.h:
(JSC::DFG::StackAccessData::StackAccessData):
(JSC::DFG::Node::hasArgumentsChild):
(JSC::DFG::Node::argumentsChild):
(JSC::DFG::Node::operand):
(JSC::DFG::Node::hasUnlinkedOperand):
(JSC::DFG::Node::unlinkedOperand):
(JSC::DFG::Node::hasLoadVarargsData):
(JSC::DFG::Node::local): Deleted.
(JSC::DFG::Node::hasUnlinkedLocal): Deleted.
(JSC::DFG::Node::unlinkedLocal): Deleted.
* dfg/DFGNodeType.h:
* dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
(JSC::DFG::OSRAvailabilityAnalysisPhase::run):
(JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
* dfg/DFGOSREntry.cpp:
(JSC::DFG::prepareOSREntry):
(JSC::DFG::prepareCatchOSREntry):
* dfg/DFGOSREntrypointCreationPhase.cpp:
(JSC::DFG::OSREntrypointCreationPhase::run):
* dfg/DFGOSRExit.cpp:
(JSC::DFG::OSRExit::emitRestoreArguments):
(JSC::DFG::OSRExit::compileExit):
(JSC::DFG::jsValueFor): Deleted.
(JSC::DFG::restoreCalleeSavesFor): Deleted.
(JSC::DFG::saveCalleeSavesFor): Deleted.
(JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer): Deleted.
(JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer): Deleted.
(JSC::DFG::saveOrCopyCalleeSavesFor): Deleted.
(JSC::DFG::createDirectArgumentsDuringExit): Deleted.
(JSC::DFG::createClonedArgumentsDuringExit): Deleted.
(JSC::DFG::emitRestoreArguments): Deleted.
(JSC::DFG::OSRExit::executeOSRExit): Deleted.
(JSC::DFG::reifyInlinedCallFrames): Deleted.
(JSC::DFG::adjustAndJumpToTarget): Deleted.
(JSC::DFG::printOSRExit): Deleted.
* dfg/DFGOSRExit.h:
* dfg/DFGOSRExitBase.h:
(JSC::DFG::OSRExitBase::isExitingToCheckpointHandler const):
* dfg/DFGOSRExitCompilerCommon.cpp:
(JSC::DFG::callerReturnPC):
(JSC::DFG::reifyInlinedCallFrames):
(JSC::DFG::adjustAndJumpToTarget):
* dfg/DFGObjectAllocationSinkingPhase.cpp:
* dfg/DFGOpInfo.h:
(JSC::DFG::OpInfo::OpInfo):
* dfg/DFGOperations.cpp:
* dfg/DFGPhantomInsertionPhase.cpp:
* dfg/DFGPreciseLocalClobberize.h:
(JSC::DFG::PreciseLocalClobberizeAdaptor::read):
(JSC::DFG::PreciseLocalClobberizeAdaptor::write):
(JSC::DFG::PreciseLocalClobberizeAdaptor::def):
(JSC::DFG::PreciseLocalClobberizeAdaptor::callIfAppropriate):
* dfg/DFGPredictionInjectionPhase.cpp:
(JSC::DFG::PredictionInjectionPhase::run):
* dfg/DFGPredictionPropagationPhase.cpp:
* dfg/DFGPutStackSinkingPhase.cpp:
* dfg/DFGSSAConversionPhase.cpp:
(JSC::DFG::SSAConversionPhase::run):
* dfg/DFGSafeToExecute.h:
(JSC::DFG::safeToExecute):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::compileMovHint):
(JSC::DFG::SpeculativeJIT::compileCurrentBlock):
(JSC::DFG::SpeculativeJIT::checkArgumentTypes):
(JSC::DFG::SpeculativeJIT::compileVarargsLength):
(JSC::DFG::SpeculativeJIT::compileLoadVarargs):
(JSC::DFG::SpeculativeJIT::compileForwardVarargs):
(JSC::DFG::SpeculativeJIT::compileCreateDirectArguments):
(JSC::DFG::SpeculativeJIT::compileGetArgumentCountIncludingThis):
* dfg/DFGSpeculativeJIT.h:
(JSC::DFG::SpeculativeJIT::recordSetLocal):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGStackLayoutPhase.cpp:
(JSC::DFG::StackLayoutPhase::run):
(JSC::DFG::StackLayoutPhase::assign):
* dfg/DFGStrengthReductionPhase.cpp:
(JSC::DFG::StrengthReductionPhase::handleNode):
* dfg/DFGThunks.cpp:
(JSC::DFG::osrExitThunkGenerator): Deleted.
* dfg/DFGThunks.h:
* dfg/DFGTypeCheckHoistingPhase.cpp:
(JSC::DFG::TypeCheckHoistingPhase::run):
(JSC::DFG::TypeCheckHoistingPhase::disableHoistingAcrossOSREntries):
* dfg/DFGValidate.cpp:
* dfg/DFGVarargsForwardingPhase.cpp:
* dfg/DFGVariableAccessData.cpp:
(JSC::DFG::VariableAccessData::VariableAccessData):
(JSC::DFG::VariableAccessData::shouldUseDoubleFormatAccordingToVote):
(JSC::DFG::VariableAccessData::tallyVotesForShouldUseDoubleFormat):
(JSC::DFG::VariableAccessData::couldRepresentInt52Impl):
* dfg/DFGVariableAccessData.h:
(JSC::DFG::VariableAccessData::operand):
(JSC::DFG::VariableAccessData::local): Deleted.
* dfg/DFGVariableEvent.cpp:
(JSC::DFG::VariableEvent::dump const):
* dfg/DFGVariableEvent.h:
(JSC::DFG::VariableEvent::spill):
(JSC::DFG::VariableEvent::setLocal):
(JSC::DFG::VariableEvent::movHint):
(JSC::DFG::VariableEvent::spillRegister const):
(JSC::DFG::VariableEvent::operand const):
(JSC::DFG::VariableEvent::bytecodeRegister const): Deleted.
* dfg/DFGVariableEventStream.cpp:
(JSC::DFG::VariableEventStream::logEvent):
(JSC::DFG::VariableEventStream::reconstruct const):
* dfg/DFGVariableEventStream.h:
(JSC::DFG::VariableEventStream::appendAndLog):
* ftl/FTLCapabilities.cpp:
(JSC::FTL::canCompile):
* ftl/FTLForOSREntryJITCode.cpp:
(JSC::FTL::ForOSREntryJITCode::ForOSREntryJITCode):
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::lower):
(JSC::FTL::DFG::LowerDFGToB3::compileNode):
(JSC::FTL::DFG::LowerDFGToB3::compileExtractOSREntryLocal):
(JSC::FTL::DFG::LowerDFGToB3::compileGetStack):
(JSC::FTL::DFG::LowerDFGToB3::compileGetCallee):
(JSC::FTL::DFG::LowerDFGToB3::compileSetCallee):
(JSC::FTL::DFG::LowerDFGToB3::compileSetArgumentCountIncludingThis):
(JSC::FTL::DFG::LowerDFGToB3::compileVarargsLength):
(JSC::FTL::DFG::LowerDFGToB3::compileLoadVarargs):
(JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargs):
(JSC::FTL::DFG::LowerDFGToB3::getSpreadLengthFromInlineCallFrame):
(JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargsWithSpread):
(JSC::FTL::DFG::LowerDFGToB3::compileLogShadowChickenPrologue):
(JSC::FTL::DFG::LowerDFGToB3::getArgumentsLength):
(JSC::FTL::DFG::LowerDFGToB3::getCurrentCallee):
(JSC::FTL::DFG::LowerDFGToB3::callPreflight):
(JSC::FTL::DFG::LowerDFGToB3::appendOSRExitDescriptor):
(JSC::FTL::DFG::LowerDFGToB3::buildExitArguments):
(JSC::FTL::DFG::LowerDFGToB3::addressFor):
(JSC::FTL::DFG::LowerDFGToB3::payloadFor):
(JSC::FTL::DFG::LowerDFGToB3::tagFor):
* ftl/FTLOSREntry.cpp:
(JSC::FTL::prepareOSREntry):
* ftl/FTLOSRExit.cpp:
(JSC::FTL::OSRExitDescriptor::OSRExitDescriptor):
* ftl/FTLOSRExit.h:
* ftl/FTLOSRExitCompiler.cpp:
(JSC::FTL::compileStub):
* ftl/FTLOperations.cpp:
(JSC::FTL::operationMaterializeObjectInOSR):
* ftl/FTLOutput.cpp:
(JSC::FTL::Output::select):
* ftl/FTLOutput.h:
* ftl/FTLSelectPredictability.h: Copied from Source/JavaScriptCore/ftl/FTLForOSREntryJITCode.cpp.
* ftl/FTLSlowPathCall.h:
(JSC::FTL::callOperation):
* generator/Checkpoints.rb: Added.
* generator/Opcode.rb:
* generator/Section.rb:
* heap/Heap.cpp:
(JSC::Heap::gatherStackRoots):
* interpreter/CallFrame.cpp:
(JSC::CallFrame::callSiteAsRawBits const):
(JSC::CallFrame::unsafeCallSiteAsRawBits const):
(JSC::CallFrame::callSiteIndex const):
(JSC::CallFrame::unsafeCallSiteIndex const):
(JSC::CallFrame::setCurrentVPC):
(JSC::CallFrame::bytecodeIndex):
(JSC::CallFrame::codeOrigin):
* interpreter/CallFrame.h:
(JSC::CallSiteIndex::CallSiteIndex):
(JSC::CallSiteIndex::operator bool const):
(JSC::CallSiteIndex::operator== const):
(JSC::CallSiteIndex::bits const):
(JSC::CallSiteIndex::fromBits):
(JSC::CallSiteIndex::bytecodeIndex const):
(JSC::DisposableCallSiteIndex::DisposableCallSiteIndex):
(JSC::CallFrame::callee const):
(JSC::CallFrame::unsafeCallee const):
(JSC::CallFrame::addressOfCodeBlock const):
(JSC::CallFrame::argumentCountIncludingThis const):
(JSC::CallFrame::offsetFor):
(JSC::CallFrame::setArgumentCountIncludingThis):
(JSC::CallFrame::setReturnPC):
* interpreter/CallFrameInlines.h:
(JSC::CallFrame::r):
(JSC::CallFrame::uncheckedR):
(JSC::CallFrame::guaranteedJSValueCallee const):
(JSC::CallFrame::jsCallee const):
(JSC::CallFrame::codeBlock const):
(JSC::CallFrame::unsafeCodeBlock const):
(JSC::CallFrame::setCallee):
(JSC::CallFrame::setCodeBlock):
* interpreter/CheckpointOSRExitSideState.h: Copied from Source/JavaScriptCore/dfg/DFGThunks.h.
* interpreter/Interpreter.cpp:
(JSC::eval):
(JSC::sizeOfVarargs):
(JSC::loadVarargs):
(JSC::setupVarargsFrame):
(JSC::UnwindFunctor::operator() const):
(JSC::Interpreter::executeCall):
(JSC::Interpreter::executeConstruct):
* interpreter/Interpreter.h:
* interpreter/StackVisitor.cpp:
(JSC::StackVisitor::readInlinedFrame):
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::emitGetFromCallFrameHeaderPtr):
(JSC::AssemblyHelpers::emitGetFromCallFrameHeader32):
(JSC::AssemblyHelpers::emitGetFromCallFrameHeader64):
(JSC::AssemblyHelpers::emitPutToCallFrameHeader):
(JSC::AssemblyHelpers::emitPutToCallFrameHeaderBeforePrologue):
(JSC::AssemblyHelpers::emitPutPayloadToCallFrameHeaderBeforePrologue):
(JSC::AssemblyHelpers::emitPutTagToCallFrameHeaderBeforePrologue):
(JSC::AssemblyHelpers::addressFor):
(JSC::AssemblyHelpers::tagFor):
(JSC::AssemblyHelpers::payloadFor):
(JSC::AssemblyHelpers::calleeFrameSlot):
(JSC::AssemblyHelpers::calleeArgumentSlot):
(JSC::AssemblyHelpers::calleeFrameTagSlot):
(JSC::AssemblyHelpers::calleeFramePayloadSlot):
(JSC::AssemblyHelpers::calleeFrameCallerFrame):
(JSC::AssemblyHelpers::argumentCount):
* jit/CallFrameShuffler.cpp:
(JSC::CallFrameShuffler::CallFrameShuffler):
* jit/CallFrameShuffler.h:
(JSC::CallFrameShuffler::setCalleeJSValueRegs):
(JSC::CallFrameShuffler::assumeCalleeIsCell):
* jit/JIT.h:
* jit/JITArithmetic.cpp:
(JSC::JIT::emit_op_unsigned):
(JSC::JIT::emit_compareAndJump):
(JSC::JIT::emit_compareAndJumpImpl):
(JSC::JIT::emit_compareUnsignedAndJump):
(JSC::JIT::emit_compareUnsignedAndJumpImpl):
(JSC::JIT::emit_compareUnsigned):
(JSC::JIT::emit_compareUnsignedImpl):
(JSC::JIT::emit_compareAndJumpSlow):
(JSC::JIT::emit_compareAndJumpSlowImpl):
(JSC::JIT::emit_op_inc):
(JSC::JIT::emit_op_dec):
(JSC::JIT::emit_op_mod):
(JSC::JIT::emitBitBinaryOpFastPath):
(JSC::JIT::emit_op_bitnot):
(JSC::JIT::emitRightShiftFastPath):
(JSC::JIT::emitMathICFast):
(JSC::JIT::emitMathICSlow):
(JSC::JIT::emit_op_div):
* jit/JITCall.cpp:
(JSC::JIT::emitPutCallResult):
(JSC::JIT::compileSetupFrame):
(JSC::JIT::compileOpCall):
* jit/JITExceptions.cpp:
(JSC::genericUnwind):
* jit/JITInlines.h:
(JSC::JIT::isOperandConstantDouble):
(JSC::JIT::getConstantOperand):
(JSC::JIT::emitPutIntToCallFrameHeader):
(JSC::JIT::appendCallWithExceptionCheckSetJSValueResult):
(JSC::JIT::appendCallWithExceptionCheckSetJSValueResultWithProfile):
(JSC::JIT::linkSlowCaseIfNotJSCell):
(JSC::JIT::isOperandConstantChar):
(JSC::JIT::getOperandConstantInt):
(JSC::JIT::getOperandConstantDouble):
(JSC::JIT::emitInitRegister):
(JSC::JIT::emitLoadTag):
(JSC::JIT::emitLoadPayload):
(JSC::JIT::emitGet):
(JSC::JIT::emitPutVirtualRegister):
(JSC::JIT::emitLoad):
(JSC::JIT::emitLoad2):
(JSC::JIT::emitLoadDouble):
(JSC::JIT::emitLoadInt32ToDouble):
(JSC::JIT::emitStore):
(JSC::JIT::emitStoreInt32):
(JSC::JIT::emitStoreCell):
(JSC::JIT::emitStoreBool):
(JSC::JIT::emitStoreDouble):
(JSC::JIT::emitJumpSlowCaseIfNotJSCell):
(JSC::JIT::isOperandConstantInt):
(JSC::JIT::emitGetVirtualRegister):
(JSC::JIT::emitGetVirtualRegisters):
* jit/JITOpcodes.cpp:
(JSC::JIT::emit_op_mov):
(JSC::JIT::emit_op_end):
(JSC::JIT::emit_op_new_object):
(JSC::JIT::emitSlow_op_new_object):
(JSC::JIT::emit_op_overrides_has_instance):
(JSC::JIT::emit_op_instanceof):
(JSC::JIT::emitSlow_op_instanceof):
(JSC::JIT::emit_op_is_empty):
(JSC::JIT::emit_op_is_undefined):
(JSC::JIT::emit_op_is_undefined_or_null):
(JSC::JIT::emit_op_is_boolean):
(JSC::JIT::emit_op_is_number):
(JSC::JIT::emit_op_is_cell_with_type):
(JSC::JIT::emit_op_is_object):
(JSC::JIT::emit_op_ret):
(JSC::JIT::emit_op_to_primitive):
(JSC::JIT::emit_op_set_function_name):
(JSC::JIT::emit_op_not):
(JSC::JIT::emit_op_jfalse):
(JSC::JIT::emit_op_jeq_null):
(JSC::JIT::emit_op_jneq_null):
(JSC::JIT::emit_op_jundefined_or_null):
(JSC::JIT::emit_op_jnundefined_or_null):
(JSC::JIT::emit_op_jneq_ptr):
(JSC::JIT::emit_op_eq):
(JSC::JIT::emit_op_jeq):
(JSC::JIT::emit_op_jtrue):
(JSC::JIT::emit_op_neq):
(JSC::JIT::emit_op_jneq):
(JSC::JIT::emit_op_throw):
(JSC::JIT::compileOpStrictEq):
(JSC::JIT::compileOpStrictEqJump):
(JSC::JIT::emit_op_to_number):
(JSC::JIT::emit_op_to_numeric):
(JSC::JIT::emit_op_to_string):
(JSC::JIT::emit_op_to_object):
(JSC::JIT::emit_op_catch):
(JSC::JIT::emit_op_get_parent_scope):
(JSC::JIT::emit_op_switch_imm):
(JSC::JIT::emit_op_switch_char):
(JSC::JIT::emit_op_switch_string):
(JSC::JIT::emit_op_eq_null):
(JSC::JIT::emit_op_neq_null):
(JSC::JIT::emit_op_enter):
(JSC::JIT::emit_op_get_scope):
(JSC::JIT::emit_op_to_this):
(JSC::JIT::emit_op_create_this):
(JSC::JIT::emit_op_check_tdz):
(JSC::JIT::emitSlow_op_eq):
(JSC::JIT::emitSlow_op_neq):
(JSC::JIT::emitSlow_op_instanceof_custom):
(JSC::JIT::emit_op_new_regexp):
(JSC::JIT::emitNewFuncCommon):
(JSC::JIT::emitNewFuncExprCommon):
(JSC::JIT::emit_op_new_array):
(JSC::JIT::emit_op_new_array_with_size):
(JSC::JIT::emit_op_has_structure_property):
(JSC::JIT::emit_op_has_indexed_property):
(JSC::JIT::emitSlow_op_has_indexed_property):
(JSC::JIT::emit_op_get_direct_pname):
(JSC::JIT::emit_op_enumerator_structure_pname):
(JSC::JIT::emit_op_enumerator_generic_pname):
(JSC::JIT::emit_op_profile_type):
(JSC::JIT::emit_op_log_shadow_chicken_prologue):
(JSC::JIT::emit_op_log_shadow_chicken_tail):
(JSC::JIT::emit_op_argument_count):
(JSC::JIT::emit_op_get_rest_length):
(JSC::JIT::emit_op_get_argument):
* jit/JITOpcodes32_64.cpp:
(JSC::JIT::emit_op_catch):
* jit/JITOperations.cpp:
* jit/JITPropertyAccess.cpp:
(JSC::JIT::emit_op_get_by_val):
(JSC::JIT::emitSlow_op_get_by_val):
(JSC::JIT::emit_op_put_by_val):
(JSC::JIT::emitGenericContiguousPutByVal):
(JSC::JIT::emitArrayStoragePutByVal):
(JSC::JIT::emitPutByValWithCachedId):
(JSC::JIT::emitSlow_op_put_by_val):
(JSC::JIT::emit_op_put_getter_by_id):
(JSC::JIT::emit_op_put_setter_by_id):
(JSC::JIT::emit_op_put_getter_setter_by_id):
(JSC::JIT::emit_op_put_getter_by_val):
(JSC::JIT::emit_op_put_setter_by_val):
(JSC::JIT::emit_op_del_by_id):
(JSC::JIT::emit_op_del_by_val):
(JSC::JIT::emit_op_try_get_by_id):
(JSC::JIT::emitSlow_op_try_get_by_id):
(JSC::JIT::emit_op_get_by_id_direct):
(JSC::JIT::emitSlow_op_get_by_id_direct):
(JSC::JIT::emit_op_get_by_id):
(JSC::JIT::emit_op_get_by_id_with_this):
(JSC::JIT::emitSlow_op_get_by_id):
(JSC::JIT::emitSlow_op_get_by_id_with_this):
(JSC::JIT::emit_op_put_by_id):
(JSC::JIT::emit_op_in_by_id):
(JSC::JIT::emitSlow_op_in_by_id):
(JSC::JIT::emitResolveClosure):
(JSC::JIT::emit_op_resolve_scope):
(JSC::JIT::emitLoadWithStructureCheck):
(JSC::JIT::emitGetClosureVar):
(JSC::JIT::emit_op_get_from_scope):
(JSC::JIT::emitSlow_op_get_from_scope):
(JSC::JIT::emitPutGlobalVariable):
(JSC::JIT::emitPutGlobalVariableIndirect):
(JSC::JIT::emitPutClosureVar):
(JSC::JIT::emit_op_put_to_scope):
(JSC::JIT::emit_op_get_from_arguments):
(JSC::JIT::emit_op_put_to_arguments):
(JSC::JIT::emitWriteBarrier):
(JSC::JIT::emit_op_get_internal_field):
(JSC::JIT::emit_op_put_internal_field):
(JSC::JIT::emitIntTypedArrayPutByVal):
(JSC::JIT::emitFloatTypedArrayPutByVal):
* jit/JSInterfaceJIT.h:
(JSC::JSInterfaceJIT::emitLoadJSCell):
(JSC::JSInterfaceJIT::emitJumpIfNotJSCell):
(JSC::JSInterfaceJIT::emitLoadInt32):
(JSC::JSInterfaceJIT::emitLoadDouble):
(JSC::JSInterfaceJIT::emitGetFromCallFrameHeaderPtr):
(JSC::JSInterfaceJIT::emitPutToCallFrameHeader):
(JSC::JSInterfaceJIT::emitPutCellToCallFrameHeader):
* jit/SetupVarargsFrame.cpp:
(JSC::emitSetupVarargsFrameFastCase):
* jit/SpecializedThunkJIT.h:
(JSC::SpecializedThunkJIT::loadDoubleArgument):
(JSC::SpecializedThunkJIT::loadCellArgument):
(JSC::SpecializedThunkJIT::loadInt32Argument):
* jit/ThunkGenerators.cpp:
(JSC::absThunkGenerator):
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::getNonConstantOperand):
(JSC::LLInt::getOperand):
(JSC::LLInt::genericCall):
(JSC::LLInt::varargsSetup):
(JSC::LLInt::commonCallEval):
(JSC::LLInt::LLINT_SLOW_PATH_DECL):
(JSC::LLInt::handleVarargsCheckpoint):
(JSC::LLInt::dispatchToNextInstruction):
(JSC::LLInt::slow_path_checkpoint_osr_exit_from_inlined_call):
(JSC::LLInt::slow_path_checkpoint_osr_exit):
(JSC::LLInt::llint_throw_stack_overflow_error):
* llint/LLIntSlowPaths.h:
* llint/LowLevelInterpreter.asm:
* llint/LowLevelInterpreter32_64.asm:
* llint/LowLevelInterpreter64.asm:
* runtime/ArgList.h:
(JSC::MarkedArgumentBuffer::fill):
* runtime/CachedTypes.cpp:
(JSC::CachedCodeBlock::hasCheckpoints const):
(JSC::UnlinkedCodeBlock::UnlinkedCodeBlock):
(JSC::CachedCodeBlock<CodeBlockType>::encode):
* runtime/CommonSlowPaths.cpp:
(JSC::SLOW_PATH_DECL):
* runtime/ConstructData.cpp:
(JSC::construct):
* runtime/ConstructData.h:
* runtime/DirectArguments.cpp:
(JSC::DirectArguments::copyToArguments):
* runtime/DirectArguments.h:
* runtime/GenericArguments.h:
* runtime/GenericArgumentsInlines.h:
(JSC::GenericArguments<Type>::copyToArguments):
* runtime/JSArray.cpp:
(JSC::JSArray::copyToArguments):
* runtime/JSArray.h:
* runtime/JSImmutableButterfly.cpp:
(JSC::JSImmutableButterfly::copyToArguments):
* runtime/JSImmutableButterfly.h:
* runtime/JSLock.cpp:
(JSC::JSLock::willReleaseLock):
* runtime/ModuleProgramExecutable.cpp:
(JSC::ModuleProgramExecutable::create):
* runtime/Options.cpp:
(JSC::recomputeDependentOptions):
* runtime/ScopedArguments.cpp:
(JSC::ScopedArguments::copyToArguments):
* runtime/ScopedArguments.h:
* runtime/VM.cpp:
(JSC::VM::addCheckpointOSRSideState):
(JSC::VM::findCheckpointOSRSideState):
(JSC::VM::scanSideState const):
* runtime/VM.h:
(JSC::VM::hasCheckpointOSRSideState const):
* tools/VMInspector.cpp:
(JSC::VMInspector::dumpRegisters):
* wasm/WasmFunctionCodeBlock.h:
(JSC::Wasm::FunctionCodeBlock::getConstant const):
(JSC::Wasm::FunctionCodeBlock::getConstantType const):
* wasm/WasmLLIntGenerator.cpp:
(JSC::Wasm::LLIntGenerator::setUsesCheckpoints const):
* wasm/WasmOperations.cpp:
(JSC::Wasm::operationWasmToJSException):
* wasm/WasmSlowPaths.cpp:

Source/WTF:

* WTF.xcodeproj/project.pbxproj:
* wtf/Bitmap.h:
(WTF::WordType>::invert):
(WTF::WordType>::operator):
(WTF::WordType>::operator const const):
* wtf/CMakeLists.txt:
* wtf/EnumClassOperatorOverloads.h: Added.
* wtf/FastBitVector.h:
(WTF::FastBitReference::operator bool const):
(WTF::FastBitReference::operator|=):
(WTF::FastBitReference::operator&=):
(WTF::FastBitVector::fill):
(WTF::FastBitVector::grow):
* wtf/UnalignedAccess.h:
(WTF::unalignedLoad):
(WTF::unalignedStore):

Tools:

* Scripts/run-jsc-stress-tests:

git-svn-id: http://svn.webkit.org/repository/webkit/trunk@253896 268f45cc-cd09-0410-ab3c-d52691b4dbfc

190 files changed:
JSTests/ChangeLog
JSTests/stress/apply-osr-exit-should-get-length-once-exceptions-occasionally.js [new file with mode: 0644]
JSTests/stress/apply-osr-exit-should-get-length-once.js [new file with mode: 0644]
JSTests/stress/load-varargs-then-inlined-call-and-exit-strict.js
JSTests/stress/recursive-tail-call-with-different-argument-count.js
JSTests/stress/rest-varargs-osr-exit-to-checkpoint.js [new file with mode: 0644]
Source/JavaScriptCore/CMakeLists.txt
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/DerivedSources-input.xcfilelist
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/assembler/MacroAssemblerCodeRef.h
Source/JavaScriptCore/assembler/ProbeFrame.h
Source/JavaScriptCore/b3/testb3.h
Source/JavaScriptCore/bytecode/AccessCase.cpp
Source/JavaScriptCore/bytecode/AccessCaseSnippetParams.cpp
Source/JavaScriptCore/bytecode/BytecodeDumper.cpp
Source/JavaScriptCore/bytecode/BytecodeDumper.h
Source/JavaScriptCore/bytecode/BytecodeIndex.cpp
Source/JavaScriptCore/bytecode/BytecodeIndex.h
Source/JavaScriptCore/bytecode/BytecodeList.rb
Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.cpp
Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysis.h
Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysisInlines.h
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/bytecode/CodeBlock.h
Source/JavaScriptCore/bytecode/CodeOrigin.h
Source/JavaScriptCore/bytecode/FullBytecodeLiveness.h
Source/JavaScriptCore/bytecode/InlineCallFrame.h
Source/JavaScriptCore/bytecode/LazyOperandValueProfile.h
Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.cpp
Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.h
Source/JavaScriptCore/bytecode/Operands.h
Source/JavaScriptCore/bytecode/OperandsInlines.h
Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.cpp
Source/JavaScriptCore/bytecode/UnlinkedCodeBlock.h
Source/JavaScriptCore/bytecode/ValueProfile.h
Source/JavaScriptCore/bytecode/ValueRecovery.cpp
Source/JavaScriptCore/bytecode/ValueRecovery.h
Source/JavaScriptCore/bytecode/VirtualRegister.h
Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h
Source/JavaScriptCore/bytecompiler/RegisterID.h
Source/JavaScriptCore/dfg/DFGAbstractHeap.cpp
Source/JavaScriptCore/dfg/DFGAbstractHeap.h
Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
Source/JavaScriptCore/dfg/DFGArgumentPosition.h
Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp
Source/JavaScriptCore/dfg/DFGArgumentsUtilities.cpp
Source/JavaScriptCore/dfg/DFGArgumentsUtilities.h
Source/JavaScriptCore/dfg/DFGAtTailAbstractState.h
Source/JavaScriptCore/dfg/DFGAvailabilityMap.cpp
Source/JavaScriptCore/dfg/DFGAvailabilityMap.h
Source/JavaScriptCore/dfg/DFGBasicBlock.cpp
Source/JavaScriptCore/dfg/DFGBasicBlock.h
Source/JavaScriptCore/dfg/DFGBlockInsertionSet.cpp
Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
Source/JavaScriptCore/dfg/DFGCFAPhase.cpp
Source/JavaScriptCore/dfg/DFGCPSRethreadingPhase.cpp
Source/JavaScriptCore/dfg/DFGCSEPhase.cpp
Source/JavaScriptCore/dfg/DFGCapabilities.cpp
Source/JavaScriptCore/dfg/DFGClobberize.h
Source/JavaScriptCore/dfg/DFGCombinedLiveness.cpp
Source/JavaScriptCore/dfg/DFGCommonData.cpp
Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp
Source/JavaScriptCore/dfg/DFGDoesGC.cpp
Source/JavaScriptCore/dfg/DFGDriver.cpp
Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
Source/JavaScriptCore/dfg/DFGForAllKills.h
Source/JavaScriptCore/dfg/DFGGraph.cpp
Source/JavaScriptCore/dfg/DFGGraph.h
Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.cpp
Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.h
Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
Source/JavaScriptCore/dfg/DFGJITCompiler.h
Source/JavaScriptCore/dfg/DFGLiveCatchVariablePreservationPhase.cpp
Source/JavaScriptCore/dfg/DFGMovHintRemovalPhase.cpp
Source/JavaScriptCore/dfg/DFGNode.h
Source/JavaScriptCore/dfg/DFGNodeType.h
Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
Source/JavaScriptCore/dfg/DFGOSREntry.cpp
Source/JavaScriptCore/dfg/DFGOSREntrypointCreationPhase.cpp
Source/JavaScriptCore/dfg/DFGOSRExit.cpp
Source/JavaScriptCore/dfg/DFGOSRExit.h
Source/JavaScriptCore/dfg/DFGOSRExitBase.h
Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp
Source/JavaScriptCore/dfg/DFGOpInfo.h
Source/JavaScriptCore/dfg/DFGOperations.cpp
Source/JavaScriptCore/dfg/DFGPhantomInsertionPhase.cpp
Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
Source/JavaScriptCore/dfg/DFGPredictionInjectionPhase.cpp
Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
Source/JavaScriptCore/dfg/DFGPutStackSinkingPhase.cpp
Source/JavaScriptCore/dfg/DFGSSAConversionPhase.cpp
Source/JavaScriptCore/dfg/DFGSafeToExecute.h
Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
Source/JavaScriptCore/dfg/DFGStackLayoutPhase.cpp
Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
Source/JavaScriptCore/dfg/DFGThunks.cpp
Source/JavaScriptCore/dfg/DFGThunks.h
Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp
Source/JavaScriptCore/dfg/DFGValidate.cpp
Source/JavaScriptCore/dfg/DFGVarargsForwardingPhase.cpp
Source/JavaScriptCore/dfg/DFGVariableAccessData.cpp
Source/JavaScriptCore/dfg/DFGVariableAccessData.h
Source/JavaScriptCore/dfg/DFGVariableEvent.cpp
Source/JavaScriptCore/dfg/DFGVariableEvent.h
Source/JavaScriptCore/dfg/DFGVariableEventStream.cpp
Source/JavaScriptCore/dfg/DFGVariableEventStream.h
Source/JavaScriptCore/ftl/FTLCapabilities.cpp
Source/JavaScriptCore/ftl/FTLForOSREntryJITCode.cpp
Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
Source/JavaScriptCore/ftl/FTLOSREntry.cpp
Source/JavaScriptCore/ftl/FTLOSRExit.cpp
Source/JavaScriptCore/ftl/FTLOSRExit.h
Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp
Source/JavaScriptCore/ftl/FTLOperations.cpp
Source/JavaScriptCore/ftl/FTLOutput.cpp
Source/JavaScriptCore/ftl/FTLOutput.h
Source/JavaScriptCore/ftl/FTLSelectPredictability.h [new file with mode: 0644]
Source/JavaScriptCore/ftl/FTLSlowPathCall.h
Source/JavaScriptCore/generator/Checkpoints.rb [new file with mode: 0644]
Source/JavaScriptCore/generator/Opcode.rb
Source/JavaScriptCore/generator/Section.rb
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/interpreter/CallFrame.cpp
Source/JavaScriptCore/interpreter/CallFrame.h
Source/JavaScriptCore/interpreter/CallFrameInlines.h
Source/JavaScriptCore/interpreter/CheckpointOSRExitSideState.h [new file with mode: 0644]
Source/JavaScriptCore/interpreter/Interpreter.cpp
Source/JavaScriptCore/interpreter/Interpreter.h
Source/JavaScriptCore/interpreter/StackVisitor.cpp
Source/JavaScriptCore/jit/AssemblyHelpers.h
Source/JavaScriptCore/jit/CallFrameShuffler.cpp
Source/JavaScriptCore/jit/CallFrameShuffler.h
Source/JavaScriptCore/jit/JIT.h
Source/JavaScriptCore/jit/JITArithmetic.cpp
Source/JavaScriptCore/jit/JITCall.cpp
Source/JavaScriptCore/jit/JITExceptions.cpp
Source/JavaScriptCore/jit/JITInlines.h
Source/JavaScriptCore/jit/JITOpcodes.cpp
Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
Source/JavaScriptCore/jit/JITOperations.cpp
Source/JavaScriptCore/jit/JITPropertyAccess.cpp
Source/JavaScriptCore/jit/JSInterfaceJIT.h
Source/JavaScriptCore/jit/SetupVarargsFrame.cpp
Source/JavaScriptCore/jit/SpecializedThunkJIT.h
Source/JavaScriptCore/jit/ThunkGenerators.cpp
Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
Source/JavaScriptCore/llint/LLIntSlowPaths.h
Source/JavaScriptCore/llint/LowLevelInterpreter.asm
Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
Source/JavaScriptCore/runtime/ArgList.h
Source/JavaScriptCore/runtime/CachedTypes.cpp
Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
Source/JavaScriptCore/runtime/ConstructData.cpp
Source/JavaScriptCore/runtime/ConstructData.h
Source/JavaScriptCore/runtime/DirectArguments.cpp
Source/JavaScriptCore/runtime/DirectArguments.h
Source/JavaScriptCore/runtime/GenericArguments.h
Source/JavaScriptCore/runtime/GenericArgumentsInlines.h
Source/JavaScriptCore/runtime/JSArray.cpp
Source/JavaScriptCore/runtime/JSArray.h
Source/JavaScriptCore/runtime/JSImmutableButterfly.cpp
Source/JavaScriptCore/runtime/JSImmutableButterfly.h
Source/JavaScriptCore/runtime/JSLock.cpp
Source/JavaScriptCore/runtime/ModuleProgramExecutable.cpp
Source/JavaScriptCore/runtime/Options.cpp
Source/JavaScriptCore/runtime/ScopedArguments.cpp
Source/JavaScriptCore/runtime/ScopedArguments.h
Source/JavaScriptCore/runtime/VM.cpp
Source/JavaScriptCore/runtime/VM.h
Source/JavaScriptCore/tools/VMInspector.cpp
Source/JavaScriptCore/wasm/WasmFunctionCodeBlock.h
Source/JavaScriptCore/wasm/WasmLLIntGenerator.cpp
Source/JavaScriptCore/wasm/WasmOperations.cpp
Source/JavaScriptCore/wasm/WasmSlowPaths.cpp
Source/WTF/ChangeLog
Source/WTF/WTF.xcodeproj/project.pbxproj
Source/WTF/wtf/Bitmap.h
Source/WTF/wtf/CMakeLists.txt
Source/WTF/wtf/EnumClassOperatorOverloads.h [new file with mode: 0644]
Source/WTF/wtf/FastBitVector.h
Source/WTF/wtf/UnalignedAccess.h
Tools/ChangeLog
Tools/Scripts/run-jsc-stress-tests

index ad7d804..af96db2 100644 (file)
@@ -1,3 +1,27 @@
+2019-12-23  Keith Miller  <keith_miller@apple.com>
+
+        DFG/FTL should be able to exit to the middle of a bytecode
+        https://bugs.webkit.org/show_bug.cgi?id=205232
+
+        Reviewed by Saam Barati.
+
+        * stress/apply-osr-exit-should-get-length-once-exceptions-occasionally.js: Added.
+        (expectedArgCount):
+        (callee):
+        (test):
+        (let.array.get length):
+        * stress/apply-osr-exit-should-get-length-once.js: Added.
+        (expectedArgCount):
+        (callee):
+        (test):
+        (let.array.get length):
+        * stress/load-varargs-then-inlined-call-and-exit-strict.js:
+        (checkEqual):
+        * stress/recursive-tail-call-with-different-argument-count.js:
+        * stress/rest-varargs-osr-exit-to-checkpoint.js: Added.
+        (foo):
+        (bar):
+
 2019-12-23  Yusuke Suzuki  <ysuzuki@apple.com>
 
         [JSC] Wasm OSR entry should capture top-most enclosing-stack
diff --git a/JSTests/stress/apply-osr-exit-should-get-length-once-exceptions-occasionally.js b/JSTests/stress/apply-osr-exit-should-get-length-once-exceptions-occasionally.js
new file mode 100644 (file)
index 0000000..f3f15a6
--- /dev/null
@@ -0,0 +1,36 @@
+
+let currentArgCount;
+function expectedArgCount() {
+    return currentArgCount;
+}
+noInline(expectedArgCount);
+
+function callee() {
+    if (arguments.length != expectedArgCount())
+        throw new Error();
+}
+
+function test(array) {
+    callee.apply(undefined, array);
+}
+noInline(test);
+
+let lengthCalls = 0;
+currentArgCount = 2;
+let array = { 0: 1, 1: 2, get length() {
+    if (lengthCalls++ % 10 == 1)
+        throw new Error("throwing an exception in length");
+    return currentArgCount
+} }
+for (let i = 0; i < 1e6; i++) {
+    try {
+        test(array);
+    } catch { }
+}
+
+currentArgCount = 100;
+lengthCalls = 0;
+test(array);
+
+if (lengthCalls !== 1)
+    throw new Error(lengthCalls);
diff --git a/JSTests/stress/apply-osr-exit-should-get-length-once.js b/JSTests/stress/apply-osr-exit-should-get-length-once.js
new file mode 100644 (file)
index 0000000..8bee45c
--- /dev/null
@@ -0,0 +1,31 @@
+
+let currentArgCount;
+function expectedArgCount() {
+    return currentArgCount;
+}
+noInline(expectedArgCount);
+
+function callee() {
+    if (arguments.length != expectedArgCount())
+        throw new Error();
+}
+
+function test(array) {
+    callee.apply(undefined, array);
+}
+noInline(test);
+
+let lengthCalls = 0;
+currentArgCount = 2;
+let array = { 0: 1, 1: 2, get length() { lengthCalls++; return currentArgCount } }
+for (let i = 0; i < 1e5; i++)
+    test(array);
+
+
+test(array);
+currentArgCount = 100;
+lengthCalls = 0;
+test(array);
+
+if (lengthCalls !== 1)
+    throw new Error(lengthCalls);
index 3618f8c..aa73231 100644 (file)
@@ -18,8 +18,8 @@ function checkEqual(a, b) {
         throw "Error: bad value of a: " + a.a + " versus " + b.a;
     if (a.b != b.b)
         throw "Error: bad value of b: " + a.b + " versus " + b.b;
-    if (a.c.length != b.c.length)
-        throw "Error: bad value of c, length mismatch: " + a.c + " versus " + b.c;
+    if (a.c.length !== b.c.length)
+        throw "Error: bad value of c, length mismatch: " + a.c.length + " versus " + b.c.length;
     for (var i = a.c.length; i--;) {
         if (a.c[i] != b.c[i])
             throw "Error: bad value of c, mismatch at i = " + i + ": " + a.c + " versus " + b.c;
index 047fb01..53d752d 100644 (file)
@@ -18,8 +18,8 @@ noInline(bar);
 for (var i = 0; i < 10000; ++i) {
     var result = foo(40, 2);
     if (result !== 42)
-        throw "Wrong result for foo, expected 42, got " + result;
+        throw Error("Wrong result for foo, expected 42, got " + result);
     result = bar(40, 2);
     if (result !== 42)
-        throw "Wrong result for bar, expected 42, got " + result;
+        throw Error("Wrong result for bar, expected 42, got " + result);
 }
diff --git a/JSTests/stress/rest-varargs-osr-exit-to-checkpoint.js b/JSTests/stress/rest-varargs-osr-exit-to-checkpoint.js
new file mode 100644 (file)
index 0000000..87ba055
--- /dev/null
@@ -0,0 +1,21 @@
+"use strict";
+
+function foo(a, b, ...rest) {
+    return rest.length;
+}
+
+function bar(a, b, ...rest) {
+    return foo.call(...rest);
+}
+noInline(bar);
+
+let array = new Array(10);
+for (let i = 0; i < 1e5; ++i) {
+    let result = bar(...array);
+    if (result !== array.length - bar.length - foo.length - 1)
+        throw new Error(i + " " + result);
+}
+
+array.length = 10000;
+if (bar(...array) !== array.length - bar.length - foo.length - 1)
+    throw new Error();
index 3dcce0f..ccef1af 100644 (file)
@@ -525,6 +525,7 @@ set(JavaScriptCore_PRIVATE_FRAMEWORK_HEADERS
     bytecode/ObjectPropertyCondition.h
     bytecode/Opcode.h
     bytecode/OpcodeSize.h
+    bytecode/Operands.h
     bytecode/PropertyCondition.h
     bytecode/PutByIdFlags.h
     bytecode/SpeculatedType.h
index e4e2309..a391d26 100644 (file)
@@ -1,3 +1,843 @@
+2019-12-23  Keith Miller  <keith_miller@apple.com>
+
+        DFG/FTL should be able to exit to the middle of a bytecode
+        https://bugs.webkit.org/show_bug.cgi?id=205232
+
+        Reviewed by Saam Barati.
+
+        It can be valuable to exit to the middle of a bytecode for a couple of reasons.
+        1) It can be used to combine bytecodes that share a majority of their operands, reducing bytecode stream size.
+        2) It enables creating bytecodes that are easier to reconstruct useful optimization information from.
+
+        To make exiting to the middle of a bytecode possible, this patch
+        introduces the concept of a temporary operand. A temporary operand
+        is one that contains the result of effectful operations during the
+        process of executing a bytecode. Tmp operands have no meaning when
+        executing in the LLInt or Baseline and are only used in the DFG to
+        preserve information for OSR exit. We use the term checkpoint to
+        refer to any point where an effectful component of a bytecode
+        executes. For example, in op_call_varargs there are two checkpoints:
+        the first is before we have determined the number of variable
+        arguments and the second is the actual call.
+
+        When the DFG OSR exits, if there are any active checkpoints in the
+        inline call stack, we will emit a JIT probe that allocates a side
+        state object keyed off the frame pointer of the bytecode whose
+        checkpoint needs to be finished. We need side state because we may
+        recursively inline several copies of the same function;
+        alternatively, we could call back into ourselves after OSR and
+        exit again from optimized code before finishing the checkpoint of
+        our caller.
+
+        Another thing we need to be careful of is making sure we remove
+        side state as we unwind for an exception. To make sure we do this
+        correctly, I've added an assertion to JSLock that there are no
+        pending checkpoint side states on the VM when releasing the lock.
+
+        A large amount of this patch is devoted to removing as much code
+        as possible that refers to virtual registers as an int, replacing
+        it with the VirtualRegister class. There are also a couple of new
+        classes/enums added to JSC:
+
+        1) There is now a class, Operand, that represents the combination
+        of a VirtualRegister and a temporary. This is handy in the DFG to
+        model OSR exit values all together. Additionally, Operands<T> has
+        been updated to work with respect to Operand values.
+
+        2) CallFrameSlot is now an enum class instead of a struct of
+        constexpr values. This lets us implicitly convert CallFrameSlots
+        to VirtualRegisters without allowing all ints to implicitly
+        convert.
+
+        3) FTL::SelectPredictability is a new enum that describes to the
+        FTL whether or not we think a select is going to be
+        predictable. SelectPredictability has four options: Unpredictable,
+        Predictable, LeftLikely, and RightLikely. Unpredictable means we
+        think a branch predictor won't do a good job guessing this value,
+        so we should compile the select to a cmov. The other options mean
+        we either think we are going to pick the same value every time or
+        there's a reasonable chance the branch predictor will be able to
+        guess the value.
+
+        In order to validate the correctness of this patch, the various
+        varargs call opcodes have been reworked to use checkpoints. This
+        also fixed a long-standing issue where we could call length
+        getters twice if we OSR exited during LoadVarargs but before the
+        actual call.
+
+        Lastly, we have not enabled the probe-based OSR exit in production
+        for a long time, so this patch removes that code, since it would
+        be a non-trivial amount of work to get checkpoints working with
+        probe OSR.
+
+        * CMakeLists.txt:
+        * DerivedSources-input.xcfilelist:
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * assembler/MacroAssemblerCodeRef.h:
+        * assembler/ProbeFrame.h:
+        (JSC::Probe::Frame::operand):
+        (JSC::Probe::Frame::setOperand):
+        * b3/testb3.h:
+        (populateWithInterestingValues):
+        (floatingPointOperands):
+        * bytecode/AccessCase.cpp:
+        (JSC::AccessCase::generateImpl):
+        * bytecode/AccessCaseSnippetParams.cpp:
+        (JSC::SlowPathCallGeneratorWithArguments::generateImpl):
+        * bytecode/BytecodeDumper.cpp:
+        (JSC::BytecodeDumperBase::dumpValue):
+        (JSC::BytecodeDumper<Block>::registerName const):
+        (JSC::BytecodeDumper<Block>::constantName const):
+        (JSC::Wasm::BytecodeDumper::constantName const):
+        * bytecode/BytecodeDumper.h:
+        * bytecode/BytecodeIndex.cpp:
+        (JSC::BytecodeIndex::dump const):
+        * bytecode/BytecodeIndex.h:
+        (JSC::BytecodeIndex::BytecodeIndex):
+        (JSC::BytecodeIndex::offset const):
+        (JSC::BytecodeIndex::checkpoint const):
+        (JSC::BytecodeIndex::asBits const):
+        (JSC::BytecodeIndex::hash const):
+        (JSC::BytecodeIndex::operator bool const):
+        (JSC::BytecodeIndex::pack):
+        (JSC::BytecodeIndex::fromBits):
+        * bytecode/BytecodeList.rb:
+        * bytecode/BytecodeLivenessAnalysis.cpp:
+        (JSC::enumValuesEqualAsIntegral):
+        (JSC::tmpLivenessForCheckpoint):
+        * bytecode/BytecodeLivenessAnalysis.h:
+        * bytecode/BytecodeLivenessAnalysisInlines.h:
+        (JSC::virtualRegisterIsAlwaysLive):
+        (JSC::virtualRegisterThatIsNotAlwaysLiveIsLive):
+        (JSC::virtualRegisterIsLive):
+        (JSC::operandIsAlwaysLive): Deleted.
+        (JSC::operandThatIsNotAlwaysLiveIsLive): Deleted.
+        (JSC::operandIsLive): Deleted.
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::finishCreation):
+        (JSC::CodeBlock::bytecodeIndexForExit const):
+        (JSC::CodeBlock::ensureCatchLivenessIsComputedForBytecodeIndexSlow):
+        (JSC::CodeBlock::updateAllValueProfilePredictionsAndCountLiveness):
+        * bytecode/CodeBlock.h:
+        (JSC::CodeBlock::numTmps const):
+        (JSC::CodeBlock::isKnownNotImmediate):
+        (JSC::CodeBlock::isTemporaryRegister):
+        (JSC::CodeBlock::constantRegister):
+        (JSC::CodeBlock::getConstant const):
+        (JSC::CodeBlock::constantSourceCodeRepresentation const):
+        (JSC::CodeBlock::replaceConstant):
+        (JSC::CodeBlock::isTemporaryRegisterIndex): Deleted.
+        (JSC::CodeBlock::isConstantRegisterIndex): Deleted.
+        * bytecode/CodeOrigin.h:
+        * bytecode/FullBytecodeLiveness.h:
+        (JSC::FullBytecodeLiveness::virtualRegisterIsLive const):
+        (JSC::FullBytecodeLiveness::operandIsLive const): Deleted.
+        * bytecode/InlineCallFrame.h:
+        (JSC::InlineCallFrame::InlineCallFrame):
+        (JSC::InlineCallFrame::setTmpOffset):
+        (JSC::CodeOrigin::walkUpInlineStack const):
+        (JSC::CodeOrigin::inlineStackContainsActiveCheckpoint const):
+        (JSC::remapOperand):
+        (JSC::unmapOperand):
+        (JSC::CodeOrigin::walkUpInlineStack): Deleted.
+        * bytecode/LazyOperandValueProfile.h:
+        (JSC::LazyOperandValueProfileKey::LazyOperandValueProfileKey):
+        (JSC::LazyOperandValueProfileKey::hash const):
+        (JSC::LazyOperandValueProfileKey::operand const):
+        * bytecode/MethodOfGettingAValueProfile.cpp:
+        (JSC::MethodOfGettingAValueProfile::fromLazyOperand):
+        (JSC::MethodOfGettingAValueProfile::emitReportValue const):
+        (JSC::MethodOfGettingAValueProfile::reportValue):
+        * bytecode/MethodOfGettingAValueProfile.h:
+        * bytecode/Operands.h:
+        (JSC::Operand::Operand):
+        (JSC::Operand::tmp):
+        (JSC::Operand::kind const):
+        (JSC::Operand::value const):
+        (JSC::Operand::virtualRegister const):
+        (JSC::Operand::asBits const):
+        (JSC::Operand::isTmp const):
+        (JSC::Operand::isArgument const):
+        (JSC::Operand::isLocal const):
+        (JSC::Operand::isHeader const):
+        (JSC::Operand::isConstant const):
+        (JSC::Operand::toArgument const):
+        (JSC::Operand::toLocal const):
+        (JSC::Operand::operator== const):
+        (JSC::Operand::isValid const):
+        (JSC::Operand::fromBits):
+        (JSC::Operands::Operands):
+        (JSC::Operands::numberOfLocals const):
+        (JSC::Operands::numberOfTmps const):
+        (JSC::Operands::tmpIndex const):
+        (JSC::Operands::argumentIndex const):
+        (JSC::Operands::localIndex const):
+        (JSC::Operands::tmp):
+        (JSC::Operands::tmp const):
+        (JSC::Operands::argument):
+        (JSC::Operands::argument const):
+        (JSC::Operands::local):
+        (JSC::Operands::local const):
+        (JSC::Operands::sizeFor const):
+        (JSC::Operands::atFor):
+        (JSC::Operands::atFor const):
+        (JSC::Operands::ensureLocals):
+        (JSC::Operands::ensureTmps):
+        (JSC::Operands::getForOperandIndex):
+        (JSC::Operands::getForOperandIndex const):
+        (JSC::Operands::operandIndex const):
+        (JSC::Operands::operand):
+        (JSC::Operands::operand const):
+        (JSC::Operands::hasOperand const):
+        (JSC::Operands::setOperand):
+        (JSC::Operands::at const):
+        (JSC::Operands::at):
+        (JSC::Operands::operator[] const):
+        (JSC::Operands::operator[]):
+        (JSC::Operands::operandForIndex const):
+        (JSC::Operands::operator== const):
+        (JSC::Operands::isArgument const): Deleted.
+        (JSC::Operands::isLocal const): Deleted.
+        (JSC::Operands::virtualRegisterForIndex const): Deleted.
+        (JSC::Operands::setOperandFirstTime): Deleted.
+        * bytecode/OperandsInlines.h:
+        (JSC::Operand::dump const):
+        (JSC::Operands<T>::dumpInContext const):
+        (JSC::Operands<T>::dump const):
+        * bytecode/UnlinkedCodeBlock.cpp:
+        (JSC::UnlinkedCodeBlock::UnlinkedCodeBlock):
+        * bytecode/UnlinkedCodeBlock.h:
+        (JSC::UnlinkedCodeBlock::hasCheckpoints const):
+        (JSC::UnlinkedCodeBlock::setHasCheckpoints):
+        (JSC::UnlinkedCodeBlock::constantRegister const):
+        (JSC::UnlinkedCodeBlock::getConstant const):
+        (JSC::UnlinkedCodeBlock::isConstantRegisterIndex const): Deleted.
+        * bytecode/ValueProfile.h:
+        (JSC::ValueProfileAndVirtualRegisterBuffer::ValueProfileAndVirtualRegisterBuffer):
+        (JSC::ValueProfileAndVirtualRegisterBuffer::~ValueProfileAndVirtualRegisterBuffer):
+        (JSC::ValueProfileAndOperandBuffer::ValueProfileAndOperandBuffer): Deleted.
+        (JSC::ValueProfileAndOperandBuffer::~ValueProfileAndOperandBuffer): Deleted.
+        (JSC::ValueProfileAndOperandBuffer::forEach): Deleted.
+        * bytecode/ValueRecovery.cpp:
+        (JSC::ValueRecovery::recover const):
+        * bytecode/ValueRecovery.h:
+        * bytecode/VirtualRegister.h:
+        (JSC::virtualRegisterIsLocal):
+        (JSC::virtualRegisterIsArgument):
+        (JSC::VirtualRegister::VirtualRegister):
+        (JSC::VirtualRegister::isValid const):
+        (JSC::VirtualRegister::isLocal const):
+        (JSC::VirtualRegister::isArgument const):
+        (JSC::VirtualRegister::isConstant const):
+        (JSC::VirtualRegister::toConstantIndex const):
+        (JSC::operandIsLocal): Deleted.
+        (JSC::operandIsArgument): Deleted.
+        * bytecompiler/BytecodeGenerator.cpp:
+        (JSC::BytecodeGenerator::initializeNextParameter):
+        (JSC::BytecodeGenerator::initializeParameters):
+        (JSC::BytecodeGenerator::emitEqualityOpImpl):
+        (JSC::BytecodeGenerator::emitCallVarargs):
+        * bytecompiler/BytecodeGenerator.h:
+        (JSC::BytecodeGenerator::setUsesCheckpoints):
+        * bytecompiler/RegisterID.h:
+        (JSC::RegisterID::setIndex):
+        * dfg/DFGAbstractHeap.cpp:
+        (JSC::DFG::AbstractHeap::Payload::dumpAsOperand const):
+        (JSC::DFG::AbstractHeap::dump const):
+        * dfg/DFGAbstractHeap.h:
+        (JSC::DFG::AbstractHeap::Payload::Payload):
+        (JSC::DFG::AbstractHeap::AbstractHeap):
+        (JSC::DFG::AbstractHeap::operand const):
+        * dfg/DFGAbstractInterpreterInlines.h:
+        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
+        * dfg/DFGArgumentPosition.h:
+        (JSC::DFG::ArgumentPosition::dump):
+        * dfg/DFGArgumentsEliminationPhase.cpp:
+        * dfg/DFGArgumentsUtilities.cpp:
+        (JSC::DFG::argumentsInvolveStackSlot):
+        (JSC::DFG::emitCodeToGetArgumentsArrayLength):
+        * dfg/DFGArgumentsUtilities.h:
+        * dfg/DFGAtTailAbstractState.h:
+        (JSC::DFG::AtTailAbstractState::operand):
+        * dfg/DFGAvailabilityMap.cpp:
+        (JSC::DFG::AvailabilityMap::pruneByLiveness):
+        * dfg/DFGAvailabilityMap.h:
+        (JSC::DFG::AvailabilityMap::closeStartingWithLocal):
+        * dfg/DFGBasicBlock.cpp:
+        (JSC::DFG::BasicBlock::BasicBlock):
+        (JSC::DFG::BasicBlock::ensureTmps):
+        * dfg/DFGBasicBlock.h:
+        * dfg/DFGBlockInsertionSet.cpp:
+        (JSC::DFG::BlockInsertionSet::insert):
+        * dfg/DFGByteCodeParser.cpp:
+        (JSC::DFG::ByteCodeParser::ByteCodeParser):
+        (JSC::DFG::ByteCodeParser::ensureTmps):
+        (JSC::DFG::ByteCodeParser::progressToNextCheckpoint):
+        (JSC::DFG::ByteCodeParser::newVariableAccessData):
+        (JSC::DFG::ByteCodeParser::getDirect):
+        (JSC::DFG::ByteCodeParser::get):
+        (JSC::DFG::ByteCodeParser::setDirect):
+        (JSC::DFG::ByteCodeParser::injectLazyOperandSpeculation):
+        (JSC::DFG::ByteCodeParser::getLocalOrTmp):
+        (JSC::DFG::ByteCodeParser::setLocalOrTmp):
+        (JSC::DFG::ByteCodeParser::setArgument):
+        (JSC::DFG::ByteCodeParser::findArgumentPositionForLocal):
+        (JSC::DFG::ByteCodeParser::findArgumentPosition):
+        (JSC::DFG::ByteCodeParser::flushImpl):
+        (JSC::DFG::ByteCodeParser::flushForTerminalImpl):
+        (JSC::DFG::ByteCodeParser::flush):
+        (JSC::DFG::ByteCodeParser::flushDirect):
+        (JSC::DFG::ByteCodeParser::addFlushOrPhantomLocal):
+        (JSC::DFG::ByteCodeParser::phantomLocalDirect):
+        (JSC::DFG::ByteCodeParser::flushForTerminal):
+        (JSC::DFG::ByteCodeParser::addToGraph):
+        (JSC::DFG::ByteCodeParser::InlineStackEntry::remapOperand const):
+        (JSC::DFG::ByteCodeParser::DelayedSetLocal::DelayedSetLocal):
+        (JSC::DFG::ByteCodeParser::DelayedSetLocal::execute):
+        (JSC::DFG::ByteCodeParser::allocateTargetableBlock):
+        (JSC::DFG::ByteCodeParser::allocateUntargetableBlock):
+        (JSC::DFG::ByteCodeParser::handleRecursiveTailCall):
+        (JSC::DFG::ByteCodeParser::inlineCall):
+        (JSC::DFG::ByteCodeParser::handleVarargsInlining):
+        (JSC::DFG::ByteCodeParser::handleInlining):
+        (JSC::DFG::ByteCodeParser::parseBlock):
+        (JSC::DFG::ByteCodeParser::InlineStackEntry::InlineStackEntry):
+        (JSC::DFG::ByteCodeParser::parse):
+        (JSC::DFG::ByteCodeParser::getLocal): Deleted.
+        (JSC::DFG::ByteCodeParser::setLocal): Deleted.
+        * dfg/DFGCFAPhase.cpp:
+        (JSC::DFG::CFAPhase::injectOSR):
+        * dfg/DFGCPSRethreadingPhase.cpp:
+        (JSC::DFG::CPSRethreadingPhase::run):
+        (JSC::DFG::CPSRethreadingPhase::canonicalizeGetLocal):
+        (JSC::DFG::CPSRethreadingPhase::canonicalizeFlushOrPhantomLocalFor):
+        (JSC::DFG::CPSRethreadingPhase::canonicalizeFlushOrPhantomLocal):
+        (JSC::DFG::CPSRethreadingPhase::canonicalizeSet):
+        (JSC::DFG::CPSRethreadingPhase::canonicalizeLocalsInBlock):
+        (JSC::DFG::CPSRethreadingPhase::propagatePhis):
+        (JSC::DFG::CPSRethreadingPhase::phiStackFor):
+        * dfg/DFGCSEPhase.cpp:
+        * dfg/DFGCapabilities.cpp:
+        (JSC::DFG::capabilityLevel):
+        * dfg/DFGClobberize.h:
+        (JSC::DFG::clobberize):
+        * dfg/DFGCombinedLiveness.cpp:
+        (JSC::DFG::addBytecodeLiveness):
+        * dfg/DFGCommonData.cpp:
+        (JSC::DFG::CommonData::addCodeOrigin):
+        (JSC::DFG::CommonData::addUniqueCallSiteIndex):
+        (JSC::DFG::CommonData::lastCallSite const):
+        * dfg/DFGConstantFoldingPhase.cpp:
+        (JSC::DFG::ConstantFoldingPhase::foldConstants):
+        * dfg/DFGDoesGC.cpp:
+        (JSC::DFG::doesGC):
+        * dfg/DFGDriver.cpp:
+        (JSC::DFG::compileImpl):
+        * dfg/DFGFixupPhase.cpp:
+        (JSC::DFG::FixupPhase::fixupNode):
+        * dfg/DFGForAllKills.h:
+        (JSC::DFG::forAllKilledOperands):
+        (JSC::DFG::forAllKilledNodesAtNodeIndex):
+        (JSC::DFG::forAllKillsInBlock):
+        * dfg/DFGGraph.cpp:
+        (JSC::DFG::Graph::dump):
+        (JSC::DFG::Graph::dumpBlockHeader):
+        (JSC::DFG::Graph::substituteGetLocal):
+        (JSC::DFG::Graph::isLiveInBytecode):
+        (JSC::DFG::Graph::localsAndTmpsLiveInBytecode):
+        (JSC::DFG::Graph::methodOfGettingAValueProfileFor):
+        (JSC::DFG::Graph::localsLiveInBytecode): Deleted.
+        * dfg/DFGGraph.h:
+        (JSC::DFG::Graph::forAllLocalsAndTmpsLiveInBytecode):
+        (JSC::DFG::Graph::forAllLiveInBytecode):
+        (JSC::DFG::Graph::forAllLocalsLiveInBytecode): Deleted.
+        * dfg/DFGInPlaceAbstractState.cpp:
+        (JSC::DFG::InPlaceAbstractState::InPlaceAbstractState):
+        * dfg/DFGInPlaceAbstractState.h:
+        (JSC::DFG::InPlaceAbstractState::operand):
+        * dfg/DFGJITCompiler.cpp:
+        (JSC::DFG::JITCompiler::linkOSRExits):
+        (JSC::DFG::JITCompiler::noticeOSREntry):
+        * dfg/DFGJITCompiler.h:
+        (JSC::DFG::JITCompiler::emitStoreCallSiteIndex):
+        * dfg/DFGLiveCatchVariablePreservationPhase.cpp:
+        (JSC::DFG::LiveCatchVariablePreservationPhase::isValidFlushLocation):
+        (JSC::DFG::LiveCatchVariablePreservationPhase::handleBlockForTryCatch):
+        (JSC::DFG::LiveCatchVariablePreservationPhase::newVariableAccessData):
+        * dfg/DFGMovHintRemovalPhase.cpp:
+        * dfg/DFGNode.h:
+        (JSC::DFG::StackAccessData::StackAccessData):
+        (JSC::DFG::Node::hasArgumentsChild):
+        (JSC::DFG::Node::argumentsChild):
+        (JSC::DFG::Node::operand):
+        (JSC::DFG::Node::hasUnlinkedOperand):
+        (JSC::DFG::Node::unlinkedOperand):
+        (JSC::DFG::Node::hasLoadVarargsData):
+        (JSC::DFG::Node::local): Deleted.
+        (JSC::DFG::Node::hasUnlinkedLocal): Deleted.
+        (JSC::DFG::Node::unlinkedLocal): Deleted.
+        * dfg/DFGNodeType.h:
+        * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
+        (JSC::DFG::OSRAvailabilityAnalysisPhase::run):
+        (JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
+        * dfg/DFGOSREntry.cpp:
+        (JSC::DFG::prepareOSREntry):
+        (JSC::DFG::prepareCatchOSREntry):
+        * dfg/DFGOSREntrypointCreationPhase.cpp:
+        (JSC::DFG::OSREntrypointCreationPhase::run):
+        * dfg/DFGOSRExit.cpp:
+        (JSC::DFG::OSRExit::emitRestoreArguments):
+        (JSC::DFG::OSRExit::compileExit):
+        (JSC::DFG::jsValueFor): Deleted.
+        (JSC::DFG::restoreCalleeSavesFor): Deleted.
+        (JSC::DFG::saveCalleeSavesFor): Deleted.
+        (JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer): Deleted.
+        (JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer): Deleted.
+        (JSC::DFG::saveOrCopyCalleeSavesFor): Deleted.
+        (JSC::DFG::createDirectArgumentsDuringExit): Deleted.
+        (JSC::DFG::createClonedArgumentsDuringExit): Deleted.
+        (JSC::DFG::emitRestoreArguments): Deleted.
+        (JSC::DFG::OSRExit::executeOSRExit): Deleted.
+        (JSC::DFG::reifyInlinedCallFrames): Deleted.
+        (JSC::DFG::adjustAndJumpToTarget): Deleted.
+        (JSC::DFG::printOSRExit): Deleted.
+        * dfg/DFGOSRExit.h:
+        * dfg/DFGOSRExitBase.h:
+        (JSC::DFG::OSRExitBase::isExitingToCheckpointHandler const):
+        * dfg/DFGOSRExitCompilerCommon.cpp:
+        (JSC::DFG::callerReturnPC):
+        (JSC::DFG::reifyInlinedCallFrames):
+        (JSC::DFG::adjustAndJumpToTarget):
+        * dfg/DFGObjectAllocationSinkingPhase.cpp:
+        * dfg/DFGOpInfo.h:
+        (JSC::DFG::OpInfo::OpInfo):
+        * dfg/DFGOperations.cpp:
+        * dfg/DFGPhantomInsertionPhase.cpp:
+        * dfg/DFGPreciseLocalClobberize.h:
+        (JSC::DFG::PreciseLocalClobberizeAdaptor::read):
+        (JSC::DFG::PreciseLocalClobberizeAdaptor::write):
+        (JSC::DFG::PreciseLocalClobberizeAdaptor::def):
+        (JSC::DFG::PreciseLocalClobberizeAdaptor::callIfAppropriate):
+        * dfg/DFGPredictionInjectionPhase.cpp:
+        (JSC::DFG::PredictionInjectionPhase::run):
+        * dfg/DFGPredictionPropagationPhase.cpp:
+        * dfg/DFGPutStackSinkingPhase.cpp:
+        * dfg/DFGSSAConversionPhase.cpp:
+        (JSC::DFG::SSAConversionPhase::run):
+        * dfg/DFGSafeToExecute.h:
+        (JSC::DFG::safeToExecute):
+        * dfg/DFGSpeculativeJIT.cpp:
+        (JSC::DFG::SpeculativeJIT::compileMovHint):
+        (JSC::DFG::SpeculativeJIT::compileCurrentBlock):
+        (JSC::DFG::SpeculativeJIT::checkArgumentTypes):
+        (JSC::DFG::SpeculativeJIT::compileVarargsLength):
+        (JSC::DFG::SpeculativeJIT::compileLoadVarargs):
+        (JSC::DFG::SpeculativeJIT::compileForwardVarargs):
+        (JSC::DFG::SpeculativeJIT::compileCreateDirectArguments):
+        (JSC::DFG::SpeculativeJIT::compileGetArgumentCountIncludingThis):
+        * dfg/DFGSpeculativeJIT.h:
+        (JSC::DFG::SpeculativeJIT::recordSetLocal):
+        * dfg/DFGSpeculativeJIT32_64.cpp:
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGSpeculativeJIT64.cpp:
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGStackLayoutPhase.cpp:
+        (JSC::DFG::StackLayoutPhase::run):
+        (JSC::DFG::StackLayoutPhase::assign):
+        * dfg/DFGStrengthReductionPhase.cpp:
+        (JSC::DFG::StrengthReductionPhase::handleNode):
+        * dfg/DFGThunks.cpp:
+        (JSC::DFG::osrExitThunkGenerator): Deleted.
+        * dfg/DFGThunks.h:
+        * dfg/DFGTypeCheckHoistingPhase.cpp:
+        (JSC::DFG::TypeCheckHoistingPhase::run):
+        (JSC::DFG::TypeCheckHoistingPhase::disableHoistingAcrossOSREntries):
+        * dfg/DFGValidate.cpp:
+        * dfg/DFGVarargsForwardingPhase.cpp:
+        * dfg/DFGVariableAccessData.cpp:
+        (JSC::DFG::VariableAccessData::VariableAccessData):
+        (JSC::DFG::VariableAccessData::shouldUseDoubleFormatAccordingToVote):
+        (JSC::DFG::VariableAccessData::tallyVotesForShouldUseDoubleFormat):
+        (JSC::DFG::VariableAccessData::couldRepresentInt52Impl):
+        * dfg/DFGVariableAccessData.h:
+        (JSC::DFG::VariableAccessData::operand):
+        (JSC::DFG::VariableAccessData::local): Deleted.
+        * dfg/DFGVariableEvent.cpp:
+        (JSC::DFG::VariableEvent::dump const):
+        * dfg/DFGVariableEvent.h:
+        (JSC::DFG::VariableEvent::spill):
+        (JSC::DFG::VariableEvent::setLocal):
+        (JSC::DFG::VariableEvent::movHint):
+        (JSC::DFG::VariableEvent::spillRegister const):
+        (JSC::DFG::VariableEvent::operand const):
+        (JSC::DFG::VariableEvent::bytecodeRegister const): Deleted.
+        * dfg/DFGVariableEventStream.cpp:
+        (JSC::DFG::VariableEventStream::logEvent):
+        (JSC::DFG::VariableEventStream::reconstruct const):
+        * dfg/DFGVariableEventStream.h:
+        (JSC::DFG::VariableEventStream::appendAndLog):
+        * ftl/FTLCapabilities.cpp:
+        (JSC::FTL::canCompile):
+        * ftl/FTLForOSREntryJITCode.cpp:
+        (JSC::FTL::ForOSREntryJITCode::ForOSREntryJITCode):
+        * ftl/FTLLowerDFGToB3.cpp:
+        (JSC::FTL::DFG::LowerDFGToB3::lower):
+        (JSC::FTL::DFG::LowerDFGToB3::compileNode):
+        (JSC::FTL::DFG::LowerDFGToB3::compileExtractOSREntryLocal):
+        (JSC::FTL::DFG::LowerDFGToB3::compileGetStack):
+        (JSC::FTL::DFG::LowerDFGToB3::compileGetCallee):
+        (JSC::FTL::DFG::LowerDFGToB3::compileSetCallee):
+        (JSC::FTL::DFG::LowerDFGToB3::compileSetArgumentCountIncludingThis):
+        (JSC::FTL::DFG::LowerDFGToB3::compileVarargsLength):
+        (JSC::FTL::DFG::LowerDFGToB3::compileLoadVarargs):
+        (JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargs):
+        (JSC::FTL::DFG::LowerDFGToB3::getSpreadLengthFromInlineCallFrame):
+        (JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargsWithSpread):
+        (JSC::FTL::DFG::LowerDFGToB3::compileLogShadowChickenPrologue):
+        (JSC::FTL::DFG::LowerDFGToB3::getArgumentsLength):
+        (JSC::FTL::DFG::LowerDFGToB3::getCurrentCallee):
+        (JSC::FTL::DFG::LowerDFGToB3::callPreflight):
+        (JSC::FTL::DFG::LowerDFGToB3::appendOSRExitDescriptor):
+        (JSC::FTL::DFG::LowerDFGToB3::buildExitArguments):
+        (JSC::FTL::DFG::LowerDFGToB3::addressFor):
+        (JSC::FTL::DFG::LowerDFGToB3::payloadFor):
+        (JSC::FTL::DFG::LowerDFGToB3::tagFor):
+        * ftl/FTLOSREntry.cpp:
+        (JSC::FTL::prepareOSREntry):
+        * ftl/FTLOSRExit.cpp:
+        (JSC::FTL::OSRExitDescriptor::OSRExitDescriptor):
+        * ftl/FTLOSRExit.h:
+        * ftl/FTLOSRExitCompiler.cpp:
+        (JSC::FTL::compileStub):
+        * ftl/FTLOperations.cpp:
+        (JSC::FTL::operationMaterializeObjectInOSR):
+        * ftl/FTLOutput.cpp:
+        (JSC::FTL::Output::select):
+        * ftl/FTLOutput.h:
+        * ftl/FTLSelectPredictability.h: Copied from Source/JavaScriptCore/ftl/FTLForOSREntryJITCode.cpp.
+        * ftl/FTLSlowPathCall.h:
+        (JSC::FTL::callOperation):
+        * generator/Checkpoints.rb: Added.
+        * generator/Opcode.rb:
+        * generator/Section.rb:
+        * heap/Heap.cpp:
+        (JSC::Heap::gatherStackRoots):
+        * interpreter/CallFrame.cpp:
+        (JSC::CallFrame::callSiteAsRawBits const):
+        (JSC::CallFrame::unsafeCallSiteAsRawBits const):
+        (JSC::CallFrame::callSiteIndex const):
+        (JSC::CallFrame::unsafeCallSiteIndex const):
+        (JSC::CallFrame::setCurrentVPC):
+        (JSC::CallFrame::bytecodeIndex):
+        (JSC::CallFrame::codeOrigin):
+        * interpreter/CallFrame.h:
+        (JSC::CallSiteIndex::CallSiteIndex):
+        (JSC::CallSiteIndex::operator bool const):
+        (JSC::CallSiteIndex::operator== const):
+        (JSC::CallSiteIndex::bits const):
+        (JSC::CallSiteIndex::fromBits):
+        (JSC::CallSiteIndex::bytecodeIndex const):
+        (JSC::DisposableCallSiteIndex::DisposableCallSiteIndex):
+        (JSC::CallFrame::callee const):
+        (JSC::CallFrame::unsafeCallee const):
+        (JSC::CallFrame::addressOfCodeBlock const):
+        (JSC::CallFrame::argumentCountIncludingThis const):
+        (JSC::CallFrame::offsetFor):
+        (JSC::CallFrame::setArgumentCountIncludingThis):
+        (JSC::CallFrame::setReturnPC):
+        * interpreter/CallFrameInlines.h:
+        (JSC::CallFrame::r):
+        (JSC::CallFrame::uncheckedR):
+        (JSC::CallFrame::guaranteedJSValueCallee const):
+        (JSC::CallFrame::jsCallee const):
+        (JSC::CallFrame::codeBlock const):
+        (JSC::CallFrame::unsafeCodeBlock const):
+        (JSC::CallFrame::setCallee):
+        (JSC::CallFrame::setCodeBlock):
+        * interpreter/CheckpointOSRExitSideState.h: Copied from Source/JavaScriptCore/dfg/DFGThunks.h.
+        * interpreter/Interpreter.cpp:
+        (JSC::eval):
+        (JSC::sizeOfVarargs):
+        (JSC::loadVarargs):
+        (JSC::setupVarargsFrame):
+        (JSC::UnwindFunctor::operator() const):
+        (JSC::Interpreter::executeCall):
+        (JSC::Interpreter::executeConstruct):
+        * interpreter/Interpreter.h:
+        * interpreter/StackVisitor.cpp:
+        (JSC::StackVisitor::readInlinedFrame):
+        * jit/AssemblyHelpers.h:
+        (JSC::AssemblyHelpers::emitGetFromCallFrameHeaderPtr):
+        (JSC::AssemblyHelpers::emitGetFromCallFrameHeader32):
+        (JSC::AssemblyHelpers::emitGetFromCallFrameHeader64):
+        (JSC::AssemblyHelpers::emitPutToCallFrameHeader):
+        (JSC::AssemblyHelpers::emitPutToCallFrameHeaderBeforePrologue):
+        (JSC::AssemblyHelpers::emitPutPayloadToCallFrameHeaderBeforePrologue):
+        (JSC::AssemblyHelpers::emitPutTagToCallFrameHeaderBeforePrologue):
+        (JSC::AssemblyHelpers::addressFor):
+        (JSC::AssemblyHelpers::tagFor):
+        (JSC::AssemblyHelpers::payloadFor):
+        (JSC::AssemblyHelpers::calleeFrameSlot):
+        (JSC::AssemblyHelpers::calleeArgumentSlot):
+        (JSC::AssemblyHelpers::calleeFrameTagSlot):
+        (JSC::AssemblyHelpers::calleeFramePayloadSlot):
+        (JSC::AssemblyHelpers::calleeFrameCallerFrame):
+        (JSC::AssemblyHelpers::argumentCount):
+        * jit/CallFrameShuffler.cpp:
+        (JSC::CallFrameShuffler::CallFrameShuffler):
+        * jit/CallFrameShuffler.h:
+        (JSC::CallFrameShuffler::setCalleeJSValueRegs):
+        (JSC::CallFrameShuffler::assumeCalleeIsCell):
+        * jit/JIT.h:
+        * jit/JITArithmetic.cpp:
+        (JSC::JIT::emit_op_unsigned):
+        (JSC::JIT::emit_compareAndJump):
+        (JSC::JIT::emit_compareAndJumpImpl):
+        (JSC::JIT::emit_compareUnsignedAndJump):
+        (JSC::JIT::emit_compareUnsignedAndJumpImpl):
+        (JSC::JIT::emit_compareUnsigned):
+        (JSC::JIT::emit_compareUnsignedImpl):
+        (JSC::JIT::emit_compareAndJumpSlow):
+        (JSC::JIT::emit_compareAndJumpSlowImpl):
+        (JSC::JIT::emit_op_inc):
+        (JSC::JIT::emit_op_dec):
+        (JSC::JIT::emit_op_mod):
+        (JSC::JIT::emitBitBinaryOpFastPath):
+        (JSC::JIT::emit_op_bitnot):
+        (JSC::JIT::emitRightShiftFastPath):
+        (JSC::JIT::emitMathICFast):
+        (JSC::JIT::emitMathICSlow):
+        (JSC::JIT::emit_op_div):
+        * jit/JITCall.cpp:
+        (JSC::JIT::emitPutCallResult):
+        (JSC::JIT::compileSetupFrame):
+        (JSC::JIT::compileOpCall):
+        * jit/JITExceptions.cpp:
+        (JSC::genericUnwind):
+        * jit/JITInlines.h:
+        (JSC::JIT::isOperandConstantDouble):
+        (JSC::JIT::getConstantOperand):
+        (JSC::JIT::emitPutIntToCallFrameHeader):
+        (JSC::JIT::appendCallWithExceptionCheckSetJSValueResult):
+        (JSC::JIT::appendCallWithExceptionCheckSetJSValueResultWithProfile):
+        (JSC::JIT::linkSlowCaseIfNotJSCell):
+        (JSC::JIT::isOperandConstantChar):
+        (JSC::JIT::getOperandConstantInt):
+        (JSC::JIT::getOperandConstantDouble):
+        (JSC::JIT::emitInitRegister):
+        (JSC::JIT::emitLoadTag):
+        (JSC::JIT::emitLoadPayload):
+        (JSC::JIT::emitGet):
+        (JSC::JIT::emitPutVirtualRegister):
+        (JSC::JIT::emitLoad):
+        (JSC::JIT::emitLoad2):
+        (JSC::JIT::emitLoadDouble):
+        (JSC::JIT::emitLoadInt32ToDouble):
+        (JSC::JIT::emitStore):
+        (JSC::JIT::emitStoreInt32):
+        (JSC::JIT::emitStoreCell):
+        (JSC::JIT::emitStoreBool):
+        (JSC::JIT::emitStoreDouble):
+        (JSC::JIT::emitJumpSlowCaseIfNotJSCell):
+        (JSC::JIT::isOperandConstantInt):
+        (JSC::JIT::emitGetVirtualRegister):
+        (JSC::JIT::emitGetVirtualRegisters):
+        * jit/JITOpcodes.cpp:
+        (JSC::JIT::emit_op_mov):
+        (JSC::JIT::emit_op_end):
+        (JSC::JIT::emit_op_new_object):
+        (JSC::JIT::emitSlow_op_new_object):
+        (JSC::JIT::emit_op_overrides_has_instance):
+        (JSC::JIT::emit_op_instanceof):
+        (JSC::JIT::emitSlow_op_instanceof):
+        (JSC::JIT::emit_op_is_empty):
+        (JSC::JIT::emit_op_is_undefined):
+        (JSC::JIT::emit_op_is_undefined_or_null):
+        (JSC::JIT::emit_op_is_boolean):
+        (JSC::JIT::emit_op_is_number):
+        (JSC::JIT::emit_op_is_cell_with_type):
+        (JSC::JIT::emit_op_is_object):
+        (JSC::JIT::emit_op_ret):
+        (JSC::JIT::emit_op_to_primitive):
+        (JSC::JIT::emit_op_set_function_name):
+        (JSC::JIT::emit_op_not):
+        (JSC::JIT::emit_op_jfalse):
+        (JSC::JIT::emit_op_jeq_null):
+        (JSC::JIT::emit_op_jneq_null):
+        (JSC::JIT::emit_op_jundefined_or_null):
+        (JSC::JIT::emit_op_jnundefined_or_null):
+        (JSC::JIT::emit_op_jneq_ptr):
+        (JSC::JIT::emit_op_eq):
+        (JSC::JIT::emit_op_jeq):
+        (JSC::JIT::emit_op_jtrue):
+        (JSC::JIT::emit_op_neq):
+        (JSC::JIT::emit_op_jneq):
+        (JSC::JIT::emit_op_throw):
+        (JSC::JIT::compileOpStrictEq):
+        (JSC::JIT::compileOpStrictEqJump):
+        (JSC::JIT::emit_op_to_number):
+        (JSC::JIT::emit_op_to_numeric):
+        (JSC::JIT::emit_op_to_string):
+        (JSC::JIT::emit_op_to_object):
+        (JSC::JIT::emit_op_catch):
+        (JSC::JIT::emit_op_get_parent_scope):
+        (JSC::JIT::emit_op_switch_imm):
+        (JSC::JIT::emit_op_switch_char):
+        (JSC::JIT::emit_op_switch_string):
+        (JSC::JIT::emit_op_eq_null):
+        (JSC::JIT::emit_op_neq_null):
+        (JSC::JIT::emit_op_enter):
+        (JSC::JIT::emit_op_get_scope):
+        (JSC::JIT::emit_op_to_this):
+        (JSC::JIT::emit_op_create_this):
+        (JSC::JIT::emit_op_check_tdz):
+        (JSC::JIT::emitSlow_op_eq):
+        (JSC::JIT::emitSlow_op_neq):
+        (JSC::JIT::emitSlow_op_instanceof_custom):
+        (JSC::JIT::emit_op_new_regexp):
+        (JSC::JIT::emitNewFuncCommon):
+        (JSC::JIT::emitNewFuncExprCommon):
+        (JSC::JIT::emit_op_new_array):
+        (JSC::JIT::emit_op_new_array_with_size):
+        (JSC::JIT::emit_op_has_structure_property):
+        (JSC::JIT::emit_op_has_indexed_property):
+        (JSC::JIT::emitSlow_op_has_indexed_property):
+        (JSC::JIT::emit_op_get_direct_pname):
+        (JSC::JIT::emit_op_enumerator_structure_pname):
+        (JSC::JIT::emit_op_enumerator_generic_pname):
+        (JSC::JIT::emit_op_profile_type):
+        (JSC::JIT::emit_op_log_shadow_chicken_prologue):
+        (JSC::JIT::emit_op_log_shadow_chicken_tail):
+        (JSC::JIT::emit_op_argument_count):
+        (JSC::JIT::emit_op_get_rest_length):
+        (JSC::JIT::emit_op_get_argument):
+        * jit/JITOpcodes32_64.cpp:
+        (JSC::JIT::emit_op_catch):
+        * jit/JITOperations.cpp:
+        * jit/JITPropertyAccess.cpp:
+        (JSC::JIT::emit_op_get_by_val):
+        (JSC::JIT::emitSlow_op_get_by_val):
+        (JSC::JIT::emit_op_put_by_val):
+        (JSC::JIT::emitGenericContiguousPutByVal):
+        (JSC::JIT::emitArrayStoragePutByVal):
+        (JSC::JIT::emitPutByValWithCachedId):
+        (JSC::JIT::emitSlow_op_put_by_val):
+        (JSC::JIT::emit_op_put_getter_by_id):
+        (JSC::JIT::emit_op_put_setter_by_id):
+        (JSC::JIT::emit_op_put_getter_setter_by_id):
+        (JSC::JIT::emit_op_put_getter_by_val):
+        (JSC::JIT::emit_op_put_setter_by_val):
+        (JSC::JIT::emit_op_del_by_id):
+        (JSC::JIT::emit_op_del_by_val):
+        (JSC::JIT::emit_op_try_get_by_id):
+        (JSC::JIT::emitSlow_op_try_get_by_id):
+        (JSC::JIT::emit_op_get_by_id_direct):
+        (JSC::JIT::emitSlow_op_get_by_id_direct):
+        (JSC::JIT::emit_op_get_by_id):
+        (JSC::JIT::emit_op_get_by_id_with_this):
+        (JSC::JIT::emitSlow_op_get_by_id):
+        (JSC::JIT::emitSlow_op_get_by_id_with_this):
+        (JSC::JIT::emit_op_put_by_id):
+        (JSC::JIT::emit_op_in_by_id):
+        (JSC::JIT::emitSlow_op_in_by_id):
+        (JSC::JIT::emitResolveClosure):
+        (JSC::JIT::emit_op_resolve_scope):
+        (JSC::JIT::emitLoadWithStructureCheck):
+        (JSC::JIT::emitGetClosureVar):
+        (JSC::JIT::emit_op_get_from_scope):
+        (JSC::JIT::emitSlow_op_get_from_scope):
+        (JSC::JIT::emitPutGlobalVariable):
+        (JSC::JIT::emitPutGlobalVariableIndirect):
+        (JSC::JIT::emitPutClosureVar):
+        (JSC::JIT::emit_op_put_to_scope):
+        (JSC::JIT::emit_op_get_from_arguments):
+        (JSC::JIT::emit_op_put_to_arguments):
+        (JSC::JIT::emitWriteBarrier):
+        (JSC::JIT::emit_op_get_internal_field):
+        (JSC::JIT::emit_op_put_internal_field):
+        (JSC::JIT::emitIntTypedArrayPutByVal):
+        (JSC::JIT::emitFloatTypedArrayPutByVal):
+        * jit/JSInterfaceJIT.h:
+        (JSC::JSInterfaceJIT::emitLoadJSCell):
+        (JSC::JSInterfaceJIT::emitJumpIfNotJSCell):
+        (JSC::JSInterfaceJIT::emitLoadInt32):
+        (JSC::JSInterfaceJIT::emitLoadDouble):
+        (JSC::JSInterfaceJIT::emitGetFromCallFrameHeaderPtr):
+        (JSC::JSInterfaceJIT::emitPutToCallFrameHeader):
+        (JSC::JSInterfaceJIT::emitPutCellToCallFrameHeader):
+        * jit/SetupVarargsFrame.cpp:
+        (JSC::emitSetupVarargsFrameFastCase):
+        * jit/SpecializedThunkJIT.h:
+        (JSC::SpecializedThunkJIT::loadDoubleArgument):
+        (JSC::SpecializedThunkJIT::loadCellArgument):
+        (JSC::SpecializedThunkJIT::loadInt32Argument):
+        * jit/ThunkGenerators.cpp:
+        (JSC::absThunkGenerator):
+        * llint/LLIntSlowPaths.cpp:
+        (JSC::LLInt::getNonConstantOperand):
+        (JSC::LLInt::getOperand):
+        (JSC::LLInt::genericCall):
+        (JSC::LLInt::varargsSetup):
+        (JSC::LLInt::commonCallEval):
+        (JSC::LLInt::LLINT_SLOW_PATH_DECL):
+        (JSC::LLInt::handleVarargsCheckpoint):
+        (JSC::LLInt::dispatchToNextInstruction):
+        (JSC::LLInt::slow_path_checkpoint_osr_exit_from_inlined_call):
+        (JSC::LLInt::slow_path_checkpoint_osr_exit):
+        (JSC::LLInt::llint_throw_stack_overflow_error):
+        * llint/LLIntSlowPaths.h:
+        * llint/LowLevelInterpreter.asm:
+        * llint/LowLevelInterpreter32_64.asm:
+        * llint/LowLevelInterpreter64.asm:
+        * runtime/ArgList.h:
+        (JSC::MarkedArgumentBuffer::fill):
+        * runtime/CachedTypes.cpp:
+        (JSC::CachedCodeBlock::hasCheckpoints const):
+        (JSC::UnlinkedCodeBlock::UnlinkedCodeBlock):
+        (JSC::CachedCodeBlock<CodeBlockType>::encode):
+        * runtime/CommonSlowPaths.cpp:
+        (JSC::SLOW_PATH_DECL):
+        * runtime/ConstructData.cpp:
+        (JSC::construct):
+        * runtime/ConstructData.h:
+        * runtime/DirectArguments.cpp:
+        (JSC::DirectArguments::copyToArguments):
+        * runtime/DirectArguments.h:
+        * runtime/GenericArguments.h:
+        * runtime/GenericArgumentsInlines.h:
+        (JSC::GenericArguments<Type>::copyToArguments):
+        * runtime/JSArray.cpp:
+        (JSC::JSArray::copyToArguments):
+        * runtime/JSArray.h:
+        * runtime/JSImmutableButterfly.cpp:
+        (JSC::JSImmutableButterfly::copyToArguments):
+        * runtime/JSImmutableButterfly.h:
+        * runtime/JSLock.cpp:
+        (JSC::JSLock::willReleaseLock):
+        * runtime/ModuleProgramExecutable.cpp:
+        (JSC::ModuleProgramExecutable::create):
+        * runtime/Options.cpp:
+        (JSC::recomputeDependentOptions):
+        * runtime/ScopedArguments.cpp:
+        (JSC::ScopedArguments::copyToArguments):
+        * runtime/ScopedArguments.h:
+        * runtime/VM.cpp:
+        (JSC::VM::addCheckpointOSRSideState):
+        (JSC::VM::findCheckpointOSRSideState):
+        (JSC::VM::scanSideState const):
+        * runtime/VM.h:
+        (JSC::VM::hasCheckpointOSRSideState const):
+        * tools/VMInspector.cpp:
+        (JSC::VMInspector::dumpRegisters):
+        * wasm/WasmFunctionCodeBlock.h:
+        (JSC::Wasm::FunctionCodeBlock::getConstant const):
+        (JSC::Wasm::FunctionCodeBlock::getConstantType const):
+        * wasm/WasmLLIntGenerator.cpp:
+        (JSC::Wasm::LLIntGenerator::setUsesCheckpoints const):
+        * wasm/WasmOperations.cpp:
+        (JSC::Wasm::operationWasmToJSException):
+        * wasm/WasmSlowPaths.cpp:
+
 2019-12-23  Yusuke Suzuki  <ysuzuki@apple.com>
 
         [JSC] Wasm OSR entry should capture top-most enclosing-stack
index f8c1b4a..9ead653 100644 (file)
@@ -64,6 +64,7 @@ $(PROJECT_DIR)/disassembler/udis86/optable.xml
 $(PROJECT_DIR)/disassembler/udis86/ud_itab.py
 $(PROJECT_DIR)/generator/Argument.rb
 $(PROJECT_DIR)/generator/Assertion.rb
+$(PROJECT_DIR)/generator/Checkpoints.rb
 $(PROJECT_DIR)/generator/DSL.rb
 $(PROJECT_DIR)/generator/Fits.rb
 $(PROJECT_DIR)/generator/GeneratedFile.rb
index dbce5ba..b264f4e 100644 (file)
                5333BBDB2110F7D2007618EC /* DFGSpeculativeJIT32_64.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 86880F1B14328BB900B08D42 /* DFGSpeculativeJIT32_64.cpp */; };
                5333BBDC2110F7D9007618EC /* DFGSpeculativeJIT.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 86EC9DC21328DF82002B2AD7 /* DFGSpeculativeJIT.cpp */; };
                5333BBDD2110F7E1007618EC /* DFGSpeculativeJIT64.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 86880F4C14353B2100B08D42 /* DFGSpeculativeJIT64.cpp */; };
+               5338E2A72396EFFB00C61BAD /* CheckpointOSRExitSideState.h in Headers */ = {isa = PBXBuildFile; fileRef = 5338E2A62396EFEC00C61BAD /* CheckpointOSRExitSideState.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               5338EBA323AB04B800382662 /* FTLSelectPredictability.h in Headers */ = {isa = PBXBuildFile; fileRef = 5338EBA223AB04A300382662 /* FTLSelectPredictability.h */; };
                5341FC721DAC343C00E7E4D7 /* B3WasmBoundsCheckValue.h in Headers */ = {isa = PBXBuildFile; fileRef = 5341FC711DAC343C00E7E4D7 /* B3WasmBoundsCheckValue.h */; };
                534638711E70CF3D00F12AC1 /* JSRunLoopTimer.h in Headers */ = {isa = PBXBuildFile; fileRef = 534638701E70CF3D00F12AC1 /* JSRunLoopTimer.h */; settings = {ATTRIBUTES = (Private, ); }; };
                534638751E70DDEC00F12AC1 /* PromiseTimer.h in Headers */ = {isa = PBXBuildFile; fileRef = 534638741E70DDEC00F12AC1 /* PromiseTimer.h */; settings = {ATTRIBUTES = (Private, ); }; };
                5318045D22EAAF0F004A7342 /* B3ExtractValue.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; name = B3ExtractValue.cpp; path = b3/B3ExtractValue.cpp; sourceTree = "<group>"; };
                531D4E191F59CDD200EC836C /* testapi.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; name = testapi.cpp; path = API/tests/testapi.cpp; sourceTree = "<group>"; };
                532631B3218777A5007B8191 /* JavaScriptCore.modulemap */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = "sourcecode.module-map"; path = JavaScriptCore.modulemap; sourceTree = "<group>"; };
+               5338E2A62396EFEC00C61BAD /* CheckpointOSRExitSideState.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CheckpointOSRExitSideState.h; sourceTree = "<group>"; };
+               5338EBA223AB04A300382662 /* FTLSelectPredictability.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLSelectPredictability.h; path = ftl/FTLSelectPredictability.h; sourceTree = "<group>"; };
                533B15DE1DC7F463004D500A /* WasmOps.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WasmOps.h; sourceTree = "<group>"; };
                5341FC6F1DAC33E500E7E4D7 /* B3WasmBoundsCheckValue.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = B3WasmBoundsCheckValue.cpp; path = b3/B3WasmBoundsCheckValue.cpp; sourceTree = "<group>"; };
                5341FC711DAC343C00E7E4D7 /* B3WasmBoundsCheckValue.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = B3WasmBoundsCheckValue.h; path = b3/B3WasmBoundsCheckValue.h; sourceTree = "<group>"; };
                534C457A1BC703DC007476A7 /* TypedArrayConstructor.js */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.javascript; path = TypedArrayConstructor.js; sourceTree = "<group>"; };
                534C457B1BC72411007476A7 /* JSTypedArrayViewConstructor.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSTypedArrayViewConstructor.h; sourceTree = "<group>"; };
                534C457D1BC72549007476A7 /* JSTypedArrayViewConstructor.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSTypedArrayViewConstructor.cpp; sourceTree = "<group>"; };
+               534D9BF82363C55D0054524D /* SetIteratorPrototype.js */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.javascript; path = SetIteratorPrototype.js; sourceTree = "<group>"; };
+               534D9BF92363C55D0054524D /* IteratorHelpers.js */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.javascript; path = IteratorHelpers.js; sourceTree = "<group>"; };
+               534D9BFA2363C55D0054524D /* MapIteratorPrototype.js */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.javascript; path = MapIteratorPrototype.js; sourceTree = "<group>"; };
                534E034D1E4D4B1600213F64 /* AccessCase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AccessCase.h; sourceTree = "<group>"; };
                534E034F1E4D95ED00213F64 /* AccessCase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = AccessCase.cpp; sourceTree = "<group>"; };
                534E03531E53BD2900213F64 /* IntrinsicGetterAccessCase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = IntrinsicGetterAccessCase.h; sourceTree = "<group>"; };
                                0F485326187DFDEC0083B687 /* FTLRecoveryOpcode.h */,
                                0FCEFAA91804C13E00472CE4 /* FTLSaveRestore.cpp */,
                                0FCEFAAA1804C13E00472CE4 /* FTLSaveRestore.h */,
+                               5338EBA223AB04A300382662 /* FTLSelectPredictability.h */,
                                0F25F1AA181635F300522F39 /* FTLSlowPathCall.cpp */,
                                0F25F1AB181635F300522F39 /* FTLSlowPathCall.h */,
                                0F25F1AC181635F300522F39 /* FTLSlowPathCallKey.cpp */,
                                1429D8DC0ED2205B00B89619 /* CallFrame.h */,
                                A7F869EC0F95C2EC00558697 /* CallFrameClosure.h */,
                                FEA3BBA7212B655800E93AD1 /* CallFrameInlines.h */,
+                               5338E2A62396EFEC00C61BAD /* CheckpointOSRExitSideState.h */,
                                1429D85B0ED218E900B89619 /* CLoopStack.cpp */,
                                14D792640DAA03FB001A9F05 /* CLoopStack.h */,
                                A7C1EAEB17987AB600299DB2 /* CLoopStackInlines.h */,
                                A52704851D027C8800354C37 /* GlobalOperations.js */,
                                E35E03611B7AB4850073AD2A /* InspectorInstrumentationObject.js */,
                                E33F50881B844A1A00413856 /* InternalPromiseConstructor.js */,
+                               534D9BF92363C55D0054524D /* IteratorHelpers.js */,
                                7CF9BC5B1B65D9A3009DB1EF /* IteratorPrototype.js */,
+                               534D9BFA2363C55D0054524D /* MapIteratorPrototype.js */,
                                7035587C1C418419004BD7BF /* MapPrototype.js */,
                                E30677971B8BC6F5003F87F0 /* ModuleLoader.js */,
                                A52704861D027C8800354C37 /* NumberConstructor.js */,
                                7CF9BC5F1B65D9B1009DB1EF /* ReflectObject.js */,
                                654788421C937D2C000781A0 /* RegExpPrototype.js */,
                                84925A9C22B30CC800D1DFFF /* RegExpStringIteratorPrototype.js */,
+                               534D9BF82363C55D0054524D /* SetIteratorPrototype.js */,
                                7035587D1C418419004BD7BF /* SetPrototype.js */,
                                7CF9BC601B65D9B1009DB1EF /* StringConstructor.js */,
                                7CF9BC611B65D9B1009DB1EF /* StringIteratorPrototype.js */,
                                FE1BD0211E72027900134BC9 /* CellProfile.h in Headers */,
                                FEC160322339E9F900A04CB8 /* CellSize.h in Headers */,
                                0F1C3DDA1BBCE09E00E523E4 /* CellState.h in Headers */,
+                               5338E2A72396EFFB00C61BAD /* CheckpointOSRExitSideState.h in Headers */,
                                BC6AAAE50E1F426500AD87D8 /* ClassInfo.h in Headers */,
                                0FE050261AA9095600D33B33 /* ClonedArguments.h in Headers */,
                                BC18C45E0E16F5CD00B34460 /* CLoopStack.h in Headers */,
                                0F9D4C111C3E2C74006CD984 /* FTLPatchpointExceptionHandle.h in Headers */,
                                0F48532A187DFDEC0083B687 /* FTLRecoveryOpcode.h in Headers */,
                                0FCEFAAC1804C13E00472CE4 /* FTLSaveRestore.h in Headers */,
+                               5338EBA323AB04B800382662 /* FTLSelectPredictability.h in Headers */,
                                0F25F1B2181635F300522F39 /* FTLSlowPathCall.h in Headers */,
                                0F25F1B4181635F300522F39 /* FTLSlowPathCallKey.h in Headers */,
                                E322E5A71DA644A8006E7709 /* FTLSnippetParams.h in Headers */,
index 56d8a07..6884083 100644 (file)
@@ -239,13 +239,6 @@ public:
         ASSERT_VALID_CODE_POINTER(m_value);
     }
 
-    template<PtrTag tag>
-    explicit ReturnAddressPtr(FunctionPtr<tag> function)
-        : m_value(untagCodePtr<tag>(function.executableAddress()))
-    {
-        ASSERT_VALID_CODE_POINTER(m_value);
-    }
-
     const void* value() const
     {
         return m_value;
index cab368d..4229a0d 100644 (file)
@@ -46,14 +46,14 @@ public:
         return get<T>(CallFrame::argumentOffset(argument) * sizeof(Register));
     }
     template<typename T = JSValue>
-    T operand(int operand)
+    T operand(VirtualRegister operand)
     {
-        return get<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register));
+        return get<T>(operand.offset() * sizeof(Register));
     }
     template<typename T = JSValue>
-    T operand(int operand, ptrdiff_t offset)
+    T operand(VirtualRegister operand, ptrdiff_t offset)
     {
-        return get<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register) + offset);
+        return get<T>(operand.offset() * sizeof(Register) + offset);
     }
 
     template<typename T>
@@ -62,14 +62,14 @@ public:
         return set<T>(CallFrame::argumentOffset(argument) * sizeof(Register), value);
     }
     template<typename T>
-    void setOperand(int operand, T value)
+    void setOperand(VirtualRegister operand, T value)
     {
-        set<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register), value);
+        set<T>(operand.offset() * sizeof(Register), value);
     }
     template<typename T>
-    void setOperand(int operand, ptrdiff_t offset, T value)
+    void setOperand(VirtualRegister operand, ptrdiff_t offset, T value)
     {
-        set<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register) + offset, value);
+        set<T>(operand.offset() * sizeof(Register) + offset, value);
     }
 
     template<typename T = JSValue>
index 35d0175..41bc961 100644 (file)
@@ -263,20 +263,20 @@ inline void checkDoesNotUseInstruction(Compilation& compilation, const char* tex
 }
 
 template<typename Type>
-struct Operand {
+struct B3Operand {
     const char* name;
     Type value;
 };
 
-typedef Operand<int64_t> Int64Operand;
-typedef Operand<int32_t> Int32Operand;
-typedef Operand<int16_t> Int16Operand;
-typedef Operand<int8_t> Int8Operand;
+typedef B3Operand<int64_t> Int64Operand;
+typedef B3Operand<int32_t> Int32Operand;
+typedef B3Operand<int16_t> Int16Operand;
+typedef B3Operand<int8_t> Int8Operand;
 
-#define MAKE_OPERAND(value) Operand<decltype(value)> { #value, value }
+#define MAKE_OPERAND(value) B3Operand<decltype(value)> { #value, value }
 
 template<typename FloatType>
-void populateWithInterestingValues(Vector<Operand<FloatType>>& operands)
+void populateWithInterestingValues(Vector<B3Operand<FloatType>>& operands)
 {
     operands.append({ "0.", static_cast<FloatType>(0.) });
     operands.append({ "-0.", static_cast<FloatType>(-0.) });
@@ -302,9 +302,9 @@ void populateWithInterestingValues(Vector<Operand<FloatType>>& operands)
 }
 
 template<typename FloatType>
-Vector<Operand<FloatType>> floatingPointOperands()
+Vector<B3Operand<FloatType>> floatingPointOperands()
 {
-    Vector<Operand<FloatType>> operands;
+    Vector<B3Operand<FloatType>> operands;
     populateWithInterestingValues(operands);
     return operands;
 };
index dd9acca..55485b0 100644 (file)
@@ -1508,7 +1508,7 @@ void AccessCase::generateImpl(AccessGenerationState& state)
 
         jit.store32(
             CCallHelpers::TrustedImm32(state.callSiteIndexForExceptionHandlingOrOriginal().bits()),
-            CCallHelpers::tagFor(static_cast<VirtualRegister>(CallFrameSlot::argumentCountIncludingThis)));
+            CCallHelpers::tagFor(CallFrameSlot::argumentCountIncludingThis));
 
         if (m_type == Getter || m_type == Setter) {
             auto& access = this->as<GetterSetterAccessCase>();
@@ -1797,7 +1797,7 @@ void AccessCase::generateImpl(AccessGenerationState& state)
                 jit.store32(
                     CCallHelpers::TrustedImm32(
                         state.callSiteIndexForExceptionHandlingOrOriginal().bits()),
-                    CCallHelpers::tagFor(static_cast<VirtualRegister>(CallFrameSlot::argumentCountIncludingThis)));
+                    CCallHelpers::tagFor(CallFrameSlot::argumentCountIncludingThis));
                 
                 jit.makeSpaceOnStackForCCall();
                 
index b1d404a..69b5ee1 100644 (file)
@@ -55,7 +55,7 @@ public:
 
         jit.store32(
             CCallHelpers::TrustedImm32(state.callSiteIndexForExceptionHandlingOrOriginal().bits()),
-            CCallHelpers::tagFor(static_cast<VirtualRegister>(CallFrameSlot::argumentCountIncludingThis)));
+            CCallHelpers::tagFor(CallFrameSlot::argumentCountIncludingThis));
 
         jit.makeSpaceOnStackForCCall();
 
index 6058fb0..8365115 100644 (file)
@@ -63,7 +63,7 @@ void BytecodeDumperBase::printLocationAndOp(InstructionStream::Offset location,
 
 void BytecodeDumperBase::dumpValue(VirtualRegister reg)
 {
-    m_out.printf("%s", registerName(reg.offset()).data());
+    m_out.printf("%s", registerName(reg).data());
 }
 
 template<typename Traits>
@@ -83,12 +83,12 @@ template void BytecodeDumperBase::dumpValue(GenericBoundLabel<Wasm::GeneratorTra
 #endif // ENABLE(WEBASSEMBLY)
 
 template<class Block>
-CString BytecodeDumper<Block>::registerName(int r) const
+CString BytecodeDumper<Block>::registerName(VirtualRegister r) const
 {
-    if (isConstantRegisterIndex(r))
+    if (r.isConstant())
         return constantName(r);
 
-    return toCString(VirtualRegister(r));
+    return toCString(r);
 }
 
 template <class Block>
@@ -98,10 +98,10 @@ int BytecodeDumper<Block>::outOfLineJumpOffset(InstructionStream::Offset offset)
 }
 
 template<class Block>
-CString BytecodeDumper<Block>::constantName(int index) const
+CString BytecodeDumper<Block>::constantName(VirtualRegister reg) const
 {
-    auto value = block()->getConstant(index);
-    return toCString(value, "(", VirtualRegister(index), ")");
+    auto value = block()->getConstant(reg);
+    return toCString(value, "(", reg, ")");
 }
 
 template<class Block>
@@ -335,7 +335,7 @@ void BytecodeDumper::dumpConstants()
     }
 }
 
-CString BytecodeDumper::constantName(int index) const
+CString BytecodeDumper::constantName(VirtualRegister index) const
 {
     FunctionCodeBlock* block = this->block();
     auto value = formatConstant(block->getConstantType(index), block->getConstant(index));
index 0d71675..edb1129 100644 (file)
@@ -61,7 +61,7 @@ public:
     void dumpValue(T v) { m_out.print(v); }
 
 protected:
-    virtual CString registerName(int) const = 0;
+    virtual CString registerName(VirtualRegister) const = 0;
     virtual int outOfLineJumpOffset(InstructionStream::Offset) const = 0;
 
     BytecodeDumperBase(PrintStream& out)
@@ -91,11 +91,11 @@ protected:
 
     void dumpBytecode(const InstructionStream::Ref& it, const ICStatusMap&);
 
-    CString registerName(int r) const override;
+    CString registerName(VirtualRegister) const override;
     int outOfLineJumpOffset(InstructionStream::Offset offset) const override;
 
 private:
-    virtual CString constantName(int index) const;
+    virtual CString constantName(VirtualRegister) const;
 
     Block* m_block;
 };
@@ -135,7 +135,7 @@ private:
     using JSC::BytecodeDumper<FunctionCodeBlock>::BytecodeDumper;
 
     void dumpConstants();
-    CString constantName(int index) const override;
+    CString constantName(VirtualRegister index) const override;
     CString formatConstant(Type, uint64_t) const;
 };
 
index 15d61ba..a2c1f41 100644 (file)
@@ -25,6 +25,7 @@
 
 #include "config.h"
 #include "BytecodeIndex.h"
+
 #include <wtf/PrintStream.h>
 
 namespace JSC {
@@ -32,6 +33,8 @@ namespace JSC {
 void BytecodeIndex::dump(WTF::PrintStream& out) const
 {
     out.print("bc#", offset());
+    if (checkpoint())
+        out.print("cp#", checkpoint());
 }
 
 } // namespace JSC
index 5295c29..10fba44 100644 (file)
@@ -26,6 +26,7 @@
 #pragma once
 
 #include <wtf/HashTraits.h>
+#include <wtf/MathExtras.h>
 
 namespace WTF {
 class PrintStream;
@@ -37,24 +38,33 @@ class BytecodeIndex {
 public:
     BytecodeIndex() = default;
     BytecodeIndex(WTF::HashTableDeletedValueType)
-        : m_offset(deletedValue().asBits())
+        : m_packedBits(deletedValue().asBits())
     {
     }
-    explicit BytecodeIndex(uint32_t bytecodeOffset)
-        : m_offset(bytecodeOffset)
-    { }
 
-    uint32_t offset() const { return m_offset; }
-    uint32_t asBits() const { return m_offset; }
+    explicit BytecodeIndex(uint32_t bytecodeOffset, uint8_t checkpoint = 0)
+        : m_packedBits(pack(bytecodeOffset, checkpoint))
+    {
+        ASSERT(*this);
+    }
+
+    static constexpr uint32_t numberOfCheckpoints = 4;
+    static_assert(hasOneBitSet(numberOfCheckpoints), "numberOfCheckpoints should be a power of 2");
+    static constexpr uint32_t checkpointMask = numberOfCheckpoints - 1;
+    static constexpr uint32_t checkpointShift = WTF::getMSBSetConstexpr(numberOfCheckpoints);
+
+    uint32_t offset() const { return m_packedBits >> checkpointShift; }
+    uint8_t checkpoint() const { return m_packedBits & checkpointMask; }
+    uint32_t asBits() const { return m_packedBits; }
 
-    unsigned hash() const { return WTF::intHash(m_offset); }
+    unsigned hash() const { return WTF::intHash(m_packedBits); }
     static BytecodeIndex deletedValue() { return fromBits(invalidOffset - 1); }
     bool isHashTableDeletedValue() const { return *this == deletedValue(); }
 
     static BytecodeIndex fromBits(uint32_t bits);
 
     // Comparison operators.
-    explicit operator bool() const { return m_offset != invalidOffset && m_offset != deletedValue().offset(); }
+    explicit operator bool() const { return m_packedBits != invalidOffset && m_packedBits != deletedValue().asBits(); }
     bool operator ==(const BytecodeIndex& other) const { return asBits() == other.asBits(); }
     bool operator !=(const BytecodeIndex& other) const { return !(*this == other); }
 
@@ -69,13 +79,22 @@ public:
 private:
     static constexpr uint32_t invalidOffset = std::numeric_limits<uint32_t>::max();
 
-    uint32_t m_offset { invalidOffset };
+    static uint32_t pack(uint32_t bytecodeIndex, uint8_t checkpoint);
+
+    uint32_t m_packedBits { invalidOffset };
 };
 
+inline uint32_t BytecodeIndex::pack(uint32_t bytecodeIndex, uint8_t checkpoint)
+{
+    ASSERT(checkpoint < numberOfCheckpoints);
+    ASSERT((bytecodeIndex << checkpointShift) >> checkpointShift == bytecodeIndex);
+    return bytecodeIndex << checkpointShift | checkpoint;
+}
+
 inline BytecodeIndex BytecodeIndex::fromBits(uint32_t bits)
 {
     BytecodeIndex result;
-    result.m_offset = bits;
+    result.m_packedBits = bits;
     return result;
 }
 
index a9bd483..60d98cf 100644 (file)
@@ -59,7 +59,7 @@ types [
     :WatchpointSet,
 
     :ValueProfile,
-    :ValueProfileAndOperandBuffer,
+    :ValueProfileAndVirtualRegisterBuffer,
     :UnaryArithProfile,
     :BinaryArithProfile,
     :ArrayProfile,
@@ -796,6 +796,13 @@ op :call_varargs,
     metadata: {
         arrayProfile: ArrayProfile,
         profile: ValueProfile,
+    },
+    tmps: {
+        argCountIncludingThis: unsigned,
+    },
+    checkpoints: {
+        determiningArgCount: nil,
+        makeCall: nil,
     }
 
 op :tail_call_varargs,
@@ -810,6 +817,13 @@ op :tail_call_varargs,
     metadata: {
         arrayProfile: ArrayProfile,
         profile: ValueProfile,
+    },
+    tmps: {
+        argCountIncludingThis: unsigned
+    },
+    checkpoints: {
+        determiningArgCount: nil,
+        makeCall: nil,
     }
 
 op :tail_call_forward_arguments,
@@ -850,6 +864,13 @@ op :construct_varargs,
     metadata: {
         arrayProfile: ArrayProfile,
         profile: ValueProfile,
+    },
+    tmps: {
+        argCountIncludingThis: unsigned
+    },
+    checkpoints: {
+        determiningArgCount: nil,
+        makeCall: nil,
     }
 
 op :ret,
@@ -997,7 +1018,7 @@ op :catch,
         thrownValue: VirtualRegister,
     },
     metadata: {
-        buffer: ValueProfileAndOperandBuffer.*,
+        buffer: ValueProfileAndVirtualRegisterBuffer.*,
     }
 
 op :throw,
@@ -1243,6 +1264,8 @@ op :llint_native_call_trampoline
 op :llint_native_construct_trampoline
 op :llint_internal_function_call_trampoline
 op :llint_internal_function_construct_trampoline
+op :checkpoint_osr_exit_from_inlined_call_trampoline
+op :checkpoint_osr_exit_trampoline
 op :handleUncaughtException
 op :op_call_return_location
 op :op_construct_return_location
index 918e1d2..7975fef 100644 (file)
@@ -201,4 +201,39 @@ void BytecodeLivenessAnalysis::dumpResults(CodeBlock* codeBlock)
     }
 }
 
+template<typename EnumType1, typename EnumType2>
+constexpr bool enumValuesEqualAsIntegral(EnumType1 v1, EnumType2 v2)
+{
+    using IntType1 = typename std::underlying_type<EnumType1>::type;
+    using IntType2 = typename std::underlying_type<EnumType2>::type;
+    if constexpr (sizeof(IntType1) > sizeof(IntType2))
+        return static_cast<IntType1>(v1) == static_cast<IntType1>(v2);
+    else
+        return static_cast<IntType2>(v1) == static_cast<IntType2>(v2);
+}
+
+Bitmap<maxNumCheckpointTmps> tmpLivenessForCheckpoint(const CodeBlock& codeBlock, BytecodeIndex bytecodeIndex)
+{
+    Bitmap<maxNumCheckpointTmps> result;
+    uint8_t checkpoint = bytecodeIndex.checkpoint();
+
+    if (!checkpoint)
+        return result;
+
+    switch (codeBlock.instructions().at(bytecodeIndex)->opcodeID()) {
+    case op_call_varargs:
+    case op_tail_call_varargs:
+    case op_construct_varargs: {
+        static_assert(enumValuesEqualAsIntegral(OpCallVarargs::makeCall, OpTailCallVarargs::makeCall) && enumValuesEqualAsIntegral(OpCallVarargs::argCountIncludingThis, OpTailCallVarargs::argCountIncludingThis));
+        static_assert(enumValuesEqualAsIntegral(OpCallVarargs::makeCall, OpConstructVarargs::makeCall) && enumValuesEqualAsIntegral(OpCallVarargs::argCountIncludingThis, OpConstructVarargs::argCountIncludingThis));
+        if (checkpoint == OpCallVarargs::makeCall)
+            result.set(OpCallVarargs::argCountIncludingThis);
+        return result;
+    }
+    default:
+        break;
+    }
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
 } // namespace JSC
index 7379b3c..3e5edc0 100644 (file)
@@ -28,6 +28,7 @@
 #include "BytecodeBasicBlock.h"
 #include "BytecodeGraph.h"
 #include "CodeBlock.h"
+#include <wtf/Bitmap.h>
 #include <wtf/FastBitVector.h>
 
 namespace JSC {
@@ -97,6 +98,8 @@ private:
     BytecodeGraph m_graph;
 };
 
+Bitmap<maxNumCheckpointTmps> tmpLivenessForCheckpoint(const CodeBlock&, BytecodeIndex);
+
 inline bool operandIsAlwaysLive(int operand);
 inline bool operandThatIsNotAlwaysLiveIsLive(const FastBitVector& out, int operand);
 inline bool operandIsLive(const FastBitVector& out, int operand);
index 3d12e19..de6ca8d 100644 (file)
 
 namespace JSC {
 
-inline bool operandIsAlwaysLive(int operand)
+inline bool virtualRegisterIsAlwaysLive(VirtualRegister reg)
 {
-    return !VirtualRegister(operand).isLocal();
+    return !reg.isLocal();
 }
 
-inline bool operandThatIsNotAlwaysLiveIsLive(const FastBitVector& out, int operand)
+inline bool virtualRegisterThatIsNotAlwaysLiveIsLive(const FastBitVector& out, VirtualRegister reg)
 {
-    unsigned local = VirtualRegister(operand).toLocal();
+    unsigned local = reg.toLocal();
     if (local >= out.numBits())
         return false;
     return out[local];
 }
 
-inline bool operandIsLive(const FastBitVector& out, int operand)
+inline bool virtualRegisterIsLive(const FastBitVector& out, VirtualRegister operand)
 {
-    return operandIsAlwaysLive(operand) || operandThatIsNotAlwaysLiveIsLive(out, operand);
+    return virtualRegisterIsAlwaysLive(operand) || virtualRegisterThatIsNotAlwaysLiveIsLive(out, operand);
 }
 
 inline bool isValidRegisterForLiveness(VirtualRegister operand)
index 869f53f..602dd52 100644 (file)
@@ -38,6 +38,7 @@
 #include "BytecodeStructs.h"
 #include "BytecodeUseDef.h"
 #include "CallLinkStatus.h"
+#include "CheckpointOSRExitSideState.h"
 #include "CodeBlockInlines.h"
 #include "CodeBlockSet.h"
 #include "DFGCapabilities.h"
@@ -407,7 +408,7 @@ bool CodeBlock::finishCreation(VM& vm, ScriptExecutable* ownerExecutable, Unlink
             ConcurrentJSLocker locker(clonedSymbolTable->m_lock);
             clonedSymbolTable->prepareForTypeProfiling(locker);
         }
-        replaceConstant(unlinkedModuleProgramCodeBlock->moduleEnvironmentSymbolTableConstantRegisterOffset(), clonedSymbolTable);
+        replaceConstant(VirtualRegister(unlinkedModuleProgramCodeBlock->moduleEnvironmentSymbolTableConstantRegisterOffset()), clonedSymbolTable);
     }
 
     bool shouldUpdateFunctionHasExecutedCache = m_unlinkedCode->wasCompiledWithTypeProfilerOpcodes() || m_unlinkedCode->wasCompiledWithControlFlowProfilerOpcodes();
@@ -641,7 +642,7 @@ bool CodeBlock::finishCreation(VM& vm, ScriptExecutable* ownerExecutable, Unlink
             if (bytecode.m_getPutInfo.resolveType() == LocalClosureVar) {
                 // Only do watching if the property we're putting to is not anonymous.
                 if (bytecode.m_var != UINT_MAX) {
-                    SymbolTable* symbolTable = jsCast<SymbolTable*>(getConstant(bytecode.m_symbolTableOrScopeDepth.symbolTable().offset()));
+                    SymbolTable* symbolTable = jsCast<SymbolTable*>(getConstant(bytecode.m_symbolTableOrScopeDepth.symbolTable()));
                     const Identifier& ident = identifier(bytecode.m_var);
                     ConcurrentJSLocker locker(symbolTable->m_lock);
                     auto iter = symbolTable->find(locker, ident.impl());
@@ -709,8 +710,7 @@ bool CodeBlock::finishCreation(VM& vm, ScriptExecutable* ownerExecutable, Unlink
                 break;
             }
             case ProfileTypeBytecodeLocallyResolved: {
-                int symbolTableIndex = bytecode.m_symbolTableOrScopeDepth.symbolTable().offset();
-                SymbolTable* symbolTable = jsCast<SymbolTable*>(getConstant(symbolTableIndex));
+                SymbolTable* symbolTable = jsCast<SymbolTable*>(getConstant(bytecode.m_symbolTableOrScopeDepth.symbolTable()));
                 const Identifier& ident = identifier(bytecode.m_identifier);
                 ConcurrentJSLocker locker(symbolTable->m_lock);
                 // If our parent scope was created while profiling was disabled, it will not have prepared for profiling yet.
@@ -1086,6 +1086,15 @@ static bool shouldMarkTransition(VM& vm, DFG::WeakReferenceTransition& transitio
     
     return true;
 }
+
+BytecodeIndex CodeBlock::bytecodeIndexForExit(BytecodeIndex exitIndex) const
+{
+    if (exitIndex.checkpoint()) {
+        const auto& instruction = instructions().at(exitIndex);
+        exitIndex = instruction.next().index();
+    }
+    return exitIndex;
+}
 #endif // ENABLE(DFG_JIT)
 
 void CodeBlock::propagateTransitions(const ConcurrentJSLocker&, SlotVisitor& visitor)
@@ -1811,10 +1820,10 @@ void CodeBlock::ensureCatchLivenessIsComputedForBytecodeIndexSlow(const OpCatch&
     for (int i = 0; i < numParameters(); ++i)
         liveOperands.append(virtualRegisterForArgument(i));
 
-    auto profiles = makeUnique<ValueProfileAndOperandBuffer>(liveOperands.size());
+    auto profiles = makeUnique<ValueProfileAndVirtualRegisterBuffer>(liveOperands.size());
     RELEASE_ASSERT(profiles->m_size == liveOperands.size());
     for (unsigned i = 0; i < profiles->m_size; ++i)
-        profiles->m_buffer.get()[i].m_operand = liveOperands[i].offset();
+        profiles->m_buffer.get()[i].m_operand = liveOperands[i];
 
     createRareDataIfNecessary();
 
@@ -2713,7 +2722,7 @@ void CodeBlock::updateAllValueProfilePredictionsAndCountLiveness(unsigned& numbe
 
     if (auto* rareData = m_rareData.get()) {
         for (auto& profileBucket : rareData->m_catchProfiles) {
-            profileBucket->forEach([&] (ValueProfileAndOperand& profile) {
+            profileBucket->forEach([&] (ValueProfileAndVirtualRegister& profile) {
                 profile.computeUpdatedPrediction(locker);
             });
         }
index 146742e..2506726 100644 (file)
@@ -161,6 +161,7 @@ public:
     int numCalleeLocals() const { return m_numCalleeLocals; }
 
     int numVars() const { return m_numVars; }
+    int numTmps() const { return m_unlinkedCode->hasCheckpoints() * maxNumCheckpointTmps; }
 
     int* addressOfNumParameters() { return &m_numParameters; }
     static ptrdiff_t offsetOfNumParameters() { return OBJECT_OFFSETOF(CodeBlock, m_numParameters); }
@@ -229,20 +230,20 @@ public:
     bool hasInstalledVMTrapBreakpoints() const;
     bool installVMTrapBreakpoints();
 
-    inline bool isKnownNotImmediate(int index)
+    inline bool isKnownNotImmediate(VirtualRegister reg)
     {
-        if (index == thisRegister().offset() && !isStrictMode())
+        if (reg == thisRegister() && !isStrictMode())
             return true;
 
-        if (isConstantRegisterIndex(index))
-            return getConstant(index).isCell();
+        if (reg.isConstant())
+            return getConstant(reg).isCell();
 
         return false;
     }
 
-    ALWAYS_INLINE bool isTemporaryRegisterIndex(int index)
+    ALWAYS_INLINE bool isTemporaryRegister(VirtualRegister reg)
     {
-        return index >= m_numVars;
+        return reg.offset() >= m_numVars;
     }
 
     HandlerInfo* handlerForBytecodeIndex(BytecodeIndex, RequiredHandler = RequiredHandler::AnyHandler);
@@ -581,10 +582,9 @@ public:
     }
 
     const Vector<WriteBarrier<Unknown>>& constantRegisters() { return m_constantRegisters; }
-    WriteBarrier<Unknown>& constantRegister(int index) { return m_constantRegisters[index - FirstConstantRegisterIndex]; }
-    static ALWAYS_INLINE bool isConstantRegisterIndex(int index) { return index >= FirstConstantRegisterIndex; }
-    ALWAYS_INLINE JSValue getConstant(int index) const { return m_constantRegisters[index - FirstConstantRegisterIndex].get(); }
-    ALWAYS_INLINE SourceCodeRepresentation constantSourceCodeRepresentation(int index) const { return m_constantsSourceCodeRepresentation[index - FirstConstantRegisterIndex]; }
+    WriteBarrier<Unknown>& constantRegister(VirtualRegister reg) { return m_constantRegisters[reg.toConstantIndex()]; }
+    ALWAYS_INLINE JSValue getConstant(VirtualRegister reg) const { return m_constantRegisters[reg.toConstantIndex()].get(); }
+    ALWAYS_INLINE SourceCodeRepresentation constantSourceCodeRepresentation(VirtualRegister reg) const { return m_constantsSourceCodeRepresentation[reg.toConstantIndex()]; }
 
     FunctionExecutable* functionDecl(int index) { return m_functionDecls[index].get(); }
     int numberOfFunctionDecls() { return m_functionDecls.size(); }
@@ -774,6 +774,7 @@ public:
 
     void setOptimizationThresholdBasedOnCompilationResult(CompilationResult);
     
+    BytecodeIndex bytecodeIndexForExit(BytecodeIndex) const;
     uint32_t osrExitCounter() const { return m_osrExitCounter; }
 
     void countOSRExit() { m_osrExitCounter++; }
@@ -874,7 +875,7 @@ public:
         Vector<SimpleJumpTable> m_switchJumpTables;
         Vector<StringJumpTable> m_stringSwitchJumpTables;
 
-        Vector<std::unique_ptr<ValueProfileAndOperandBuffer>> m_catchProfiles;
+        Vector<std::unique_ptr<ValueProfileAndVirtualRegisterBuffer>> m_catchProfiles;
 
         DirectEvalCodeCache m_directEvalCodeCache;
     };
@@ -941,10 +942,10 @@ private:
 
     void setConstantRegisters(const Vector<WriteBarrier<Unknown>>& constants, const Vector<SourceCodeRepresentation>& constantsSourceCodeRepresentation, ScriptExecutable* topLevelExecutable);
 
-    void replaceConstant(int index, JSValue value)
+    void replaceConstant(VirtualRegister reg, JSValue value)
     {
-        ASSERT(isConstantRegisterIndex(index) && static_cast<size_t>(index - FirstConstantRegisterIndex) < m_constantRegisters.size());
-        m_constantRegisters[index - FirstConstantRegisterIndex].set(*m_vm, this, value);
+        ASSERT(reg.isConstant() && static_cast<size_t>(reg.toConstantIndex()) < m_constantRegisters.size());
+        m_constantRegisters[reg.toConstantIndex()].set(*m_vm, this, value);
     }
 
     bool shouldVisitStrongly(const ConcurrentJSLocker&);
index 2106813..f9829a2 100644 (file)
@@ -160,7 +160,9 @@ public:
     unsigned approximateHash(InlineCallFrame* terminal = nullptr) const;
 
     template <typename Function>
-    void walkUpInlineStack(const Function&);
+    void walkUpInlineStack(const Function&) const;
+
+    inline bool inlineStackContainsActiveCheckpoint() const;
     
     // Get the inline stack. This is slow, and is intended for debugging only.
     Vector<CodeOrigin> inlineStack() const;
index e4a575b..47c657c 100644 (file)
 #pragma once
 
 #include "BytecodeLivenessAnalysis.h"
+#include "Operands.h"
 #include <wtf/FastBitVector.h>
 
 namespace JSC {
 
 class BytecodeLivenessAnalysis;
+class CodeBlock;
 
+// Note: Full bytecode liveness does not track any information about the liveness of temps.
+// If you want tmp liveness for a checkpoint, ask tmpLivenessForCheckpoint.
 class FullBytecodeLiveness {
     WTF_MAKE_FAST_ALLOCATED;
 public:
@@ -47,15 +51,15 @@ public:
         RELEASE_ASSERT_NOT_REACHED();
     }
     
-    bool operandIsLive(int operand, BytecodeIndex bytecodeIndex, LivenessCalculationPoint point) const
+    bool virtualRegisterIsLive(VirtualRegister reg, BytecodeIndex bytecodeIndex, LivenessCalculationPoint point) const
     {
-        return operandIsAlwaysLive(operand) || operandThatIsNotAlwaysLiveIsLive(getLiveness(bytecodeIndex, point), operand);
+        return virtualRegisterIsAlwaysLive(reg) || virtualRegisterThatIsNotAlwaysLiveIsLive(getLiveness(bytecodeIndex, point), reg);
     }
     
 private:
     friend class BytecodeLivenessAnalysis;
     
-    // FIXME: Use FastBitVector's view mechansim to make them compact.
+    // FIXME: Use FastBitVector's view mechanism to make them compact.
    // https://bugs.webkit.org/show_bug.cgi?id=204427
     Vector<FastBitVector, 0, UnsafeVectorOverflow> m_beforeUseVector;
     Vector<FastBitVector, 0, UnsafeVectorOverflow> m_afterUseVector;
index 97508bb..e29f384 100644 (file)
@@ -179,7 +179,8 @@ struct InlineCallFrame {
     WriteBarrier<CodeBlock> baselineCodeBlock;
     CodeOrigin directCaller;
 
-    unsigned argumentCountIncludingThis { 0 }; // Do not include fixups.
+    unsigned argumentCountIncludingThis : 22; // Do not include fixups.
+    unsigned tmpOffset : 10;
     signed stackOffset : 28;
     unsigned kind : 3; // real type is Kind
     bool isClosureCall : 1; // If false then we know that callee/scope are constants and the DFG won't treat them as variables, i.e. they have to be recovered manually.
@@ -191,7 +192,9 @@ struct InlineCallFrame {
     // InlineCallFrame's fields. This constructor is here just to reduce confusion if
     // we forgot to initialize explicitly.
     InlineCallFrame()
-        : stackOffset(0)
+        : argumentCountIncludingThis(0)
+        , tmpOffset(0)
+        , stackOffset(0)
         , kind(Call)
         , isClosureCall(false)
     {
@@ -219,6 +222,12 @@ struct InlineCallFrame {
         RELEASE_ASSERT(static_cast<signed>(stackOffset) == offset);
     }
 
+    void setTmpOffset(unsigned offset)
+    {
+        tmpOffset = offset;
+        RELEASE_ASSERT(static_cast<unsigned>(tmpOffset) == offset);
+    }
+
     ptrdiff_t callerFrameOffset() const { return stackOffset * sizeof(Register) + CallFrame::callerFrameOffset(); }
     ptrdiff_t returnPCOffset() const { return stackOffset * sizeof(Register) + CallFrame::returnPCOffset(); }
 
@@ -247,9 +256,9 @@ inline CodeBlock* baselineCodeBlockForOriginAndBaselineCodeBlock(const CodeOrigi
     return baselineCodeBlock;
 }
 
-// This function is defined here and not in CodeOrigin because it needs access to the directCaller field in InlineCallFrame
+// These functions are defined here and not in CodeOrigin because they need access to the directCaller field in InlineCallFrame
 template <typename Function>
-inline void CodeOrigin::walkUpInlineStack(const Function& function)
+inline void CodeOrigin::walkUpInlineStack(const Function& function) const
 {
     CodeOrigin codeOrigin = *this;
     while (true) {
@@ -261,11 +270,38 @@ inline void CodeOrigin::walkUpInlineStack(const Function& function)
     }
 }
 
-ALWAYS_INLINE VirtualRegister remapOperand(InlineCallFrame* inlineCallFrame, VirtualRegister reg)
+inline bool CodeOrigin::inlineStackContainsActiveCheckpoint() const
+{
+    bool result = false;
+    walkUpInlineStack([&] (CodeOrigin origin) {
+        if (origin.bytecodeIndex().checkpoint())
+            result = true;
+    });
+    return result;
+}
+
+ALWAYS_INLINE Operand remapOperand(InlineCallFrame* inlineCallFrame, Operand operand)
+{
+    if (inlineCallFrame)
+        return operand.isTmp() ? Operand::tmp(operand.value() + inlineCallFrame->tmpOffset) : operand.virtualRegister() + inlineCallFrame->stackOffset;
+    return operand;
+}
+
+ALWAYS_INLINE Operand remapOperand(InlineCallFrame* inlineCallFrame, VirtualRegister reg)
+{
+    return remapOperand(inlineCallFrame, Operand(reg));
+}
+
+ALWAYS_INLINE Operand unmapOperand(InlineCallFrame* inlineCallFrame, Operand operand)
 {
     if (inlineCallFrame)
-        return VirtualRegister(reg.offset() + inlineCallFrame->stackOffset);
-    return reg;
+        return operand.isTmp() ? Operand::tmp(operand.value() - inlineCallFrame->tmpOffset) : Operand(operand.virtualRegister() - inlineCallFrame->stackOffset);
+    return operand;
+}
+
+ALWAYS_INLINE Operand unmapOperand(InlineCallFrame* inlineCallFrame, VirtualRegister reg)
+{
+    return unmapOperand(inlineCallFrame, Operand(reg));
 }
 
 } // namespace JSC
index bfb94b3..81f5a0c 100644 (file)
@@ -26,6 +26,7 @@
 #pragma once
 
 #include "ConcurrentJSLock.h"
+#include "Operands.h"
 #include "ValueProfile.h"
 #include "VirtualRegister.h"
 #include <wtf/HashMap.h>
@@ -49,7 +50,7 @@ public:
     {
     }
     
-    LazyOperandValueProfileKey(BytecodeIndex bytecodeIndex, VirtualRegister operand)
+    LazyOperandValueProfileKey(BytecodeIndex bytecodeIndex, Operand operand)
         : m_bytecodeIndex(bytecodeIndex)
         , m_operand(operand)
     {
@@ -69,7 +70,7 @@ public:
     
     unsigned hash() const
     {
-        return m_bytecodeIndex.hash() + m_operand.offset();
+        return m_bytecodeIndex.hash() + m_operand.value() + static_cast<unsigned>(m_operand.kind());
     }
     
     BytecodeIndex bytecodeIndex() const
@@ -78,7 +79,7 @@ public:
         return m_bytecodeIndex;
     }
 
-    VirtualRegister operand() const
+    Operand operand() const
     {
         ASSERT(!!*this);
         return m_operand;
@@ -90,7 +91,7 @@ public:
     }
 private: 
     BytecodeIndex m_bytecodeIndex;
-    VirtualRegister m_operand;
+    Operand m_operand;
 };
 
 struct LazyOperandValueProfileKeyHash {
index 2b69f36..701c330 100644 (file)
@@ -42,7 +42,7 @@ MethodOfGettingAValueProfile MethodOfGettingAValueProfile::fromLazyOperand(
     result.m_kind = LazyOperand;
     result.u.lazyOperand.codeBlock = codeBlock;
     result.u.lazyOperand.bytecodeOffset = key.bytecodeIndex();
-    result.u.lazyOperand.operand = key.operand().offset();
+    result.u.lazyOperand.operand = key.operand();
     return result;
 }
 
@@ -57,7 +57,7 @@ void MethodOfGettingAValueProfile::emitReportValue(CCallHelpers& jit, JSValueReg
         return;
         
     case LazyOperand: {
-        LazyOperandValueProfileKey key(u.lazyOperand.bytecodeOffset, VirtualRegister(u.lazyOperand.operand));
+        LazyOperandValueProfileKey key(u.lazyOperand.bytecodeOffset, u.lazyOperand.operand);
         
         ConcurrentJSLocker locker(u.lazyOperand.codeBlock->m_lock);
         LazyOperandValueProfile* profile =
@@ -91,7 +91,7 @@ void MethodOfGettingAValueProfile::reportValue(JSValue value)
         return;
 
     case LazyOperand: {
-        LazyOperandValueProfileKey key(u.lazyOperand.bytecodeOffset, VirtualRegister(u.lazyOperand.operand));
+        LazyOperandValueProfileKey key(u.lazyOperand.bytecodeOffset, u.lazyOperand.operand);
 
         ConcurrentJSLocker locker(u.lazyOperand.codeBlock->m_lock);
         LazyOperandValueProfile* profile =
index 5d81c4c..36d369f 100644 (file)
@@ -106,7 +106,7 @@ private:
         struct {
             CodeBlock* codeBlock;
             BytecodeIndex bytecodeOffset;
-            int operand;
+            Operand operand;
         } lazyOperand;
     } u;
 };
index f29ff9f..d47a68f 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011-2018 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2019 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -35,111 +35,227 @@ namespace JSC {
 
 template<typename T> struct OperandValueTraits;
 
-enum OperandKind { ArgumentOperand, LocalOperand };
+constexpr unsigned maxNumCheckpointTmps = 4;
+
+// An OperandKind::Tmp is an operand that exists while exiting to a checkpoint but does not exist between bytecodes.
+enum class OperandKind { Argument, Local, Tmp };
+
+class Operand {
+public:
+    Operand() = default;
+    Operand(const Operand&) = default;
+
+    Operand(VirtualRegister operand)
+        : m_kind(operand.isLocal() ? OperandKind::Local : OperandKind::Argument)
+        , m_operand(operand.offset())
+    { }
+
+    Operand(OperandKind kind, int operand)
+        : m_kind(kind)
+        , m_operand(operand)
+    { 
+        ASSERT(kind == OperandKind::Tmp || VirtualRegister(operand).isLocal() == (kind == OperandKind::Local));
+    }
+    static Operand tmp(uint32_t index) { return Operand(OperandKind::Tmp, index); }
+
+    OperandKind kind() const { return m_kind; }
+    int value() const { return m_operand; }
+    VirtualRegister virtualRegister() const
+    {
+        ASSERT(m_kind != OperandKind::Tmp);
+        return VirtualRegister(m_operand);
+    }
+    uint64_t asBits() const { return bitwise_cast<uint64_t>(*this); }
+    static Operand fromBits(uint64_t value);
+
+    bool isTmp() const { return kind() == OperandKind::Tmp; }
+    bool isArgument() const { return kind() == OperandKind::Argument; }
+    bool isLocal() const { return kind() == OperandKind::Local && virtualRegister().isLocal(); }
+    bool isHeader() const { return kind() != OperandKind::Tmp && virtualRegister().isHeader(); }
+    bool isConstant() const { return kind() != OperandKind::Tmp && virtualRegister().isConstant(); }
+
+    int toArgument() const { ASSERT(isArgument()); return virtualRegister().toArgument(); }
+    int toLocal() const { ASSERT(isLocal()); return virtualRegister().toLocal(); }
+
+    inline bool isValid() const;
+
+    inline bool operator==(const Operand&) const;
+
+    void dump(PrintStream&) const;
+
+private:
+    OperandKind m_kind { OperandKind::Argument };
+    int m_operand { VirtualRegister::invalidVirtualRegister };
+};
+
+ALWAYS_INLINE bool Operand::operator==(const Operand& other) const
+{
+    if (kind() != other.kind())
+        return false;
+    if (isTmp())
+        return value() == other.value();
+    return virtualRegister() == other.virtualRegister();
+}
+
+inline bool Operand::isValid() const
+{
+    if (isTmp())
+        return value() >= 0;
+    return virtualRegister().isValid();
+}
+
+inline Operand Operand::fromBits(uint64_t value)
+{
+    Operand result = bitwise_cast<Operand>(value);
+    ASSERT(result.isValid());
+    return result;
+}
+
+static_assert(sizeof(Operand) == sizeof(uint64_t), "Operand::asBits() relies on this.");
 
 enum OperandsLikeTag { OperandsLike };
 
 template<typename T>
 class Operands {
 public:
-    Operands()
-        : m_numArguments(0) { }
-    
-    explicit Operands(size_t numArguments, size_t numLocals)
+    using Storage = std::conditional_t<std::is_same_v<T, bool>, FastBitVector, Vector<T, 0, UnsafeVectorOverflow>>;
+    using RefType = std::conditional_t<std::is_same_v<T, bool>, FastBitReference, T&>;
+    using ConstRefType = std::conditional_t<std::is_same_v<T, bool>, bool, const T&>;
+
+    Operands() = default;
+
+    explicit Operands(size_t numArguments, size_t numLocals, size_t numTmps)
         : m_numArguments(numArguments)
+        , m_numLocals(numLocals)
     {
-        if (WTF::VectorTraits<T>::needsInitialization) {
-            m_values.resize(numArguments + numLocals);
-        } else {
-            m_values.fill(T(), numArguments + numLocals);
-        }
+        size_t size = numArguments + numLocals + numTmps;
+        m_values.grow(size);
+        if (!WTF::VectorTraits<T>::needsInitialization)
+            m_values.fill(T());
     }
 
-    explicit Operands(size_t numArguments, size_t numLocals, const T& initialValue)
+    explicit Operands(size_t numArguments, size_t numLocals, size_t numTmps, const T& initialValue)
         : m_numArguments(numArguments)
+        , m_numLocals(numLocals)
     {
-        m_values.fill(initialValue, numArguments + numLocals);
+        m_values.grow(numArguments + numLocals + numTmps);
+        m_values.fill(initialValue);
     }
     
     template<typename U>
-    explicit Operands(OperandsLikeTag, const Operands<U>& other)
+    explicit Operands(OperandsLikeTag, const Operands<U>& other, const T& initialValue = T())
         : m_numArguments(other.numberOfArguments())
+        , m_numLocals(other.numberOfLocals())
     {
-        m_values.fill(T(), other.numberOfArguments() + other.numberOfLocals());
+        m_values.grow(other.size());
+        m_values.fill(initialValue);
     }
-    
+
     size_t numberOfArguments() const { return m_numArguments; }
-    size_t numberOfLocals() const { return m_values.size() - m_numArguments; }
-    
+    size_t numberOfLocals() const { return m_numLocals; }
+    size_t numberOfTmps() const { return m_values.size() - numberOfArguments() - numberOfLocals(); }
+
+    size_t tmpIndex(size_t idx) const
+    {
+        ASSERT(idx < numberOfTmps());
+        return idx + numberOfArguments() + numberOfLocals();
+    }
     size_t argumentIndex(size_t idx) const
     {
-        ASSERT(idx < m_numArguments);
+        ASSERT(idx < numberOfArguments());
         return idx;
     }
     
     size_t localIndex(size_t idx) const
     {
-        return m_numArguments + idx;
+        ASSERT(idx < numberOfLocals());
+        return numberOfArguments() + idx;
     }
+
+    RefType tmp(size_t idx) { return m_values[tmpIndex(idx)]; }
+    ConstRefType tmp(size_t idx) const { return m_values[tmpIndex(idx)]; }
     
-    T& argument(size_t idx)
-    {
-        return m_values[argumentIndex(idx)];
-    }
-    const T& argument(size_t idx) const
-    {
-        return m_values[argumentIndex(idx)];
-    }
+    RefType argument(size_t idx) { return m_values[argumentIndex(idx)]; }
+    ConstRefType argument(size_t idx) const { return m_values[argumentIndex(idx)]; }
     
-    T& local(size_t idx) { return m_values[localIndex(idx)]; }
-    const T& local(size_t idx) const { return m_values[localIndex(idx)]; }
+    RefType local(size_t idx) { return m_values[localIndex(idx)]; }
+    ConstRefType local(size_t idx) const { return m_values[localIndex(idx)]; }
     
     template<OperandKind operandKind>
     size_t sizeFor() const
     {
-        if (operandKind == ArgumentOperand)
+        switch (operandKind) {
+        case OperandKind::Tmp:
+            return numberOfTmps();
+        case OperandKind::Argument:
             return numberOfArguments();
-        return numberOfLocals();
+        case OperandKind::Local:
+            return numberOfLocals();
+        }
+        RELEASE_ASSERT_NOT_REACHED();
+        return 0;
     }
     template<OperandKind operandKind>
-    T& atFor(size_t idx)
+    RefType atFor(size_t idx)
     {
-        if (operandKind == ArgumentOperand)
+        switch (operandKind) {
+        case OperandKind::Tmp:
+            return tmp(idx);
+        case OperandKind::Argument:
             return argument(idx);
-        return local(idx);
+        case OperandKind::Local:
+            return local(idx);
+        }
+        RELEASE_ASSERT_NOT_REACHED();
+        return tmp(0);
     }
     template<OperandKind operandKind>
-    const T& atFor(size_t idx) const
+    ConstRefType atFor(size_t idx) const
     {
-        if (operandKind == ArgumentOperand)
+        switch (operandKind) {
+        case OperandKind::Tmp:
+            return tmp(idx);
+        case OperandKind::Argument:
             return argument(idx);
-        return local(idx);
+        case OperandKind::Local:
+            return local(idx);
+        }
+        RELEASE_ASSERT_NOT_REACHED();
+        return tmp(0);
     }
-    
-    void ensureLocals(size_t size)
+
+    void ensureLocals(size_t size, const T& ensuredValue = T())
     {
-        size_t oldSize = m_values.size();
-        size_t newSize = m_numArguments + size;
-        if (newSize <= oldSize)
+        if (size <= numberOfLocals())
             return;
 
+        size_t newSize = numberOfArguments() + numberOfTmps() + size;
+        size_t oldNumLocals = numberOfLocals();
+        size_t oldNumTmps = numberOfTmps();
         m_values.grow(newSize);
-        if (!WTF::VectorTraits<T>::needsInitialization) {
-            for (size_t i = oldSize; i < m_values.size(); ++i)
-                m_values[i] = T();
+        for (size_t i = 0; i < oldNumTmps; ++i)
+            m_values[newSize - 1 - i] = m_values[tmpIndex(oldNumTmps - 1 - i)];
+
+        m_numLocals = size;
+        if (ensuredValue != T() || !WTF::VectorTraits<T>::needsInitialization) {
+            for (size_t i = 0; i < size - oldNumLocals; ++i)
+                m_values[localIndex(oldNumLocals + i)] = ensuredValue;
         }
     }
 
-    void ensureLocals(size_t size, const T& ensuredValue)
+    void ensureTmps(size_t size, const T& ensuredValue = T())
     {
-        size_t oldSize = m_values.size();
-        size_t newSize = m_numArguments + size;
-        if (newSize <= oldSize)
+        if (size <= numberOfTmps())
             return;
 
+        size_t oldSize = m_values.size();
+        size_t newSize = numberOfArguments() + numberOfLocals() + size;
         m_values.grow(newSize);
-        for (size_t i = oldSize; i < m_values.size(); ++i)
-            m_values[i] = ensuredValue;
+
+        if (ensuredValue != T() || !WTF::VectorTraits<T>::needsInitialization) {
+            for (size_t i = oldSize; i < newSize; ++i)
+                m_values[i] = ensuredValue;
+        }
     }
     
     void setLocal(size_t idx, const T& value)
@@ -164,84 +280,76 @@ public:
         ASSERT(idx >= numberOfLocals() || local(idx) == T());
         setLocal(idx, value);
     }
-    
-    size_t operandIndex(int operand) const
+
+    RefType getForOperandIndex(size_t index) { return m_values[index]; }
+    ConstRefType getForOperandIndex(size_t index) const { return const_cast<Operands*>(this)->getForOperandIndex(index); }
+
+    size_t operandIndex(VirtualRegister operand) const
     {
-        if (operandIsArgument(operand))
-            return argumentIndex(VirtualRegister(operand).toArgument());
-        return localIndex(VirtualRegister(operand).toLocal());
+        if (operand.isArgument())
+            return argumentIndex(operand.toArgument());
+        return localIndex(operand.toLocal());
     }
     
-    size_t operandIndex(VirtualRegister virtualRegister) const
+    size_t operandIndex(Operand op) const
     {
-        return operandIndex(virtualRegister.offset());
+        if (!op.isTmp())
+            return operandIndex(op.virtualRegister());
+        return tmpIndex(op.value());
     }
     
-    T& operand(int operand)
+    RefType operand(VirtualRegister operand)
     {
-        if (operandIsArgument(operand))
-            return argument(VirtualRegister(operand).toArgument());
-        return local(VirtualRegister(operand).toLocal());
+        if (operand.isArgument())
+            return argument(operand.toArgument());
+        return local(operand.toLocal());
     }
 
-    T& operand(VirtualRegister virtualRegister)
+    RefType operand(Operand op)
     {
-        return operand(virtualRegister.offset());
+        if (!op.isTmp())
+            return operand(op.virtualRegister());
+        return tmp(op.value());
     }
 
-    const T& operand(int operand) const { return const_cast<const T&>(const_cast<Operands*>(this)->operand(operand)); }
-    const T& operand(VirtualRegister operand) const { return const_cast<const T&>(const_cast<Operands*>(this)->operand(operand)); }
+    ConstRefType operand(VirtualRegister operand) const { return const_cast<Operands*>(this)->operand(operand); }
+    ConstRefType operand(Operand operand) const { return const_cast<Operands*>(this)->operand(operand); }
     
-    bool hasOperand(int operand) const
+    bool hasOperand(VirtualRegister operand) const
     {
-        if (operandIsArgument(operand))
+        if (operand.isArgument())
             return true;
-        return static_cast<size_t>(VirtualRegister(operand).toLocal()) < numberOfLocals();
+        return static_cast<size_t>(operand.toLocal()) < numberOfLocals();
     }
-    bool hasOperand(VirtualRegister reg) const
+    bool hasOperand(Operand op) const
     {
-        return hasOperand(reg.offset());
+        if (op.isTmp()) {
+            ASSERT(op.value() >= 0);
+            return static_cast<size_t>(op.value()) < numberOfTmps();
+        }
+        return hasOperand(op.virtualRegister());
     }
     
-    void setOperand(int operand, const T& value)
+    void setOperand(Operand operand, const T& value)
     {
         this->operand(operand) = value;
     }
-    
-    void setOperand(VirtualRegister virtualRegister, const T& value)
-    {
-        setOperand(virtualRegister.offset(), value);
-    }
 
     size_t size() const { return m_values.size(); }
-    const T& at(size_t index) const { return m_values[index]; }
-    T& at(size_t index) { return m_values[index]; }
-    const T& operator[](size_t index) const { return at(index); }
-    T& operator[](size_t index) { return at(index); }
-
-    bool isArgument(size_t index) const { return index < m_numArguments; }
-    bool isLocal(size_t index) const { return !isArgument(index); }
-    int operandForIndex(size_t index) const
+    ConstRefType at(size_t index) const { return m_values[index]; }
+    RefType at(size_t index) { return m_values[index]; }
+    ConstRefType operator[](size_t index) const { return at(index); }
+    RefType operator[](size_t index) { return at(index); }
+
+    Operand operandForIndex(size_t index) const
     {
         if (index < numberOfArguments())
-            return virtualRegisterForArgument(index).offset();
-        return virtualRegisterForLocal(index - numberOfArguments()).offset();
-    }
-    VirtualRegister virtualRegisterForIndex(size_t index) const
-    {
-        return VirtualRegister(operandForIndex(index));
+            return virtualRegisterForArgument(index);
+        else if (index < numberOfLocals() + numberOfArguments())
+            return virtualRegisterForLocal(index - numberOfArguments());
+        return Operand::tmp(index - (numberOfLocals() + numberOfArguments()));
     }
-    
-    void setOperandFirstTime(int operand, const T& value)
-    {
-        if (operandIsArgument(operand)) {
-            setArgumentFirstTime(VirtualRegister(operand).toArgument(), value);
-            return;
-        }
-        
-        setLocalFirstTime(VirtualRegister(operand).toLocal(), value);
-    }
-    
+
     void fill(T value)
     {
         for (size_t i = 0; i < m_values.size(); ++i)
@@ -257,6 +365,7 @@ public:
     {
         ASSERT(numberOfArguments() == other.numberOfArguments());
         ASSERT(numberOfLocals() == other.numberOfLocals());
+        ASSERT(numberOfTmps() == other.numberOfTmps());
         
         return m_values == other.m_values;
     }
@@ -265,9 +374,10 @@ public:
     void dump(PrintStream& out) const;
     
 private:
-    // The first m_numArguments of m_values are arguments, the rest are locals.
-    Vector<T, 0, UnsafeVectorOverflow> m_values;
-    unsigned m_numArguments;
+    // The first m_numArguments of m_values are arguments, the next m_numLocals are locals, and the rest are tmps.
+    Storage m_values;
+    unsigned m_numArguments { 0 };
+    unsigned m_numLocals { 0 };
 };
 
 } // namespace JSC
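
The new flat layout of m_values (arguments first, then locals, then tmps) drives all the index helpers above. A hypothetical standalone sketch of that index computation, mirroring argumentIndex/localIndex/tmpIndex (not WebKit API, just the arithmetic):

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the Operands<T> flat index layout:
// [0, numArguments) are arguments,
// [numArguments, numArguments + numLocals) are locals,
// and the remainder are checkpoint tmps.
struct Layout {
    size_t numArguments;
    size_t numLocals;
    size_t numTmps;

    size_t argumentIndex(size_t i) const { assert(i < numArguments); return i; }
    size_t localIndex(size_t i) const { assert(i < numLocals); return numArguments + i; }
    size_t tmpIndex(size_t i) const { assert(i < numTmps); return numArguments + numLocals + i; }
    size_t size() const { return numArguments + numLocals + numTmps; }
};
```

This also shows why ensureLocals must shift existing tmp values upward when locals grow: tmps live at the end of the vector, so their flat indices move whenever numLocals changes.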
index 65fedda..0b1cc1f 100644 (file)
 
 namespace JSC {
 
+inline void Operand::dump(PrintStream& out) const
+{
+    if (isTmp())
+        out.print("tmp", value());
+    else
+        out.print(virtualRegister());
+}
+
 template<typename T>
 void Operands<T>::dumpInContext(PrintStream& out, DumpContext* context) const
 {
@@ -44,6 +52,11 @@ void Operands<T>::dumpInContext(PrintStream& out, DumpContext* context) const
             continue;
         out.print(comma, "loc", localIndex, ":", inContext(local(localIndex), context));
     }
+    for (size_t tmpIndex = 0; tmpIndex < numberOfTmps(); ++tmpIndex) {
+        if (!tmp(tmpIndex))
+            continue;
+        out.print(comma, "tmp", tmpIndex, ":", inContext(tmp(tmpIndex), context));
+    }
 }
 
 template<typename T>
@@ -60,6 +73,11 @@ void Operands<T>::dump(PrintStream& out) const
             continue;
         out.print(comma, "loc", localIndex, ":", local(localIndex));
     }
+    for (size_t tmpIndex = 0; tmpIndex < numberOfTmps(); ++tmpIndex) {
+        if (!tmp(tmpIndex))
+            continue;
+        out.print(comma, "tmp", tmpIndex, ":", tmp(tmpIndex));
+    }
 }
 
 } // namespace JSC
index d5ddee8..7e93262 100644 (file)
@@ -72,6 +72,7 @@ UnlinkedCodeBlock::UnlinkedCodeBlock(VM& vm, Structure* structure, CodeType code
     , m_codeType(static_cast<unsigned>(codeType))
     , m_didOptimize(static_cast<unsigned>(MixedTriState))
     , m_age(0)
+    , m_hasCheckpoints(false)
     , m_parseMode(info.parseMode())
     , m_codeGenerationMode(codeGenerationMode)
     , m_metadata(UnlinkedMetadataTable::create())
index ad1f90b..3082b0a 100644 (file)
@@ -141,6 +141,9 @@ public:
     bool hasExpressionInfo() { return m_expressionInfo.size(); }
     const Vector<ExpressionRangeInfo>& expressionInfo() { return m_expressionInfo; }
 
+    bool hasCheckpoints() const { return m_hasCheckpoints; }
+    void setHasCheckpoints() { m_hasCheckpoints = true; }
+
     // Special registers
     void setThisRegister(VirtualRegister thisRegister) { m_thisRegister = thisRegister; }
     void setScopeRegister(VirtualRegister scopeRegister) { m_scopeRegister = scopeRegister; }
@@ -198,9 +201,8 @@ public:
     }
 
     const Vector<WriteBarrier<Unknown>>& constantRegisters() { return m_constantRegisters; }
-    const WriteBarrier<Unknown>& constantRegister(int index) const { return m_constantRegisters[index - FirstConstantRegisterIndex]; }
-    ALWAYS_INLINE bool isConstantRegisterIndex(int index) const { return index >= FirstConstantRegisterIndex; }
-    ALWAYS_INLINE JSValue getConstant(int index) const { return m_constantRegisters[index - FirstConstantRegisterIndex].get(); }
+    const WriteBarrier<Unknown>& constantRegister(VirtualRegister reg) const { return m_constantRegisters[reg.toConstantIndex()]; }
+    ALWAYS_INLINE JSValue getConstant(VirtualRegister reg) const { return m_constantRegisters[reg.toConstantIndex()].get(); }
     const Vector<SourceCodeRepresentation>& constantsSourceCodeRepresentation() { return m_constantsSourceCodeRepresentation; }
 
     unsigned numberOfConstantIdentifierSets() const { return m_rareData ? m_rareData->m_constantIdentifierSets.size() : 0; }
@@ -419,6 +421,7 @@ private:
     unsigned m_didOptimize : 2;
     unsigned m_age : 3;
     static_assert(((1U << 3) - 1) >= maxAge);
+    bool m_hasCheckpoints : 1;
 public:
     ConcurrentJSLock m_lock;
 private:
index bbb8a4b..eb91794 100644 (file)
@@ -176,28 +176,28 @@ inline BytecodeIndex getRareCaseProfileBytecodeIndex(RareCaseProfile* rareCasePr
     return rareCaseProfile->m_bytecodeIndex;
 }
 
-struct ValueProfileAndOperand : public ValueProfile {
-    int m_operand;
+struct ValueProfileAndVirtualRegister : public ValueProfile {
+    VirtualRegister m_operand;
 };
 
-struct ValueProfileAndOperandBuffer {
+struct ValueProfileAndVirtualRegisterBuffer {
     WTF_MAKE_STRUCT_FAST_ALLOCATED;
 
-    ValueProfileAndOperandBuffer(unsigned size)
+    ValueProfileAndVirtualRegisterBuffer(unsigned size)
         : m_size(size)
     {
         // FIXME: ValueProfile has more stuff than we need. We could optimize these value profiles
         // to be more space efficient.
         // https://bugs.webkit.org/show_bug.cgi?id=175413
-        m_buffer = MallocPtr<ValueProfileAndOperand>::malloc(m_size * sizeof(ValueProfileAndOperand));
+        m_buffer = MallocPtr<ValueProfileAndVirtualRegister>::malloc(m_size * sizeof(ValueProfileAndVirtualRegister));
         for (unsigned i = 0; i < m_size; ++i)
-            new (&m_buffer.get()[i]) ValueProfileAndOperand();
+            new (&m_buffer.get()[i]) ValueProfileAndVirtualRegister();
     }
 
-    ~ValueProfileAndOperandBuffer()
+    ~ValueProfileAndVirtualRegisterBuffer()
     {
         for (unsigned i = 0; i < m_size; ++i)
-            m_buffer.get()[i].~ValueProfileAndOperand();
+            m_buffer.get()[i].~ValueProfileAndVirtualRegister();
     }
 
     template <typename Function>
@@ -208,7 +208,7 @@ struct ValueProfileAndOperandBuffer {
     }
 
     unsigned m_size;
-    MallocPtr<ValueProfileAndOperand> m_buffer;
+    MallocPtr<ValueProfileAndVirtualRegister> m_buffer;
 };
 
 } // namespace JSC
index c5ca4de..b13947f 100644 (file)
@@ -35,22 +35,22 @@ JSValue ValueRecovery::recover(CallFrame* callFrame) const
 {
     switch (technique()) {
     case DisplacedInJSStack:
-        return callFrame->r(virtualRegister().offset()).jsValue();
+        return callFrame->r(virtualRegister()).jsValue();
     case Int32DisplacedInJSStack:
-        return jsNumber(callFrame->r(virtualRegister().offset()).unboxedInt32());
+        return jsNumber(callFrame->r(virtualRegister()).unboxedInt32());
     case Int52DisplacedInJSStack:
-        return jsNumber(callFrame->r(virtualRegister().offset()).unboxedInt52());
+        return jsNumber(callFrame->r(virtualRegister()).unboxedInt52());
     case StrictInt52DisplacedInJSStack:
-        return jsNumber(callFrame->r(virtualRegister().offset()).unboxedStrictInt52());
+        return jsNumber(callFrame->r(virtualRegister()).unboxedStrictInt52());
     case DoubleDisplacedInJSStack:
-        return jsNumber(purifyNaN(callFrame->r(virtualRegister().offset()).unboxedDouble()));
+        return jsNumber(purifyNaN(callFrame->r(virtualRegister()).unboxedDouble()));
     case CellDisplacedInJSStack:
-        return callFrame->r(virtualRegister().offset()).unboxedCell();
+        return callFrame->r(virtualRegister()).unboxedCell();
     case BooleanDisplacedInJSStack:
 #if USE(JSVALUE64)
-        return callFrame->r(virtualRegister().offset()).jsValue();
+        return callFrame->r(virtualRegister()).jsValue();
 #else
-        return jsBoolean(callFrame->r(virtualRegister().offset()).unboxedBoolean());
+        return jsBoolean(callFrame->r(virtualRegister()).unboxedBoolean());
 #endif
     case Constant:
         return constant();
index d8f18c3..5e43bc7 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2013, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2019 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
index 3619529..fd17007 100644 (file)
 
 namespace JSC {
 
-inline bool operandIsLocal(int operand)
+inline bool virtualRegisterIsLocal(int operand)
 {
     return operand < 0;
 }
 
-inline bool operandIsArgument(int operand)
+inline bool virtualRegisterIsArgument(int operand)
 {
     return operand >= 0;
 }
@@ -49,25 +49,32 @@ public:
     friend VirtualRegister virtualRegisterForLocal(int);
     friend VirtualRegister virtualRegisterForArgument(int, int);
 
+    static constexpr int invalidVirtualRegister = 0x3fffffff;
+    static constexpr int firstConstantRegisterIndex = FirstConstantRegisterIndex;
+
     VirtualRegister(RegisterID*);
     VirtualRegister(RefPtr<RegisterID>);
 
     VirtualRegister()
-        : m_virtualRegister(s_invalidVirtualRegister)
+        : m_virtualRegister(invalidVirtualRegister)
     { }
 
     explicit VirtualRegister(int virtualRegister)
         : m_virtualRegister(virtualRegister)
     { }
 
-    bool isValid() const { return (m_virtualRegister != s_invalidVirtualRegister); }
-    bool isLocal() const { return operandIsLocal(m_virtualRegister); }
-    bool isArgument() const { return operandIsArgument(m_virtualRegister); }
+    VirtualRegister(CallFrameSlot slot)
+        : m_virtualRegister(static_cast<int>(slot))
+    { }
+
+    bool isValid() const { return (m_virtualRegister != invalidVirtualRegister); }
+    bool isLocal() const { return virtualRegisterIsLocal(m_virtualRegister); }
+    bool isArgument() const { return virtualRegisterIsArgument(m_virtualRegister); }
     bool isHeader() const { return m_virtualRegister >= 0 && m_virtualRegister < CallFrameSlot::thisArgument; }
-    bool isConstant() const { return m_virtualRegister >= s_firstConstantRegisterIndex; }
+    bool isConstant() const { return m_virtualRegister >= firstConstantRegisterIndex; }
     int toLocal() const { ASSERT(isLocal()); return operandToLocal(m_virtualRegister); }
     int toArgument() const { ASSERT(isArgument()); return operandToArgument(m_virtualRegister); }
-    int toConstantIndex() const { ASSERT(isConstant()); return m_virtualRegister - s_firstConstantRegisterIndex; }
+    int toConstantIndex() const { ASSERT(isConstant()); return m_virtualRegister - firstConstantRegisterIndex; }
     int offset() const { return m_virtualRegister; }
     int offsetInBytes() const { return m_virtualRegister * sizeof(Register); }
 
@@ -106,9 +113,6 @@ public:
     void dump(PrintStream& out) const;
 
 private:
-    static constexpr int s_invalidVirtualRegister = 0x3fffffff;
-    static constexpr int s_firstConstantRegisterIndex = FirstConstantRegisterIndex;
-
     static int localToOperand(int local) { return -1 - local; }
     static int operandToLocal(int operand) { return -1 - operand; }
     static int operandToArgument(int operand) { return operand - CallFrame::thisArgumentOffset(); }
index b52fc62..cc8143e 100644 (file)
@@ -1187,7 +1187,7 @@ RegisterID* BytecodeGenerator::initializeNextParameter()
     VirtualRegister reg = virtualRegisterForArgument(m_codeBlock->numParameters());
     m_parameters.grow(m_parameters.size() + 1);
     auto& parameter = registerFor(reg);
-    parameter.setIndex(reg.offset());
+    parameter.setIndex(reg);
     m_codeBlock->addParameter();
     return &parameter;
 }
@@ -1196,7 +1196,7 @@ void BytecodeGenerator::initializeParameters(FunctionParameters& parameters)
 {
     // Make sure the code block knows about all of our parameters, and make sure that parameters
     // needing destructuring are noted.
-    m_thisRegister.setIndex(initializeNextParameter()->index()); // this
+    m_thisRegister.setIndex(VirtualRegister(initializeNextParameter()->index())); // this
 
     bool nonSimpleArguments = false;
     for (unsigned i = 0; i < parameters.size(); ++i) {
@@ -1637,11 +1637,11 @@ bool BytecodeGenerator::emitEqualityOpImpl(RegisterID* dst, RegisterID* src1, Re
 
     if (m_lastInstruction->is<OpTypeof>()) {
         auto op = m_lastInstruction->as<OpTypeof>();
-        if (src1->index() == op.m_dst.offset()
+        if (src1->virtualRegister() == op.m_dst
             && src1->isTemporary()
-            && m_codeBlock->isConstantRegisterIndex(src2->index())
-            && m_codeBlock->constantRegister(src2->index()).get().isString()) {
-            const String& value = asString(m_codeBlock->constantRegister(src2->index()).get())->tryGetValue();
+            && src2->virtualRegister().isConstant()
+            && m_codeBlock->constantRegister(src2->virtualRegister()).get().isString()) {
+            const String& value = asString(m_codeBlock->constantRegister(src2->virtualRegister()).get())->tryGetValue();
             if (value == "undefined") {
                 rewind();
                 OpIsUndefined::emit(this, dst, op.m_value);
@@ -3235,6 +3235,8 @@ RegisterID* BytecodeGenerator::emitCallVarargs(RegisterID* dst, RegisterID* func
     // Emit call.
     ASSERT(dst != ignoredResult());
     VarargsOp::emit(this, dst, func, thisRegister, arguments ? arguments : VirtualRegister(0), firstFreeRegister, firstVarArgOffset);
+    if (VarargsOp::opcodeID != op_tail_call_forward_arguments)
+        ASSERT(m_codeBlock->hasCheckpoints());
     return dst;
 }
 
index e14aa16..bc5bbae 100644 (file)
@@ -1008,6 +1008,7 @@ namespace JSC {
         bool shouldEmitControlFlowProfilerHooks() const { return m_codeGenerationMode.contains(CodeGenerationMode::ControlFlowProfiler); }
         
         bool isStrictMode() const { return m_codeBlock->isStrictMode(); }
+        void setUsesCheckpoints() { m_codeBlock->setHasCheckpoints(); }
 
         SourceParseMode parseMode() const { return m_codeBlock->parseMode(); }
         
index 35620a1..f819b12 100644 (file)
@@ -69,12 +69,12 @@ namespace JSC {
         {
         }
 
-        void setIndex(int index)
+        void setIndex(VirtualRegister index)
         {
 #ifndef NDEBUG
             m_didSetIndex = true;
 #endif
-            m_virtualRegister = VirtualRegister(index);
+            m_virtualRegister = index;
         }
 
         void setTemporary()
index b83fee0..e64071b 100644 (file)
@@ -40,6 +40,14 @@ void AbstractHeap::Payload::dump(PrintStream& out) const
         out.print(value());
 }
 
+void AbstractHeap::Payload::dumpAsOperand(PrintStream& out) const
+{
+    if (isTop())
+        out.print("TOP");
+    else
+        out.print(Operand::fromBits(value()));
+}
+
 void AbstractHeap::dump(PrintStream& out) const
 {
     out.print(kind());
@@ -49,6 +57,13 @@ void AbstractHeap::dump(PrintStream& out) const
         out.print("(", DOMJIT::HeapRange::fromRaw(payload().value32()), ")");
         return;
     }
+    if (kind() == Stack) {
+        out.print("(");
+        payload().dumpAsOperand(out);
+        out.print(")");
+        return;
+    }
+
     out.print("(", payload(), ")");
 }
 
index bf99377..3c4ce3d 100644 (file)
@@ -28,6 +28,7 @@
 #if ENABLE(DFG_JIT)
 
 #include "DOMJITHeapRange.h"
+#include "OperandsInlines.h"
 #include "VirtualRegister.h"
 #include <wtf/HashMap.h>
 #include <wtf/PrintStream.h>
@@ -123,10 +124,15 @@ public:
             , m_value(bitwise_cast<intptr_t>(pointer))
         {
         }
-        
-        Payload(VirtualRegister operand)
+
+        Payload(Operand operand)
             : m_isTop(false)
-            , m_value(operand.offset())
+            , m_value(operand.asBits())
+        {
+        }
+
+        Payload(VirtualRegister operand)
+            : Payload(Operand(operand))
         {
         }
         
@@ -183,6 +189,7 @@ public:
         }
         
         void dump(PrintStream&) const;
+        void dumpAsOperand(PrintStream&) const;
         
     private:
         bool m_isTop;
@@ -204,6 +211,7 @@ public:
     {
         ASSERT(kind != InvalidAbstractHeap && kind != World && kind != Heap && kind != SideState);
         m_value = encode(kind, payload);
+        ASSERT(this->kind() == kind && this->payload() == payload);
     }
     
     AbstractHeap(WTF::HashTableDeletedValueType)
@@ -219,6 +227,11 @@ public:
         ASSERT(kind() != World && kind() != InvalidAbstractHeap);
         return payloadImpl();
     }
+    Operand operand() const
+    {
+        ASSERT(kind() == Stack && !payload().isTop());
+        return Operand::fromBits(payload().value());
+    }
     
     AbstractHeap supertype() const
     {
index dc1c64e..ed2507c 100644 (file)
@@ -381,7 +381,7 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
             
     case GetLocal: {
         VariableAccessData* variableAccessData = node->variableAccessData();
-        AbstractValue value = m_state.operand(variableAccessData->local().offset());
+        AbstractValue value = m_state.operand(variableAccessData->operand());
         // The value in the local should already be checked.
         DFG_ASSERT(m_graph, node, value.isType(typeFilterFor(variableAccessData->flushFormat())));
         if (value.value())
@@ -392,7 +392,7 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
         
     case GetStack: {
         StackAccessData* data = node->stackAccessData();
-        AbstractValue value = m_state.operand(data->local);
+        AbstractValue value = m_state.operand(data->operand);
         // The value in the local should already be checked.
         DFG_ASSERT(m_graph, node, value.isType(typeFilterFor(data->format)));
         if (value.value())
@@ -402,12 +402,12 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
     }
         
     case SetLocal: {
-        m_state.operand(node->local()) = forNode(node->child1());
+        m_state.operand(node->operand()) = forNode(node->child1());
         break;
     }
         
     case PutStack: {
-        m_state.operand(node->stackAccessData()->local) = forNode(node->child1());
+        m_state.operand(node->stackAccessData()->operand) = forNode(node->child1());
         break;
     }
         
@@ -429,7 +429,7 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
         // Assert that the state of arguments has been set. SetArgumentDefinitely/SetArgumentMaybe means
         // that someone set the argument values out-of-band, and currently this always means setting to a
         // non-clear value.
-        ASSERT(!m_state.operand(node->local()).isClear());
+        ASSERT(!m_state.operand(node->operand()).isClear());
         break;
 
     case InitializeEntrypointArguments: {
@@ -458,6 +458,12 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
         break;
     }
 
+    case VarargsLength: {
+        clobberWorld();
+        setTypeForNode(node, SpecInt32Only);
+        break;
+    }
+
     case LoadVarargs:
     case ForwardVarargs: {
         // FIXME: ForwardVarargs should check if the count becomes known, and if it does, it should turn
@@ -476,7 +482,7 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
         LoadVarargsData* data = node->loadVarargsData();
         m_state.operand(data->count).setNonCellType(SpecInt32Only);
         for (unsigned i = data->limit - 1; i--;)
-            m_state.operand(data->start.offset() + i).makeHeapTop();
+            m_state.operand(data->start + i).makeHeapTop();
         break;
     }
 
@@ -2358,7 +2364,7 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
             unsigned argumentIndex;
             if (argumentIndexChecked.safeGet(argumentIndex) != CheckedState::DidOverflow) {
                 if (inlineCallFrame) {
-                    if (argumentIndex < inlineCallFrame->argumentCountIncludingThis - 1) {
+                    if (argumentIndex < static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1)) {
                         setForNode(node, m_state.operand(
                             virtualRegisterForArgument(argumentIndex + 1) + inlineCallFrame->stackOffset));
                         m_state.setShouldTryConstantFolding(true);
index d73247b..89c08fa 100644 (file)
@@ -120,7 +120,7 @@ public:
     {
         for (unsigned i = 0; i < m_variables.size(); ++i) {
             VariableAccessData* variable = m_variables[i]->find();
-            VirtualRegister operand = variable->local();
+            Operand operand = variable->operand();
 
             if (i)
                 out.print(" ");
index 713e799..1fe20fe 100644 (file)
@@ -358,9 +358,12 @@ private:
                 case NewArrayBuffer:
                     break;
                     
+                case VarargsLength:
+                    break;
+
                 case LoadVarargs:
-                    if (node->loadVarargsData()->offset && (node->child1()->op() == NewArrayWithSpread || node->child1()->op() == Spread || node->child1()->op() == NewArrayBuffer))
-                        escape(node->child1(), node);
+                    if (node->loadVarargsData()->offset && (node->argumentsChild()->op() == NewArrayWithSpread || node->argumentsChild()->op() == Spread || node->argumentsChild()->op() == NewArrayBuffer))
+                        escape(node->argumentsChild(), node);
                     break;
                     
                 case CallVarargs:
@@ -493,10 +496,10 @@ private:
                             return;
                         }
                         ASSERT(!heap.payload().isTop());
-                        VirtualRegister reg(heap.payload().value32());
+                        Operand operand = heap.operand();
                         // The register may not point to an argument or local, for example if we are looking at SetArgumentCountIncludingThis.
-                        if (!reg.isHeader())
-                            clobberedByThisBlock.operand(reg) = true;
+                        if (!operand.isHeader())
+                            clobberedByThisBlock.operand(operand) = true;
                     },
                     NoOpClobberize());
             }
@@ -560,16 +563,16 @@ private:
                         if (inlineCallFrame) {
                             if (inlineCallFrame->isVarargs()) {
                                 isClobberedByBlock |= clobberedByThisBlock.operand(
-                                    inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis);
+                                    VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis));
                             }
 
                             if (!isClobberedByBlock || inlineCallFrame->isClosureCall) {
                                 isClobberedByBlock |= clobberedByThisBlock.operand(
-                                    inlineCallFrame->stackOffset + CallFrameSlot::callee);
+                                    VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::callee));
                             }
 
                             if (!isClobberedByBlock) {
-                                for (unsigned i = 0; i < inlineCallFrame->argumentCountIncludingThis - 1; ++i) {
+                                for (unsigned i = 0; i < static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1); ++i) {
                                     VirtualRegister reg =
                                         VirtualRegister(inlineCallFrame->stackOffset) +
                                         CallFrame::argumentOffset(i);
@@ -627,7 +630,7 @@ private:
                                 m_graph, node, NoOpClobberize(),
                                 [&] (AbstractHeap heap) {
                                     if (heap.kind() == Stack && !heap.payload().isTop()) {
-                                        if (argumentsInvolveStackSlot(inlineCallFrame, VirtualRegister(heap.payload().value32())))
+                                        if (argumentsInvolveStackSlot(inlineCallFrame, heap.operand()))
                                             found = true;
                                         return;
                                     }
@@ -806,7 +809,7 @@ private:
                         
                         bool safeToGetStack = index >= numberOfArgumentsToSkip;
                         if (inlineCallFrame)
-                            safeToGetStack &= index < inlineCallFrame->argumentCountIncludingThis - 1;
+                            safeToGetStack &= index < static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1);
                         else {
                             safeToGetStack &=
                                 index < static_cast<unsigned>(codeBlock()->numParameters()) - 1;
@@ -845,9 +848,23 @@ private:
                     node->convertToIdentityOn(result);
                     break;
                 }
-                    
+                
+                case VarargsLength: {
+                    Node* candidate = node->argumentsChild().node();
+                    if (!isEliminatedAllocation(candidate))
+                        break;
+
+                    // VarargsLength can exit, so it better be exitOK.
+                    DFG_ASSERT(m_graph, node, node->origin.exitOK);
+                    NodeOrigin origin = node->origin.withExitOK(true);
+
+
+                    node->convertToIdentityOn(emitCodeToGetArgumentsArrayLength(insertionSet, candidate, nodeIndex, origin, /* addThis = */ true));
+                    break;
+                }
+
                 case LoadVarargs: {
-                    Node* candidate = node->child1().node();
+                    Node* candidate = node->argumentsChild().node();
                     if (!isEliminatedAllocation(candidate))
                         break;
                     
@@ -862,10 +879,10 @@ private:
                             jsNumber(argumentCountIncludingThis));
                         insertionSet.insertNode(
                             nodeIndex, SpecNone, KillStack, node->origin.takeValidExit(canExit),
-                            OpInfo(varargsData->count.offset()));
+                            OpInfo(varargsData->count));
                         insertionSet.insertNode(
                             nodeIndex, SpecNone, MovHint, node->origin.takeValidExit(canExit),
-                            OpInfo(varargsData->count.offset()), Edge(argumentCountIncludingThisNode));
+                            OpInfo(varargsData->count), Edge(argumentCountIncludingThisNode));
                         insertionSet.insertNode(
                             nodeIndex, SpecNone, PutStack, node->origin.withExitOK(canExit),
                             OpInfo(m_graph.m_stackAccessData.add(varargsData->count, FlushedInt32)),
@@ -874,14 +891,15 @@ private:
 
                     auto storeValue = [&] (Node* value, unsigned storeIndex) {
                         VirtualRegister reg = varargsData->start + storeIndex;
+                        ASSERT(reg.isLocal());
                         StackAccessData* data =
                             m_graph.m_stackAccessData.add(reg, FlushedJSValue);
                         
                         insertionSet.insertNode(
-                            nodeIndex, SpecNone, KillStack, node->origin.takeValidExit(canExit), OpInfo(reg.offset()));
+                            nodeIndex, SpecNone, KillStack, node->origin.takeValidExit(canExit), OpInfo(reg));
                         insertionSet.insertNode(
                             nodeIndex, SpecNone, MovHint, node->origin.takeValidExit(canExit),
-                            OpInfo(reg.offset()), Edge(value));
+                            OpInfo(reg), Edge(value));
                         insertionSet.insertNode(
                             nodeIndex, SpecNone, PutStack, node->origin.withExitOK(canExit),
                             OpInfo(data), Edge(value));
@@ -935,7 +953,7 @@ private:
                                 ASSERT(candidate->op() == PhantomCreateRest);
                                 unsigned numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
                                 InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
-                                unsigned frameArgumentCount = inlineCallFrame->argumentCountIncludingThis - 1;
+                                unsigned frameArgumentCount = static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1);
                                 if (frameArgumentCount >= numberOfArgumentsToSkip)
                                     return frameArgumentCount - numberOfArgumentsToSkip;
                                 return 0;
@@ -983,7 +1001,7 @@ private:
                                     ASSERT(candidate->op() == PhantomCreateRest);
                                     unsigned numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
                                     InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
-                                    unsigned frameArgumentCount = inlineCallFrame->argumentCountIncludingThis - 1;
+                                    unsigned frameArgumentCount = static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1);
                                     for (unsigned loadIndex = numberOfArgumentsToSkip; loadIndex < frameArgumentCount; ++loadIndex) {
                                         VirtualRegister reg = virtualRegisterForArgument(loadIndex + 1) + inlineCallFrame->stackOffset;
                                         StackAccessData* data = m_graph.m_stackAccessData.add(reg, FlushedJSValue);
@@ -1019,9 +1037,7 @@ private:
 
                         InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame();
 
-                        if (inlineCallFrame
-                            && !inlineCallFrame->isVarargs()) {
-
+                        if (inlineCallFrame && !inlineCallFrame->isVarargs()) {
                             unsigned argumentCountIncludingThis = inlineCallFrame->argumentCountIncludingThis;
                             if (argumentCountIncludingThis > varargsData->offset)
                                 argumentCountIncludingThis -= varargsData->offset;
@@ -1030,7 +1046,6 @@ private:
                             RELEASE_ASSERT(argumentCountIncludingThis >= 1);
 
                             if (argumentCountIncludingThis <= varargsData->limit) {
-                                
                                 storeArgumentCountIncludingThis(argumentCountIncludingThis);
 
                                 DFG_ASSERT(m_graph, node, varargsData->limit - 1 >= varargsData->mandatoryMinimum, varargsData->limit, varargsData->mandatoryMinimum);
index ff1d568..1fdc309 100644 (file)
 
 namespace JSC { namespace DFG {
 
-bool argumentsInvolveStackSlot(InlineCallFrame* inlineCallFrame, VirtualRegister reg)
+bool argumentsInvolveStackSlot(InlineCallFrame* inlineCallFrame, Operand operand)
 {
+    if (operand.isTmp())
+        return false;
+
+    VirtualRegister reg = operand.virtualRegister();
     if (!inlineCallFrame)
         return (reg.isArgument() && reg.toArgument()) || reg.isHeader();
-    
+
     if (inlineCallFrame->isClosureCall
         && reg == VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::callee))
         return true;
@@ -46,19 +50,19 @@ bool argumentsInvolveStackSlot(InlineCallFrame* inlineCallFrame, VirtualRegister
         return true;
     
     // We do not include fixups here since it is not related to |arguments|, rest parameters, and varargs.
-    unsigned numArguments = inlineCallFrame->argumentCountIncludingThis - 1;
+    unsigned numArguments = static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1);
     VirtualRegister argumentStart =
         VirtualRegister(inlineCallFrame->stackOffset) + CallFrame::argumentOffset(0);
     return reg >= argumentStart && reg < argumentStart + numArguments;
 }
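The new `if (operand.isTmp()) return false;` early-out above encodes the invariant that checkpoint tmps are not machine stack slots, so they can never alias an inline call frame's argument registers. A simplified model of that check (the struct here is a stand-in for JSC's `Operand`/`VirtualRegister`, not their real API):

```cpp
#include <cassert>

// Sketch of the tmp early-out in argumentsInvolveStackSlot(): tmps live
// in OSR side state, not the frame, so only real register offsets can
// fall inside the inline frame's argument range.
struct Operand {
    bool isTmp;
    int virtualRegisterOffset; // meaningful only when !isTmp
};

static bool argumentsInvolveStackSlot(int argumentStart, int numArguments, Operand operand)
{
    if (operand.isTmp)
        return false; // a tmp is never a stack slot
    int reg = operand.virtualRegisterOffset;
    return reg >= argumentStart && reg < argumentStart + numArguments;
}
```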
 
-bool argumentsInvolveStackSlot(Node* candidate, VirtualRegister reg)
+bool argumentsInvolveStackSlot(Node* candidate, Operand operand)
 {
-    return argumentsInvolveStackSlot(candidate->origin.semantic.inlineCallFrame(), reg);
+    return argumentsInvolveStackSlot(candidate->origin.semantic.inlineCallFrame(), operand);
 }
 
 Node* emitCodeToGetArgumentsArrayLength(
-    InsertionSet& insertionSet, Node* arguments, unsigned nodeIndex, NodeOrigin origin)
+    InsertionSet& insertionSet, Node* arguments, unsigned nodeIndex, NodeOrigin origin, bool addThis)
 {
     Graph& graph = insertionSet.graph();
 
@@ -69,11 +73,14 @@ Node* emitCodeToGetArgumentsArrayLength(
         || arguments->op() == NewArrayBuffer
         || arguments->op() == PhantomDirectArguments || arguments->op() == PhantomClonedArguments
         || arguments->op() == PhantomCreateRest || arguments->op() == PhantomNewArrayBuffer
-        || arguments->op() == PhantomNewArrayWithSpread,
+        || arguments->op() == PhantomNewArrayWithSpread || arguments->op() == PhantomSpread,
         arguments->op());
 
+    if (arguments->op() == PhantomSpread)
+        return emitCodeToGetArgumentsArrayLength(insertionSet, arguments->child1().node(), nodeIndex, origin, addThis);
+
     if (arguments->op() == PhantomNewArrayWithSpread) {
-        unsigned numberOfNonSpreadArguments = 0;
+        unsigned numberOfNonSpreadArguments = addThis;
         BitVector* bitVector = arguments->bitVector();
         Node* currentSum = nullptr;
         for (unsigned i = 0; i < arguments->numChildren(); i++) {
@@ -103,7 +110,7 @@ Node* emitCodeToGetArgumentsArrayLength(
 
     if (arguments->op() == NewArrayBuffer || arguments->op() == PhantomNewArrayBuffer) {
         return insertionSet.insertConstant(
-            nodeIndex, origin, jsNumber(arguments->castOperand<JSImmutableButterfly*>()->length()));
+            nodeIndex, origin, jsNumber(arguments->castOperand<JSImmutableButterfly*>()->length() + addThis));
     }
     
     InlineCallFrame* inlineCallFrame = arguments->origin.semantic.inlineCallFrame();
@@ -113,7 +120,7 @@ Node* emitCodeToGetArgumentsArrayLength(
         numberOfArgumentsToSkip = arguments->numberOfArgumentsToSkip();
     
     if (inlineCallFrame && !inlineCallFrame->isVarargs()) {
-        unsigned argumentsSize = inlineCallFrame->argumentCountIncludingThis - 1;
+        unsigned argumentsSize = inlineCallFrame->argumentCountIncludingThis - !addThis;
         if (argumentsSize >= numberOfArgumentsToSkip)
             argumentsSize -= numberOfArgumentsToSkip;
         else
@@ -129,14 +136,14 @@ Node* emitCodeToGetArgumentsArrayLength(
         nodeIndex, SpecInt32Only, ArithSub, origin, OpInfo(Arith::Unchecked),
         Edge(argumentCount, Int32Use),
         insertionSet.insertConstantForUse(
-            nodeIndex, origin, jsNumber(1 + numberOfArgumentsToSkip), Int32Use));
+            nodeIndex, origin, jsNumber(numberOfArgumentsToSkip + !addThis), Int32Use));
 
     if (numberOfArgumentsToSkip) {
         // The above subtraction may produce a negative number if this number is non-zero. We correct that here.
         result = insertionSet.insertNode(
             nodeIndex, SpecInt32Only, ArithMax, origin, 
             Edge(result, Int32Use), 
-            insertionSet.insertConstantForUse(nodeIndex, origin, jsNumber(0), Int32Use));
+            insertionSet.insertConstantForUse(nodeIndex, origin, jsNumber(static_cast<unsigned>(addThis)), Int32Use));
         result->setResult(NodeResultInt32);
     }
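The `addThis` parameter threaded through `emitCodeToGetArgumentsArrayLength` above adjusts both the subtraction (`numberOfArgumentsToSkip + !addThis`) and the `ArithMax` clamp (`jsNumber(addThis)`), so `VarargsLength` gets a count including `this` while the existing callers keep the plain `|arguments|` length. The scalar arithmetic the emitted nodes compute can be sketched as (names are descriptive, not JSC's):

```cpp
#include <algorithm>
#include <cassert>

// Sketch of the length computed by emitCodeToGetArgumentsArrayLength:
// subtract the skipped rest-parameter arguments, and |this| unless the
// caller asked for it to be counted; if skipping could have driven the
// result negative, clamp it to the floor (1 when |this| is counted).
static int argumentsArrayLength(int argumentCountIncludingThis,
                                int numberOfArgumentsToSkip,
                                bool addThis)
{
    // ArithSub node.
    int result = argumentCountIncludingThis - (numberOfArgumentsToSkip + (addThis ? 0 : 1));
    // ArithMax node, emitted only when numberOfArgumentsToSkip != 0.
    if (numberOfArgumentsToSkip)
        result = std::max(result, addThis ? 1 : 0);
    return result;
}
```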
 
index 9a7718f..74f939e 100644 (file)
 
 namespace JSC { namespace DFG {
 
-bool argumentsInvolveStackSlot(InlineCallFrame*, VirtualRegister);
-bool argumentsInvolveStackSlot(Node* candidate, VirtualRegister);
+bool argumentsInvolveStackSlot(InlineCallFrame*, Operand);
+bool argumentsInvolveStackSlot(Node* candidate, Operand);
 
 Node* emitCodeToGetArgumentsArrayLength(
-    InsertionSet&, Node* arguments, unsigned nodeIndex, NodeOrigin);
+    InsertionSet&, Node* arguments, unsigned nodeIndex, NodeOrigin, bool addThis = false);
 
 } } // namespace JSC::DFG
 
index 96dac39..897c70d 100644 (file)
@@ -139,8 +139,7 @@ public:
     
     unsigned numberOfArguments() const { return m_block->valuesAtTail.numberOfArguments(); }
     unsigned numberOfLocals() const { return m_block->valuesAtTail.numberOfLocals(); }
-    AbstractValue& operand(int operand) { return m_block->valuesAtTail.operand(operand); }
-    AbstractValue& operand(VirtualRegister operand) { return m_block->valuesAtTail.operand(operand); }
+    AbstractValue& operand(Operand operand) { return m_block->valuesAtTail.operand(operand); }
     AbstractValue& local(size_t index) { return m_block->valuesAtTail.local(index); }
     AbstractValue& argument(size_t index) { return m_block->valuesAtTail.argument(index); }
     
index 7743e7d..c2b55aa 100644 (file)
@@ -65,10 +65,10 @@ void AvailabilityMap::pruneHeap()
 
 void AvailabilityMap::pruneByLiveness(Graph& graph, CodeOrigin where)
 {
-    Operands<Availability> localsCopy(m_locals.numberOfArguments(), m_locals.numberOfLocals(), Availability::unavailable());
+    Operands<Availability> localsCopy(OperandsLike, m_locals, Availability::unavailable());
     graph.forAllLiveInBytecode(
         where,
-        [&] (VirtualRegister reg) {
+        [&] (Operand reg) {
             localsCopy.operand(reg) = m_locals.operand(reg);
         });
     m_locals = WTFMove(localsCopy);
index 5355256..80c12bf 100644 (file)
@@ -66,9 +66,9 @@ struct AvailabilityMap {
     }
     
     template<typename HasFunctor, typename AddFunctor>
-    void closeStartingWithLocal(VirtualRegister reg, const HasFunctor& has, const AddFunctor& add)
+    void closeStartingWithLocal(Operand op, const HasFunctor& has, const AddFunctor& add)
     {
-        Availability availability = m_locals.operand(reg);
+        Availability availability = m_locals.operand(op);
         if (!availability.hasNode())
             return;
         
index 0584496..e21b28b 100644 (file)
@@ -32,7 +32,7 @@
 
 namespace JSC { namespace DFG {
 
-BasicBlock::BasicBlock(BytecodeIndex bytecodeBegin, unsigned numArguments, unsigned numLocals, float executionCount)
+BasicBlock::BasicBlock(BytecodeIndex bytecodeBegin, unsigned numArguments, unsigned numLocals, unsigned numTmps, float executionCount)
     : bytecodeBegin(bytecodeBegin)
     , index(NoBlock)
     , cfaStructureClobberStateAtHead(StructuresAreWatched)
@@ -48,11 +48,11 @@ BasicBlock::BasicBlock(BytecodeIndex bytecodeBegin, unsigned numArguments, unsig
     , isLinked(false)
 #endif
     , isReachable(false)
-    , variablesAtHead(numArguments, numLocals)
-    , variablesAtTail(numArguments, numLocals)
-    , valuesAtHead(numArguments, numLocals)
-    , valuesAtTail(numArguments, numLocals)
-    , intersectionOfPastValuesAtHead(numArguments, numLocals, AbstractValue::fullTop())
+    , variablesAtHead(numArguments, numLocals, numTmps)
+    , variablesAtTail(numArguments, numLocals, numTmps)
+    , valuesAtHead(numArguments, numLocals, numTmps)
+    , valuesAtTail(numArguments, numLocals, numTmps)
+    , intersectionOfPastValuesAtHead(numArguments, numLocals, numTmps, AbstractValue::fullTop())
     , executionCount(executionCount)
 {
 }
@@ -70,6 +70,15 @@ void BasicBlock::ensureLocals(unsigned newNumLocals)
     intersectionOfPastValuesAtHead.ensureLocals(newNumLocals, AbstractValue::fullTop());
 }
 
+void BasicBlock::ensureTmps(unsigned newNumTmps)
+{
+    variablesAtHead.ensureTmps(newNumTmps);
+    variablesAtTail.ensureTmps(newNumTmps);
+    valuesAtHead.ensureTmps(newNumTmps);
+    valuesAtTail.ensureTmps(newNumTmps);
+    intersectionOfPastValuesAtHead.ensureTmps(newNumTmps, AbstractValue::fullTop());
+}
+
 void BasicBlock::replaceTerminal(Graph& graph, Node* node)
 {
     NodeAndIndex result = findTerminal();
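`BasicBlock::ensureTmps` above grows the tmp segment of each `Operands<T>` snapshot (`variablesAtHead`, `valuesAtTail`, and so on) without disturbing existing entries. A simplified model of an `Operands`-style container with a growable tmp segment (the flat layout and segment order here are illustrative, not JSC's actual `Operands<T>` internals):

```cpp
#include <cassert>
#include <vector>

// Sketch: one flat array holding a value per argument, local, and,
// after this patch, checkpoint tmp. ensureTmps() grows only the tmp
// segment, filling new slots with a given initial value.
template<typename T>
class Operands {
public:
    Operands(unsigned numArguments, unsigned numLocals, unsigned numTmps, T initial = T())
        : m_numArguments(numArguments)
        , m_numLocals(numLocals)
        , m_values(numArguments + numLocals + numTmps, initial)
    { }

    unsigned numberOfArguments() const { return m_numArguments; }
    unsigned numberOfLocals() const { return m_numLocals; }
    unsigned numberOfTmps() const
    {
        return static_cast<unsigned>(m_values.size()) - m_numArguments - m_numLocals;
    }

    T& argument(unsigned i) { return m_values[i]; }
    T& local(unsigned i) { return m_values[m_numArguments + i]; }
    T& tmp(unsigned i) { return m_values[m_numArguments + m_numLocals + i]; }

    void ensureTmps(unsigned newNumTmps, T initial = T())
    {
        if (newNumTmps <= numberOfTmps())
            return; // never shrinks, mirroring ensureLocals()
        m_values.resize(m_numArguments + m_numLocals + newNumTmps, initial);
    }

private:
    unsigned m_numArguments;
    unsigned m_numLocals;
    std::vector<T> m_values;
};
```

Note how `intersectionOfPastValuesAtHead.ensureTmps(newNumTmps, AbstractValue::fullTop())` in the diff uses the same initial-value parameter to seed new tmp slots with top.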
index b5adf09..1dd38c7 100644 (file)
@@ -46,11 +46,12 @@ typedef Vector<Node*, 8> BlockNodeList;
 
 struct BasicBlock : RefCounted<BasicBlock> {
     BasicBlock(
-        BytecodeIndex bytecodeBegin, unsigned numArguments, unsigned numLocals,
+        BytecodeIndex bytecodeBegin, unsigned numArguments, unsigned numLocals, unsigned numTmps,
         float executionCount);
     ~BasicBlock();
     
     void ensureLocals(unsigned newNumLocals);
+    void ensureTmps(unsigned newNumTmps);
     
     size_t size() const { return m_nodes.size(); }
     bool isEmpty() const { return !size(); }
index 7508e21..008cf52 100644 (file)
@@ -51,7 +51,7 @@ void BlockInsertionSet::insert(size_t index, Ref<BasicBlock>&& block)
 
 BasicBlock* BlockInsertionSet::insert(size_t index, float executionCount)
 {
-    Ref<BasicBlock> block = adoptRef(*new BasicBlock(BytecodeIndex(), m_graph.block(0)->variablesAtHead.numberOfArguments(), m_graph.block(0)->variablesAtHead.numberOfLocals(), executionCount));
+    Ref<BasicBlock> block = adoptRef(*new BasicBlock(BytecodeIndex(), m_graph.block(0)->variablesAtHead.numberOfArguments(), m_graph.block(0)->variablesAtHead.numberOfLocals(), m_graph.block(0)->variablesAtHead.numberOfTmps(), executionCount));
     block->isReachable = true;
     auto* result = block.ptr();
     insert(index, WTFMove(block));
index 030fabf..ebb46da 100644 (file)
@@ -111,6 +111,7 @@ public:
         , m_constantOne(graph.freeze(jsNumber(1)))
         , m_numArguments(m_codeBlock->numParameters())
         , m_numLocals(m_codeBlock->numCalleeLocals())
+        , m_numTmps(m_codeBlock->numTmps())
         , m_parameterSlots(0)
         , m_numPassedVarArgs(0)
         , m_inlineStackTop(0)
@@ -139,6 +140,17 @@ private:
             m_graph.block(i)->ensureLocals(newNumLocals);
     }
 
+    void ensureTmps(unsigned newNumTmps)
+    {
+        VERBOSE_LOG("   ensureTmps: trying to raise m_numTmps from ", m_numTmps, " to ", newNumTmps, "\n");
+        if (newNumTmps <= m_numTmps)
+            return;
+        m_numTmps = newNumTmps;
+        for (size_t i = 0; i < m_graph.numBlocks(); ++i)
+            m_graph.block(i)->ensureTmps(newNumTmps);
+    }
+
+
     // Helper for min and max.
     template<typename ChecksFunctor>
     bool handleMinMax(VirtualRegister result, NodeType op, int registerOffset, int argumentCountIncludingThis, const ChecksFunctor& insertChecks);
@@ -270,7 +282,17 @@ private:
     void linkBlock(BasicBlock*, Vector<BasicBlock*>& possibleTargets);
     void linkBlocks(Vector<BasicBlock*>& unlinkedBlocks, Vector<BasicBlock*>& possibleTargets);
     
-    VariableAccessData* newVariableAccessData(VirtualRegister operand)
+    void progressToNextCheckpoint()
+    {
+        m_currentIndex = BytecodeIndex(m_currentIndex.offset(), m_currentIndex.checkpoint() + 1);
+        // At this point, it's again OK to OSR exit.
+        m_exitOK = true;
+        addToGraph(ExitOK);
+
+        processSetLocalQueue();
+    }
+
+    VariableAccessData* newVariableAccessData(Operand operand)
     {
         ASSERT(!operand.isConstant());
         
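`progressToNextCheckpoint()` above relies on `BytecodeIndex` now carrying a checkpoint ordinal alongside the instruction offset, so the parser can advance OSR exit targets into "the middle" of one bytecode. A sketch of that pairing (field widths and packing here are illustrative, not JSC's actual `BytecodeIndex` encoding):

```cpp
#include <cassert>
#include <cstdint>

// Sketch: a bytecode index that pairs an instruction offset with a small
// checkpoint ordinal, so consecutive effectful steps of one bytecode get
// distinct, ordered exit targets.
class BytecodeIndex {
public:
    explicit BytecodeIndex(uint32_t offset = 0, uint8_t checkpoint = 0)
        : m_bits((static_cast<uint64_t>(offset) << 8) | checkpoint)
    { }

    uint32_t offset() const { return static_cast<uint32_t>(m_bits >> 8); }
    uint8_t checkpoint() const { return static_cast<uint8_t>(m_bits & 0xff); }

    // Same bytecode, next effectful step: keep the offset, bump the
    // checkpoint, mirroring progressToNextCheckpoint() in the hunk above.
    BytecodeIndex nextCheckpoint() const
    {
        return BytecodeIndex(offset(), checkpoint() + 1);
    }

private:
    uint64_t m_bits;
};
```

After the bump, `m_exitOK` is set back to true and an `ExitOK` node is emitted, because the new index is once again a valid place to reconstruct state.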
@@ -279,16 +301,14 @@ private:
     }
     
     // Get/Set the operands/result of a bytecode instruction.
-    Node* getDirect(VirtualRegister operand)
+    Node* getDirect(Operand operand)
     {
         ASSERT(!operand.isConstant());
 
-        // Is this an argument?
         if (operand.isArgument())
-            return getArgument(operand);
+            return getArgument(operand.virtualRegister());
 
-        // Must be a local.
-        return getLocal(operand);
+        return getLocalOrTmp(operand);
     }
 
     Node* get(VirtualRegister operand)
@@ -298,8 +318,8 @@ private:
             unsigned oldSize = m_constants.size();
             if (constantIndex >= oldSize || !m_constants[constantIndex]) {
                 const CodeBlock& codeBlock = *m_inlineStackTop->m_codeBlock;
-                JSValue value = codeBlock.getConstant(operand.offset());
-                SourceCodeRepresentation sourceCodeRepresentation = codeBlock.constantSourceCodeRepresentation(operand.offset());
+                JSValue value = codeBlock.getConstant(operand);
+                SourceCodeRepresentation sourceCodeRepresentation = codeBlock.constantSourceCodeRepresentation(operand);
                 if (constantIndex >= oldSize) {
                     m_constants.grow(constantIndex + 1);
                     for (unsigned i = oldSize; i < m_constants.size(); ++i)
@@ -358,9 +378,10 @@ private:
         // initializing locals at the top of a function.
         ImmediateNakedSet
     };
-    Node* setDirect(VirtualRegister operand, Node* value, SetMode setMode = NormalSet)
+
+    Node* setDirect(Operand operand, Node* value, SetMode setMode = NormalSet)
     {
-        addToGraph(MovHint, OpInfo(operand.offset()), value);
+        addToGraph(MovHint, OpInfo(operand), value);
 
         // We can't exit anymore because our OSR exit state has changed.
         m_exitOK = false;
@@ -374,7 +395,7 @@ private:
         
         return delayed.execute(this);
     }
-    
+
     void processSetLocalQueue()
     {
         for (unsigned i = 0; i < m_setLocalQueue.size(); ++i)
@@ -392,18 +413,17 @@ private:
         ASSERT(node->op() == GetLocal);
         ASSERT(node->origin.semantic.bytecodeIndex() == m_currentIndex);
         ConcurrentJSLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
-        LazyOperandValueProfileKey key(m_currentIndex, node->local());
+        LazyOperandValueProfileKey key(m_currentIndex, node->operand());
         SpeculatedType prediction = m_inlineStackTop->m_lazyOperands.prediction(locker, key);
         node->variableAccessData()->predict(prediction);
         return node;
     }
 
     // Used in implementing get/set, above, where the operand is a local variable.
-    Node* getLocal(VirtualRegister operand)
+    Node* getLocalOrTmp(Operand operand)
     {
-        unsigned local = operand.toLocal();
-
-        Node* node = m_currentBlock->variablesAtTail.local(local);
+        ASSERT(operand.isTmp() || operand.isLocal());
+        Node*& node = m_currentBlock->variablesAtTail.operand(operand);
         
         // This has two goals: 1) link together variable access datas, and 2)
         // try to avoid creating redundant GetLocals. (1) is required for
@@ -428,20 +448,26 @@ private:
             variable = newVariableAccessData(operand);
         
         node = injectLazyOperandSpeculation(addToGraph(GetLocal, OpInfo(variable)));
-        m_currentBlock->variablesAtTail.local(local) = node;
         return node;
     }
-    Node* setLocal(const CodeOrigin& semanticOrigin, VirtualRegister operand, Node* value, SetMode setMode = NormalSet)
+    Node* setLocalOrTmp(const CodeOrigin& semanticOrigin, Operand operand, Node* value, SetMode setMode = NormalSet)
     {
+        ASSERT(operand.isTmp() || operand.isLocal());
         SetForScope<CodeOrigin> originChange(m_currentSemanticOrigin, semanticOrigin);
 
-        unsigned local = operand.toLocal();
-        
-        if (setMode != ImmediateNakedSet) {
-            ArgumentPosition* argumentPosition = findArgumentPositionForLocal(operand);
+        if (operand.isTmp() && static_cast<unsigned>(operand.value()) >= m_numTmps) {
+            if (inlineCallFrame())
+                dataLogLn(*inlineCallFrame());
+            dataLogLn("Bad operand: ", operand, " but current number of tmps is: ", m_numTmps, " code block has: ", m_profiledBlock->numTmps(), " tmps.");
+            CRASH();
+        }
+
+        if (setMode != ImmediateNakedSet && !operand.isTmp()) {
+            VirtualRegister reg = operand.virtualRegister();
+            ArgumentPosition* argumentPosition = findArgumentPositionForLocal(reg);
             if (argumentPosition)
                 flushDirect(operand, argumentPosition);
-            else if (m_graph.needsScopeRegister() && operand == m_codeBlock->scopeRegister())
+            else if (m_graph.needsScopeRegister() && reg == m_codeBlock->scopeRegister())
                 flush(operand);
         }
 
@@ -451,7 +477,7 @@ private:
         variableAccessData->mergeCheckArrayHoistingFailed(
             m_inlineStackTop->m_exitProfile.hasExitSite(semanticOrigin.bytecodeIndex(), BadIndexingType));
         Node* node = addToGraph(SetLocal, OpInfo(variableAccessData), value);
-        m_currentBlock->variablesAtTail.local(local) = node;
+        m_currentBlock->variablesAtTail.operand(operand) = node;
         return node;
     }
 
@@ -483,20 +509,21 @@ private:
         m_currentBlock->variablesAtTail.argument(argument) = node;
         return node;
     }
-    Node* setArgument(const CodeOrigin& semanticOrigin, VirtualRegister operand, Node* value, SetMode setMode = NormalSet)
+    Node* setArgument(const CodeOrigin& semanticOrigin, Operand operand, Node* value, SetMode setMode = NormalSet)
     {
         SetForScope<CodeOrigin> originChange(m_currentSemanticOrigin, semanticOrigin);
 
-        unsigned argument = operand.toArgument();
+        VirtualRegister reg = operand.virtualRegister();
+        unsigned argument = reg.toArgument();
         ASSERT(argument < m_numArguments);
         
-        VariableAccessData* variableAccessData = newVariableAccessData(operand);
+        VariableAccessData* variableAccessData = newVariableAccessData(reg);
 
         // Always flush arguments, except for 'this'. If 'this' is created by us,
         // then make sure that it's never unboxed.
         if (argument || m_graph.needsFlushedThis()) {
             if (setMode != ImmediateNakedSet)
-                flushDirect(operand);
+                flushDirect(reg);
         }
         
         if (!argument && m_codeBlock->specializationKind() == CodeForConstruct)
@@ -532,14 +559,16 @@ private:
             int argument = VirtualRegister(operand.offset() - inlineCallFrame->stackOffset).toArgument();
             return stack->m_argumentPositions[argument];
         }
-        return 0;
+        return nullptr;
     }
     
-    ArgumentPosition* findArgumentPosition(VirtualRegister operand)
+    ArgumentPosition* findArgumentPosition(Operand operand)
     {
+        if (operand.isTmp())
+            return nullptr;
         if (operand.isArgument())
             return findArgumentPositionForArgument(operand.toArgument());
-        return findArgumentPositionForLocal(operand);
+        return findArgumentPositionForLocal(operand.virtualRegister());
     }
 
     template<typename AddFlushDirectFunc>
@@ -550,9 +579,9 @@ private:
             ASSERT(!m_graph.hasDebuggerEnabled());
             numArguments = inlineCallFrame->argumentsWithFixup.size();
             if (inlineCallFrame->isClosureCall)
-                addFlushDirect(inlineCallFrame, remapOperand(inlineCallFrame, VirtualRegister(CallFrameSlot::callee)));
+                addFlushDirect(inlineCallFrame, remapOperand(inlineCallFrame, CallFrameSlot::callee));
             if (inlineCallFrame->isVarargs())
-                addFlushDirect(inlineCallFrame, remapOperand(inlineCallFrame, VirtualRegister(CallFrameSlot::argumentCountIncludingThis)));
+                addFlushDirect(inlineCallFrame, remapOperand(inlineCallFrame, CallFrameSlot::argumentCountIncludingThis));
         } else
             numArguments = m_graph.baselineCodeBlockFor(inlineCallFrame)->numParameters();
 
@@ -575,6 +604,7 @@ private:
 
                 CodeBlock* codeBlock = m_graph.baselineCodeBlockFor(inlineCallFrame);
                 FullBytecodeLiveness& fullLiveness = m_graph.livenessFor(codeBlock);
+                // Note: We don't need to handle tmps here because tmps are not required to be flushed to the stack.
                 const auto& livenessAtBytecode = fullLiveness.getLiveness(bytecodeIndex, m_graph.appropriateLivenessCalculationPoint(origin, isCallerOrigin));
                 for (unsigned local = codeBlock->numCalleeLocals(); local--;) {
                     if (livenessAtBytecode[local])
@@ -584,27 +614,27 @@ private:
             });
     }
 
-    void flush(VirtualRegister operand)
+    void flush(Operand operand)
     {
         flushDirect(m_inlineStackTop->remapOperand(operand));
     }
     
-    void flushDirect(VirtualRegister operand)
+    void flushDirect(Operand operand)
     {
         flushDirect(operand, findArgumentPosition(operand));
     }
 
-    void flushDirect(VirtualRegister operand, ArgumentPosition* argumentPosition)
+    void flushDirect(Operand operand, ArgumentPosition* argumentPosition)
     {
         addFlushOrPhantomLocal<Flush>(operand, argumentPosition);
     }
 
     template<NodeType nodeType>
-    void addFlushOrPhantomLocal(VirtualRegister operand, ArgumentPosition* argumentPosition)
+    void addFlushOrPhantomLocal(Operand operand, ArgumentPosition* argumentPosition)
     {
         ASSERT(!operand.isConstant());
         
-        Node* node = m_currentBlock->variablesAtTail.operand(operand);
+        Node*& node = m_currentBlock->variablesAtTail.operand(operand);
         
         VariableAccessData* variable;
         
@@ -614,26 +644,25 @@ private:
             variable = newVariableAccessData(operand);
         
         node = addToGraph(nodeType, OpInfo(variable));
-        m_currentBlock->variablesAtTail.operand(operand) = node;
         if (argumentPosition)
             argumentPosition->addVariable(variable);
     }
 
-    void phantomLocalDirect(VirtualRegister operand)
+    void phantomLocalDirect(Operand operand)
     {
         addFlushOrPhantomLocal<PhantomLocal>(operand, findArgumentPosition(operand));
     }
 
     void flush(InlineStackEntry* inlineStackEntry)
     {
-        auto addFlushDirect = [&] (InlineCallFrame*, VirtualRegister reg) { flushDirect(reg); };
+        auto addFlushDirect = [&] (InlineCallFrame*, Operand operand) { flushDirect(operand); };
         flushImpl(inlineStackEntry->m_inlineCallFrame, addFlushDirect);
     }
 
     void flushForTerminal()
     {
-        auto addFlushDirect = [&] (InlineCallFrame*, VirtualRegister reg) { flushDirect(reg); };
-        auto addPhantomLocalDirect = [&] (InlineCallFrame*, VirtualRegister reg) { phantomLocalDirect(reg); };
+        auto addFlushDirect = [&] (InlineCallFrame*, Operand operand) { flushDirect(operand); };
+        auto addPhantomLocalDirect = [&] (InlineCallFrame*, Operand operand) { phantomLocalDirect(operand); };
         flushForTerminalImpl(currentCodeOrigin(), addFlushDirect, addPhantomLocalDirect);
     }
 
@@ -761,6 +790,11 @@ private:
             Edge(child1), Edge(child2), Edge(child3));
         return addToGraph(result);
     }
+    Node* addToGraph(NodeType op, Operand operand, Node* child1)
+    {
+        ASSERT(op == MovHint);
+        return addToGraph(op, OpInfo(operand.kind()), OpInfo(operand.value()), child1);
+    }
     Node* addToGraph(NodeType op, OpInfo info1, OpInfo info2, Edge child1, Edge child2 = Edge(), Edge child3 = Edge())
     {
         Node* result = m_graph.addNode(
@@ -1091,8 +1125,10 @@ private:
 
     // The number of arguments passed to the function.
     unsigned m_numArguments;
-    // The number of locals (vars + temporaries) used in the function.
+    // The number of locals (vars + temporaries) used by the bytecode for the function.
     unsigned m_numLocals;
+    // The max number of temps used for forwarding data to an OSR exit checkpoint.
+    unsigned m_numTmps;
     // The number of slots (in units of sizeof(Register)) that we need to
     // preallocate for arguments to outgoing calls from this frame. This
     // number includes the CallFrame slots that we initialize for the callee
@@ -1159,14 +1195,17 @@ private:
         
         ~InlineStackEntry();
         
-        VirtualRegister remapOperand(VirtualRegister operand) const
+        Operand remapOperand(Operand operand) const
         {
             if (!m_inlineCallFrame)
                 return operand;
+
+            if (operand.isTmp())
+                return Operand::tmp(operand.value() + m_inlineCallFrame->tmpOffset);
             
-            ASSERT(!operand.isConstant());
+            ASSERT(!operand.virtualRegister().isConstant());
 
-            return VirtualRegister(operand.offset() + m_inlineCallFrame->stackOffset);
+            return operand.virtualRegister() + m_inlineCallFrame->stackOffset;
         }
     };
     
@@ -1175,13 +1214,8 @@ private:
     ICStatusContextStack m_icContextStack;
     
     struct DelayedSetLocal {
-        CodeOrigin m_origin;
-        VirtualRegister m_operand;
-        Node* m_value;
-        SetMode m_setMode;
-        
         DelayedSetLocal() { }
-        DelayedSetLocal(const CodeOrigin& origin, VirtualRegister operand, Node* value, SetMode setMode)
+        DelayedSetLocal(const CodeOrigin& origin, Operand operand, Node* value, SetMode setMode)
             : m_origin(origin)
             , m_operand(operand)
             , m_value(value)
@@ -1194,8 +1228,13 @@ private:
         {
             if (m_operand.isArgument())
                 return parser->setArgument(m_origin, m_operand, m_value, m_setMode);
-            return parser->setLocal(m_origin, m_operand, m_value, m_setMode);
+            return parser->setLocalOrTmp(m_origin, m_operand, m_value, m_setMode);
         }
+
+        CodeOrigin m_origin;
+        Operand m_operand;
+        Node* m_value { nullptr };
+        SetMode m_setMode;
     };
     
     Vector<DelayedSetLocal, 2> m_setLocalQueue;
@@ -1208,7 +1247,7 @@ private:
 BasicBlock* ByteCodeParser::allocateTargetableBlock(BytecodeIndex bytecodeIndex)
 {
     ASSERT(bytecodeIndex);
-    Ref<BasicBlock> block = adoptRef(*new BasicBlock(bytecodeIndex, m_numArguments, m_numLocals, 1));
+    Ref<BasicBlock> block = adoptRef(*new BasicBlock(bytecodeIndex, m_numArguments, m_numLocals, m_numTmps, 1));
     BasicBlock* blockPtr = block.ptr();
     // m_blockLinkingTargets must always be sorted in increasing order of bytecodeBegin
     if (m_inlineStackTop->m_blockLinkingTargets.size())
@@ -1220,7 +1259,7 @@ BasicBlock* ByteCodeParser::allocateTargetableBlock(BytecodeIndex bytecodeIndex)
 
 BasicBlock* ByteCodeParser::allocateUntargetableBlock()
 {
-    Ref<BasicBlock> block = adoptRef(*new BasicBlock(BytecodeIndex(), m_numArguments, m_numLocals, 1));
+    Ref<BasicBlock> block = adoptRef(*new BasicBlock(BytecodeIndex(), m_numArguments, m_numLocals, m_numTmps, 1));
     BasicBlock* blockPtr = block.ptr();
     m_graph.appendBlock(WTFMove(block));
     return blockPtr;
@@ -1423,7 +1462,7 @@ bool ByteCodeParser::handleRecursiveTailCall(Node* callTargetNode, CallVariant c
             if (argumentCountIncludingThis != static_cast<int>(callFrame->argumentCountIncludingThis))
                 continue;
             // If the target InlineCallFrame is Varargs, we do not know how many arguments are actually filled by LoadVarargs. Varargs InlineCallFrame's
-            // argumentCountIncludingThis is maximum number of potentially filled arguments by LoadVarargs. We "continue" to the upper frame which may be
+            // argumentCountIncludingThis is the maximum number of arguments potentially filled by LoadVarargs. We "continue" to the upper frame which may be
             // a good target to jump into.
             if (callFrame->isVarargs())
                 continue;
@@ -1449,7 +1488,7 @@ bool ByteCodeParser::handleRecursiveTailCall(Node* callTargetNode, CallVariant c
         // We must set the callee to the right value
         if (stackEntry->m_inlineCallFrame) {
             if (stackEntry->m_inlineCallFrame->isClosureCall)
-                setDirect(stackEntry->remapOperand(VirtualRegister(CallFrameSlot::callee)), callTargetNode, NormalSet);
+                setDirect(remapOperand(stackEntry->m_inlineCallFrame, CallFrameSlot::callee), callTargetNode, NormalSet);
         } else
             addToGraph(SetCallee, callTargetNode);
 
@@ -1614,16 +1653,18 @@ void ByteCodeParser::inlineCall(Node* callTargetNode, VirtualRegister result, Ca
     ASSERT(!(numberOfStackPaddingSlots % stackAlignmentRegisters()));
     int registerOffsetAfterFixup = registerOffset - numberOfStackPaddingSlots;
     
-    int inlineCallFrameStart = m_inlineStackTop->remapOperand(VirtualRegister(registerOffsetAfterFixup)).offset() + CallFrame::headerSizeInRegisters;
+    Operand inlineCallFrameStart = VirtualRegister(m_inlineStackTop->remapOperand(VirtualRegister(registerOffsetAfterFixup)).value() + CallFrame::headerSizeInRegisters);
     
     ensureLocals(
-        VirtualRegister(inlineCallFrameStart).toLocal() + 1 +
+        inlineCallFrameStart.toLocal() + 1 +
         CallFrame::headerSizeInRegisters + codeBlock->numCalleeLocals());
     
+    ensureTmps((m_inlineStackTop->m_inlineCallFrame ? m_inlineStackTop->m_inlineCallFrame->tmpOffset : 0) + m_inlineStackTop->m_codeBlock->numTmps() + codeBlock->numTmps());
+
     size_t argumentPositionStart = m_graph.m_argumentPositions.size();
 
     if (result.isValid())
-        result = m_inlineStackTop->remapOperand(result);
+        result = m_inlineStackTop->remapOperand(result).virtualRegister();
 
     VariableAccessData* calleeVariable = nullptr;
     if (callee.isClosureCall()) {
@@ -1636,7 +1677,7 @@ void ByteCodeParser::inlineCall(Node* callTargetNode, VirtualRegister result, Ca
 
     InlineStackEntry* callerStackTop = m_inlineStackTop;
     InlineStackEntry inlineStackEntry(this, codeBlock, codeBlock, callee.function(), result,
-        (VirtualRegister)inlineCallFrameStart, argumentCountIncludingThis, kind, continuationBlock);
+        inlineCallFrameStart.virtualRegister(), argumentCountIncludingThis, kind, continuationBlock);
 
     // This is where the actual inlining really happens.
     BytecodeIndex oldIndex = m_currentIndex;
@@ -1671,9 +1712,9 @@ void ByteCodeParser::inlineCall(Node* callTargetNode, VirtualRegister result, Ca
         // However, when we begin executing the callee, we need OSR exit to be aware of where it can recover the arguments to the setter, loc9 and loc10. The MovHints in the inlined
         // callee make it so that if we exit at <HERE>, we can recover loc9 and loc10.
         for (int index = 0; index < argumentCountIncludingThis; ++index) {
-            VirtualRegister argumentToGet = callerStackTop->remapOperand(virtualRegisterForArgument(index, registerOffset));
+            Operand argumentToGet = callerStackTop->remapOperand(virtualRegisterForArgument(index, registerOffset));
             Node* value = getDirect(argumentToGet);
-            addToGraph(MovHint, OpInfo(argumentToGet.offset()), value);
+            addToGraph(MovHint, OpInfo(argumentToGet), value);
             m_setLocalQueue.append(DelayedSetLocal { currentCodeOrigin(), argumentToGet, value, ImmediateNakedSet });
         }
         break;
@@ -1717,16 +1758,16 @@ void ByteCodeParser::inlineCall(Node* callTargetNode, VirtualRegister result, Ca
         // In such cases, we do not need to move frames.
         if (registerOffsetAfterFixup != registerOffset) {
             for (int index = 0; index < argumentCountIncludingThis; ++index) {
-                VirtualRegister argumentToGet = callerStackTop->remapOperand(virtualRegisterForArgument(index, registerOffset));
+                Operand argumentToGet = callerStackTop->remapOperand(virtualRegisterForArgument(index, registerOffset));
                 Node* value = getDirect(argumentToGet);
-                VirtualRegister argumentToSet = m_inlineStackTop->remapOperand(virtualRegisterForArgument(index));
-                addToGraph(MovHint, OpInfo(argumentToSet.offset()), value);
+                Operand argumentToSet = m_inlineStackTop->remapOperand(virtualRegisterForArgument(index));
+                addToGraph(MovHint, OpInfo(argumentToSet), value);
                 m_setLocalQueue.append(DelayedSetLocal { currentCodeOrigin(), argumentToSet, value, ImmediateNakedSet });
             }
         }
         for (int index = 0; index < arityFixupCount; ++index) {
-            VirtualRegister argumentToSet = m_inlineStackTop->remapOperand(virtualRegisterForArgument(argumentCountIncludingThis + index));
-            addToGraph(MovHint, OpInfo(argumentToSet.offset()), undefined);
+            Operand argumentToSet = m_inlineStackTop->remapOperand(virtualRegisterForArgument(argumentCountIncludingThis + index));
+            addToGraph(MovHint, OpInfo(argumentToSet), undefined);
             m_setLocalQueue.append(DelayedSetLocal { currentCodeOrigin(), argumentToSet, undefined, ImmediateNakedSet });
         }
 
@@ -1893,12 +1934,12 @@ bool ByteCodeParser::handleVarargsInlining(Node* callTargetNode, VirtualRegister
         emitFunctionChecks(callVariant, callTargetNode, thisArgument);
         
         int remappedRegisterOffset =
-        m_inlineStackTop->remapOperand(VirtualRegister(registerOffset)).offset();
+        m_inlineStackTop->remapOperand(VirtualRegister(registerOffset)).virtualRegister().offset();
         
         ensureLocals(VirtualRegister(remappedRegisterOffset).toLocal());
         
         int argumentStart = registerOffset + CallFrame::headerSizeInRegisters;
-        int remappedArgumentStart = m_inlineStackTop->remapOperand(VirtualRegister(argumentStart)).offset();
+        int remappedArgumentStart = m_inlineStackTop->remapOperand(VirtualRegister(argumentStart)).virtualRegister().offset();
         
         LoadVarargsData* data = m_graph.m_loadVarargsData.add();
         data->start = VirtualRegister(remappedArgumentStart + 1);
@@ -1906,11 +1947,24 @@ bool ByteCodeParser::handleVarargsInlining(Node* callTargetNode, VirtualRegister
         data->offset = argumentsOffset;
         data->limit = maxArgumentCountIncludingThis;
         data->mandatoryMinimum = mandatoryMinimum;
-        
-        if (callOp == TailCallForwardVarargs)
-            addToGraph(ForwardVarargs, OpInfo(data));
-        else
-            addToGraph(LoadVarargs, OpInfo(data), get(argumentsArgument));
+
+        if (callOp == TailCallForwardVarargs) {
+            Node* argumentCount;
+            if (!inlineCallFrame())
+                argumentCount = addToGraph(GetArgumentCountIncludingThis);
+            else if (inlineCallFrame()->isVarargs())
+                argumentCount = getDirect(remapOperand(inlineCallFrame(), CallFrameSlot::argumentCountIncludingThis));
+            else
+                argumentCount = addToGraph(JSConstant, OpInfo(m_graph.freeze(jsNumber(inlineCallFrame()->argumentCountIncludingThis))));
+            addToGraph(ForwardVarargs, OpInfo(data), argumentCount);
+        } else {
+            Node* arguments = get(argumentsArgument);
+            auto argCountTmp = m_inlineStackTop->remapOperand(Operand::tmp(OpCallVarargs::argCountIncludingThis));
+            setDirect(argCountTmp, addToGraph(VarargsLength, OpInfo(data), arguments));
+            progressToNextCheckpoint();
+
+            addToGraph(LoadVarargs, OpInfo(data), getLocalOrTmp(argCountTmp), arguments);
+        }
         
         // LoadVarargs may OSR exit. Hence, we need to keep alive callTargetNode, thisArgument
         // and argumentsArgument for the baseline JIT. However, we only need a Phantom for
@@ -1922,14 +1976,14 @@ bool ByteCodeParser::handleVarargsInlining(Node* callTargetNode, VirtualRegister
         // SSA. Fortunately, we also have other reasons for not inserting control flow
         // before SSA.
         
-        VariableAccessData* countVariable = newVariableAccessData(VirtualRegister(remappedRegisterOffset + CallFrameSlot::argumentCountIncludingThis));
+        VariableAccessData* countVariable = newVariableAccessData(data->count);
         // This is pretty lame, but it will force the count to be flushed as an int. This doesn't
         // matter very much, since our use of a SetArgumentDefinitely and Flushes for this local slot is
         // mostly just a formality.
         countVariable->predict(SpecInt32Only);
         countVariable->mergeIsProfitableToUnbox(true);
         Node* setArgumentCount = addToGraph(SetArgumentDefinitely, OpInfo(countVariable));
-        m_currentBlock->variablesAtTail.setOperand(countVariable->local(), setArgumentCount);
+        m_currentBlock->variablesAtTail.setOperand(countVariable->operand(), setArgumentCount);
         
         set(VirtualRegister(argumentStart), get(thisArgument), ImmediateNakedSet);
         unsigned numSetArguments = 0;
@@ -1953,7 +2007,7 @@ bool ByteCodeParser::handleVarargsInlining(Node* callTargetNode, VirtualRegister
             }
             
             Node* setArgument = addToGraph(numSetArguments >= mandatoryMinimum ? SetArgumentMaybe : SetArgumentDefinitely, OpInfo(variable));
-            m_currentBlock->variablesAtTail.setOperand(variable->local(), setArgument);
+            m_currentBlock->variablesAtTail.setOperand(variable->operand(), setArgument);
             ++numSetArguments;
         }
     };
@@ -2053,7 +2107,7 @@ ByteCodeParser::CallOptimizationResult ByteCodeParser::handleInlining(
     // yet.
     VERBOSE_LOG("Register offset: ", registerOffset);
     VirtualRegister calleeReg(registerOffset + CallFrameSlot::callee);
-    calleeReg = m_inlineStackTop->remapOperand(calleeReg);
+    calleeReg = m_inlineStackTop->remapOperand(calleeReg).virtualRegister();
     VERBOSE_LOG("Callee is going to be ", calleeReg, "\n");
     setDirect(calleeReg, callTargetNode, ImmediateSetWithFlush);
 
@@ -5102,7 +5156,7 @@ void ByteCodeParser::parseBlock(unsigned limit)
         case op_new_regexp: {
             auto bytecode = currentInstruction->as<OpNewRegexp>();
             ASSERT(bytecode.m_regexp.isConstant());
-            FrozenValue* frozenRegExp = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_regexp.offset()));
+            FrozenValue* frozenRegExp = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_regexp));
             set(bytecode.m_dst, addToGraph(NewRegexp, OpInfo(frozenRegExp), jsConstant(jsNumber(0))));
             NEXT_OPCODE(op_new_regexp);
         }
@@ -6178,7 +6232,7 @@ void ByteCodeParser::parseBlock(unsigned limit)
 
             RELEASE_ASSERT(!m_currentBlock->size() || (m_graph.compilation() && m_currentBlock->size() == 1 && m_currentBlock->at(0)->op() == CountExecution));
 
-            ValueProfileAndOperandBuffer* buffer = bytecode.metadata(codeBlock).m_buffer;
+            ValueProfileAndVirtualRegisterBuffer* buffer = bytecode.metadata(codeBlock).m_buffer;
 
             if (!buffer) {
                 NEXT_OPCODE(op_catch); // This catch has yet to execute. Note: this load can be racy with the main thread.
@@ -6195,7 +6249,7 @@ void ByteCodeParser::parseBlock(unsigned limit)
             {
                 ConcurrentJSLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
 
-                buffer->forEach([&] (ValueProfileAndOperand& profile) {
+                buffer->forEach([&] (ValueProfileAndVirtualRegister& profile) {
                     VirtualRegister operand(profile.m_operand);
                     SpeculatedType prediction = profile.computeUpdatedPrediction(locker);
                     if (operand.isLocal())
@@ -6223,14 +6277,14 @@ void ByteCodeParser::parseBlock(unsigned limit)
             m_exitOK = false; 
 
             unsigned numberOfLocals = 0;
-            buffer->forEach([&] (ValueProfileAndOperand& profile) {
+            buffer->forEach([&] (ValueProfileAndVirtualRegister& profile) {
                 VirtualRegister operand(profile.m_operand);
                 if (operand.isArgument())
                     return;
                 ASSERT(operand.isLocal());
                 Node* value = addToGraph(ExtractCatchLocal, OpInfo(numberOfLocals), OpInfo(localPredictions[numberOfLocals]));
                 ++numberOfLocals;
-                addToGraph(MovHint, OpInfo(profile.m_operand), value);
+                addToGraph(MovHint, OpInfo(operand), value);
                 localsToSet.uncheckedAppend(std::make_pair(operand, value));
             });
             if (numberOfLocals)
@@ -6358,7 +6412,7 @@ void ByteCodeParser::parseBlock(unsigned limit)
             
         case op_jneq_ptr: {
             auto bytecode = currentInstruction->as<OpJneqPtr>();
-            FrozenValue* frozenPointer = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_specialPointer.offset()));
+            FrozenValue* frozenPointer = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_specialPointer));
             unsigned relativeOffset = jumpTarget(bytecode.m_targetLabel);
             Node* child = get(bytecode.m_value);
             if (bytecode.metadata(codeBlock).m_hasJumped) {
@@ -6826,8 +6880,8 @@ void ByteCodeParser::parseBlock(unsigned limit)
         case op_create_lexical_environment: {
             auto bytecode = currentInstruction->as<OpCreateLexicalEnvironment>();
             ASSERT(bytecode.m_symbolTable.isConstant() && bytecode.m_initialValue.isConstant());
-            FrozenValue* symbolTable = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_symbolTable.offset()));
-            FrozenValue* initialValue = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_initialValue.offset()));
+            FrozenValue* symbolTable = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_symbolTable));
+            FrozenValue* initialValue = m_graph.freezeStrong(m_inlineStackTop->m_codeBlock->getConstant(bytecode.m_initialValue));
             Node* scope = get(bytecode.m_scope);
             Node* lexicalEnvironment = addToGraph(CreateActivation, OpInfo(symbolTable), OpInfo(initialValue), scope);
             set(bytecode.m_dst, lexicalEnvironment);
@@ -6857,7 +6911,7 @@ void ByteCodeParser::parseBlock(unsigned limit)
             // loads from the scope register later, as that would prevent the DFG from tracking the
             // bytecode-level liveness of the scope register.
             auto bytecode = currentInstruction->as<OpGetScope>();
-            Node* callee = get(VirtualRegister(CallFrameSlot::callee));
+            Node* callee = get(CallFrameSlot::callee);
             Node* result;
             if (JSFunction* function = callee->dynamicCastConstant<JSFunction*>(*m_vm))
                 result = weakJSConstant(function->scope());
@@ -7282,8 +7336,10 @@ ByteCodeParser::InlineStackEntry::InlineStackEntry(
         // The owner is the machine code block, and we already have a barrier on that when the
         // plan finishes.
         m_inlineCallFrame->baselineCodeBlock.setWithoutWriteBarrier(codeBlock->baselineVersion());
+        m_inlineCallFrame->setTmpOffset((m_caller->m_inlineCallFrame ? m_caller->m_inlineCallFrame->tmpOffset : 0) + m_caller->m_codeBlock->numTmps());
         m_inlineCallFrame->setStackOffset(inlineCallFrameStart.offset() - CallFrame::headerSizeInRegisters);
         m_inlineCallFrame->argumentCountIncludingThis = argumentCountIncludingThis;
+        RELEASE_ASSERT(m_inlineCallFrame->argumentCountIncludingThis == argumentCountIncludingThis);
         if (callee) {
             m_inlineCallFrame->calleeRecovery = ValueRecovery::constant(callee);
             m_inlineCallFrame->isClosureCall = false;
@@ -7645,7 +7701,7 @@ void ByteCodeParser::parse()
                     Node* node = block->at(nodeIndex);
 
                     if (node->hasVariableAccessData(m_graph))
-                        mapping.operand(node->local()) = node->variableAccessData();
+                        mapping.operand(node->operand()) = node->variableAccessData();
 
                     if (node->op() != ForceOSRExit)
                         continue;
@@ -7665,24 +7721,24 @@ void ByteCodeParser::parse()
                             RELEASE_ASSERT(successor->predecessors.isEmpty());
                     }
 
-                    auto insertLivenessPreservingOp = [&] (InlineCallFrame* inlineCallFrame, NodeType op, VirtualRegister operand) {
+                    auto insertLivenessPreservingOp = [&] (InlineCallFrame* inlineCallFrame, NodeType op, Operand operand) {
                         VariableAccessData* variable = mapping.operand(operand);
                         if (!variable) {
                             variable = newVariableAccessData(operand);
                             mapping.operand(operand) = variable;
                         }
 
-                        VirtualRegister argument = operand - (inlineCallFrame ? inlineCallFrame->stackOffset : 0);
+                        Operand argument = unmapOperand(inlineCallFrame, operand);
                         if (argument.isArgument() && !argument.isHeader()) {
                             const Vector<ArgumentPosition*>& arguments = m_inlineCallFrameToArgumentPositions.get(inlineCallFrame);
                             arguments[argument.toArgument()]->addVariable(variable);
                         }
                         insertionSet.insertNode(nodeIndex, SpecNone, op, origin, OpInfo(variable));
                     };
-                    auto addFlushDirect = [&] (InlineCallFrame* inlineCallFrame, VirtualRegister operand) {
+                    auto addFlushDirect = [&] (InlineCallFrame* inlineCallFrame, Operand operand) {
                         insertLivenessPreservingOp(inlineCallFrame, Flush, operand);
                     };
-                    auto addPhantomLocalDirect = [&] (InlineCallFrame* inlineCallFrame, VirtualRegister operand) {
+                    auto addPhantomLocalDirect = [&] (InlineCallFrame* inlineCallFrame, Operand operand) {
                         insertLivenessPreservingOp(inlineCallFrame, PhantomLocal, operand);
                     };
                     flushForTerminalImpl(origin.semantic, addFlushDirect, addPhantomLocalDirect);
@@ -7747,6 +7803,7 @@ void ByteCodeParser::parse()
         ASSERT(block->variablesAtTail.numberOfArguments() == m_graph.block(0)->variablesAtHead.numberOfArguments());
     }
 
+    m_graph.m_tmps = m_numTmps;
     m_graph.m_localVars = m_numLocals;
     m_graph.m_parameterSlots = m_parameterSlots;
 }
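The hunks above replace bare `VirtualRegister` with the new `Operand` type, which can additionally name a checkpoint tmp, and remap tmps across inlining by an `tmpOffset`. As a rough mental model only — this is a hypothetical, heavily simplified sketch, not JSC's actual `Operand` class — the tagged-operand idea looks like:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical sketch of the Operand concept this patch introduces: an operand
// is an argument, a local, or a "tmp" slot that exists only to forward values
// to an OSR exit checkpoint (tmps are never flushed to the stack).
enum class OperandKind : uint8_t { Argument, Local, Tmp };

class Operand {
public:
    static Operand argument(int index) { return Operand(OperandKind::Argument, index); }
    static Operand local(int index) { return Operand(OperandKind::Local, index); }
    static Operand tmp(int index) { return Operand(OperandKind::Tmp, index); }

    OperandKind kind() const { return m_kind; }
    int value() const { return m_value; }
    bool isArgument() const { return m_kind == OperandKind::Argument; }
    bool isLocal() const { return m_kind == OperandKind::Local; }
    bool isTmp() const { return m_kind == OperandKind::Tmp; }

private:
    Operand(OperandKind kind, int value) : m_kind(kind), m_value(value) { }
    OperandKind m_kind;
    int m_value;
};

// Inlining flattens a callee tmp into the caller's tmp numbering by adding the
// inline call frame's tmpOffset, mirroring what remapOperand() does above.
inline Operand remapTmp(Operand operand, int tmpOffset)
{
    assert(operand.isTmp());
    return Operand::tmp(operand.value() + tmpOffset);
}
```

This mirrors why `setLocalOrTmp`, `getLocalOrTmp`, and `variablesAtTail.operand()` in the patch branch on the operand's kind rather than assuming a stack slot.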
index 8e9bb4a..3dc7568 100644
@@ -170,22 +170,22 @@ private:
         bool changed = false;
         const Operands<Optional<JSValue>>& mustHandleValues = m_graph.m_plan.mustHandleValues();
         for (size_t i = mustHandleValues.size(); i--;) {
-            int operand = mustHandleValues.operandForIndex(i);
+            Operand operand = mustHandleValues.operandForIndex(i);
             Optional<JSValue> value = mustHandleValues[i];
             if (!value) {
                 if (m_verbose)
-                    dataLog("   Not live in bytecode: ", VirtualRegister(operand), "\n");
+                    dataLog("   Not live in bytecode: ", operand, "\n");
                 continue;
             }
             Node* node = block->variablesAtHead.operand(operand);
             if (!node) {
                 if (m_verbose)
-                    dataLog("   Not live: ", VirtualRegister(operand), "\n");
+                    dataLog("   Not live: ", operand, "\n");
                 continue;
             }
             
             if (m_verbose)
-                dataLog("   Widening ", VirtualRegister(operand), " with ", value.value(), "\n");
+                dataLog("   Widening ", operand, " with ", value.value(), "\n");
             
             AbstractValue& target = block->valuesAtHead.operand(operand);
             changed |= target.mergeOSREntryValue(m_graph, value.value(), node->variableAccessData(), node);
index d98fd4a..873fa04 100644 (file)
@@ -54,8 +54,9 @@ public:
         m_graph.clearReplacements();
         canonicalizeLocalsInBlocks();
         specialCaseArguments();
-        propagatePhis<LocalOperand>();
-        propagatePhis<ArgumentOperand>();
+        propagatePhis<OperandKind::Local>();
+        propagatePhis<OperandKind::Argument>();
+        propagatePhis<OperandKind::Tmp>();
         computeIsFlushed();
         
         m_graph.m_form = ThreadedCPS;
@@ -211,10 +212,20 @@ private:
     void canonicalizeGetLocal(Node* node)
     {
         VariableAccessData* variable = node->variableAccessData();
-        if (variable->local().isArgument())
-            canonicalizeGetLocalFor<ArgumentOperand>(node, variable, variable->local().toArgument());
-        else
-            canonicalizeGetLocalFor<LocalOperand>(node, variable, variable->local().toLocal());
+        switch (variable->operand().kind()) {
+        case OperandKind::Argument: {
+            canonicalizeGetLocalFor<OperandKind::Argument>(node, variable, variable->operand().toArgument());
+            break;
+        }
+        case OperandKind::Local: {
+            canonicalizeGetLocalFor<OperandKind::Local>(node, variable, variable->operand().toLocal());
+            break;
+        }
+        case OperandKind::Tmp: {
+            canonicalizeGetLocalFor<OperandKind::Tmp>(node, variable, variable->operand().value());
+            break;
+        }
+        }
     }
     
     template<NodeType nodeType, OperandKind operandKind>
@@ -229,6 +240,7 @@ private:
             case Flush:
             case PhantomLocal:
             case GetLocal:
+                ASSERT(otherNode->child1().node());
                 otherNode = otherNode->child1().node();
                 break;
             default:
@@ -270,15 +282,25 @@ private:
     void canonicalizeFlushOrPhantomLocal(Node* node)
     {
         VariableAccessData* variable = node->variableAccessData();
-        if (variable->local().isArgument())
-            canonicalizeFlushOrPhantomLocalFor<nodeType, ArgumentOperand>(node, variable, variable->local().toArgument());
-        else
-            canonicalizeFlushOrPhantomLocalFor<nodeType, LocalOperand>(node, variable, variable->local().toLocal());
+        switch (variable->operand().kind()) {
+        case OperandKind::Argument: {
+            canonicalizeFlushOrPhantomLocalFor<nodeType, OperandKind::Argument>(node, variable, variable->operand().toArgument());
+            break;
+        }
+        case OperandKind::Local: {
+            canonicalizeFlushOrPhantomLocalFor<nodeType, OperandKind::Local>(node, variable, variable->operand().toLocal());
+            break;
+        }
+        case OperandKind::Tmp: {
+            canonicalizeFlushOrPhantomLocalFor<nodeType, OperandKind::Tmp>(node, variable, variable->operand().value());
+            break;
+        }
+        }
     }
     
     void canonicalizeSet(Node* node)
     {
-        m_block->variablesAtTail.setOperand(node->local(), node);
+        m_block->variablesAtTail.setOperand(node->operand(), node);
     }
     
     void canonicalizeLocalsInBlock()
@@ -287,8 +309,9 @@ private:
             return;
         ASSERT(m_block->isReachable);
         
-        clearVariables<ArgumentOperand>();
-        clearVariables<LocalOperand>();
+        clearVariables<OperandKind::Argument>();
+        clearVariables<OperandKind::Local>();
+        clearVariables<OperandKind::Tmp>();
         
         // Assumes that all phi references have been removed. Assumes that things that
         // should be live have a non-zero ref count, but doesn't assume that the ref
@@ -388,7 +411,7 @@ private:
     template<OperandKind operandKind>
     void propagatePhis()
     {
-        Vector<PhiStackEntry, 128>& phiStack = operandKind == ArgumentOperand ? m_argumentPhiStack : m_localPhiStack;
+        Vector<PhiStackEntry, 128>& phiStack = phiStackFor<operandKind>();
         
         // Ensure that attempts to use this fail instantly.
         m_block = 0;
@@ -466,9 +489,12 @@ private:
     template<OperandKind operandKind>
     Vector<PhiStackEntry, 128>& phiStackFor()
     {
-        if (operandKind == ArgumentOperand)
-            return m_argumentPhiStack;
-        return m_localPhiStack;
+        switch (operandKind) {
+        case OperandKind::Argument: return m_argumentPhiStack;
+        case OperandKind::Local: return m_localPhiStack;
+        case OperandKind::Tmp: return m_tmpPhiStack;
+        }
+        RELEASE_ASSERT_NOT_REACHED();
     }
     
     void computeIsFlushed()
@@ -521,6 +547,7 @@ private:
     BasicBlock* m_block;
     Vector<PhiStackEntry, 128> m_argumentPhiStack;
     Vector<PhiStackEntry, 128> m_localPhiStack;
+    Vector<PhiStackEntry, 128> m_tmpPhiStack;
     Vector<Node*, 128> m_flushedLocalOpWorklist;
 };
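The rethreading hunks above grow the per-kind phi worklists from two (argument, local) to three by adding a tmp stack, and replace the old two-way ternary with a per-kind dispatch. A minimal sketch of that dispatch pattern, with illustrative stand-in types (only the shape mirrors the patch, not the real DFG classes):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Illustrative stand-in for the three operand kinds.
enum class OperandKind : uint8_t { Argument, Local, Tmp };

struct PhiStacks {
    std::vector<int> argumentPhiStack;
    std::vector<int> localPhiStack;
    std::vector<int> tmpPhiStack; // the new third worklist, as in the patch

    // Mirrors phiStackFor<operandKind>(): each operand kind owns its own
    // phi worklist, so propagatePhis<...>() is instantiated once per kind.
    template<OperandKind kind>
    std::vector<int>& phiStackFor()
    {
        if constexpr (kind == OperandKind::Argument)
            return argumentPhiStack;
        else if constexpr (kind == OperandKind::Local)
            return localPhiStack;
        else
            return tmpPhiStack;
    }
};
```

With this shape, adding a new operand kind is a compile error at every dispatch site until a stack is provided, which is why the patch can add `OperandKind::Tmp` handling mechanically across the phase.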
 
index 2381c6b..4cce174 100644 (file)
@@ -147,8 +147,7 @@ public:
             break;
         case Stack: {
             ASSERT(!heap.payload().isTop());
-            ASSERT(heap.payload().value() == heap.payload().value32());
-            m_abstractHeapStackMap.remove(heap.payload().value32());
+            m_abstractHeapStackMap.remove(heap.payload().value());
             if (clobberConservatively)
                 m_fallbackStackMap.clear();
             else
@@ -172,7 +171,7 @@ public:
                 if (!clobberConservatively)
                     break;
                 if (pair.key.heap().kind() == Stack) {
-                    auto iterator = m_abstractHeapStackMap.find(pair.key.heap().payload().value32());
+                    auto iterator = m_abstractHeapStackMap.find(pair.key.heap().payload().value());
                     if (iterator != m_abstractHeapStackMap.end() && iterator->value->key == pair.key)
                         return false;
                     return true;
@@ -226,8 +225,7 @@ private:
             AbstractHeap abstractHeap = location.heap();
             if (abstractHeap.payload().isTop())
                 return add(m_fallbackStackMap, location, node);
-            ASSERT(abstractHeap.payload().value() == abstractHeap.payload().value32());
-            auto addResult = m_abstractHeapStackMap.add(abstractHeap.payload().value32(), nullptr);
+            auto addResult = m_abstractHeapStackMap.add(abstractHeap.payload().value(), nullptr);
             if (addResult.isNewEntry) {
                 addResult.iterator->value.reset(new ImpureDataSlot {location, node, 0});
                 return nullptr;
@@ -249,8 +247,7 @@ private:
         case SideState:
             RELEASE_ASSERT_NOT_REACHED();
         case Stack: {
-            ASSERT(location.heap().payload().value() == location.heap().payload().value32());
-            auto iterator = m_abstractHeapStackMap.find(location.heap().payload().value32());
+            auto iterator = m_abstractHeapStackMap.find(location.heap().payload().value());
             if (iterator != m_abstractHeapStackMap.end()
                 && iterator->value->key == location)
                 return iterator->value->value;
@@ -298,7 +295,7 @@ private:
     // a duplicate in the past and now only live in m_fallbackStackMap.
     //
     // Obviously, TOP always goes into m_fallbackStackMap since it does not have a unique value.
-    HashMap<int32_t, std::unique_ptr<ImpureDataSlot>, DefaultHash<int32_t>::Hash, WTF::SignedWithZeroKeyHashTraits<int32_t>> m_abstractHeapStackMap;
+    HashMap<int64_t, std::unique_ptr<ImpureDataSlot>, DefaultHash<int64_t>::Hash, WTF::SignedWithZeroKeyHashTraits<int64_t>> m_abstractHeapStackMap;
     Map m_fallbackStackMap;
 
     Map m_heapMap;
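The CSE hunks above widen `m_abstractHeapStackMap`'s key from `int32_t` to `int64_t` and drop the `value() == value32()` assertions: with tmps in play, a Stack abstract-heap payload carries a full Operand encoding (kind plus value) rather than a bare register offset, so it no longer round-trips through 32 bits. A hypothetical packing showing why 64 bits suffice (the actual Operand bit layout is not shown in this diff and may differ):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical key packing: operand kind in the high word, 32-bit value in
// the low word. Two operands with the same offset but different kinds must
// hash to different keys, which a 32-bit key cannot guarantee.
constexpr int64_t packOperandKey(uint8_t kind, int32_t value)
{
    return (static_cast<int64_t>(kind) << 32) | static_cast<uint32_t>(value);
}
```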
index 2477af0..aa222b8 100644 (file)
@@ -307,6 +307,8 @@ CapabilityLevel capabilityLevel(OpcodeID opcodeID, CodeBlock* codeBlock, const I
     case llint_native_construct_trampoline:
     case llint_internal_function_call_trampoline:
     case llint_internal_function_construct_trampoline:
+    case checkpoint_osr_exit_from_inlined_call_trampoline:
+    case checkpoint_osr_exit_trampoline:
     case handleUncaughtException:
     case op_call_return_location:
     case op_construct_return_location:
index 5f02ac0..d6a4635 100644 (file)
@@ -111,9 +111,9 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
     // scan would read. That's what this does.
     for (InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame(); inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame()) {
         if (inlineCallFrame->isClosureCall)
-            read(AbstractHeap(Stack, inlineCallFrame->stackOffset + CallFrameSlot::callee));
+            read(AbstractHeap(Stack, VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::callee)));
         if (inlineCallFrame->isVarargs())
-            read(AbstractHeap(Stack, inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis));
+            read(AbstractHeap(Stack, VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCountIncludingThis)));
     }
 
     // We don't want to specifically account which nodes can read from the scope
@@ -441,7 +441,7 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
         return;
 
     case KillStack:
-        write(AbstractHeap(Stack, node->unlinkedLocal()));
+        write(AbstractHeap(Stack, node->unlinkedOperand()));
         return;
          
     case MovHint:
@@ -497,7 +497,7 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
         return;
 
     case Flush:
-        read(AbstractHeap(Stack, node->local()));
+        read(AbstractHeap(Stack, node->operand()));
         write(SideState);
         return;
 
@@ -765,12 +765,12 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
         return;
         
     case GetCallee:
-        read(AbstractHeap(Stack, CallFrameSlot::callee));
-        def(HeapLocation(StackLoc, AbstractHeap(Stack, CallFrameSlot::callee)), LazyNode(node));
+        read(AbstractHeap(Stack, VirtualRegister(CallFrameSlot::callee)));
+        def(HeapLocation(StackLoc, AbstractHeap(Stack, VirtualRegister(CallFrameSlot::callee))), LazyNode(node));
         return;
 
     case SetCallee:
-        write(AbstractHeap(Stack, CallFrameSlot::callee));
+        write(AbstractHeap(Stack, VirtualRegister(CallFrameSlot::callee)));
         return;
         
     case GetArgumentCountIncludingThis: {
@@ -781,7 +781,7 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
     }
 
     case SetArgumentCountIncludingThis:
-        write(AbstractHeap(Stack, CallFrameSlot::argumentCountIncludingThis));
+        write(AbstractHeap(Stack, VirtualRegister(CallFrameSlot::argumentCountIncludingThis)));
         return;
 
     case GetRestLength:
@@ -789,36 +789,42 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
         return;
         
     case GetLocal:
-        read(AbstractHeap(Stack, node->local()));
-        def(HeapLocation(StackLoc, AbstractHeap(Stack, node->local())), LazyNode(node));
+        read(AbstractHeap(Stack, node->operand()));
+        def(HeapLocation(StackLoc, AbstractHeap(Stack, node->operand())), LazyNode(node));
         return;
         
     case SetLocal:
-        write(AbstractHeap(Stack, node->local()));
-        def(HeapLocation(StackLoc, AbstractHeap(Stack, node->local())), LazyNode(node->child1().node()));
+        write(AbstractHeap(Stack, node->operand()));
+        def(HeapLocation(StackLoc, AbstractHeap(Stack, node->operand())), LazyNode(node->child1().node()));
         return;
         
     case GetStack: {
-        AbstractHeap heap(Stack, node->stackAccessData()->local);
+        AbstractHeap heap(Stack, node->stackAccessData()->operand);
         read(heap);
         def(HeapLocation(StackLoc, heap), LazyNode(node));
         return;
     }
         
     case PutStack: {
-        AbstractHeap heap(Stack, node->stackAccessData()->local);
+        AbstractHeap heap(Stack, node->stackAccessData()->operand);
         write(heap);
         def(HeapLocation(StackLoc, heap), LazyNode(node->child1().node()));
         return;
     }
         
+    case VarargsLength: {
+        read(World);
+        write(Heap);
+        return;

+    }
+
     case LoadVarargs: {
         read(World);
         write(Heap);
         LoadVarargsData* data = node->loadVarargsData();
-        write(AbstractHeap(Stack, data->count.offset()));
+        write(AbstractHeap(Stack, data->count));
         for (unsigned i = data->limit; i--;)
-            write(AbstractHeap(Stack, data->start.offset() + static_cast<int>(i)));
+            write(AbstractHeap(Stack, data->start + static_cast<int>(i)));
         return;
     }
         
@@ -827,9 +833,9 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
         read(Stack);
         
         LoadVarargsData* data = node->loadVarargsData();
-        write(AbstractHeap(Stack, data->count.offset()));
+        write(AbstractHeap(Stack, data->count));
         for (unsigned i = data->limit; i--;)
-            write(AbstractHeap(Stack, data->start.offset() + static_cast<int>(i)));
+            write(AbstractHeap(Stack, data->start + static_cast<int>(i)));
         return;
     }
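The `LoadVarargs`/`ForwardVarargs` hunks above now express the clobbered stack slots in `VirtualRegister` arithmetic rather than raw `int` offsets, but the slot set itself is unchanged: the count slot plus `limit` consecutive argument slots starting at `start`. A sketch with simplified stand-in types:

```cpp
#include <cassert>
#include <vector>

// Simplified stand-ins for JSC's VirtualRegister / LoadVarargsData.
struct VirtualRegister { int offset; };
struct LoadVarargsData {
    VirtualRegister start;  // first argument slot
    VirtualRegister count;  // argument-count slot
    unsigned limit;         // maximum number of argument slots written
};

// Mirrors the write() calls in the LoadVarargs case: the count slot is
// clobbered, then each of the `limit` slots from start+limit-1 down to start.
std::vector<int> clobberedStackOffsets(const LoadVarargsData& data)
{
    std::vector<int> offsets;
    offsets.push_back(data.count.offset);
    for (unsigned i = data.limit; i--;)
        offsets.push_back(data.start.offset + static_cast<int>(i));
    return offsets;
}
```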
         
index 95866a7..15746c0 100644 (file)
@@ -39,7 +39,7 @@ static void addBytecodeLiveness(Graph& graph, AvailabilityMap& availabilityMap,
 {
     graph.forAllLiveInBytecode(
         node->origin.forExit,
-        [&] (VirtualRegister reg) {
+        [&] (Operand reg) {
             availabilityMap.closeStartingWithLocal(
                 reg,
                 [&] (Node* node) -> bool {
index 7987b87..7c80750 100644 (file)
@@ -56,7 +56,7 @@ CallSiteIndex CommonData::addCodeOrigin(CodeOrigin codeOrigin)
         codeOrigins.append(codeOrigin);
     unsigned index = codeOrigins.size() - 1;
     ASSERT(codeOrigins[index] == codeOrigin);
-    return CallSiteIndex(BytecodeIndex(index));
+    return CallSiteIndex(index);
 }
 
 CallSiteIndex CommonData::addUniqueCallSiteIndex(CodeOrigin codeOrigin)
@@ -64,13 +64,13 @@ CallSiteIndex CommonData::addUniqueCallSiteIndex(CodeOrigin codeOrigin)
     codeOrigins.append(codeOrigin);
     unsigned index = codeOrigins.size() - 1;
     ASSERT(codeOrigins[index] == codeOrigin);
-    return CallSiteIndex(BytecodeIndex(index));
+    return CallSiteIndex(index);
 }
 
 CallSiteIndex CommonData::lastCallSite() const
 {
     RELEASE_ASSERT(codeOrigins.size());
-    return CallSiteIndex(BytecodeIndex(codeOrigins.size() - 1));
+    return CallSiteIndex(codeOrigins.size() - 1);
 }
 
 DisposableCallSiteIndex CommonData::addDisposableCallSiteIndex(CodeOrigin codeOrigin)
index 3b165f2..185366d 100644 (file)
@@ -383,7 +383,7 @@ private:
                 // GetMyArgumentByVal in such statically-out-of-bounds accesses; we just lose CFA unless
                 // GCSE removes the access entirely.
                 if (inlineCallFrame) {
-                    if (index >= inlineCallFrame->argumentCountIncludingThis - 1)
+                    if (index >= static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1))
                         break;
                 } else {
                     if (index >= m_state.numberOfArguments() - 1)
@@ -404,7 +404,7 @@ private:
                         virtualRegisterForArgument(index + 1), FlushedJSValue);
                 }
                 
-                if (inlineCallFrame && !inlineCallFrame->isVarargs() && index < inlineCallFrame->argumentCountIncludingThis - 1) {
+                if (inlineCallFrame && !inlineCallFrame->isVarargs() && index < static_cast<unsigned>(inlineCallFrame->argumentCountIncludingThis - 1)) {
                     node->convertToGetStack(data);
                     eliminated = true;
                     break;
index 8b53943..f5bd1f5 100644 (file)
@@ -295,6 +295,7 @@ bool doesGC(Graph& graph, Node* node)
     case InByVal:
     case InstanceOf:
     case InstanceOfCustom:
+    case VarargsLength:
     case LoadVarargs:
     case NumberToStringWithRadix:
     case NumberToStringWithValidRadixConstant:
index 17b3707..45fbc43 100644 (file)
@@ -90,7 +90,6 @@ static CompilationResult compileImpl(
     // Make sure that any stubs that the DFG is going to use are initialized. We want to
     // make sure that all JIT code generation does finalization on the main thread.
     vm.getCTIStub(arityFixupGenerator);
-    vm.getCTIStub(osrExitThunkGenerator);
     vm.getCTIStub(osrExitGenerationThunkGenerator);
     vm.getCTIStub(throwExceptionFromCallSlowPathGenerator);
     vm.getCTIStub(linkCallThunkGenerator);
index 8ab88cc..7649242 100644 (file)
@@ -2432,6 +2432,14 @@ private:
             break;
         }
 
+        case ForwardVarargs:
+        case LoadVarargs: {
+            fixEdge<KnownInt32Use>(node->child1());
+            break;
+        }
+
 #if !ASSERT_DISABLED
         // Have these no-op cases here to ensure that nobody forgets to add handlers for new opcodes.
         case SetArgumentDefinitely:
@@ -2464,8 +2472,7 @@ private:
         case ConstructForwardVarargs:
         case TailCallForwardVarargs:
         case TailCallForwardVarargsInlinedCaller:
-        case LoadVarargs:
-        case ForwardVarargs:
+        case VarargsLength:
         case ProfileControlFlow:
         case NewObject:
         case NewPromise:
index 0bf1c3f..bf4350c 100644 (file)
 
 namespace JSC { namespace DFG {
 
+namespace ForAllKillsInternal {
+constexpr bool verbose = false;
+}
+
 // Utilities for finding the last points where a node is live in DFG SSA. This accounts for liveness due
 // to OSR exit. This is usually used for enumerating over all of the program points where a node is live,
 // by exploring all blocks where the node is live at tail and then exploring all program points where the
@@ -53,13 +57,13 @@ void forAllKilledOperands(Graph& graph, Node* nodeBefore, Node* nodeAfter, const
     
     CodeOrigin after = nodeAfter->origin.forExit;
     
-    VirtualRegister alreadyNoted;
+    Operand alreadyNoted;
     // If we MovHint something that is live at the time, then we kill the old value.
     if (nodeAfter->containsMovHint()) {
-        VirtualRegister reg = nodeAfter->unlinkedLocal();
-        if (graph.isLiveInBytecode(reg, after)) {
-            functor(reg);
-            alreadyNoted = reg;
+        Operand operand = nodeAfter->unlinkedOperand();
+        if (graph.isLiveInBytecode(operand, after)) {
+            functor(operand);
+            alreadyNoted = operand;
         }
     }
     
@@ -70,29 +74,48 @@ void forAllKilledOperands(Graph& graph, Node* nodeBefore, Node* nodeAfter, const
     // other loop, below.
     auto* beforeInlineCallFrame = before.inlineCallFrame();
     if (beforeInlineCallFrame == after.inlineCallFrame()) {
-        int stackOffset = beforeInlineCallFrame ? beforeInlineCallFrame->stackOffset : 0;
         CodeBlock* codeBlock = graph.baselineCodeBlockFor(beforeInlineCallFrame);
+        if (after.bytecodeIndex().checkpoint()) {
+            ASSERT(before.bytecodeIndex().checkpoint() != after.bytecodeIndex().checkpoint());
+            ASSERT_WITH_MESSAGE(before.bytecodeIndex().offset() == after.bytecodeIndex().offset(), "When the DFG does code motion it should change the forExit origin to match the surrounding bytecodes.");
+
+            auto liveBefore = tmpLivenessForCheckpoint(*codeBlock, before.bytecodeIndex());
+            auto liveAfter = tmpLivenessForCheckpoint(*codeBlock, after.bytecodeIndex());
+            liveAfter.invert();
+            liveBefore.filter(liveAfter);
+
+            liveBefore.forEachSetBit([&] (size_t tmp) {
+                functor(remapOperand(beforeInlineCallFrame, Operand::tmp(tmp)));
+            });
+            // No locals can die at a checkpoint.
+            return;
+        }
+
         FullBytecodeLiveness& fullLiveness = graph.livenessFor(codeBlock);
         const FastBitVector& liveBefore = fullLiveness.getLiveness(before.bytecodeIndex(), LivenessCalculationPoint::BeforeUse);
         const FastBitVector& liveAfter = fullLiveness.getLiveness(after.bytecodeIndex(), LivenessCalculationPoint::BeforeUse);
         
         (liveBefore & ~liveAfter).forEachSetBit(
             [&] (size_t relativeLocal) {
-                functor(virtualRegisterForLocal(relativeLocal) + stackOffset);
+                functor(remapOperand(beforeInlineCallFrame, virtualRegisterForLocal(relativeLocal)));
             });
         return;
     }
-    
+
+    ASSERT_WITH_MESSAGE(!after.bytecodeIndex().checkpoint(), "Transitioning across a checkpoint but before and after don't share an inlineCallFrame.");
+
     // Detect kills the super conservative way: it is killed if it was live before and dead after.
-    BitVector liveAfter = graph.localsLiveInBytecode(after);
-    graph.forAllLocalsLiveInBytecode(
+    BitVector liveAfter = graph.localsAndTmpsLiveInBytecode(after);
+    unsigned numLocals = graph.block(0)->variablesAtHead.numberOfLocals();
+    graph.forAllLocalsAndTmpsLiveInBytecode(
         before,
-        [&] (VirtualRegister reg) {
-            if (reg == alreadyNoted)
+        [&] (Operand operand) {
+            if (operand == alreadyNoted)
                 return;
-            if (liveAfter.get(reg.toLocal()))
+            unsigned offset = operand.isTmp() ? numLocals + operand.value() : operand.toLocal();
+            if (liveAfter.get(offset))
                 return;
-            functor(reg);
+            functor(operand);
         });
 }
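The checkpoint branch added to `forAllKilledOperands` above computes the tmps that die across a checkpoint as liveBefore & ~liveAfter, spelled with FastBitVector's `invert()`/`filter()`. A minimal sketch of that computation, with `std::bitset` standing in for FastBitVector:

```cpp
#include <bitset>
#include <cassert>

// A tmp is killed at a checkpoint iff it was live before the checkpoint and
// is no longer live after it: liveBefore & ~liveAfter.
template<size_t N>
std::bitset<N> killedTmps(std::bitset<N> liveBefore, std::bitset<N> liveAfter)
{
    liveAfter.flip();        // corresponds to FastBitVector::invert()
    liveBefore &= liveAfter; // corresponds to liveBefore.filter(liveAfter)
    return liveBefore;
}
```

Note the early return in the patch after iterating the killed tmps: by construction no locals can die between two checkpoints of the same bytecode, so only tmp liveness needs to be diffed.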
     
@@ -105,7 +128,8 @@ void forAllKilledNodesAtNodeIndex(
     static constexpr unsigned seenInClosureFlag = 1;
     static constexpr unsigned calledFunctorFlag = 2;
     HashMap<Node*, unsigned> flags;
-    
+
+    ASSERT(nodeIndex);
     Node* node = block->at(nodeIndex);
     
     graph.doToChildren(
@@ -120,15 +144,13 @@ void forAllKilledNodesAtNodeIndex(
             }
         });
 
-    Node* before = nullptr;
-    if (nodeIndex)
-        before = block->at(nodeIndex - 1);
+    Node* before = block->at(nodeIndex - 1);
 
     forAllKilledOperands(
         graph, before, node,
-        [&] (VirtualRegister reg) {
+        [&] (Operand operand) {
             availabilityMap.closeStartingWithLocal(
-                reg,
+                operand,
                 [&] (Node* node) -> bool {
                     return flags.get(node) & seenInClosureFlag;
                 },
@@ -159,6 +181,7 @@ void forAllKillsInBlock(
     // Start at the second node, because the functor is expected to only inspect nodes from the start of
     // the block up to nodeIndex (exclusive), so if nodeIndex is zero then the functor has nothing to do.
     for (unsigned nodeIndex = 1; nodeIndex < block->size(); ++nodeIndex) {
+        dataLogLnIf(ForAllKillsInternal::verbose, "local availability at index: ", nodeIndex, " ", localAvailability.m_availability);
         forAllKilledNodesAtNodeIndex(
             graph, localAvailability.m_availability, block, nodeIndex,
             [&] (Node* node) {
index 7d0fa36..418696a 100644 (file)
@@ -308,8 +308,8 @@ void Graph::dump(PrintStream& out, const char* prefixStr, Node* node, DumpContex
     if (node->hasVariableAccessData(*this)) {
         VariableAccessData* variableAccessData = node->tryGetVariableAccessData();
         if (variableAccessData) {
-            VirtualRegister operand = variableAccessData->local();
-            out.print(comma, variableAccessData->local(), "(", VariableAccessDataDump(*this, variableAccessData), ")");
+            Operand operand = variableAccessData->operand();
+            out.print(comma, variableAccessData->operand(), "(", VariableAccessDataDump(*this, variableAccessData), ")");
             operand = variableAccessData->machineLocal();
             if (operand.isValid())
                 out.print(comma, "machine:", operand);
@@ -317,13 +317,13 @@ void Graph::dump(PrintStream& out, const char* prefixStr, Node* node, DumpContex
     }
     if (node->hasStackAccessData()) {
         StackAccessData* data = node->stackAccessData();
-        out.print(comma, data->local);
+        out.print(comma, data->operand);
         if (data->machineLocal.isValid())
             out.print(comma, "machine:", data->machineLocal);
         out.print(comma, data->format);
     }
-    if (node->hasUnlinkedLocal()) 
-        out.print(comma, node->unlinkedLocal());
+    if (node->hasUnlinkedOperand())
+        out.print(comma, node->unlinkedOperand());
     if (node->hasVectorLengthHint())
         out.print(comma, "vectorLengthHint = ", node->vectorLengthHint());
     if (node->hasLazyJSValue())
@@ -515,9 +515,10 @@ void Graph::dumpBlockHeader(PrintStream& out, const char* prefixStr, BasicBlock*
         out.print(prefix, "  Phi Nodes:");
         for (size_t i = 0; i < block->phis.size(); ++i) {
             Node* phiNode = block->phis[i];
+            ASSERT(phiNode->op() == Phi);
             if (!phiNode->shouldGenerate() && phiNodeDumpMode == DumpLivePhisOnly)
                 continue;
-            out.print(" @", phiNode->index(), "<", phiNode->local(), ",", phiNode->refCount(), ">->(");
+            out.print(" @", phiNode->index(), "<", phiNode->operand(), ",", phiNode->refCount(), ">->(");
             if (phiNode->child1()) {
                 out.print("@", phiNode->child1()->index());
                 if (phiNode->child2()) {
@@ -884,7 +885,7 @@ void Graph::substituteGetLocal(BasicBlock& block, unsigned startIndexInBlock, Va
         bool shouldContinue = true;
         switch (node->op()) {
         case SetLocal: {
-            if (node->local() == variableAccessData->local())
+            if (node->operand() == variableAccessData->operand())
                 shouldContinue = false;
             break;
         }
@@ -893,9 +894,9 @@ void Graph::substituteGetLocal(BasicBlock& block, unsigned startIndexInBlock, Va
             if (node->variableAccessData() != variableAccessData)
                 continue;
             substitute(block, indexInBlock, node, newGetLocal);
-            Node* oldTailNode = block.variablesAtTail.operand(variableAccessData->local());
+            Node* oldTailNode = block.variablesAtTail.operand(variableAccessData->operand());
             if (oldTailNode == node)
-                block.variablesAtTail.operand(variableAccessData->local()) = newGetLocal;
+                block.variablesAtTail.operand(variableAccessData->operand()) = newGetLocal;
             shouldContinue = false;
             break;
         }
@@ -1133,36 +1134,56 @@ BytecodeKills& Graph::killsFor(InlineCallFrame* inlineCallFrame)
     return killsFor(baselineCodeBlockFor(inlineCallFrame));
 }
 
-bool Graph::isLiveInBytecode(VirtualRegister operand, CodeOrigin codeOrigin)
+bool Graph::isLiveInBytecode(Operand operand, CodeOrigin codeOrigin)
 {
     static constexpr bool verbose = false;
     
     if (verbose)
         dataLog("Checking of operand is live: ", operand, "\n");
     bool isCallerOrigin = false;
+
     CodeOrigin* codeOriginPtr = &codeOrigin;
-    for (;;) {
-        VirtualRegister reg = VirtualRegister(
-            operand.offset() - codeOriginPtr->stackOffset());
+    auto* inlineCallFrame = codeOriginPtr->inlineCallFrame();
+    // We need to handle tail callers because we may decide to exit to
+    // the return bytecode following the tail call.
+    for (; codeOriginPtr; codeOriginPtr = inlineCallFrame ? &inlineCallFrame->directCaller : nullptr) {
+        inlineCallFrame = codeOriginPtr->inlineCallFrame();
+        if (operand.isTmp()) {
+            unsigned tmpOffset = inlineCallFrame ? inlineCallFrame->tmpOffset : 0;
+            unsigned operandIndex = static_cast<unsigned>(operand.value());
+
+            ASSERT(operand.value() >= 0);
+            // This tmp should have belonged to someone we inlined.
+            if (operandIndex > tmpOffset + maxNumCheckpointTmps)
+                return false;
+
+            CodeBlock* codeBlock = baselineCodeBlockFor(inlineCallFrame);
+            if (!codeBlock->numTmps() || operandIndex < tmpOffset)
+                continue;
+
+            auto bitMap = tmpLivenessForCheckpoint(*codeBlock, codeOriginPtr->bytecodeIndex());
+            return bitMap.get(operandIndex - tmpOffset);
+        }
+
+        VirtualRegister reg = operand.virtualRegister() - codeOriginPtr->stackOffset();
         
         if (verbose)
             dataLog("reg = ", reg, "\n");
 
-        auto* inlineCallFrame = codeOriginPtr->inlineCallFrame();
-        if (operand.offset() < codeOriginPtr->stackOffset() + CallFrame::headerSizeInRegisters) {
+        if (operand.virtualRegister().offset() < codeOriginPtr->stackOffset() + CallFrame::headerSizeInRegisters) {
             if (reg.isArgument()) {
                 RELEASE_ASSERT(reg.offset() < CallFrame::headerSizeInRegisters);
 
 
                 if (inlineCallFrame->isClosureCall
-                    && reg.offset() == CallFrameSlot::callee) {
+                    && reg == CallFrameSlot::callee) {
                     if (verbose)
                         dataLog("Looks like a callee.\n");
                     return true;
                 }
                 
                 if (inlineCallFrame->isVarargs()
-                    && reg.offset() == CallFrameSlot::argumentCountIncludingThis) {
+                    && reg == CallFrameSlot::argumentCountIncludingThis) {
                     if (verbose)
                         dataLog("Looks like the argument count.\n");
                     return true;
@@ -1176,42 +1197,39 @@ bool Graph::isLiveInBytecode(VirtualRegister operand, CodeOrigin codeOrigin)
             CodeBlock* codeBlock = baselineCodeBlockFor(inlineCallFrame);
             FullBytecodeLiveness& fullLiveness = livenessFor(codeBlock);
             BytecodeIndex bytecodeIndex = codeOriginPtr->bytecodeIndex();
-            return fullLiveness.operandIsLive(reg.offset(), bytecodeIndex, appropriateLivenessCalculationPoint(*codeOriginPtr, isCallerOrigin));
-        }
-
-        if (!inlineCallFrame) {
-            if (verbose)
-                dataLog("Ran out of stack, returning true.\n");
-            return true;
+            return fullLiveness.virtualRegisterIsLive(reg, bytecodeIndex, appropriateLivenessCalculationPoint(*codeOriginPtr, isCallerOrigin));
         }
 
         // Arguments are always live. This would be redundant if it wasn't for our
         // op_call_varargs inlining.
-        if (reg.isArgument()
+        if (inlineCallFrame && reg.isArgument()
             && static_cast<size_t>(reg.toArgument()) < inlineCallFrame->argumentsWithFixup.size()) {
             if (verbose)
                 dataLog("Argument is live.\n");
             return true;
         }
 
-        // We need to handle tail callers because we may decide to exit to the
-        // the return bytecode following the tail call.
-        codeOriginPtr = &inlineCallFrame->directCaller;
         isCallerOrigin = true;
     }
-    
-    RELEASE_ASSERT_NOT_REACHED();
+
+    if (operand.isTmp())
+        return false;
+
+    if (verbose)
+        dataLog("Ran out of stack, returning true.\n");
+    return true;
 }
 
-BitVector Graph::localsLiveInBytecode(CodeOrigin codeOrigin)
+BitVector Graph::localsAndTmpsLiveInBytecode(CodeOrigin codeOrigin)
 {
     BitVector result;
-    result.ensureSize(block(0)->variablesAtHead.numberOfLocals());
-    forAllLocalsLiveInBytecode(
+    unsigned numLocals = block(0)->variablesAtHead.numberOfLocals();
+    result.ensureSize(numLocals + block(0)->variablesAtHead.numberOfTmps());
+    forAllLocalsAndTmpsLiveInBytecode(
         codeOrigin,
-        [&] (VirtualRegister reg) {
-            ASSERT(reg.isLocal());
-            result.quickSet(reg.toLocal());
+        [&] (Operand operand) {
+            unsigned offset = operand.isTmp() ? numLocals + operand.value() : operand.toLocal();
+            result.quickSet(offset);
         });
     return result;
 }
@@ -1635,8 +1653,8 @@ MethodOfGettingAValueProfile Graph::methodOfGettingAValueProfileFor(Node* curren
 
     for (Node* node = operandNode; node;) {
         if (node->accessesStack(*this)) {
-            if (m_form != SSA && node->local().isArgument()) {
-                int argument = node->local().toArgument();
+            if (m_form != SSA && node->operand().isArgument()) {
+                int argument = node->operand().toArgument();
                 Node* argumentNode = m_rootToArguments.find(block(0))->value[argument];
                 // FIXME: We should match SetArgumentDefinitely nodes at other entrypoints as well:
                 // https://bugs.webkit.org/show_bug.cgi?id=175841
@@ -1656,7 +1674,7 @@ MethodOfGettingAValueProfile Graph::methodOfGettingAValueProfileFor(Node* curren
                     return MethodOfGettingAValueProfile::fromLazyOperand(
                         profiledBlock,
                         LazyOperandValueProfileKey(
-                            node->origin.semantic.bytecodeIndex(), node->local()));
+                            node->origin.semantic.bytecodeIndex(), node->operand()));
                 }
             }
 
index ec4f22b..d433d85 100644
@@ -840,13 +840,13 @@ public:
     // Quickly query if a single local is live at the given point. This is faster than calling
     // forAllLiveInBytecode() if you will only query one local. But, if you want to know all of the
     // locals live, then calling this for each local is much slower than forAllLiveInBytecode().
-    bool isLiveInBytecode(VirtualRegister, CodeOrigin);
+    bool isLiveInBytecode(Operand, CodeOrigin);
     
-    // Quickly get all of the non-argument locals live at the given point. This doesn't give you
+    // Quickly get all of the non-argument locals and tmps live at the given point. This doesn't give you
     // any arguments because those are all presumed live. You can call forAllLiveInBytecode() to
     // also get the arguments. This is much faster than calling isLiveInBytecode() for each local.
     template<typename Functor>
-    void forAllLocalsLiveInBytecode(CodeOrigin codeOrigin, const Functor& functor)
+    void forAllLocalsAndTmpsLiveInBytecode(CodeOrigin codeOrigin, const Functor& functor)
     {
         // Support for not redundantly reporting arguments. Necessary because in case of a varargs
         // call, only the callee knows that arguments are live while in the case of a non-varargs
@@ -881,6 +881,14 @@ public:
                 if (livenessAtBytecode[relativeLocal])
                     functor(reg);
             }
+
+            if (codeOriginPtr->bytecodeIndex().checkpoint()) {
+                ASSERT(codeBlock->numTmps());
+                auto liveTmps = tmpLivenessForCheckpoint(*codeBlock, codeOriginPtr->bytecodeIndex());
+                liveTmps.forEachSetBit([&] (size_t tmp) {
+                    functor(remapOperand(inlineCallFrame, Operand::tmp(tmp)));
+                });
+            }
             
             if (!inlineCallFrame)
                 break;
@@ -904,9 +912,9 @@ public:
         }
     }
     
-    // Get a BitVector of all of the non-argument locals live right now. This is mostly useful if
+    // Get a BitVector of all of the locals and tmps live right now. This is mostly useful if
     // you want to compare two sets of live locals from two different CodeOrigins.
-    BitVector localsLiveInBytecode(CodeOrigin);
+    BitVector localsAndTmpsLiveInBytecode(CodeOrigin);
 
     LivenessCalculationPoint appropriateLivenessCalculationPoint(CodeOrigin origin, bool isCallerOrigin)
     {
@@ -938,12 +946,12 @@ public:
         return LivenessCalculationPoint::BeforeUse;
     }
     
-    // Tells you all of the arguments and locals live at the given CodeOrigin. This is a small
-    // extension to forAllLocalsLiveInBytecode(), since all arguments are always presumed live.
+    // Tells you all of the operands live at the given CodeOrigin. This is a small
+    // extension to forAllLocalsAndTmpsLiveInBytecode(), since all arguments are always presumed live.
     template<typename Functor>
     void forAllLiveInBytecode(CodeOrigin codeOrigin, const Functor& functor)
     {
-        forAllLocalsLiveInBytecode(codeOrigin, functor);
+        forAllLocalsAndTmpsLiveInBytecode(codeOrigin, functor);
         
         // Report all arguments as being live.
         for (unsigned argument = block(0)->variablesAtHead.numberOfArguments(); argument--;)
@@ -1114,6 +1122,7 @@ public:
     std::unique_ptr<BackwardsCFG> m_backwardsCFG;
     std::unique_ptr<BackwardsDominators> m_backwardsDominators;
     std::unique_ptr<ControlEquivalenceAnalysis> m_controlEquivalenceAnalysis;
+    unsigned m_tmps;
     unsigned m_localVars;
     unsigned m_nextMachineLocal;
     unsigned m_parameterSlots;
index 99aeffc..202d225 100644
@@ -45,8 +45,8 @@ static constexpr bool verbose = false;
 InPlaceAbstractState::InPlaceAbstractState(Graph& graph)
     : m_graph(graph)
     , m_abstractValues(*graph.m_abstractValuesCache)
-    , m_variables(m_graph.m_codeBlock->numParameters(), graph.m_localVars)
-    , m_block(0)
+    , m_variables(OperandsLike, graph.block(0)->variablesAtHead)
+    , m_block(nullptr)
 {
 }
 
index bf5ead1..1865e6a 100644
@@ -167,13 +167,11 @@ public:
         return fastForward(m_variables[index]);
     }
 
-    AbstractValue& operand(int operand)
+    AbstractValue& operand(Operand operand)
     {
         return variableAt(m_variables.operandIndex(operand));
     }
     
-    AbstractValue& operand(VirtualRegister operand) { return this->operand(operand.offset()); }
-    
     AbstractValue& local(size_t index)
     {
         return variableAt(m_variables.localIndex(index));
index 70eef88..c5544bb 100644
@@ -85,8 +85,6 @@ void JITCompiler::linkOSRExits()
         }
     }
     
-    MacroAssemblerCodeRef<JITThunkPtrTag> osrExitThunk = vm().getCTIStub(osrExitThunkGenerator);
-    auto osrExitThunkLabel = CodeLocationLabel<JITThunkPtrTag>(osrExitThunk.code());
     for (unsigned i = 0; i < m_jitCode->osrExit.size(); ++i) {
         OSRExitCompilationInfo& info = m_exitCompilationInfo[i];
         JumpList& failureJumps = info.m_failureJumps;
@@ -97,13 +95,7 @@ void JITCompiler::linkOSRExits()
 
         jitAssertHasValidCallFrame();
         store32(TrustedImm32(i), &vm().osrExitIndex);
-        if (Options::useProbeOSRExit()) {
-            Jump target = jump();
-            addLinkTask([target, osrExitThunkLabel] (LinkBuffer& linkBuffer) {
-                linkBuffer.link(target, osrExitThunkLabel);
-            });
-        } else
-            info.m_patchableJump = patchableJump();
+        info.m_patchableJump = patchableJump();
     }
 }
 
@@ -595,11 +587,12 @@ void JITCompiler::noticeOSREntry(BasicBlock& basicBlock, JITCompiler::Label bloc
             default:
                 break;
             }
-            
-            if (variable->local() != variable->machineLocal()) {
+
+            ASSERT(!variable->operand().isTmp());
+            if (variable->operand().virtualRegister() != variable->machineLocal()) {
                 entry->m_reshufflings.append(
                     OSREntryReshuffling(
-                        variable->local().offset(), variable->machineLocal().offset()));
+                        variable->operand().virtualRegister().offset(), variable->machineLocal().offset()));
             }
         }
     }
index d1bd7da..4880874 100644
@@ -135,7 +135,7 @@ public:
 
     void emitStoreCallSiteIndex(CallSiteIndex callSite)
     {
-        store32(TrustedImm32(callSite.bits()), tagFor(static_cast<VirtualRegister>(CallFrameSlot::argumentCountIncludingThis)));
+        store32(TrustedImm32(callSite.bits()), tagFor(CallFrameSlot::argumentCountIncludingThis));
     }
 
     // Add a call out from JIT code, without an exception check.
index 42f68ad..5c35e53 100644
@@ -63,7 +63,7 @@ public:
         return true;
     }
 
-    bool isValidFlushLocation(BasicBlock* startingBlock, unsigned index, VirtualRegister operand)
+    bool isValidFlushLocation(BasicBlock* startingBlock, unsigned index, Operand operand)
     {
         // This code is not meant to be fast. We just use it for assertions. If we got liveness wrong,
         // this function would return false for a Flush that we insert.
@@ -81,7 +81,7 @@ public:
         auto flushIsDefinitelyInvalid = [&] (BasicBlock* block, unsigned index) {
             bool allGood = false;
             for (unsigned i = index; i--; ) {
-                if (block->at(i)->accessesStack(m_graph) && block->at(i)->local() == operand) {
+                if (block->at(i)->accessesStack(m_graph) && block->at(i)->operand() == operand) {
                     allGood = true;
                     break;
                 }
@@ -115,8 +115,7 @@ public:
     void handleBlockForTryCatch(BasicBlock* block, InsertionSet& insertionSet)
     {
         HandlerInfo* currentExceptionHandler = nullptr;
-        FastBitVector liveAtCatchHead;
-        liveAtCatchHead.resize(m_graph.block(0)->variablesAtTail.numberOfLocals());
+        Operands<bool> liveAtCatchHead(0, m_graph.block(0)->variablesAtTail.numberOfLocals(), m_graph.block(0)->variablesAtTail.numberOfTmps());
 
         HandlerInfo* cachedHandlerResult;
         CodeOrigin cachedCodeOrigin;
@@ -133,11 +132,11 @@ public:
                 InlineCallFrame* inlineCallFrame = origin.inlineCallFrame();
                 CodeBlock* codeBlock = m_graph.baselineCodeBlockFor(inlineCallFrame);
                 if (HandlerInfo* handler = codeBlock->handlerForBytecodeIndex(bytecodeIndexToCheck)) {
-                    liveAtCatchHead.clearAll();
+                    liveAtCatchHead.fill(false);
 
                     BytecodeIndex catchBytecodeIndex = BytecodeIndex(handler->target);
-                    m_graph.forAllLocalsLiveInBytecode(CodeOrigin(catchBytecodeIndex, inlineCallFrame), [&] (VirtualRegister operand) {
-                        liveAtCatchHead[operand.toLocal()] = true;
+                    m_graph.forAllLocalsAndTmpsLiveInBytecode(CodeOrigin(catchBytecodeIndex, inlineCallFrame), [&] (Operand operand) {
+                        liveAtCatchHead.operand(operand) = true;
                     });
 
                     cachedHandlerResult = handler;
@@ -156,12 +155,12 @@ public:
             return cachedHandlerResult;
         };
 
-        Operands<VariableAccessData*> currentBlockAccessData(block->variablesAtTail.numberOfArguments(), block->variablesAtTail.numberOfLocals(), nullptr);
+        Operands<VariableAccessData*> currentBlockAccessData(OperandsLike, block->variablesAtTail, nullptr);
 
         auto flushEverything = [&] (NodeOrigin origin, unsigned index) {
             RELEASE_ASSERT(currentExceptionHandler);
-            auto flush = [&] (VirtualRegister operand) {
-                if ((operand.isLocal() && liveAtCatchHead[operand.toLocal()]) || operand.isArgument()) {
+            auto flush = [&] (Operand operand) {
+                if (operand.isArgument() || liveAtCatchHead.operand(operand)) {
 
                     ASSERT(isValidFlushLocation(block, index, operand));
 
@@ -178,6 +177,8 @@ public:
 
             for (unsigned local = 0; local < block->variablesAtTail.numberOfLocals(); local++)
                 flush(virtualRegisterForLocal(local));
+            for (unsigned tmp = 0; tmp < block->variablesAtTail.numberOfTmps(); ++tmp)
+                flush(Operand::tmp(tmp));
             flush(VirtualRegister(CallFrame::thisArgumentOffset()));
         };
 
@@ -192,9 +193,8 @@ public:
             }
 
             if (currentExceptionHandler && (node->op() == SetLocal || node->op() == SetArgumentDefinitely || node->op() == SetArgumentMaybe)) {
-                VirtualRegister operand = node->local();
-                if ((operand.isLocal() && liveAtCatchHead[operand.toLocal()]) || operand.isArgument()) {
-
+                Operand operand = node->operand();
+                if (operand.isArgument() || liveAtCatchHead.operand(operand)) {
                     ASSERT(isValidFlushLocation(block, nodeIndex, operand));
 
                     VariableAccessData* variableAccessData = currentBlockAccessData.operand(operand);
@@ -207,7 +207,7 @@ public:
             }
 
             if (node->accessesStack(m_graph))
-                currentBlockAccessData.operand(node->local()) = node->variableAccessData();
+                currentBlockAccessData.operand(node->operand()) = node->variableAccessData();
         }
 
         if (currentExceptionHandler) {
@@ -216,7 +216,7 @@ public:
         }
     }
 
-    VariableAccessData* newVariableAccessData(VirtualRegister operand)
+    VariableAccessData* newVariableAccessData(Operand operand)
     {
         ASSERT(!operand.isConstant());
         
index 73a3d75..fff96bf 100644
@@ -85,7 +85,7 @@ private:
         m_state.fill(Epoch());
         m_graph.forAllLiveInBytecode(
             block->terminal()->origin.forExit,
-            [&] (VirtualRegister reg) {
+            [&] (Operand reg) {
                 m_state.operand(reg) = currentEpoch;
             });
         
@@ -99,7 +99,7 @@ private:
             Node* node = block->at(nodeIndex);
             
             if (node->op() == MovHint) {
-                Epoch localEpoch = m_state.operand(node->unlinkedLocal());
+                Epoch localEpoch = m_state.operand(node->unlinkedOperand());
                 if (DFGMovHintRemovalPhaseInternal::verbose)
                     dataLog("    At ", node, ": current = ", currentEpoch, ", local = ", localEpoch, "\n");
                 if (!localEpoch || localEpoch == currentEpoch) {
@@ -107,7 +107,7 @@ private:
                     node->child1() = Edge();
                     m_changed = true;
                 }
-                m_state.operand(node->unlinkedLocal()) = Epoch();
+                m_state.operand(node->unlinkedOperand()) = Epoch();
             }
             
             if (mayExit(m_graph, node) != DoesNotExit)
@@ -116,15 +116,15 @@ private:
             if (nodeIndex) {
                 forAllKilledOperands(
                     m_graph, block->at(nodeIndex - 1), node,
-                    [&] (VirtualRegister reg) {
+                    [&] (Operand operand) {
                         // This function is a bit sloppy - it might claim to kill a local even if
                         // it's still live after. We need to protect against that.
-                        if (!!m_state.operand(reg))
+                        if (!!m_state.operand(operand))
                             return;
                         
                         if (DFGMovHintRemovalPhaseInternal::verbose)
-                            dataLog("    Killed operand at ", node, ": ", reg, "\n");
-                        m_state.operand(reg) = currentEpoch;
+                            dataLog("    Killed operand at ", node, ": ", operand, "\n");
+                        m_state.operand(operand) = currentEpoch;
                     });
             }
         }
index 3915937..b80155e 100644
@@ -250,13 +250,13 @@ struct StackAccessData {
     {
     }
     
-    StackAccessData(VirtualRegister local, FlushFormat format)
-        : local(local)
+    StackAccessData(Operand operand, FlushFormat format)
+        : operand(operand)
         , format(format)
     {
     }
     
-    VirtualRegister local;
+    Operand operand;
     VirtualRegister machineLocal;
     FlushFormat format;
     
@@ -902,6 +902,7 @@ public:
         switch (op()) {
         case GetMyArgumentByVal:
         case GetMyArgumentByValOutOfBounds:
+        case VarargsLength:
         case LoadVarargs:
         case ForwardVarargs:
         case CallVarargs:
@@ -923,9 +924,11 @@ public:
         switch (op()) {
         case GetMyArgumentByVal:
         case GetMyArgumentByValOutOfBounds:
+        case VarargsLength:
+            return child1();
         case LoadVarargs:
         case ForwardVarargs:
-            return child1();
+            return child2();
         case CallVarargs:
         case CallForwardVarargs:
         case ConstructVarargs:
@@ -973,9 +976,9 @@ public:
         return m_opInfo.as<VariableAccessData*>()->find();
     }
     
-    VirtualRegister local()
+    Operand operand()
     {
-        return variableAccessData()->local();
+        return variableAccessData()->operand();
     }
     
     VirtualRegister machineLocal()
@@ -983,7 +986,7 @@ public:
         return variableAccessData()->machineLocal();
     }
     
-    bool hasUnlinkedLocal()
+    bool hasUnlinkedOperand()
     {
         switch (op()) {
         case ExtractOSREntryLocal:
@@ -996,10 +999,10 @@ public:
         }
     }
     
-    VirtualRegister unlinkedLocal()
+    Operand unlinkedOperand()
     {
-        ASSERT(hasUnlinkedLocal());
-        return VirtualRegister(m_opInfo.as<int32_t>());
+        ASSERT(hasUnlinkedOperand());
+        return Operand::fromBits(m_opInfo.as<uint64_t>());
     }
     
     bool hasStackAccessData()
@@ -1350,7 +1353,7 @@ public:
     
     bool hasLoadVarargsData()
     {
-        return op() == LoadVarargs || op() == ForwardVarargs;
+        return op() == LoadVarargs || op() == ForwardVarargs || op() == VarargsLength;
     }
     
     LoadVarargsData* loadVarargsData()
index 81913ed..c3868c0 100644
@@ -74,6 +74,7 @@ namespace JSC { namespace DFG {
     macro(GetLocal, NodeResultJS | NodeMustGenerate) \
     macro(SetLocal, 0) \
     \
+    /* These are used in SSA form to represent and track stack accesses. */\
     macro(PutStack, NodeMustGenerate) \
     macro(KillStack, NodeMustGenerate) \
     macro(GetStack, NodeResultJS) \
@@ -202,6 +203,7 @@ namespace JSC { namespace DFG {
     macro(GetByValWithThis, NodeResultJS | NodeMustGenerate) \
     macro(GetMyArgumentByVal, NodeResultJS | NodeMustGenerate) \
     macro(GetMyArgumentByValOutOfBounds, NodeResultJS | NodeMustGenerate) \
+    macro(VarargsLength, NodeMustGenerate | NodeResultInt32) \
     macro(LoadVarargs, NodeMustGenerate) \
     macro(ForwardVarargs, NodeMustGenerate) \
     macro(PutByValDirect, NodeMustGenerate | NodeHasVarArgs) \
index 79bc796..d0a9406 100644
 #include "JSCInlines.h"
 
 namespace JSC { namespace DFG {
-namespace DFGOSRAvailabilityAnalysisPhaseInternal {
-static constexpr bool verbose = false;
-}
 
 class OSRAvailabilityAnalysisPhase : public Phase {
+    static constexpr bool verbose = false;
 public:
     OSRAvailabilityAnalysisPhase(Graph& graph)
         : Phase(graph, "OSR availability analysis")
@@ -75,8 +73,8 @@ public:
             dataLog("Live: ");
             m_graph.forAllLiveInBytecode(
                 block->at(0)->origin.forExit,
-                [&] (VirtualRegister reg) {
-                    dataLog(reg, " ");
+                [&] (Operand operand) {
+                    dataLog(operand, " ");
                 });
             dataLogLn("");
         };
@@ -91,7 +89,7 @@ public:
                 if (!block)
                     continue;
                 
-                if (DFGOSRAvailabilityAnalysisPhaseInternal::verbose) {
+                if (verbose) {
                     dataLogLn("Before changing Block #", block->index);
                     dumpAvailability(block);
                 }
@@ -106,7 +104,7 @@ public:
                 block->ssa->availabilityAtTail = calculator.m_availability;
                 changed = true;
 
-                if (DFGOSRAvailabilityAnalysisPhaseInternal::verbose) {
+                if (verbose) {
                     dataLogLn("After changing Block #", block->index);
                     dumpAvailability(block);
                 }
@@ -120,7 +118,7 @@ public:
                     BasicBlock* successor = block->successor(successorIndex);
                     successor->ssa->availabilityAtHead.pruneByLiveness(
                         m_graph, successor->at(0)->origin.forExit);
-                    if (DFGOSRAvailabilityAnalysisPhaseInternal::verbose) {
+                    if (verbose) {
                         dataLogLn("After pruning Block #", successor->index);
                         dumpAvailability(successor);
                         dumpBytecodeLivenessAtHead(successor);
@@