JSVALUE64: Pass arguments in platform argument registers when making JavaScript calls
authormsaboff@apple.com <msaboff@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Sat, 10 Dec 2016 07:32:38 +0000 (07:32 +0000)
committermsaboff@apple.com <msaboff@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Sat, 10 Dec 2016 07:32:38 +0000 (07:32 +0000)
https://bugs.webkit.org/show_bug.cgi?id=160355

Reviewed by Filip Pizlo.

JSTests:

New microbenchmarks to measure call type performance.

* microbenchmarks/calling-computed-args.js: Added.
* microbenchmarks/calling-many-callees.js: Added.
* microbenchmarks/calling-one-callee-fixed.js: Added.
* microbenchmarks/calling-one-callee.js: Added.
* microbenchmarks/calling-poly-callees.js: Added.
* microbenchmarks/calling-poly-extra-arity-callees.js: Added.
* microbenchmarks/calling-tailcall.js: Added.
* microbenchmarks/calling-virtual-arity-fixup-callees.js: Added.
* microbenchmarks/calling-virtual-arity-fixup-stackargs.js: Added.
* microbenchmarks/calling-virtual-callees.js: Added.
* microbenchmarks/calling-virtual-extra-arity-callees.js: Added.

Source/JavaScriptCore:

This patch implements passing JavaScript function arguments in registers for 64 bit platforms.

The implemented convention follows the ABI conventions for the associated platform.
The first two arguments are the callee and the argument count; the rest of the argument registers
contain "this" and the following arguments until all platform argument registers are exhausted.
Arguments beyond what fit in registers are placed on the stack in the same locations as
before this patch.
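
To make the mapping concrete, here is a minimal sketch.  The function name
appears in this patch's GPRInfo.h changes, but the body and register choices
below (x86-64 SysV: rdi, rsi, rdx, rcx, r8, r9) are illustrative assumptions,
not the actual implementation:

    #include <optional>

    constexpr unsigned numberOfArgumentGPRs = 6; // platform ABI argument registers
    constexpr unsigned firstJSArgumentGPR = 2;   // GPR 0: callee, GPR 1: argument count

    // Argument-register index carrying JS argument i ("this" is i == 0), or
    // nullopt when the argument travels in its call frame stack slot instead.
    std::optional<unsigned> argumentRegisterIndexForJSFunctionArgument(unsigned i)
    {
        unsigned gpr = firstJSArgumentGPR + i;
        if (gpr < numberOfArgumentGPRs)
            return gpr;
        return std::nullopt;
    }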

For X86-64 non-Windows platforms, there are 6 argument registers specified in the related ABI.
ARM64 has 8 argument registers.  This allows for 4 or 6 parameter values to be placed in
registers on these respective platforms.  This patch doesn't implement passing arguments in
registers for 32 bit platforms, since most platforms have at most 4 argument registers
specified and 32 bit platforms use two 32 bit registers/memory locations to store one JSValue.
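
A hedged sketch of those counts (the preprocessor conditions are assumptions;
the real definitions live in GPRInfo.h):

    #if defined(__x86_64__) && !defined(_WIN32)
    constexpr unsigned argumentGPRCount = 6; // SysV ABI: rdi, rsi, rdx, rcx, r8, r9
    #elif defined(__aarch64__)
    constexpr unsigned argumentGPRCount = 8; // AAPCS64: x0..x7
    #else
    constexpr unsigned argumentGPRCount = 0; // 32 bit targets keep stack-only arguments
    #endif
    // Reserving two registers for callee and argument count leaves 4 JSValues
    // ("this" + 3 arguments) on x86-64 and 6 ("this" + 5 arguments) on ARM64.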

The call frame on the stack is unchanged in format, and the arguments that are passed in
registers use the corresponding call frame location as a spill location. Arguments can
also be passed on the stack. The LLInt, baseline JIT'ed code, as well as the initial entry
from C++ code, pass arguments on the stack. DFG and FTL generated code pass arguments
via registers. All callees can accept arguments either in registers or on the stack.
The callee is responsible for moving arguments to its preferred location.
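
A toy model of that spill rule, assuming nothing beyond the prose above (this
is not the patch's MacroAssembler code; see
AssemblyHelpers::spillArgumentRegistersToFrame for the real helper):

    #include <cstddef>
    #include <cstdint>

    using EncodedJSValue = std::uint64_t; // 64 bit encoded JSValue on JSVALUE64

    // Each register argument owns the same call frame slot that a stack-passing
    // caller would have written, so a callee that prefers stack locations just
    // stores register i into slot i. argumentSlots points at the "this" slot.
    void spillRegisterArgumentsToFrame(const EncodedJSValue* registerArgs,
        std::size_t registerArgCount, EncodedJSValue* argumentSlots)
    {
        for (std::size_t i = 0; i < registerArgCount; ++i)
            argumentSlots[i] = registerArgs[i];
    }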

The multiple entry points to JavaScript code are now handled via the JITEntryPoints class and
related code.  That class now has entries for StackArgsArityCheckNotRequired,
StackArgsMustCheckArity and, for platforms that support register arguments,
RegisterArgsArityCheckNotRequired and RegisterArgsMustCheckArity, as well as an additional
RegisterArgsPossibleExtraArgs entry point used when extra register arguments are passed.
This last case is needed to spill those extra arguments to the corresponding call frame
slots.
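
The entry kinds named above can be pictured as a single enum; the real
definitions are in the new jit/JITEntryPoints.h, so the exact spelling and
order here are assumed:

    enum EntryPointType {
        StackArgsArityCheckNotRequired,
        StackArgsMustCheckArity,
        // Platforms with register arguments also provide:
        RegisterArgsArityCheckNotRequired,
        RegisterArgsMustCheckArity,
        // Extra register arguments were passed; they are spilled to their
        // call frame slots before falling into the normal register path.
        RegisterArgsPossibleExtraArgs,
    };

A call site then picks the cheapest entry it can prove safe, e.g. a linked
call with a known arity match can jump to RegisterArgsArityCheckNotRequired,
while a virtual call has to use a MustCheckArity entry.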

* JavaScriptCore.xcodeproj/project.pbxproj:
* b3/B3ArgumentRegValue.h:
* b3/B3Validate.cpp:
* bytecode/CallLinkInfo.cpp:
(JSC::CallLinkInfo::CallLinkInfo):
* bytecode/CallLinkInfo.h:
(JSC::CallLinkInfo::setUpCall):
(JSC::CallLinkInfo::argumentsLocation):
(JSC::CallLinkInfo::argumentsInRegisters):
* bytecode/PolymorphicAccess.cpp:
(JSC::AccessCase::generateImpl):
* dfg/DFGAbstractInterpreterInlines.h:
(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::parseBlock):
* dfg/DFGCPSRethreadingPhase.cpp:
(JSC::DFG::CPSRethreadingPhase::canonicalizeLocalsInBlock):
(JSC::DFG::CPSRethreadingPhase::specialCaseArguments):
(JSC::DFG::CPSRethreadingPhase::computeIsFlushed):
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGCommon.h:
* dfg/DFGDCEPhase.cpp:
(JSC::DFG::DCEPhase::run):
* dfg/DFGDoesGC.cpp:
(JSC::DFG::doesGC):
* dfg/DFGDriver.cpp:
(JSC::DFG::compileImpl):
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::fixupNode):
* dfg/DFGGenerationInfo.h:
(JSC::DFG::GenerationInfo::initArgumentRegisterValue):
* dfg/DFGGraph.cpp:
(JSC::DFG::Graph::dump):
(JSC::DFG::Graph::methodOfGettingAValueProfileFor):
* dfg/DFGGraph.h:
(JSC::DFG::Graph::needsFlushedThis):
(JSC::DFG::Graph::addImmediateShouldSpeculateInt32):
* dfg/DFGInPlaceAbstractState.cpp:
(JSC::DFG::InPlaceAbstractState::initialize):
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::link):
(JSC::DFG::JITCompiler::compile):
(JSC::DFG::JITCompiler::compileFunction):
(JSC::DFG::JITCompiler::compileEntry): Deleted.
* dfg/DFGJITCompiler.h:
(JSC::DFG::JITCompiler::addJSDirectCall):
(JSC::DFG::JITCompiler::JSDirectCallRecord::JSDirectCallRecord):
(JSC::DFG::JITCompiler::JSDirectCallRecord::hasSlowCall):
* dfg/DFGJITFinalizer.cpp:
(JSC::DFG::JITFinalizer::JITFinalizer):
(JSC::DFG::JITFinalizer::finalize):
(JSC::DFG::JITFinalizer::finalizeFunction):
* dfg/DFGJITFinalizer.h:
* dfg/DFGLiveCatchVariablePreservationPhase.cpp:
(JSC::DFG::LiveCatchVariablePreservationPhase::handleBlock):
* dfg/DFGMaximalFlushInsertionPhase.cpp:
(JSC::DFG::MaximalFlushInsertionPhase::treatRegularBlock):
(JSC::DFG::MaximalFlushInsertionPhase::treatRootBlock):
* dfg/DFGMayExit.cpp:
* dfg/DFGMinifiedNode.cpp:
(JSC::DFG::MinifiedNode::fromNode):
* dfg/DFGMinifiedNode.h:
(JSC::DFG::belongsInMinifiedGraph):
* dfg/DFGNode.cpp:
(JSC::DFG::Node::hasVariableAccessData):
* dfg/DFGNode.h:
(JSC::DFG::Node::accessesStack):
(JSC::DFG::Node::setVariableAccessData):
(JSC::DFG::Node::hasArgumentRegisterIndex):
(JSC::DFG::Node::argumentRegisterIndex):
* dfg/DFGNodeType.h:
* dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
(JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
* dfg/DFGOSREntrypointCreationPhase.cpp:
(JSC::DFG::OSREntrypointCreationPhase::run):
* dfg/DFGPlan.cpp:
(JSC::DFG::Plan::compileInThreadImpl):
* dfg/DFGPreciseLocalClobberize.h:
(JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
* dfg/DFGPredictionInjectionPhase.cpp:
(JSC::DFG::PredictionInjectionPhase::run):
* dfg/DFGPredictionPropagationPhase.cpp:
* dfg/DFGPutStackSinkingPhase.cpp:
* dfg/DFGRegisterBank.h:
(JSC::DFG::RegisterBank::iterator::unlock):
(JSC::DFG::RegisterBank::unlockAtIndex):
* dfg/DFGSSAConversionPhase.cpp:
(JSC::DFG::SSAConversionPhase::run):
* dfg/DFGSafeToExecute.h:
(JSC::DFG::safeToExecute):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::SpeculativeJIT):
(JSC::DFG::SpeculativeJIT::clearGenerationInfo):
(JSC::DFG::dumpRegisterInfo):
(JSC::DFG::SpeculativeJIT::dump):
(JSC::DFG::SpeculativeJIT::compileCurrentBlock):
(JSC::DFG::SpeculativeJIT::checkArgumentTypes):
(JSC::DFG::SpeculativeJIT::setupArgumentRegistersForEntry):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT.h:
(JSC::DFG::SpeculativeJIT::allocate):
(JSC::DFG::SpeculativeJIT::spill):
(JSC::DFG::SpeculativeJIT::generationInfoFromVirtualRegister):
(JSC::DFG::JSValueOperand::JSValueOperand):
(JSC::DFG::JSValueOperand::gprUseSpecific):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::fillJSValue):
(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGStrengthReductionPhase.cpp:
(JSC::DFG::StrengthReductionPhase::handleNode):
* dfg/DFGThunks.cpp:
(JSC::DFG::osrEntryThunkGenerator):
* dfg/DFGVariableEventStream.cpp:
(JSC::DFG::VariableEventStream::reconstruct):
* dfg/DFGVirtualRegisterAllocationPhase.cpp:
(JSC::DFG::VirtualRegisterAllocationPhase::allocateRegister):
(JSC::DFG::VirtualRegisterAllocationPhase::run):
* ftl/FTLCapabilities.cpp:
(JSC::FTL::canCompile):
* ftl/FTLJITCode.cpp:
(JSC::FTL::JITCode::~JITCode):
(JSC::FTL::JITCode::initializeEntrypointThunk):
(JSC::FTL::JITCode::setEntryFor):
(JSC::FTL::JITCode::addressForCall):
(JSC::FTL::JITCode::executableAddressAtOffset):
(JSC::FTL::JITCode::initializeAddressForCall): Deleted.
(JSC::FTL::JITCode::initializeArityCheckEntrypoint): Deleted.
* ftl/FTLJITCode.h:
* ftl/FTLJITFinalizer.cpp:
(JSC::FTL::JITFinalizer::finalizeFunction):
* ftl/FTLLink.cpp:
(JSC::FTL::link):
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::lower):
(JSC::FTL::DFG::LowerDFGToB3::compileNode):
(JSC::FTL::DFG::LowerDFGToB3::compileGetArgumentRegister):
(JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstruct):
(JSC::FTL::DFG::LowerDFGToB3::compileDirectCallOrConstruct):
(JSC::FTL::DFG::LowerDFGToB3::compileTailCall):
(JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargsSpread):
(JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargs):
(JSC::FTL::DFG::LowerDFGToB3::compileCallEval):
* ftl/FTLOSREntry.cpp:
(JSC::FTL::prepareOSREntry):
* ftl/FTLOutput.cpp:
(JSC::FTL::Output::argumentRegister):
(JSC::FTL::Output::argumentRegisterInt32):
* ftl/FTLOutput.h:
* interpreter/ShadowChicken.cpp:
(JSC::ShadowChicken::update):
* jit/AssemblyHelpers.cpp:
(JSC::AssemblyHelpers::emitDumbVirtualCall):
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::spillArgumentRegistersToFrameBeforePrologue):
(JSC::AssemblyHelpers::spillArgumentRegistersToFrame):
(JSC::AssemblyHelpers::fillArgumentRegistersFromFrameBeforePrologue):
(JSC::AssemblyHelpers::emitPutArgumentToCallFrameBeforePrologue):
(JSC::AssemblyHelpers::emitPutArgumentToCallFrame):
(JSC::AssemblyHelpers::emitGetFromCallFrameHeaderBeforePrologue):
(JSC::AssemblyHelpers::emitGetFromCallFrameArgumentBeforePrologue):
(JSC::AssemblyHelpers::emitGetPayloadFromCallFrameHeaderBeforePrologue):
(JSC::AssemblyHelpers::incrementCounter):
* jit/CachedRecovery.cpp:
(JSC::CachedRecovery::addTargetJSValueRegs):
* jit/CachedRecovery.h:
(JSC::CachedRecovery::gprTargets):
(JSC::CachedRecovery::setWantedFPR):
(JSC::CachedRecovery::wantedJSValueRegs):
(JSC::CachedRecovery::setWantedJSValueRegs): Deleted.
* jit/CallFrameShuffleData.h:
* jit/CallFrameShuffler.cpp:
(JSC::CallFrameShuffler::CallFrameShuffler):
(JSC::CallFrameShuffler::dump):
(JSC::CallFrameShuffler::tryWrites):
(JSC::CallFrameShuffler::prepareAny):
* jit/CallFrameShuffler.h:
(JSC::CallFrameShuffler::snapshot):
(JSC::CallFrameShuffler::addNew):
(JSC::CallFrameShuffler::initDangerFrontier):
(JSC::CallFrameShuffler::updateDangerFrontier):
(JSC::CallFrameShuffler::findDangerFrontierFrom):
* jit/CallFrameShuffler64.cpp:
(JSC::CallFrameShuffler::emitDisplace):
* jit/GPRInfo.h:
(JSC::JSValueRegs::operator==):
(JSC::JSValueRegs::operator!=):
(JSC::GPRInfo::toArgumentIndex):
(JSC::argumentRegisterFor):
(JSC::argumentRegisterForCallee):
(JSC::argumentRegisterForArgumentCount):
(JSC::argumentRegisterIndexForJSFunctionArgument):
(JSC::jsFunctionArgumentForArgumentRegister):
(JSC::argumentRegisterForFunctionArgument):
(JSC::numberOfRegisterArgumentsFor):
* jit/JIT.cpp:
(JSC::JIT::compileWithoutLinking):
(JSC::JIT::link):
(JSC::JIT::compileCTINativeCall): Deleted.
* jit/JIT.h:
(JSC::JIT::compileNativeCallEntryPoints):
* jit/JITCall.cpp:
(JSC::JIT::compileSetupVarargsFrame):
(JSC::JIT::compileCallEval):
(JSC::JIT::compileCallEvalSlowCase):
(JSC::JIT::compileOpCall):
(JSC::JIT::compileOpCallSlowCase):
* jit/JITCall32_64.cpp:
(JSC::JIT::compileCallEvalSlowCase):
(JSC::JIT::compileOpCall):
(JSC::JIT::compileOpCallSlowCase):
* jit/JITCode.cpp:
(JSC::JITCode::execute):
(JSC::DirectJITCode::DirectJITCode):
(JSC::DirectJITCode::initializeEntryPoints):
(JSC::DirectJITCode::addressForCall):
(JSC::NativeJITCode::addressForCall):
(JSC::DirectJITCode::initializeCodeRef): Deleted.
* jit/JITCode.h:
(JSC::JITCode::executableAddress): Deleted.
* jit/JITEntryPoints.h: Added.
(JSC::JITEntryPoints::JITEntryPoints):
(JSC::JITEntryPoints::entryFor):
(JSC::JITEntryPoints::setEntryFor):
(JSC::JITEntryPoints::offsetOfEntryFor):
(JSC::JITEntryPoints::registerEntryTypeForArgumentCount):
(JSC::JITEntryPoints::registerEntryTypeForArgumentType):
(JSC::JITEntryPoints::clearEntries):
(JSC::JITEntryPoints::operator=):
(JSC::JITEntryPointsWithRef::JITEntryPointsWithRef):
(JSC::JITEntryPointsWithRef::codeRef):
(JSC::argumentsLocationFor):
(JSC::registerEntryPointTypeFor):
(JSC::entryPointTypeFor):
(JSC::thunkEntryPointTypeFor):
(JSC::JITJSCallThunkEntryPointsWithRef::JITJSCallThunkEntryPointsWithRef):
(JSC::JITJSCallThunkEntryPointsWithRef::entryFor):
(JSC::JITJSCallThunkEntryPointsWithRef::setEntryFor):
(JSC::JITJSCallThunkEntryPointsWithRef::offsetOfEntryFor):
(JSC::JITJSCallThunkEntryPointsWithRef::clearEntries):
(JSC::JITJSCallThunkEntryPointsWithRef::codeRef):
(JSC::JITJSCallThunkEntryPointsWithRef::operator=):
* jit/JITOpcodes.cpp:
(JSC::JIT::privateCompileJITEntryNativeCall):
(JSC::JIT::privateCompileCTINativeCall): Deleted.
* jit/JITOpcodes32_64.cpp:
(JSC::JIT::privateCompileJITEntryNativeCall):
(JSC::JIT::privateCompileCTINativeCall): Deleted.
* jit/JITOperations.cpp:
* jit/JITThunks.cpp:
(JSC::JITThunks::jitEntryNativeCall):
(JSC::JITThunks::jitEntryNativeConstruct):
(JSC::JITThunks::jitEntryStub):
(JSC::JITThunks::jitCallThunkEntryStub):
(JSC::JITThunks::hostFunctionStub):
(JSC::JITThunks::ctiNativeCall): Deleted.
(JSC::JITThunks::ctiNativeConstruct): Deleted.
* jit/JITThunks.h:
* jit/JSInterfaceJIT.h:
(JSC::JSInterfaceJIT::emitJumpIfNotInt32):
(JSC::JSInterfaceJIT::emitLoadInt32):
* jit/RegisterSet.cpp:
(JSC::RegisterSet::argumentRegisters):
* jit/RegisterSet.h:
* jit/Repatch.cpp:
(JSC::linkSlowFor):
(JSC::revertCall):
(JSC::unlinkFor):
(JSC::linkVirtualFor):
(JSC::linkPolymorphicCall):
* jit/SpecializedThunkJIT.h:
(JSC::SpecializedThunkJIT::SpecializedThunkJIT):
(JSC::SpecializedThunkJIT::checkJSStringArgument):
(JSC::SpecializedThunkJIT::linkFailureHere):
(JSC::SpecializedThunkJIT::finalize):
* jit/ThunkGenerator.h:
* jit/ThunkGenerators.cpp:
(JSC::createRegisterArgumentsSpillEntry):
(JSC::slowPathFor):
(JSC::linkCallThunkGenerator):
(JSC::linkDirectCallThunkGenerator):
(JSC::linkPolymorphicCallThunkGenerator):
(JSC::virtualThunkFor):
(JSC::nativeForGenerator):
(JSC::nativeCallGenerator):
(JSC::nativeTailCallGenerator):
(JSC::nativeTailCallWithoutSavedTagsGenerator):
(JSC::nativeConstructGenerator):
(JSC::stringCharLoadRegCall):
(JSC::charCodeAtThunkGenerator):
(JSC::charAtThunkGenerator):
(JSC::fromCharCodeThunkGenerator):
(JSC::clz32ThunkGenerator):
(JSC::sqrtThunkGenerator):
(JSC::floorThunkGenerator):
(JSC::ceilThunkGenerator):
(JSC::truncThunkGenerator):
(JSC::roundThunkGenerator):
(JSC::expThunkGenerator):
(JSC::logThunkGenerator):
(JSC::absThunkGenerator):
(JSC::imulThunkGenerator):
(JSC::randomThunkGenerator):
(JSC::boundThisNoArgsFunctionCallGenerator):
* jit/ThunkGenerators.h:
* jsc.cpp:
(jscmain):
* llint/LLIntEntrypoint.cpp:
(JSC::LLInt::setFunctionEntrypoint):
(JSC::LLInt::setEvalEntrypoint):
(JSC::LLInt::setProgramEntrypoint):
(JSC::LLInt::setModuleProgramEntrypoint):
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::entryOSR):
(JSC::LLInt::setUpCall):
* llint/LLIntThunks.cpp:
(JSC::LLInt::generateThunkWithJumpTo):
(JSC::LLInt::functionForRegisterCallEntryThunkGenerator):
(JSC::LLInt::functionForStackCallEntryThunkGenerator):
(JSC::LLInt::functionForRegisterConstructEntryThunkGenerator):
(JSC::LLInt::functionForStackConstructEntryThunkGenerator):
(JSC::LLInt::functionForRegisterCallArityCheckThunkGenerator):
(JSC::LLInt::functionForStackCallArityCheckThunkGenerator):
(JSC::LLInt::functionForRegisterConstructArityCheckThunkGenerator):
(JSC::LLInt::functionForStackConstructArityCheckThunkGenerator):
(JSC::LLInt::functionForCallEntryThunkGenerator): Deleted.
(JSC::LLInt::functionForConstructEntryThunkGenerator): Deleted.
(JSC::LLInt::functionForCallArityCheckThunkGenerator): Deleted.
(JSC::LLInt::functionForConstructArityCheckThunkGenerator): Deleted.
* llint/LLIntThunks.h:
* runtime/ArityCheckMode.h:
* runtime/ExecutableBase.cpp:
(JSC::ExecutableBase::clearCode):
* runtime/ExecutableBase.h:
(JSC::ExecutableBase::entrypointFor):
(JSC::ExecutableBase::offsetOfEntryFor):
(JSC::ExecutableBase::offsetOfJITCodeWithArityCheckFor): Deleted.
* runtime/JSBoundFunction.cpp:
(JSC::boundThisNoArgsFunctionCall):
* runtime/NativeExecutable.cpp:
(JSC::NativeExecutable::finishCreation):
* runtime/ScriptExecutable.cpp:
(JSC::ScriptExecutable::installCode):
* runtime/VM.cpp:
(JSC::VM::VM):
(JSC::thunkGeneratorForIntrinsic):
(JSC::VM::clearCounters):
(JSC::VM::dumpCounters):
* runtime/VM.h:
(JSC::VM::getJITEntryStub):
(JSC::VM::getJITCallThunkEntryStub):
(JSC::VM::addressOfCounter):
(JSC::VM::counterFor):
* wasm/WasmBinding.cpp:
(JSC::Wasm::importStubGenerator):

Source/WTF:

Added a new build option ENABLE_VM_COUNTERS to enable JIT'able counters.
The default is for the option to be off.

* wtf/Platform.h:
Added ENABLE_VM_COUNTERS
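A sketch of how the option would read in wtf/Platform.h (exact placement and
guard are assumed):

    #if !defined(ENABLE_VM_COUNTERS)
    #define ENABLE_VM_COUNTERS 0 /* off by default; VM counters compiled out */
    #endif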

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@209653 268f45cc-cd09-0410-ab3c-d52691b4dbfc

117 files changed:
JSTests/ChangeLog
JSTests/microbenchmarks/calling-computed-args.js [new file with mode: 0644]
JSTests/microbenchmarks/calling-many-callees.js [new file with mode: 0644]
JSTests/microbenchmarks/calling-one-callee-fixed.js [new file with mode: 0644]
JSTests/microbenchmarks/calling-one-callee.js [new file with mode: 0644]
JSTests/microbenchmarks/calling-poly-callees.js [new file with mode: 0644]
JSTests/microbenchmarks/calling-poly-extra-arity-callees.js [new file with mode: 0644]
JSTests/microbenchmarks/calling-tailcall.js [new file with mode: 0644]
JSTests/microbenchmarks/calling-virtual-arity-fixup-callees.js [new file with mode: 0644]
JSTests/microbenchmarks/calling-virtual-arity-fixup-stackargs.js [new file with mode: 0644]
JSTests/microbenchmarks/calling-virtual-callees.js [new file with mode: 0644]
JSTests/microbenchmarks/calling-virtual-extra-arity-callees.js [new file with mode: 0644]
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/b3/B3ArgumentRegValue.h
Source/JavaScriptCore/b3/B3Validate.cpp
Source/JavaScriptCore/bytecode/CallLinkInfo.cpp
Source/JavaScriptCore/bytecode/CallLinkInfo.h
Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp
Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
Source/JavaScriptCore/dfg/DFGCPSRethreadingPhase.cpp
Source/JavaScriptCore/dfg/DFGClobberize.h
Source/JavaScriptCore/dfg/DFGCommon.h
Source/JavaScriptCore/dfg/DFGDCEPhase.cpp
Source/JavaScriptCore/dfg/DFGDoesGC.cpp
Source/JavaScriptCore/dfg/DFGDriver.cpp
Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
Source/JavaScriptCore/dfg/DFGGenerationInfo.h
Source/JavaScriptCore/dfg/DFGGraph.cpp
Source/JavaScriptCore/dfg/DFGGraph.h
Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.cpp
Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
Source/JavaScriptCore/dfg/DFGJITCompiler.h
Source/JavaScriptCore/dfg/DFGJITFinalizer.cpp
Source/JavaScriptCore/dfg/DFGJITFinalizer.h
Source/JavaScriptCore/dfg/DFGLiveCatchVariablePreservationPhase.cpp
Source/JavaScriptCore/dfg/DFGMaximalFlushInsertionPhase.cpp
Source/JavaScriptCore/dfg/DFGMayExit.cpp
Source/JavaScriptCore/dfg/DFGMinifiedNode.cpp
Source/JavaScriptCore/dfg/DFGMinifiedNode.h
Source/JavaScriptCore/dfg/DFGNode.cpp
Source/JavaScriptCore/dfg/DFGNode.h
Source/JavaScriptCore/dfg/DFGNodeType.h
Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
Source/JavaScriptCore/dfg/DFGOSREntrypointCreationPhase.cpp
Source/JavaScriptCore/dfg/DFGPlan.cpp
Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
Source/JavaScriptCore/dfg/DFGPredictionInjectionPhase.cpp
Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
Source/JavaScriptCore/dfg/DFGPutStackSinkingPhase.cpp
Source/JavaScriptCore/dfg/DFGRegisterBank.h
Source/JavaScriptCore/dfg/DFGSSAConversionPhase.cpp
Source/JavaScriptCore/dfg/DFGSafeToExecute.h
Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
Source/JavaScriptCore/dfg/DFGThunks.cpp
Source/JavaScriptCore/dfg/DFGVariableEventStream.cpp
Source/JavaScriptCore/dfg/DFGVirtualRegisterAllocationPhase.cpp
Source/JavaScriptCore/ftl/FTLCapabilities.cpp
Source/JavaScriptCore/ftl/FTLJITCode.cpp
Source/JavaScriptCore/ftl/FTLJITCode.h
Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp
Source/JavaScriptCore/ftl/FTLLink.cpp
Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
Source/JavaScriptCore/ftl/FTLOSREntry.cpp
Source/JavaScriptCore/ftl/FTLOutput.cpp
Source/JavaScriptCore/ftl/FTLOutput.h
Source/JavaScriptCore/interpreter/ShadowChicken.cpp
Source/JavaScriptCore/jit/AssemblyHelpers.cpp
Source/JavaScriptCore/jit/AssemblyHelpers.h
Source/JavaScriptCore/jit/CachedRecovery.cpp
Source/JavaScriptCore/jit/CachedRecovery.h
Source/JavaScriptCore/jit/CallFrameShuffleData.h
Source/JavaScriptCore/jit/CallFrameShuffler.cpp
Source/JavaScriptCore/jit/CallFrameShuffler.h
Source/JavaScriptCore/jit/CallFrameShuffler64.cpp
Source/JavaScriptCore/jit/GPRInfo.h
Source/JavaScriptCore/jit/JIT.cpp
Source/JavaScriptCore/jit/JIT.h
Source/JavaScriptCore/jit/JITCall.cpp
Source/JavaScriptCore/jit/JITCall32_64.cpp
Source/JavaScriptCore/jit/JITCode.cpp
Source/JavaScriptCore/jit/JITCode.h
Source/JavaScriptCore/jit/JITEntryPoints.h [new file with mode: 0644]
Source/JavaScriptCore/jit/JITOpcodes.cpp
Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
Source/JavaScriptCore/jit/JITOperations.cpp
Source/JavaScriptCore/jit/JITThunks.cpp
Source/JavaScriptCore/jit/JITThunks.h
Source/JavaScriptCore/jit/JSInterfaceJIT.h
Source/JavaScriptCore/jit/RegisterSet.cpp
Source/JavaScriptCore/jit/RegisterSet.h
Source/JavaScriptCore/jit/Repatch.cpp
Source/JavaScriptCore/jit/SpecializedThunkJIT.h
Source/JavaScriptCore/jit/ThunkGenerator.h
Source/JavaScriptCore/jit/ThunkGenerators.cpp
Source/JavaScriptCore/jit/ThunkGenerators.h
Source/JavaScriptCore/jsc.cpp
Source/JavaScriptCore/llint/LLIntEntrypoint.cpp
Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
Source/JavaScriptCore/llint/LLIntThunks.cpp
Source/JavaScriptCore/llint/LLIntThunks.h
Source/JavaScriptCore/runtime/ArityCheckMode.h
Source/JavaScriptCore/runtime/ExecutableBase.cpp
Source/JavaScriptCore/runtime/ExecutableBase.h
Source/JavaScriptCore/runtime/JSBoundFunction.cpp
Source/JavaScriptCore/runtime/NativeExecutable.cpp
Source/JavaScriptCore/runtime/ScriptExecutable.cpp
Source/JavaScriptCore/runtime/VM.cpp
Source/JavaScriptCore/runtime/VM.h
Source/JavaScriptCore/wasm/WasmBinding.cpp
Source/WTF/ChangeLog
Source/WTF/wtf/Platform.h

index ef2ddd8..dfbec59 100644 (file)
@@ -1,3 +1,24 @@
+2016-12-09  Michael Saboff  <msaboff@apple.com>
+
+        JSVALUE64: Pass arguments in platform argument registers when making JavaScript calls
+        https://bugs.webkit.org/show_bug.cgi?id=160355
+
+        Reviewed by Filip Pizlo.
+
+        New microbenchmarks to measure call type performance.
+
+        * microbenchmarks/calling-computed-args.js: Added.
+        * microbenchmarks/calling-many-callees.js: Added.
+        * microbenchmarks/calling-one-callee-fixed.js: Added.
+        * microbenchmarks/calling-one-callee.js: Added.
+        * microbenchmarks/calling-poly-callees.js: Added.
+        * microbenchmarks/calling-poly-extra-arity-callees.js: Added.
+        * microbenchmarks/calling-tailcall.js: Added.
+        * microbenchmarks/calling-virtual-arity-fixup-callees.js: Added.
+        * microbenchmarks/calling-virtual-arity-fixup-stackargs.js: Added.
+        * microbenchmarks/calling-virtual-callees.js: Added.
+        * microbenchmarks/calling-virtual-extra-arity-callees.js: Added.
+
 2016-12-09  Keith Miller  <keith_miller@apple.com>
 
         Wasm should support call_indirect
diff --git a/JSTests/microbenchmarks/calling-computed-args.js b/JSTests/microbenchmarks/calling-computed-args.js
new file mode 100644 (file)
index 0000000..afc994e
--- /dev/null
@@ -0,0 +1,53 @@
+function sum2(a, b)
+{
+    return a + b;
+}
+
+noInline(sum2);
+
+function sum2b(a, b)
+{
+    return a + b;
+}
+
+noInline(sum2b);
+
+function sum3(a, b, c)
+{
+    return a + b + c;
+}
+
+noInline(sum3);
+
+function sum3b(a, b, c)
+{
+    return a + b + c;
+}
+
+noInline(sum3b);
+
+function test()
+{
+    let o1 = {
+        one: 1,
+        two: 2
+    }
+    let o2 = {
+        three: 3,
+        five: 5
+    };
+    let o3 = {
+        four: 4,
+        six: 6
+    };
+    let result = 0;
+    for (let i = 0; i < 2000000; i++)
+        result = sum2(o1.one, o2.five) + sum2b(o1.two, o1.one + o2.five)
+            + sum3(o2.three, o3.four, o2.five) + sum3b(o1.two, o2.three + o2.five, o3.six);
+
+    return result;
+}
+
+let result = test();
+if (result != 42)
+    throw "Unexpected result: " + result;
diff --git a/JSTests/microbenchmarks/calling-many-callees.js b/JSTests/microbenchmarks/calling-many-callees.js
new file mode 100644 (file)
index 0000000..4df6dd3
--- /dev/null
@@ -0,0 +1,40 @@
+function sum2(a, b)
+{
+    return a + b;
+}
+
+noInline(sum2);
+
+function sum2b(a, b)
+{
+    return a + b + b;
+}
+
+noInline(sum2b);
+
+function sum3(a, b, c)
+{
+    return a + b + c;
+}
+
+noInline(sum3);
+
+function sum3b(a, b, c)
+{
+    return a + b + b + c;
+}
+
+noInline(sum3b);
+
+function test()
+{
+    let result = 0;
+    for (let i = 0; i < 2000000; i++)
+        result = sum2(1, 2) + sum2b(2, 3) + sum3(5, 5, 5) + sum3b(2, 4, 6);
+
+    return result;
+}
+
+let result = test();
+if (result != 42)
+    throw "Unexpected result: " + result;
diff --git a/JSTests/microbenchmarks/calling-one-callee-fixed.js b/JSTests/microbenchmarks/calling-one-callee-fixed.js
new file mode 100644 (file)
index 0000000..e0721b1
--- /dev/null
@@ -0,0 +1,19 @@
+function sum(a, b, c)
+{
+    return a + b + c;
+}
+
+noInline(sum);
+
+function test()
+{
+    let result = 0;
+    for (let i = 0; i < 4000000; i++)
+        result = sum(1, 2, 3);
+
+    return result;
+}
+
+let result = test();
+if (result != 6)
+    throw "Unexpected result: " + result;
diff --git a/JSTests/microbenchmarks/calling-one-callee.js b/JSTests/microbenchmarks/calling-one-callee.js
new file mode 100644 (file)
index 0000000..a979787
--- /dev/null
@@ -0,0 +1,19 @@
+function sum(a, b, c)
+{
+    return a + b + c;
+}
+
+noInline(sum);
+
+function test(a, b, c)
+{
+    let result = 0;
+    for (let i = 0; i < 4000000; i++)
+        result = sum(a, b, c);
+
+    return result;
+}
+
+let result = test(1, 2, 3);
+if (result != 6)
+    throw "Unexpected result: " + result;
diff --git a/JSTests/microbenchmarks/calling-poly-callees.js b/JSTests/microbenchmarks/calling-poly-callees.js
new file mode 100644 (file)
index 0000000..9aadf39
--- /dev/null
@@ -0,0 +1,35 @@
+function sum1(a, b, c)
+{
+    return a + b + c;
+}
+
+noInline(sum1);
+
+function sum2(a, b, c)
+{
+    return b + a + c;
+}
+
+noInline(sum2);
+
+function sum3(a, b, c)
+{
+    return c + a + b;
+}
+
+noInline(sum3);
+
+let functions = [ sum1, sum2, sum3 ];
+
+function test(a, b, c)
+{
+    let result = 0;
+    for (let i = 0; i < 4000000; i++)
+        result = functions[i % 3](a, b, c);
+
+    return result;
+}
+
+let result = test(2, 10, 30);
+if (result != 42)
+    throw "Unexpected result: " + result;
diff --git a/JSTests/microbenchmarks/calling-poly-extra-arity-callees.js b/JSTests/microbenchmarks/calling-poly-extra-arity-callees.js
new file mode 100644 (file)
index 0000000..cb68179
--- /dev/null
@@ -0,0 +1,35 @@
+function sum1(a, b)
+{
+    return a + b;
+}
+
+noInline(sum1);
+
+function sum2(a, b)
+{
+    return b + a;
+}
+
+noInline(sum2);
+
+function sum3(a, b)
+{
+    return a + b;
+}
+
+noInline(sum3);
+
+let functions = [ sum1, sum2, sum3 ];
+
+function test(a, b, c)
+{
+    let result = 0;
+    for (let i = 0; i < 4000000; i++)
+        result = functions[i % 3](a, b, c);
+
+    return result;
+}
+
+let result = test(2, 40, "Test");
+if (result != 42)
+    throw "Unexpected result: " + result;
diff --git a/JSTests/microbenchmarks/calling-tailcall.js b/JSTests/microbenchmarks/calling-tailcall.js
new file mode 100644 (file)
index 0000000..829af85
--- /dev/null
@@ -0,0 +1,28 @@
+"use strict";
+
+function sum(a, b, c)
+{
+    return a + b + c;
+}
+
+noInline(sum);
+
+function tailCaller(a, b, c)
+{
+    return sum(b, a, c);
+}
+
+noInline(tailCaller);
+
+function test(a, b, c)
+{
+    let result = 0;
+    for (let i = 0; i < 4000000; i++)
+        result = tailCaller(a, b, c);
+
+    return result;
+}
+
+let result = test(1, 2, 3);
+if (result != 6)
+    throw "Unexpected result: " + result;
diff --git a/JSTests/microbenchmarks/calling-virtual-arity-fixup-callees.js b/JSTests/microbenchmarks/calling-virtual-arity-fixup-callees.js
new file mode 100644 (file)
index 0000000..8e71942
--- /dev/null
@@ -0,0 +1,56 @@
+function sum1(a, b, c)
+{
+    return a + b + (c | 0);
+}
+
+noInline(sum1);
+
+function sum2(a, b, c)
+{
+    return b + a + (c | 0);
+}
+
+noInline(sum2);
+
+function sum3(a, b, c)
+{
+    return (c | 0) + a + b;
+}
+
+noInline(sum3);
+
+function sum4(a, b, c)
+{
+    return (c | 0) + a + b;
+}
+
+noInline(sum4);
+
+function sum5(a, b, c)
+{
+    return (c | 0) + a + b;
+}
+
+noInline(sum5);
+
+function sum6(a, b, c)
+{
+    return (c | 0) + a + b;
+}
+
+noInline(sum6);
+
+let functions = [ sum1, sum2, sum3, sum4, sum5, sum6 ];
+
+function test(a, b)
+{
+    let result = 0;
+    for (let i = 0; i < 4000000; i++)
+        result = functions[i % 6](a, b);
+
+    return result;
+}
+
+let result = test(2, 40);
+if (result != 42)
+    throw "Unexpected result: " + result;
diff --git a/JSTests/microbenchmarks/calling-virtual-arity-fixup-stackargs.js b/JSTests/microbenchmarks/calling-virtual-arity-fixup-stackargs.js
new file mode 100644 (file)
index 0000000..f492068
--- /dev/null
@@ -0,0 +1,56 @@
+function sum1(a, b, c, d)
+{
+    return a + b + c + (d | 0);
+}
+
+noInline(sum1);
+
+function sum2(a, b, c, d)
+{
+    return b + a + c + (d | 0);
+}
+
+noInline(sum2);
+
+function sum3(a, b, c, d)
+{
+    return (d | 0) + a + b + c;
+}
+
+noInline(sum3);
+
+function sum4(a, b, c, d)
+{
+    return (d | 0) + a + b + c;
+}
+
+noInline(sum4);
+
+function sum5(a, b, c, d)
+{
+    return (d | 0) + a + b + c;
+}
+
+noInline(sum5);
+
+function sum6(a, b, c, d)
+{
+    return (d | 0) + a + b + c;
+}
+
+noInline(sum6);
+
+let functions = [ sum1, sum2, sum3, sum4, sum5, sum6 ];
+
+function test(a, b, c)
+{
+    let result = 0;
+    for (let i = 0; i < 4000000; i++)
+        result = functions[i % 6](a, b, c);
+
+    return result;
+}
+
+let result = test(2, 10, 30);
+if (result != 42)
+    throw "Unexpected result: " + result;
diff --git a/JSTests/microbenchmarks/calling-virtual-callees.js b/JSTests/microbenchmarks/calling-virtual-callees.js
new file mode 100644 (file)
index 0000000..b6ba61b
--- /dev/null
@@ -0,0 +1,56 @@
+function sum1(a, b, c)
+{
+    return a + b + c;
+}
+
+noInline(sum1);
+
+function sum2(a, b, c)
+{
+    return b + a + c;
+}
+
+noInline(sum2);
+
+function sum3(a, b, c)
+{
+    return c + a + b;
+}
+
+noInline(sum3);
+
+function sum4(a, b, c)
+{
+    return c + a + b;
+}
+
+noInline(sum4);
+
+function sum5(a, b, c)
+{
+    return c + a + b;
+}
+
+noInline(sum5);
+
+function sum6(a, b, c)
+{
+    return c + a + b;
+}
+
+noInline(sum6);
+
+let functions = [ sum1, sum2, sum3, sum4, sum5, sum6 ];
+
+function test(a, b, c)
+{
+    let result = 0;
+    for (let i = 0; i < 4000000; i++)
+        result = functions[i % 6](a, b, c);
+
+    return result;
+}
+
+let result = test(2, 10, 30);
+if (result != 42)
+    throw "Unexpected result: " + result;
diff --git a/JSTests/microbenchmarks/calling-virtual-extra-arity-callees.js b/JSTests/microbenchmarks/calling-virtual-extra-arity-callees.js
new file mode 100644 (file)
index 0000000..f107ec6
--- /dev/null
@@ -0,0 +1,56 @@
+function sum1(a, b)
+{
+    return a + b;
+}
+
+noInline(sum1);
+
+function sum2(a, b)
+{
+    return b + a;
+}
+
+noInline(sum2);
+
+function sum3(a, b)
+{
+    return a + b;
+}
+
+noInline(sum3);
+
+function sum4(a, b)
+{
+    return a + b;
+}
+
+noInline(sum4);
+
+function sum5(a, b)
+{
+    return a + b;
+}
+
+noInline(sum5);
+
+function sum6(a, b)
+{
+    return a + b;
+}
+
+noInline(sum6);
+
+let functions = [ sum1, sum2, sum3, sum4, sum5, sum6 ];
+
+function test(a, b, c)
+{
+    let result = 0;
+    for (let i = 0; i < 4000000; i++)
+        result = functions[i % 6](a, b, c);
+
+    return result;
+}
+
+let result = test(40, 2, "Test");
+if (result != 42)
+    throw "Unexpected result: " + result;
index 8850105..d34820a 100644 (file)
@@ -1,3 +1,399 @@
+2016-12-09  Michael Saboff  <msaboff@apple.com>
+
+        JSVALUE64: Pass arguments in platform argument registers when making JavaScript calls
+        https://bugs.webkit.org/show_bug.cgi?id=160355
+
+        Reviewed by Filip Pizlo.
+
+        This patch implements passing JavaScript function arguments in registers for 64 bit platforms.
+
+        The implemented convention follows the ABI conventions for the associated platform.
+        The first two arguments are the callee and the argument count; the rest of the argument registers
+        contain "this" and the following arguments until all platform argument registers are exhausted.
+        Arguments beyond what fit in registers are placed on the stack in the same locations as
+        before this patch.
+
+        For X86-64 non-Windows platforms, there are 6 argument registers specified in the related ABI.
+        ARM64 has 8 argument registers.  This allows for 4 or 6 parameter values to be placed in
+        registers on these respective platforms.  This patch doesn't implement passing arguments in
+        registers for 32 bit platforms, since most platforms have at most 4 argument registers
+        specified and 32 bit platforms use two 32 bit registers/memory locations to store one JSValue.
+
+        The call frame on the stack is unchanged in format, and the arguments that are passed in
+        registers use the corresponding call frame location as a spill location. Arguments can
+        also be passed on the stack. The LLInt, baseline JIT'ed code, as well as the initial entry
+        from C++ code, pass arguments on the stack. DFG and FTL generated code pass arguments
+        via registers. All callees can accept arguments either in registers or on the stack.
+        The callee is responsible for moving arguments to its preferred location.
+
+        The multiple entry points to JavaScript code are now handled via the JITEntryPoints class and
+        related code.  That class now has entries for StackArgsArityCheckNotRequired,
+        StackArgsMustCheckArity and, for platforms that support register arguments,
+        RegisterArgsArityCheckNotRequired and RegisterArgsMustCheckArity, as well as an additional
+        RegisterArgsPossibleExtraArgs entry point used when extra register arguments are passed.
+        This last case is needed to spill those extra arguments to the corresponding call frame
+        slots.
+
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * b3/B3ArgumentRegValue.h:
+        * b3/B3Validate.cpp:
+        * bytecode/CallLinkInfo.cpp:
+        (JSC::CallLinkInfo::CallLinkInfo):
+        * bytecode/CallLinkInfo.h:
+        (JSC::CallLinkInfo::setUpCall):
+        (JSC::CallLinkInfo::argumentsLocation):
+        (JSC::CallLinkInfo::argumentsInRegisters):
+        * bytecode/PolymorphicAccess.cpp:
+        (JSC::AccessCase::generateImpl):
+        * dfg/DFGAbstractInterpreterInlines.h:
+        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
+        * dfg/DFGByteCodeParser.cpp:
+        (JSC::DFG::ByteCodeParser::parseBlock):
+        * dfg/DFGCPSRethreadingPhase.cpp:
+        (JSC::DFG::CPSRethreadingPhase::canonicalizeLocalsInBlock):
+        (JSC::DFG::CPSRethreadingPhase::specialCaseArguments):
+        (JSC::DFG::CPSRethreadingPhase::computeIsFlushed):
+        * dfg/DFGClobberize.h:
+        (JSC::DFG::clobberize):
+        * dfg/DFGCommon.h:
+        * dfg/DFGDCEPhase.cpp:
+        (JSC::DFG::DCEPhase::run):
+        * dfg/DFGDoesGC.cpp:
+        (JSC::DFG::doesGC):
+        * dfg/DFGDriver.cpp:
+        (JSC::DFG::compileImpl):
+        * dfg/DFGFixupPhase.cpp:
+        (JSC::DFG::FixupPhase::fixupNode):
+        * dfg/DFGGenerationInfo.h:
+        (JSC::DFG::GenerationInfo::initArgumentRegisterValue):
+        * dfg/DFGGraph.cpp:
+        (JSC::DFG::Graph::dump):
+        (JSC::DFG::Graph::methodOfGettingAValueProfileFor):
+        * dfg/DFGGraph.h:
+        (JSC::DFG::Graph::needsFlushedThis):
+        (JSC::DFG::Graph::addImmediateShouldSpeculateInt32):
+        * dfg/DFGInPlaceAbstractState.cpp:
+        (JSC::DFG::InPlaceAbstractState::initialize):
+        * dfg/DFGJITCompiler.cpp:
+        (JSC::DFG::JITCompiler::link):
+        (JSC::DFG::JITCompiler::compile):
+        (JSC::DFG::JITCompiler::compileFunction):
+        (JSC::DFG::JITCompiler::compileEntry): Deleted.
+        * dfg/DFGJITCompiler.h:
+        (JSC::DFG::JITCompiler::addJSDirectCall):
+        (JSC::DFG::JITCompiler::JSDirectCallRecord::JSDirectCallRecord):
+        (JSC::DFG::JITCompiler::JSDirectCallRecord::hasSlowCall):
+        * dfg/DFGJITFinalizer.cpp:
+        (JSC::DFG::JITFinalizer::JITFinalizer):
+        (JSC::DFG::JITFinalizer::finalize):
+        (JSC::DFG::JITFinalizer::finalizeFunction):
+        * dfg/DFGJITFinalizer.h:
+        * dfg/DFGLiveCatchVariablePreservationPhase.cpp:
+        (JSC::DFG::LiveCatchVariablePreservationPhase::handleBlock):
+        * dfg/DFGMaximalFlushInsertionPhase.cpp:
+        (JSC::DFG::MaximalFlushInsertionPhase::treatRegularBlock):
+        (JSC::DFG::MaximalFlushInsertionPhase::treatRootBlock):
+        * dfg/DFGMayExit.cpp:
+        * dfg/DFGMinifiedNode.cpp:
+        (JSC::DFG::MinifiedNode::fromNode):
+        * dfg/DFGMinifiedNode.h:
+        (JSC::DFG::belongsInMinifiedGraph):
+        * dfg/DFGNode.cpp:
+        (JSC::DFG::Node::hasVariableAccessData):
+        * dfg/DFGNode.h:
+        (JSC::DFG::Node::accessesStack):
+        (JSC::DFG::Node::setVariableAccessData):
+        (JSC::DFG::Node::hasArgumentRegisterIndex):
+        (JSC::DFG::Node::argumentRegisterIndex):
+        * dfg/DFGNodeType.h:
+        * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
+        (JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
+        * dfg/DFGOSREntrypointCreationPhase.cpp:
+        (JSC::DFG::OSREntrypointCreationPhase::run):
+        * dfg/DFGPlan.cpp:
+        (JSC::DFG::Plan::compileInThreadImpl):
+        * dfg/DFGPreciseLocalClobberize.h:
+        (JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
+        * dfg/DFGPredictionInjectionPhase.cpp:
+        (JSC::DFG::PredictionInjectionPhase::run):
+        * dfg/DFGPredictionPropagationPhase.cpp:
+        * dfg/DFGPutStackSinkingPhase.cpp:
+        * dfg/DFGRegisterBank.h:
+        (JSC::DFG::RegisterBank::iterator::unlock):
+        (JSC::DFG::RegisterBank::unlockAtIndex):
+        * dfg/DFGSSAConversionPhase.cpp:
+        (JSC::DFG::SSAConversionPhase::run):
+        * dfg/DFGSafeToExecute.h:
+        (JSC::DFG::safeToExecute):
+        * dfg/DFGSpeculativeJIT.cpp:
+        (JSC::DFG::SpeculativeJIT::SpeculativeJIT):
+        (JSC::DFG::SpeculativeJIT::clearGenerationInfo):
+        (JSC::DFG::dumpRegisterInfo):
+        (JSC::DFG::SpeculativeJIT::dump):
+        (JSC::DFG::SpeculativeJIT::compileCurrentBlock):
+        (JSC::DFG::SpeculativeJIT::checkArgumentTypes):
+        (JSC::DFG::SpeculativeJIT::setupArgumentRegistersForEntry):
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGSpeculativeJIT.h:
+        (JSC::DFG::SpeculativeJIT::allocate):
+        (JSC::DFG::SpeculativeJIT::spill):
+        (JSC::DFG::SpeculativeJIT::generationInfoFromVirtualRegister):
+        (JSC::DFG::JSValueOperand::JSValueOperand):
+        (JSC::DFG::JSValueOperand::gprUseSpecific):
+        * dfg/DFGSpeculativeJIT32_64.cpp:
+        (JSC::DFG::SpeculativeJIT::emitCall):
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGSpeculativeJIT64.cpp:
+        (JSC::DFG::SpeculativeJIT::fillJSValue):
+        (JSC::DFG::SpeculativeJIT::emitCall):
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGStrengthReductionPhase.cpp:
+        (JSC::DFG::StrengthReductionPhase::handleNode):
+        * dfg/DFGThunks.cpp:
+        (JSC::DFG::osrEntryThunkGenerator):
+        * dfg/DFGVariableEventStream.cpp:
+        (JSC::DFG::VariableEventStream::reconstruct):
+        * dfg/DFGVirtualRegisterAllocationPhase.cpp:
+        (JSC::DFG::VirtualRegisterAllocationPhase::allocateRegister):
+        (JSC::DFG::VirtualRegisterAllocationPhase::run):
+        * ftl/FTLCapabilities.cpp:
+        (JSC::FTL::canCompile):
+        * ftl/FTLJITCode.cpp:
+        (JSC::FTL::JITCode::~JITCode):
+        (JSC::FTL::JITCode::initializeEntrypointThunk):
+        (JSC::FTL::JITCode::setEntryFor):
+        (JSC::FTL::JITCode::addressForCall):
+        (JSC::FTL::JITCode::executableAddressAtOffset):
+        (JSC::FTL::JITCode::initializeAddressForCall): Deleted.
+        (JSC::FTL::JITCode::initializeArityCheckEntrypoint): Deleted.
+        * ftl/FTLJITCode.h:
+        * ftl/FTLJITFinalizer.cpp:
+        (JSC::FTL::JITFinalizer::finalizeFunction):
+        * ftl/FTLLink.cpp:
+        (JSC::FTL::link):
+        * ftl/FTLLowerDFGToB3.cpp:
+        (JSC::FTL::DFG::LowerDFGToB3::lower):
+        (JSC::FTL::DFG::LowerDFGToB3::compileNode):
+        (JSC::FTL::DFG::LowerDFGToB3::compileGetArgumentRegister):
+        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstruct):
+        (JSC::FTL::DFG::LowerDFGToB3::compileDirectCallOrConstruct):
+        (JSC::FTL::DFG::LowerDFGToB3::compileTailCall):
+        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargsSpread):
+        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargs):
+        (JSC::FTL::DFG::LowerDFGToB3::compileCallEval):
+        * ftl/FTLOSREntry.cpp:
+        (JSC::FTL::prepareOSREntry):
+        * ftl/FTLOutput.cpp:
+        (JSC::FTL::Output::argumentRegister):
+        (JSC::FTL::Output::argumentRegisterInt32):
+        * ftl/FTLOutput.h:
+        * interpreter/ShadowChicken.cpp:
+        (JSC::ShadowChicken::update):
+        * jit/AssemblyHelpers.cpp:
+        (JSC::AssemblyHelpers::emitDumbVirtualCall):
+        * jit/AssemblyHelpers.h:
+        (JSC::AssemblyHelpers::spillArgumentRegistersToFrameBeforePrologue):
+        (JSC::AssemblyHelpers::spillArgumentRegistersToFrame):
+        (JSC::AssemblyHelpers::fillArgumentRegistersFromFrameBeforePrologue):
+        (JSC::AssemblyHelpers::emitPutArgumentToCallFrameBeforePrologue):
+        (JSC::AssemblyHelpers::emitPutArgumentToCallFrame):
+        (JSC::AssemblyHelpers::emitGetFromCallFrameHeaderBeforePrologue):
+        (JSC::AssemblyHelpers::emitGetFromCallFrameArgumentBeforePrologue):
+        (JSC::AssemblyHelpers::emitGetPayloadFromCallFrameHeaderBeforePrologue):
+        (JSC::AssemblyHelpers::incrementCounter):
+        * jit/CachedRecovery.cpp:
+        (JSC::CachedRecovery::addTargetJSValueRegs):
+        * jit/CachedRecovery.h:
+        (JSC::CachedRecovery::gprTargets):
+        (JSC::CachedRecovery::setWantedFPR):
+        (JSC::CachedRecovery::wantedJSValueRegs):
+        (JSC::CachedRecovery::setWantedJSValueRegs): Deleted.
+        * jit/CallFrameShuffleData.h:
+        * jit/CallFrameShuffler.cpp:
+        (JSC::CallFrameShuffler::CallFrameShuffler):
+        (JSC::CallFrameShuffler::dump):
+        (JSC::CallFrameShuffler::tryWrites):
+        (JSC::CallFrameShuffler::prepareAny):
+        * jit/CallFrameShuffler.h:
+        (JSC::CallFrameShuffler::snapshot):
+        (JSC::CallFrameShuffler::addNew):
+        (JSC::CallFrameShuffler::initDangerFrontier):
+        (JSC::CallFrameShuffler::updateDangerFrontier):
+        (JSC::CallFrameShuffler::findDangerFrontierFrom):
+        * jit/CallFrameShuffler64.cpp:
+        (JSC::CallFrameShuffler::emitDisplace):
+        * jit/GPRInfo.h:
+        (JSC::JSValueRegs::operator==):
+        (JSC::JSValueRegs::operator!=):
+        (JSC::GPRInfo::toArgumentIndex):
+        (JSC::argumentRegisterFor):
+        (JSC::argumentRegisterForCallee):
+        (JSC::argumentRegisterForArgumentCount):
+        (JSC::argumentRegisterIndexForJSFunctionArgument):
+        (JSC::jsFunctionArgumentForArgumentRegister):
+        (JSC::argumentRegisterForFunctionArgument):
+        (JSC::numberOfRegisterArgumentsFor):
+        * jit/JIT.cpp:
+        (JSC::JIT::compileWithoutLinking):
+        (JSC::JIT::link):
+        (JSC::JIT::compileCTINativeCall): Deleted.
+        * jit/JIT.h:
+        (JSC::JIT::compileNativeCallEntryPoints):
+        * jit/JITCall.cpp:
+        (JSC::JIT::compileSetupVarargsFrame):
+        (JSC::JIT::compileCallEval):
+        (JSC::JIT::compileCallEvalSlowCase):
+        (JSC::JIT::compileOpCall):
+        (JSC::JIT::compileOpCallSlowCase):
+        * jit/JITCall32_64.cpp:
+        (JSC::JIT::compileCallEvalSlowCase):
+        (JSC::JIT::compileOpCall):
+        (JSC::JIT::compileOpCallSlowCase):
+        * jit/JITCode.cpp:
+        (JSC::JITCode::execute):
+        (JSC::DirectJITCode::DirectJITCode):
+        (JSC::DirectJITCode::initializeEntryPoints):
+        (JSC::DirectJITCode::addressForCall):
+        (JSC::NativeJITCode::addressForCall):
+        (JSC::DirectJITCode::initializeCodeRef): Deleted.
+        * jit/JITCode.h:
+        (JSC::JITCode::executableAddress): Deleted.
+        * jit/JITEntryPoints.h: Added.
+        (JSC::JITEntryPoints::JITEntryPoints):
+        (JSC::JITEntryPoints::entryFor):
+        (JSC::JITEntryPoints::setEntryFor):
+        (JSC::JITEntryPoints::offsetOfEntryFor):
+        (JSC::JITEntryPoints::registerEntryTypeForArgumentCount):
+        (JSC::JITEntryPoints::registerEntryTypeForArgumentType):
+        (JSC::JITEntryPoints::clearEntries):
+        (JSC::JITEntryPoints::operator=):
+        (JSC::JITEntryPointsWithRef::JITEntryPointsWithRef):
+        (JSC::JITEntryPointsWithRef::codeRef):
+        (JSC::argumentsLocationFor):
+        (JSC::registerEntryPointTypeFor):
+        (JSC::entryPointTypeFor):
+        (JSC::thunkEntryPointTypeFor):
+        (JSC::JITJSCallThunkEntryPointsWithRef::JITJSCallThunkEntryPointsWithRef):
+        (JSC::JITJSCallThunkEntryPointsWithRef::entryFor):
+        (JSC::JITJSCallThunkEntryPointsWithRef::setEntryFor):
+        (JSC::JITJSCallThunkEntryPointsWithRef::offsetOfEntryFor):
+        (JSC::JITJSCallThunkEntryPointsWithRef::clearEntries):
+        (JSC::JITJSCallThunkEntryPointsWithRef::codeRef):
+        (JSC::JITJSCallThunkEntryPointsWithRef::operator=):
+        * jit/JITOpcodes.cpp:
+        (JSC::JIT::privateCompileJITEntryNativeCall):
+        (JSC::JIT::privateCompileCTINativeCall): Deleted.
+        * jit/JITOpcodes32_64.cpp:
+        (JSC::JIT::privateCompileJITEntryNativeCall):
+        (JSC::JIT::privateCompileCTINativeCall): Deleted.
+        * jit/JITOperations.cpp:
+        * jit/JITThunks.cpp:
+        (JSC::JITThunks::jitEntryNativeCall):
+        (JSC::JITThunks::jitEntryNativeConstruct):
+        (JSC::JITThunks::jitEntryStub):
+        (JSC::JITThunks::jitCallThunkEntryStub):
+        (JSC::JITThunks::hostFunctionStub):
+        (JSC::JITThunks::ctiNativeCall): Deleted.
+        (JSC::JITThunks::ctiNativeConstruct): Deleted.
+        * jit/JITThunks.h:
+        * jit/JSInterfaceJIT.h:
+        (JSC::JSInterfaceJIT::emitJumpIfNotInt32):
+        (JSC::JSInterfaceJIT::emitLoadInt32):
+        * jit/RegisterSet.cpp:
+        (JSC::RegisterSet::argumentRegisters):
+        * jit/RegisterSet.h:
+        * jit/Repatch.cpp:
+        (JSC::linkSlowFor):
+        (JSC::revertCall):
+        (JSC::unlinkFor):
+        (JSC::linkVirtualFor):
+        (JSC::linkPolymorphicCall):
+        * jit/SpecializedThunkJIT.h:
+        (JSC::SpecializedThunkJIT::SpecializedThunkJIT):
+        (JSC::SpecializedThunkJIT::checkJSStringArgument):
+        (JSC::SpecializedThunkJIT::linkFailureHere):
+        (JSC::SpecializedThunkJIT::finalize):
+        * jit/ThunkGenerator.h:
+        * jit/ThunkGenerators.cpp:
+        (JSC::createRegisterArgumentsSpillEntry):
+        (JSC::slowPathFor):
+        (JSC::linkCallThunkGenerator):
+        (JSC::linkDirectCallThunkGenerator):
+        (JSC::linkPolymorphicCallThunkGenerator):
+        (JSC::virtualThunkFor):
+        (JSC::nativeForGenerator):
+        (JSC::nativeCallGenerator):
+        (JSC::nativeTailCallGenerator):
+        (JSC::nativeTailCallWithoutSavedTagsGenerator):
+        (JSC::nativeConstructGenerator):
+        (JSC::stringCharLoadRegCall):
+        (JSC::charCodeAtThunkGenerator):
+        (JSC::charAtThunkGenerator):
+        (JSC::fromCharCodeThunkGenerator):
+        (JSC::clz32ThunkGenerator):
+        (JSC::sqrtThunkGenerator):
+        (JSC::floorThunkGenerator):
+        (JSC::ceilThunkGenerator):
+        (JSC::truncThunkGenerator):
+        (JSC::roundThunkGenerator):
+        (JSC::expThunkGenerator):
+        (JSC::logThunkGenerator):
+        (JSC::absThunkGenerator):
+        (JSC::imulThunkGenerator):
+        (JSC::randomThunkGenerator):
+        (JSC::boundThisNoArgsFunctionCallGenerator):
+        * jit/ThunkGenerators.h:
+        * jsc.cpp:
+        (jscmain):
+        * llint/LLIntEntrypoint.cpp:
+        (JSC::LLInt::setFunctionEntrypoint):
+        (JSC::LLInt::setEvalEntrypoint):
+        (JSC::LLInt::setProgramEntrypoint):
+        (JSC::LLInt::setModuleProgramEntrypoint):
+        * llint/LLIntSlowPaths.cpp:
+        (JSC::LLInt::entryOSR):
+        (JSC::LLInt::setUpCall):
+        * llint/LLIntThunks.cpp:
+        (JSC::LLInt::generateThunkWithJumpTo):
+        (JSC::LLInt::functionForRegisterCallEntryThunkGenerator):
+        (JSC::LLInt::functionForStackCallEntryThunkGenerator):
+        (JSC::LLInt::functionForRegisterConstructEntryThunkGenerator):
+        (JSC::LLInt::functionForStackConstructEntryThunkGenerator):
+        (JSC::LLInt::functionForRegisterCallArityCheckThunkGenerator):
+        (JSC::LLInt::functionForStackCallArityCheckThunkGenerator):
+        (JSC::LLInt::functionForRegisterConstructArityCheckThunkGenerator):
+        (JSC::LLInt::functionForStackConstructArityCheckThunkGenerator):
+        (JSC::LLInt::functionForCallEntryThunkGenerator): Deleted.
+        (JSC::LLInt::functionForConstructEntryThunkGenerator): Deleted.
+        (JSC::LLInt::functionForCallArityCheckThunkGenerator): Deleted.
+        (JSC::LLInt::functionForConstructArityCheckThunkGenerator): Deleted.
+        * llint/LLIntThunks.h:
+        * runtime/ArityCheckMode.h:
+        * runtime/ExecutableBase.cpp:
+        (JSC::ExecutableBase::clearCode):
+        * runtime/ExecutableBase.h:
+        (JSC::ExecutableBase::entrypointFor):
+        (JSC::ExecutableBase::offsetOfEntryFor):
+        (JSC::ExecutableBase::offsetOfJITCodeWithArityCheckFor): Deleted.
+        * runtime/JSBoundFunction.cpp:
+        (JSC::boundThisNoArgsFunctionCall):
+        * runtime/NativeExecutable.cpp:
+        (JSC::NativeExecutable::finishCreation):
+        * runtime/ScriptExecutable.cpp:
+        (JSC::ScriptExecutable::installCode):
+        * runtime/VM.cpp:
+        (JSC::VM::VM):
+        (JSC::thunkGeneratorForIntrinsic):
+        (JSC::VM::clearCounters):
+        (JSC::VM::dumpCounters):
+        * runtime/VM.h:
+        (JSC::VM::getJITEntryStub):
+        (JSC::VM::getJITCallThunkEntryStub):
+        (JSC::VM::addressOfCounter):
+        (JSC::VM::counterFor):
+        * wasm/WasmBinding.cpp:
+        (JSC::Wasm::importStubGenerator):
+
 2016-12-09  Keith Miller  <keith_miller@apple.com>
 
         Wasm should support call_indirect
index 982bff7..550c117 100644 (file)
                65C02850171795E200351E35 /* ARMv7Disassembler.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65C0284F171795E200351E35 /* ARMv7Disassembler.cpp */; };
                65C0285C1717966800351E35 /* ARMv7DOpcode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65C0285A1717966800351E35 /* ARMv7DOpcode.cpp */; };
                65C0285D1717966800351E35 /* ARMv7DOpcode.h in Headers */ = {isa = PBXBuildFile; fileRef = 65C0285B1717966800351E35 /* ARMv7DOpcode.h */; };
+               65DBF3021D93392B003AF4B0 /* JITEntryPoints.h in Headers */ = {isa = PBXBuildFile; fileRef = 650300F21C50274600D786D7 /* JITEntryPoints.h */; settings = {ATTRIBUTES = (Private, ); }; };
                65FB5117184EEE7000C12B70 /* ProtoCallFrame.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65FB5116184EE9BC00C12B70 /* ProtoCallFrame.cpp */; };
                65FB63A41C8EA09C0020719B /* YarrCanonicalizeUnicode.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 65A946141C8E9F6F00A7209A /* YarrCanonicalizeUnicode.cpp */; };
                6AD2CB4D19B9140100065719 /* DebuggerEvalEnabler.h in Headers */ = {isa = PBXBuildFile; fileRef = 6AD2CB4C19B9140100065719 /* DebuggerEvalEnabler.h */; settings = {ATTRIBUTES = (Private, ); }; };
                62E3D5EF1B8D0B7300B868BB /* DataFormat.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = DataFormat.cpp; sourceTree = "<group>"; };
                62EC9BB41B7EB07C00303AD1 /* CallFrameShuffleData.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CallFrameShuffleData.cpp; sourceTree = "<group>"; };
                62EC9BB51B7EB07C00303AD1 /* CallFrameShuffleData.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallFrameShuffleData.h; sourceTree = "<group>"; };
+               650300F21C50274600D786D7 /* JITEntryPoints.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITEntryPoints.h; sourceTree = "<group>"; };
                6507D2970E871E4A00D7D896 /* JSTypeInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSTypeInfo.h; sourceTree = "<group>"; };
                651122E5140469BA002B101D /* testRegExp.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = testRegExp.cpp; sourceTree = "<group>"; };
                6511230514046A4C002B101D /* testRegExp */ = {isa = PBXFileReference; explicitFileType = "compiled.mach-o.executable"; includeInIndex = 0; path = testRegExp; sourceTree = BUILT_PRODUCTS_DIR; };
                                0FAF7EFB165BA919000C8455 /* JITDisassembler.h */,
                                FE187A0A1C0229230038BBCA /* JITDivGenerator.cpp */,
                                FE187A0B1C0229230038BBCA /* JITDivGenerator.h */,
+                               650300F21C50274600D786D7 /* JITEntryPoints.h */,
                                0F46807F14BA572700BFE272 /* JITExceptions.cpp */,
                                0F46808014BA572700BFE272 /* JITExceptions.h */,
                                0FB14E1C18124ACE009B6B4D /* JITInlineCacheGenerator.cpp */,
                                79B00CBD1C6AB07E0088C65D /* ProxyConstructor.h in Headers */,
                                53D444DC1DAF08AB00B92784 /* B3WasmAddressValue.h in Headers */,
                                990DA67F1C8E316A00295159 /* generate_objc_protocol_type_conversions_implementation.py in Headers */,
+                               65DBF3021D93392B003AF4B0 /* JITEntryPoints.h in Headers */,
                                DC17E8191C9C91DB008A6AB3 /* ShadowChickenInlines.h in Headers */,
                                DC17E8181C9C91D9008A6AB3 /* ShadowChicken.h in Headers */,
                                799EF7C41C56ED96002B0534 /* B3PCToOriginMap.h in Headers */,
index 55b365f..c4963b3 100644 (file)
@@ -55,6 +55,13 @@ private:
         ASSERT(reg.isSet());
     }
 
+    ArgumentRegValue(Origin origin, Reg reg, Type type)
+        : Value(CheckedOpcode, ArgumentReg, type, origin)
+        , m_reg(reg)
+    {
+        ASSERT(reg.isSet());
+    }
+
     Reg m_reg;
 };
 
index f7224cf..f99da5b 100644 (file)
@@ -182,9 +182,12 @@ public:
             case ArgumentReg:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
                 VALIDATE(!value->numChildren(), ("At ", *value));
-                VALIDATE(
-                    (value->as<ArgumentRegValue>()->argumentReg().isGPR() ? pointerType() : Double)
-                    == value->type(), ("At ", *value));
+                // FIXME: https://bugs.webkit.org/show_bug.cgi?id=165717
+                // We need to handle Int32 arguments and Int64 arguments
+                // for the same register distinctly.
+                VALIDATE((value->as<ArgumentRegValue>()->argumentReg().isGPR()
+                    ? (value->type() == pointerType() || value->type() == Int32)
+                    : value->type() == Double), ("At ", *value));
                 break;
             case Add:
             case Sub:
index 7ffda05..030a97e 100644 (file)
@@ -60,6 +60,7 @@ CallLinkInfo::CallLinkInfo()
     , m_hasSeenClosure(false)
     , m_clearedByGC(false)
     , m_allowStubs(true)
+    , m_argumentsLocation(static_cast<unsigned>(ArgumentsLocation::StackArgs))
     , m_isLinked(false)
     , m_callType(None)
     , m_calleeGPR(255)
index 0a91020..0c9641d 100644
@@ -28,6 +28,7 @@
 #include "CallMode.h"
 #include "CodeLocation.h"
 #include "CodeSpecializationKind.h"
+#include "JITEntryPoints.h"
 #include "PolymorphicCallStubRoutine.h"
 #include "WriteBarrier.h"
 #include <wtf/SentinelLinkedList.h>
@@ -157,9 +158,12 @@ public:
     bool isLinked() { return m_stub || m_calleeOrCodeBlock; }
     void unlink(VM&);
 
-    void setUpCall(CallType callType, CodeOrigin codeOrigin, unsigned calleeGPR)
+    void setUpCall(CallType callType, ArgumentsLocation argumentsLocation, CodeOrigin codeOrigin, unsigned calleeGPR)
     {
+        ASSERT(!isVarargsCallType(callType) || (argumentsLocation == StackArgs));
+
         m_callType = callType;
+        m_argumentsLocation = static_cast<unsigned>(argumentsLocation);
         m_codeOrigin = codeOrigin;
         m_calleeGPR = calleeGPR;
     }
@@ -275,6 +279,16 @@ public:
         return static_cast<CallType>(m_callType);
     }
 
+    ArgumentsLocation argumentsLocation()
+    {
+        return static_cast<ArgumentsLocation>(m_argumentsLocation);
+    }
+
+    bool argumentsInRegisters()
+    {
+        return m_argumentsLocation != StackArgs;
+    }
+
     uint32_t* addressOfMaxNumArguments()
     {
         return &m_maxNumArguments;
@@ -339,6 +353,7 @@ private:
     bool m_hasSeenClosure : 1;
     bool m_clearedByGC : 1;
     bool m_allowStubs : 1;
+    unsigned m_argumentsLocation : 4;
     bool m_isLinked : 1;
     unsigned m_callType : 4; // CallType
     unsigned m_calleeGPR : 8;
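Call sites now record where their arguments live, and the assertion above enforces that varargs calls keep using the stack. A minimal sketch of both halves of the contract, using only calls that appear in this patch (the argumentsLocation value is whatever the caller chose when emitting the call):

    // At IC setup time: remember how this call passes its arguments.
    callLinkInfo->setUpCall(CallLinkInfo::Call, argumentsLocation, codeOrigin, calleeGPR);

    // At link time: pick the matching thunk entry for the slow path.
    linkBuffer.link(
        slowCall,
        FunctionPtr(vm.getJITCallThunkEntryStub(linkCallThunkGenerator)
            .entryFor(callLinkInfo->argumentsLocation()).executableAddress()));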
index e6c7427..fb51a5f 100644
@@ -1032,7 +1032,7 @@ void AccessCase::generateImpl(AccessGenerationState& state)
             m_rareData->callLinkInfo->disallowStubs();
             
             m_rareData->callLinkInfo->setUpCall(
-                CallLinkInfo::Call, stubInfo.codeOrigin, loadedValueGPR);
+                CallLinkInfo::Call, StackArgs, stubInfo.codeOrigin, loadedValueGPR);
 
             CCallHelpers::JumpList done;
 
@@ -1105,7 +1105,7 @@ void AccessCase::generateImpl(AccessGenerationState& state)
             // We *always* know that the getter/setter, if non-null, is a cell.
             jit.move(CCallHelpers::TrustedImm32(JSValue::CellTag), GPRInfo::regT1);
 #endif
-            jit.move(CCallHelpers::TrustedImmPtr(m_rareData->callLinkInfo.get()), GPRInfo::regT2);
+            jit.move(CCallHelpers::TrustedImmPtr(m_rareData->callLinkInfo.get()), GPRInfo::nonArgGPR0);
             slowPathCall = jit.nearCall();
             if (m_type == Getter)
                 jit.setupResults(valueRegs);
@@ -1131,7 +1131,7 @@ void AccessCase::generateImpl(AccessGenerationState& state)
 
                     linkBuffer.link(
                         slowPathCall,
-                        CodeLocationLabel(vm.getCTIStub(linkCallThunkGenerator).code()));
+                        CodeLocationLabel(vm.getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(StackArgs)));
                 });
         } else {
             ASSERT(m_type == CustomValueGetter || m_type == CustomAccessorGetter || m_type == CustomValueSetter || m_type == CustomAccessorSetter);
index 2791aea..70f525b 100644
@@ -271,7 +271,17 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
         // non-clear value.
         ASSERT(!m_state.variables().operand(node->local()).isClear());
         break;
-        
+
+    case GetArgumentRegister:
+        ASSERT(!m_state.variables().operand(node->local()).isClear());
+        if (node->variableAccessData()->flushFormat() == FlushedJSValue) {
+            forNode(node).makeBytecodeTop();
+            break;
+        }
+
+        forNode(node).setType(m_graph, typeFilterFor(node->variableAccessData()->flushFormat()));
+        break;
+
     case LoadVarargs:
     case ForwardVarargs: {
         // FIXME: ForwardVarargs should check if the count becomes known, and if it does, it should turn
index d452b92..394835a 100644
@@ -3697,11 +3697,59 @@ bool ByteCodeParser::parseBlock(unsigned limit)
     // us to track if a use of an argument may use the actual argument passed, as
     // opposed to using a value we set explicitly.
     if (m_currentBlock == m_graph.block(0) && !inlineCallFrame()) {
-        m_graph.m_arguments.resize(m_numArguments);
-        // We will emit SetArgument nodes. They don't exit, but we're at the top of an op_enter so
-        // exitOK = true.
+        m_graph.m_argumentsOnStack.resize(m_numArguments);
+        m_graph.m_argumentsForChecking.resize(m_numArguments);
+        // Create all GetArgumentRegister nodes first and then the corresponding MovHint nodes,
+        // followed by the corresponding SetLocal nodes and finally any SetArgument nodes for
+        // the remaining arguments.
+        // We do this to keep the exit machinery correct. We start with m_exitOK = true since
+        // GetArgumentRegister nodes are allowed to exit, even though they never do. The MovHints
+        // technically could exit but won't. The SetLocals can exit, so we want all the MovHints
+        // before the first SetLocal so that the state seen on exit is consistent.
+        // We do all this processing before creating any SetArgument nodes since they are
+        // morally equivalent to the SetLocals for GetArgumentRegister nodes.
         m_exitOK = true;
-        for (unsigned argument = 0; argument < m_numArguments; ++argument) {
+        
+        unsigned numRegisterArguments = std::min(m_numArguments, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS);
+
+        Vector<Node*, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS> getArgumentRegisterNodes;
+
+        // First create GetArgumentRegister nodes.
+        for (unsigned argument = 0; argument < numRegisterArguments; ++argument) {
+            getArgumentRegisterNodes.append(
+                addToGraph(GetArgumentRegister, OpInfo(0),
+                    OpInfo(argumentRegisterIndexForJSFunctionArgument(argument))));
+        }
+
+        // Create all the MovHint's for the GetArgumentRegister nodes created above.
+        for (unsigned i = 0; i < getArgumentRegisterNodes.size(); ++i) {
+            Node* getArgumentRegister = getArgumentRegisterNodes[i];
+            addToGraph(MovHint, OpInfo(virtualRegisterForArgument(i).offset()), getArgumentRegister);
+            // We can't exit anymore.
+            m_exitOK = false;
+        }
+
+        // Exit is now okay, but we need to fence with an ExitOK node.
+        m_exitOK = true;
+        addToGraph(ExitOK);
+
+        // Create all the SetLocals's for the GetArgumentRegister nodes created above.
+        for (unsigned i = 0; i < getArgumentRegisterNodes.size(); ++i) {
+            Node* getArgumentRegister = getArgumentRegisterNodes[i];
+            VariableAccessData* variableAccessData = newVariableAccessData(virtualRegisterForArgument(i));
+            variableAccessData->mergeStructureCheckHoistingFailed(
+                m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadCache));
+            variableAccessData->mergeCheckArrayHoistingFailed(
+                m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadIndexingType));
+            Node* setLocal = addToGraph(SetLocal, OpInfo(variableAccessData), getArgumentRegister);
+            m_currentBlock->variablesAtTail.argument(i) = setLocal;
+            getArgumentRegister->setVariableAccessData(setLocal->variableAccessData());
+            m_graph.m_argumentsOnStack[i] = setLocal;
+            m_graph.m_argumentsForChecking[i] = getArgumentRegister;
+        }
+
+        // Finally create any SetArgument nodes.
+        for (unsigned argument = NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argument < m_numArguments; ++argument) {
             VariableAccessData* variable = newVariableAccessData(
                 virtualRegisterForArgument(argument));
             variable->mergeStructureCheckHoistingFailed(
@@ -3710,7 +3758,8 @@ bool ByteCodeParser::parseBlock(unsigned limit)
                 m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadIndexingType));
             
             Node* setArgument = addToGraph(SetArgument, OpInfo(variable));
-            m_graph.m_arguments[argument] = setArgument;
+            m_graph.m_argumentsOnStack[argument] = setArgument;
+            m_graph.m_argumentsForChecking[argument] = setArgument;
             m_currentBlock->variablesAtTail.setArgumentFirstTime(argument, setArgument);
         }
     }
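For example, with "this" plus two arguments arriving in registers, the root block now opens with a sequence shaped like this (a sketch of the node order, not actual dump output):

    a0: GetArgumentRegister(argumentRegister(0))   // this
    a1: GetArgumentRegister(argumentRegister(1))   // arg1
    a2: GetArgumentRegister(argumentRegister(2))   // arg2
        MovHint(@a0, this); MovHint(@a1, arg1); MovHint(@a2, arg2)   // m_exitOK = false
        ExitOK                                                       // exit permitted again
        SetLocal(@a0); SetLocal(@a1); SetLocal(@a2)   // may exit; all MovHints precede them
        SetArgument(argN)   // only for arguments beyond the register count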
@@ -4820,8 +4869,10 @@ bool ByteCodeParser::parseBlock(unsigned limit)
             // We need to make sure that we don't unbox our arguments here since that won't be
             // done by the arguments object creation node as that node may not exist.
             noticeArgumentsUse();
-            flushForReturn();
             Terminality terminality = handleVarargsCall(currentInstruction, TailCallForwardVarargs, CallMode::Tail);
+            // We need to insert flush nodes for our arguments after the TailCallForwardVarargs
+            // node so that they will be flushed to the stack and kept alive.
+            flushForReturn();
             ASSERT_WITH_MESSAGE(m_currentInstruction == currentInstruction, "handleVarargsCall, which may have inlined the callee, trashed m_currentInstruction");
             // If the call is terminal then we should not parse any further bytecodes as the TailCall will exit the function.
             // If the call is not terminal, however, then we want the subsequent op_ret/op_jump to update metadata and clean
index 67b3574..d36da63 100644
@@ -299,14 +299,16 @@ private:
             // The rules for threaded CPS form:
             // 
             // Head variable: describes what is live at the head of the basic block.
-            // Head variable links may refer to Flush, PhantomLocal, Phi, or SetArgument.
-            // SetArgument may only appear in the root block.
+            // Head variable links may refer to Flush, PhantomLocal, Phi, GetArgumentRegister
+            // or SetArgument.
+            // GetArgumentRegister and SetArgument may only appear in the root block.
             //
             // Tail variable: the last thing that happened to the variable in the block.
-            // It may be a Flush, PhantomLocal, GetLocal, SetLocal, SetArgument, or Phi.
-            // SetArgument may only appear in the root block. Note that if there ever
-            // was a GetLocal to the variable, and it was followed by PhantomLocals and
-            // Flushes but not SetLocals, then the tail variable will be the GetLocal.
+            // It may be a Flush, PhantomLocal, GetLocal, SetLocal, GetArgumentRegister,
+            // SetArgument, or Phi. GetArgumentRegister and SetArgument may only appear
+            // in the root block. Note that if there ever was a GetLocal to the variable,
+            // and it was followed by PhantomLocals and Flushes but not SetLocals, then
+            // the tail variable will be the GetLocal.
             // This reflects the fact that you only care that the tail variable is a
             // Flush or PhantomLocal if nothing else interesting happened. Likewise, if
             // there ever was a SetLocal and it was followed by Flushes, then the tail
@@ -367,12 +369,13 @@ private:
     
     void specialCaseArguments()
     {
-        // Normally, a SetArgument denotes the start of a live range for a local's value on the stack.
-        // But those SetArguments used for the actual arguments to the machine CodeBlock get
-        // special-cased. We could have instead used two different node types - one for the arguments
-        // at the prologue case, and another for the other uses. But this seemed like IR overkill.
-        for (unsigned i = m_graph.m_arguments.size(); i--;)
-            m_graph.block(0)->variablesAtHead.setArgumentFirstTime(i, m_graph.m_arguments[i]);
+        // Normally, a SetArgument or SetLocal denotes the start of a live range for
+        // a local's value on the stack. But those SetArguments and SetLocals used
+        // for the actual arguments to the machine CodeBlock get special-cased. We could have
+        // instead used two different node types - one for the arguments at the prologue case,
+        // and another for the other uses. But this seemed like IR overkill.
+        for (unsigned i = m_graph.m_argumentsOnStack.size(); i--;)
+            m_graph.block(0)->variablesAtHead.setArgumentFirstTime(i, m_graph.m_argumentsOnStack[i]);
     }
     
     template<OperandKind operandKind>
@@ -480,6 +483,7 @@ private:
             switch (node->op()) {
             case SetLocal:
             case SetArgument:
+            case GetArgumentRegister:
                 break;
                 
             case Flush:
index 0ced522..0eb619d 100644
@@ -406,6 +406,7 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
     case Phi:
     case PhantomLocal:
     case SetArgument:
+    case GetArgumentRegister:
     case Jump:
     case Branch:
     case Switch:
@@ -470,7 +471,7 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
     case PhantomClonedArguments:
         // DFG backend requires that the locals that this reads are flushed. FTL backend can handle those
         // locals being promoted.
-        if (!isFTL(graph.m_plan.mode))
+        if (!isFTL(graph.m_plan.mode) && !node->origin.semantic.inlineCallFrame)
             read(Stack);
         
         // Even though it's phantom, it still has the property that one can't be replaced with another.
@@ -559,11 +560,18 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
     case TailCall:
     case DirectTailCall:
     case TailCallVarargs:
-    case TailCallForwardVarargs:
         read(World);
         write(SideState);
         return;
         
+    case TailCallForwardVarargs:
+        // We read all arguments after "this".
+        for (unsigned arg = 1; arg < graph.m_argumentsOnStack.size(); arg++)
+            read(AbstractHeap(Stack, virtualRegisterForArgument(arg)));
+        read(World);
+        write(SideState);
+        return;
+
     case GetGetter:
         read(GetterSetter_getter);
         def(HeapLocation(GetterLoc, GetterSetter_getter, node->child1()), LazyNode(node));
index 4104eb3..16977f0 100644
@@ -152,6 +152,8 @@ enum StructureRegistrationResult { StructureRegisteredNormally, StructureRegiste
 
 enum OptimizationFixpointState { BeforeFixpoint, FixpointNotConverged, FixpointConverged };
 
+enum StrengthReduceArgumentFlushes { DontOptimizeArgumentFlushes, OptimizeArgumentFlushes };
+
 // Describes the form you can expect the entire graph to be in.
 enum GraphForm {
     // LoadStore form means that basic blocks may freely use GetLocal, SetLocal,
index a70a869..abf2c98 100644
@@ -53,7 +53,8 @@ public:
         for (BasicBlock* block : m_graph.blocksInPreOrder())
             fixupBlock(block);
         
-        cleanVariables(m_graph.m_arguments);
+        cleanVariables(m_graph.m_argumentsOnStack);
+        cleanVariables(m_graph.m_argumentsForChecking);
 
         // Just do a basic Phantom/Check clean-up.
         for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) {
index e71f2b3..ff08aef 100644
@@ -261,6 +261,7 @@ bool doesGC(Graph& graph, Node* node)
     case GetStack:
     case GetFromArguments:
     case PutToArguments:
+    case GetArgumentRegister:
     case GetArgument:
     case LogShadowChickenPrologue:
     case LogShadowChickenTail:
index 14cd0d0..afdab10 100644
@@ -90,8 +90,9 @@ static CompilationResult compileImpl(
     // make sure that all JIT code generation does finalization on the main thread.
     vm.getCTIStub(osrExitGenerationThunkGenerator);
     vm.getCTIStub(throwExceptionFromCallSlowPathGenerator);
-    vm.getCTIStub(linkCallThunkGenerator);
-    vm.getCTIStub(linkPolymorphicCallThunkGenerator);
+    vm.getJITCallThunkEntryStub(linkCallThunkGenerator);
+    vm.getJITCallThunkEntryStub(linkDirectCallThunkGenerator);
+    vm.getJITCallThunkEntryStub(linkPolymorphicCallThunkGenerator);
     
     if (vm.typeProfiler())
         vm.typeProfilerLog()->processLogEntries(ASCIILiteral("Preparing for DFG compilation."));
index ca562b4..9cb68e9 100644
@@ -1791,6 +1791,7 @@ private:
         case DoubleConstant:
         case GetLocal:
         case GetCallee:
+        case GetArgumentRegister:
         case GetArgumentCountIncludingThis:
         case GetRestLength:
         case GetArgument:
index e9df841..5efec84 100644
@@ -104,6 +104,19 @@ public:
         ASSERT(format & DataFormatJS);
         initGPR(node, useCount, gpr, format);
     }
+
+    void initArgumentRegisterValue(Node* node, uint32_t useCount, GPRReg gpr, DataFormat registerFormat = DataFormatJS)
+    {
+        m_node = node;
+        m_useCount = useCount;
+        m_registerFormat = registerFormat;
+        m_spillFormat = DataFormatNone;
+        m_canFill = false;
+        u.gpr = gpr;
+        m_bornForOSR = false;
+        m_isConstant = false;
+        ASSERT(m_useCount);
+    }
 #elif USE(JSVALUE32_64)
     void initJSValue(Node* node, uint32_t useCount, GPRReg tagGPR, GPRReg payloadGPR, DataFormat format = DataFormatJS)
     {
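initArgumentRegisterValue describes a value that is born in an argument register at entry: no spill format yet, and not born for OSR until a MovHint runs. A hypothetical use from entry setup code; generationInfo() and the surrounding calls are assumed context, not part of this hunk:

    // Sketch: node's result is already live in its argument register.
    GPRReg argumentGPR = GPRInfo::toArgumentRegister(node->argumentRegisterIndex());
    GenerationInfo& info = generationInfo(node);
    info.initArgumentRegisterValue(node, node->refCount(), argumentGPR, DataFormatJS);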
index f9f8e4c..c45313f 100644
@@ -294,7 +294,6 @@ void Graph::dump(PrintStream& out, const char* prefix, Node* node, DumpContext*
         for (unsigned i = 0; i < data.variants.size(); ++i)
             out.print(comma, inContext(data.variants[i], context));
     }
-    ASSERT(node->hasVariableAccessData(*this) == node->accessesStack(*this));
     if (node->hasVariableAccessData(*this)) {
         VariableAccessData* variableAccessData = node->tryGetVariableAccessData();
         if (variableAccessData) {
@@ -373,6 +372,8 @@ void Graph::dump(PrintStream& out, const char* prefix, Node* node, DumpContext*
             out.print(comma, inContext(data->cases[i].value, context), ":", data->cases[i].target);
         out.print(comma, "default:", data->fallThrough);
     }
+    if (node->hasArgumentRegisterIndex())
+        out.print(comma, node->argumentRegisterIndex(), "(", GPRInfo::toArgumentRegister(node->argumentRegisterIndex()), ")");
     ClobberSet reads;
     ClobberSet writes;
     addReadsAndWrites(*this, node, reads, writes);
@@ -396,7 +397,7 @@ void Graph::dump(PrintStream& out, const char* prefix, Node* node, DumpContext*
         out.print(comma, "WasHoisted");
     out.print(")");
 
-    if (node->accessesStack(*this) && node->tryGetVariableAccessData())
+    if ((node->accessesStack(*this) || node->op() == GetArgumentRegister) && node->tryGetVariableAccessData())
         out.print("  predicting ", SpeculationDump(node->tryGetVariableAccessData()->prediction()));
     else if (node->hasHeapPrediction())
         out.print("  predicting ", SpeculationDump(node->getHeapPrediction()));
@@ -506,8 +507,10 @@ void Graph::dump(PrintStream& out, DumpContext* context)
     out.print("  Fixpoint state: ", m_fixpointState, "; Form: ", m_form, "; Unification state: ", m_unificationState, "; Ref count state: ", m_refCountState, "\n");
     if (m_form == SSA)
         out.print("  Argument formats: ", listDump(m_argumentFormats), "\n");
-    else
-        out.print("  Arguments: ", listDump(m_arguments), "\n");
+    else {
+        out.print("  Arguments for checking: ", listDump(m_argumentsForChecking), "\n");
+        out.print("  Arguments on stack: ", listDump(m_argumentsOnStack), "\n");
+    }
     out.print("\n");
     
     Node* lastNode = nullptr;
@@ -1620,13 +1623,13 @@ MethodOfGettingAValueProfile Graph::methodOfGettingAValueProfileFor(Node* curren
         if (!currentNode || node->origin != currentNode->origin) {
             CodeBlock* profiledBlock = baselineCodeBlockFor(node->origin.semantic);
 
-            if (node->accessesStack(*this)) {
+            if (node->accessesStack(*this) || node->op() == GetArgumentRegister) {
                 ValueProfile* result = [&] () -> ValueProfile* {
                     if (!node->local().isArgument())
                         return nullptr;
                     int argument = node->local().toArgument();
-                    Node* argumentNode = m_arguments[argument];
-                    if (!argumentNode)
+                    Node* argumentNode = m_argumentsOnStack[argument];
+                    if (!argumentNode || !argumentNode->accessesStack(*this))
                         return nullptr;
                     if (node->variableAccessData() != argumentNode->variableAccessData())
                         return nullptr;
index d3047f3..00b5ae7 100644
@@ -859,7 +859,7 @@ public:
     bool willCatchExceptionInMachineFrame(CodeOrigin, CodeOrigin& opCatchOriginOut, HandlerInfo*& catchHandlerOut);
     
     bool needsScopeRegister() const { return m_hasDebuggerEnabled || m_codeBlock->usesEval(); }
-    bool needsFlushedThis() const { return m_codeBlock->usesEval(); }
+    bool needsFlushedThis() const { return m_hasDebuggerEnabled || m_codeBlock->usesEval(); }
 
     VM& m_vm;
     Plan& m_plan;
@@ -878,9 +878,9 @@ public:
     
     Bag<StorageAccessData> m_storageAccessData;
     
-    // In CPS, this is all of the SetArgument nodes for the arguments in the machine code block
-    // that survived DCE. All of them except maybe "this" will survive DCE, because of the Flush
-    // nodes.
+    // In CPS, this is all of the GetArgumentRegister and SetArgument nodes for the arguments in
+    // the machine code block that survived DCE. All of them except maybe "this" will survive DCE,
+    // because of the Flush nodes.
     //
     // In SSA, this is all of the GetStack nodes for the arguments in the machine code block that
     // may have some speculation in the prologue and survived DCE. Note that to get the speculation
@@ -903,7 +903,8 @@ public:
     //
     // If we DCE the ArithAdd and we remove the int check on x, then this won't do the side
     // effects.
-    Vector<Node*, 8> m_arguments;
+    Vector<Node*, 8> m_argumentsOnStack;
+    Vector<Node*, 8> m_argumentsForChecking;
     
     // In CPS, this is meaningless. In SSA, this is the argument speculation that we've locked in.
     Vector<FlushFormat> m_argumentFormats;
@@ -954,6 +955,7 @@ public:
     GraphForm m_form;
     UnificationState m_unificationState;
     PlanStage m_planStage { PlanStage::Initial };
+    StrengthReduceArgumentFlushes m_strengthReduceArguments = { StrengthReduceArgumentFlushes::DontOptimizeArgumentFlushes };
     RefCountState m_refCountState;
     bool m_hasDebuggerEnabled;
     bool m_hasExceptionHandlers { false };
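The two vectors that replace m_arguments serve different clients: speculation checks walk m_argumentsForChecking, while stack-slot bookkeeping walks m_argumentsOnStack. The invariant established by ByteCodeParser above is:

    // For argument i of the machine code block:
    //   i <  NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS:
    //     m_argumentsForChecking[i] is the GetArgumentRegister node;
    //     m_argumentsOnStack[i] is the SetLocal that spills it to the frame.
    //   i >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS:
    //     both vectors reference the same SetArgument node.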
index 1d20d1d..8044b78 100644
@@ -106,11 +106,11 @@ void InPlaceAbstractState::initialize()
         if (m_graph.m_form == SSA)
             format = m_graph.m_argumentFormats[i];
         else {
-            Node* node = m_graph.m_arguments[i];
+            Node* node = m_graph.m_argumentsOnStack[i];
             if (!node)
                 format = FlushedJSValue;
             else {
-                ASSERT(node->op() == SetArgument);
+                ASSERT(node->op() == SetArgument || node->op() == SetLocal);
                 format = node->variableAccessData()->flushFormat();
             }
         }
index 0ac6aff..573c03c 100644
@@ -99,18 +99,6 @@ void JITCompiler::linkOSRExits()
     }
 }
 
-void JITCompiler::compileEntry()
-{
-    // This code currently matches the old JIT. In the function header we need to
-    // save return address and call frame via the prologue and perform a fast stack check.
-    // FIXME: https://bugs.webkit.org/show_bug.cgi?id=56292
-    // We'll need to convert the remaining cti_ style calls (specifically the stack
-    // check) which will be dependent on stack layout. (We'd need to account for this in
-    // both normal return code and when jumping to an exception handler).
-    emitFunctionPrologue();
-    emitPutToCallFrameHeader(m_codeBlock, CallFrameSlot::codeBlock);
-}
-
 void JITCompiler::compileSetupRegistersForEntry()
 {
     emitSaveCalleeSaves();
@@ -277,7 +265,7 @@ void JITCompiler::link(LinkBuffer& linkBuffer)
     for (unsigned i = 0; i < m_jsCalls.size(); ++i) {
         JSCallRecord& record = m_jsCalls[i];
         CallLinkInfo& info = *record.info;
-        linkBuffer.link(record.slowCall, FunctionPtr(m_vm->getCTIStub(linkCallThunkGenerator).code().executableAddress()));
+        linkBuffer.link(record.slowCall, FunctionPtr(m_vm->getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(info.argumentsLocation()).executableAddress()));
         info.setCallLocations(
             CodeLocationLabel(linkBuffer.locationOfNearCall(record.slowCall)),
             CodeLocationLabel(linkBuffer.locationOf(record.targetToCheck)),
@@ -287,6 +275,8 @@ void JITCompiler::link(LinkBuffer& linkBuffer)
     for (JSDirectCallRecord& record : m_jsDirectCalls) {
         CallLinkInfo& info = *record.info;
         linkBuffer.link(record.call, linkBuffer.locationOf(record.slowPath));
+        if (record.hasSlowCall())
+            linkBuffer.link(record.slowCall, FunctionPtr(m_vm->getJITCallThunkEntryStub(linkDirectCallThunkGenerator).entryFor(info.argumentsLocation()).executableAddress()));
         info.setCallLocations(
             CodeLocationLabel(),
             linkBuffer.locationOf(record.slowPath),
@@ -354,8 +344,14 @@ void JITCompiler::link(LinkBuffer& linkBuffer)
 
 void JITCompiler::compile()
 {
+    Label mainEntry(this);
+
     setStartOfCode();
-    compileEntry();
+    emitFunctionPrologue();
+
+    Label entryPoint(this);
+    emitPutToCallFrameHeader(m_codeBlock, CallFrameSlot::codeBlock);
+
     m_speculative = std::make_unique<SpeculativeJIT>(*this);
 
     // Plant a check that sufficient space is available in the JSStack.
@@ -382,6 +378,20 @@ void JITCompiler::compile()
 
     m_speculative->callOperationWithCallFrameRollbackOnException(operationThrowStackOverflowError, m_codeBlock);
 
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    m_stackArgsArityOKEntry = label();
+    emitFunctionPrologue();
+
+    // Load argument values into argument registers
+    loadPtr(addressFor(CallFrameSlot::callee), argumentRegisterForCallee());
+    load32(payloadFor(CallFrameSlot::argumentCount), argumentRegisterForArgumentCount());
+    
+    for (unsigned argIndex = 0; argIndex < static_cast<unsigned>(m_codeBlock->numParameters()) && argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++)
+        load64(Address(GPRInfo::callFrameRegister, (CallFrameSlot::thisArgument + argIndex) * static_cast<int>(sizeof(Register))), argumentRegisterForFunctionArgument(argIndex));
+    
+    jump(entryPoint);
+#endif
+
     // Generate slow path code.
     m_speculative->runSlowPathGenerators(m_pcToCodeOriginMapBuilder);
     m_pcToCodeOriginMapBuilder.appendItem(labelIgnoringWatchpoints(), PCToCodeOriginMapBuilder::defaultCodeOrigin());
@@ -406,24 +416,87 @@ void JITCompiler::compile()
     codeBlock()->shrinkToFit(CodeBlock::LateShrink);
 
     disassemble(*linkBuffer);
-    
+
+    JITEntryPoints entrypoints;
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    entrypoints.setEntryFor(RegisterArgsArityCheckNotRequired, linkBuffer->locationOf(mainEntry));
+    entrypoints.setEntryFor(StackArgsArityCheckNotRequired, linkBuffer->locationOf(m_stackArgsArityOKEntry));
+#else
+    entrypoints.setEntryFor(StackArgsArityCheckNotRequired, linkBuffer->locationOf(mainEntry));
+#endif
+
     m_graph.m_plan.finalizer = std::make_unique<JITFinalizer>(
-        m_graph.m_plan, WTFMove(m_jitCode), WTFMove(linkBuffer));
+        m_graph.m_plan, WTFMove(m_jitCode), WTFMove(linkBuffer), entrypoints);
 }
 
 void JITCompiler::compileFunction()
 {
     setStartOfCode();
-    compileEntry();
+
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    unsigned numParameters = static_cast<unsigned>(m_codeBlock->numParameters());
+    GPRReg argCountReg = argumentRegisterForArgumentCount();
+    JumpList continueRegisterEntry;
+    Label registerArgumentsEntrypoints[NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS + 1];
+
+    if (numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
+        // Spill any extra register arguments passed to function onto the stack.
+        for (unsigned extraRegisterArgumentIndex = NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS - 1;
+            extraRegisterArgumentIndex >= numParameters; extraRegisterArgumentIndex--) {
+            registerArgumentsEntrypoints[extraRegisterArgumentIndex + 1] = label();
+            emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(extraRegisterArgumentIndex), extraRegisterArgumentIndex);
+        }
+    }
+    incrementCounter(this, VM::RegArgsExtra);
+
+    continueRegisterEntry.append(jump());
+
+    m_registerArgsWithArityCheck = label();
+    incrementCounter(this, VM::RegArgsArity);
+
+    Label registerArgsCheckArity(this);
+
+    Jump registerCheckArity;
+
+    if (numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
+        registerCheckArity = branch32(NotEqual, argCountReg, TrustedImm32(numParameters));
+    else {
+        registerCheckArity = branch32(Below, argCountReg, TrustedImm32(numParameters));
+        m_registerArgsWithPossibleExtraArgs = label();
+    }
+    
+    Label registerEntryNoArity(this);
+
+    if (numParameters <= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
+        registerArgumentsEntrypoints[numParameters] = registerEntryNoArity;
+
+    incrementCounter(this, VM::RegArgsNoArity);
+
+    continueRegisterEntry.link(this);
+#endif // NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+
+    Label mainEntry(this);
+
+    emitFunctionPrologue();
 
     // === Function header code generation ===
     // This is the main entry point, without performing an arity check.
     // If we needed to perform an arity check we will already have moved the return address,
     // so enter after this.
     Label fromArityCheck(this);
+
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    storePtr(argumentRegisterForCallee(), addressFor(CallFrameSlot::callee));
+    store32(argCountReg, payloadFor(CallFrameSlot::argumentCount));
+
+    Label fromStackEntry(this);
+#endif
+    
+    emitPutToCallFrameHeader(m_codeBlock, CallFrameSlot::codeBlock);
+
     // Plant a check that sufficient space is available in the JSStack.
-    addPtr(TrustedImm32(virtualRegisterForLocal(m_graph.requiredRegisterCountForExecutionAndExit() - 1).offset() * sizeof(Register)), GPRInfo::callFrameRegister, GPRInfo::regT1);
-    Jump stackOverflow = branchPtr(Above, AbsoluteAddress(m_vm->addressOfSoftStackLimit()), GPRInfo::regT1);
+    addPtr(TrustedImm32(virtualRegisterForLocal(m_graph.requiredRegisterCountForExecutionAndExit() - 1).offset() * sizeof(Register)), GPRInfo::callFrameRegister, GPRInfo::nonArgGPR0);
+    Jump stackOverflow = branchPtr(Above, AbsoluteAddress(m_vm->addressOfSoftStackLimit()), GPRInfo::nonArgGPR0);
 
     // Move the stack pointer down to accommodate locals
     addPtr(TrustedImm32(m_graph.stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, stackPointerRegister);
@@ -452,28 +525,105 @@ void JITCompiler::compileFunction()
         addPtr(TrustedImm32(-maxFrameExtentForSlowPathCall), stackPointerRegister);
 
     m_speculative->callOperationWithCallFrameRollbackOnException(operationThrowStackOverflowError, m_codeBlock);
+
+    JumpList arityOK;
     
-    // The fast entry point into a function does not check the correct number of arguments
-    // have been passed to the call (we only use the fast entry point where we can statically
-    // determine the correct number of arguments have been passed, or have already checked).
-    // In cases where an arity check is necessary, we enter here.
-    // FIXME: change this from a cti call to a DFG style operation (normal C calling conventions).
-    m_arityCheck = label();
-    compileEntry();
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    jump(registerArgsCheckArity);
+
+    JumpList registerArityNeedsFixup;
+    if (numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
+        registerCheckArity.link(this);
+        registerArityNeedsFixup.append(branch32(Below, argCountReg, TrustedImm32(m_codeBlock->numParameters())));
+
+        // We have extra register arguments.
+
+        // The fast entry point into a function does not check that the correct number of arguments
+        // has been passed to the call (we only use the fast entry point where we can statically
+        // determine that the correct number of arguments has been passed, or have already checked).
+        // In cases where an arity check is necessary, we enter here.
+        m_registerArgsWithPossibleExtraArgs = label();
+
+        incrementCounter(this, VM::RegArgsExtra);
+
+        // Spill extra args passed to function
+        for (unsigned argIndex = static_cast<unsigned>(m_codeBlock->numParameters()); argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) {
+            branch32(MacroAssembler::BelowOrEqual, argCountReg, MacroAssembler::TrustedImm32(argIndex)).linkTo(mainEntry, this);
+            emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex);
+        }
+        jump(mainEntry);
+    }
+
+    // Fall through
+    if (numParameters > 0) {
+        // There should always be a "this" parameter.
+        unsigned registerArgumentFixupCount = std::min(numParameters - 1, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS);
+        Label registerArgumentsNeedArityFixup = label();
+
+        for (unsigned argIndex = 1; argIndex <= registerArgumentFixupCount; argIndex++)
+            registerArgumentsEntrypoints[argIndex] = registerArgumentsNeedArityFixup;
+    }
+
+    incrementCounter(this, VM::RegArgsArity);
+
+    registerArityNeedsFixup.link(this);
+
+    if (numParameters >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
+        registerCheckArity.link(this);
+
+    spillArgumentRegistersToFrameBeforePrologue();
+
+#if ENABLE(VM_COUNTERS)
+    Jump continueToStackArityFixup = jump();
+#endif
+#endif // NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+
+    m_stackArgsWithArityCheck = label();
+    incrementCounter(this, VM::StackArgsArity);
+
+#if ENABLE(VM_COUNTERS)
+    continueToStackArityFixup.link(this);
+#endif
+
+    emitFunctionPrologue();
 
     load32(AssemblyHelpers::payloadFor((VirtualRegister)CallFrameSlot::argumentCount), GPRInfo::regT1);
-    branch32(AboveOrEqual, GPRInfo::regT1, TrustedImm32(m_codeBlock->numParameters())).linkTo(fromArityCheck, this);
+    arityOK.append(branch32(AboveOrEqual, GPRInfo::regT1, TrustedImm32(m_codeBlock->numParameters())));
+
+    incrementCounter(this, VM::ArityFixupRequired);
+
     emitStoreCodeOrigin(CodeOrigin(0));
     if (maxFrameExtentForSlowPathCall)
         addPtr(TrustedImm32(-maxFrameExtentForSlowPathCall), stackPointerRegister);
     m_speculative->callOperationWithCallFrameRollbackOnException(m_codeBlock->m_isConstructor ? operationConstructArityCheck : operationCallArityCheck, GPRInfo::regT0);
     if (maxFrameExtentForSlowPathCall)
         addPtr(TrustedImm32(maxFrameExtentForSlowPathCall), stackPointerRegister);
-    branchTest32(Zero, GPRInfo::returnValueGPR).linkTo(fromArityCheck, this);
+    arityOK.append(branchTest32(Zero, GPRInfo::returnValueGPR));
+
     emitStoreCodeOrigin(CodeOrigin(0));
     move(GPRInfo::returnValueGPR, GPRInfo::argumentGPR0);
     m_callArityFixup = call();
+
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    Jump toFillRegisters = jump();
+
+    m_stackArgsArityOKEntry = label();
+
+    incrementCounter(this, VM::StackArgsNoArity);
+    emitFunctionPrologue();
+
+    arityOK.link(this);
+    toFillRegisters.link(this);
+
+    // Load argument values into argument registers
+    for (unsigned argIndex = 0; argIndex < static_cast<unsigned>(m_codeBlock->numParameters()) && argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++)
+        load64(Address(GPRInfo::callFrameRegister, (CallFrameSlot::thisArgument + argIndex) * static_cast<int>(sizeof(Register))), argumentRegisterForFunctionArgument(argIndex));
+
+    jump(fromStackEntry);
+#else
+    arityOK.linkTo(fromArityCheck, this);
     jump(fromArityCheck);
+#endif
     
     // Generate slow path code.
     m_speculative->runSlowPathGenerators(m_pcToCodeOriginMapBuilder);
@@ -502,10 +652,35 @@ void JITCompiler::compileFunction()
     
     disassemble(*linkBuffer);
 
-    MacroAssemblerCodePtr withArityCheck = linkBuffer->locationOf(m_arityCheck);
+    JITEntryPoints entrypoints;
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+#if ENABLE(VM_COUNTERS)
+    MacroAssemblerCodePtr mainEntryCodePtr = linkBuffer->locationOf(registerEntryNoArity);
+#else
+    MacroAssemblerCodePtr mainEntryCodePtr = linkBuffer->locationOf(mainEntry);
+#endif
+    entrypoints.setEntryFor(RegisterArgsArityCheckNotRequired, mainEntryCodePtr);
+    entrypoints.setEntryFor(RegisterArgsPossibleExtraArgs, linkBuffer->locationOf(m_registerArgsWithPossibleExtraArgs));
+    entrypoints.setEntryFor(RegisterArgsMustCheckArity, linkBuffer->locationOf(m_registerArgsWithArityCheck));
+
+    for (unsigned argCount = 1; argCount <= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argCount++) {
+        MacroAssemblerCodePtr entry;
+        if (argCount == numParameters)
+            entry = mainEntryCodePtr;
+        else if (registerArgumentsEntrypoints[argCount].isSet())
+            entry = linkBuffer->locationOf(registerArgumentsEntrypoints[argCount]);
+        else
+            entry = linkBuffer->locationOf(m_registerArgsWithArityCheck);
+        entrypoints.setEntryFor(JITEntryPoints::registerEntryTypeForArgumentCount(argCount), entry);
+    }
+    entrypoints.setEntryFor(StackArgsArityCheckNotRequired, linkBuffer->locationOf(m_stackArgsArityOKEntry));
+#else
+    entrypoints.setEntryFor(StackArgsArityCheckNotRequired, linkBuffer->locationOf(mainEntry));
+#endif
+    entrypoints.setEntryFor(StackArgsMustCheckArity, linkBuffer->locationOf(m_stackArgsWithArityCheck));
 
     m_graph.m_plan.finalizer = std::make_unique<JITFinalizer>(
-        m_graph.m_plan, WTFMove(m_jitCode), WTFMove(linkBuffer), withArityCheck);
+        m_graph.m_plan, WTFMove(m_jitCode), WTFMove(linkBuffer), entrypoints);
 }
 
 void JITCompiler::disassemble(LinkBuffer& linkBuffer)
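Taken together, compileFunction() now emits one code body with a family of labeled entries in place of the old single arity-check entry. Schematically (a summary of the code above, not literal disassembly):

    registerArgumentsEntrypoints[n]       spill register args beyond n, fall through
    m_registerArgsWithArityCheck          args in registers; count must be checked
    m_registerArgsWithPossibleExtraArgs   extra register args spilled to the frame
    mainEntry                             args in registers; arity statically OK
    m_stackArgsWithArityCheck             stack entry; may call the arity fixup thunk
    m_stackArgsArityOKEntry               stack entry, arity known good; loads the
                                          argument registers and joins fromStackEntry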
index 4f8e817..f02ca14 100644
@@ -217,6 +217,11 @@ public:
         m_jsDirectCalls.append(JSDirectCallRecord(call, slowPath, info));
     }
     
+    void addJSDirectCall(Call call, Call slowCall, Label slowPath, CallLinkInfo* info)
+    {
+        m_jsDirectCalls.append(JSDirectCallRecord(call, slowCall, slowPath, info));
+    }
+    
     void addJSDirectTailCall(PatchableJump patchableJump, Call call, Label slowPath, CallLinkInfo* info)
     {
         m_jsDirectTailCalls.append(JSDirectTailCallRecord(patchableJump, call, slowPath, info));
@@ -267,7 +272,6 @@ private:
     friend class OSRExitJumpPlaceholder;
     
     // Internal implementation to compile.
-    void compileEntry();
     void compileSetupRegistersForEntry();
     void compileEntryExecutionFlag();
     void compileBody();
@@ -318,7 +322,18 @@ private:
         {
         }
         
+        JSDirectCallRecord(Call call, Call slowCall, Label slowPath, CallLinkInfo* info)
+            : call(call)
+            , slowCall(slowCall)
+            , slowPath(slowPath)
+            , info(info)
+        {
+        }
+
+        bool hasSlowCall() { return slowCall.m_label.isSet(); }
+
         Call call;
+        Call slowCall;
         Label slowPath;
         CallLinkInfo* info;
     };
@@ -355,7 +370,12 @@ private:
     Vector<ExceptionHandlingOSRExitInfo> m_exceptionHandlerOSRExitCallSites;
     
     Call m_callArityFixup;
-    Label m_arityCheck;
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    Label m_registerArgsWithPossibleExtraArgs;
+    Label m_registerArgsWithArityCheck;
+    Label m_stackArgsArityOKEntry;
+#endif
+    Label m_stackArgsWithArityCheck;
     std::unique_ptr<SpeculativeJIT> m_speculative;
     PCToCodeOriginMapBuilder m_pcToCodeOriginMapBuilder;
 };
index 5391158..dab4332 100644
 
 namespace JSC { namespace DFG {
 
-JITFinalizer::JITFinalizer(Plan& plan, PassRefPtr<JITCode> jitCode, std::unique_ptr<LinkBuffer> linkBuffer, MacroAssemblerCodePtr withArityCheck)
+JITFinalizer::JITFinalizer(Plan& plan, PassRefPtr<JITCode> jitCode,
+    std::unique_ptr<LinkBuffer> linkBuffer, JITEntryPoints& entrypoints)
     : Finalizer(plan)
     , m_jitCode(jitCode)
     , m_linkBuffer(WTFMove(linkBuffer))
-    , m_withArityCheck(withArityCheck)
+    , m_entrypoints(entrypoints)
 {
 }
 
@@ -56,9 +57,8 @@ size_t JITFinalizer::codeSize()
 
 bool JITFinalizer::finalize()
 {
-    m_jitCode->initializeCodeRef(
-        FINALIZE_DFG_CODE(*m_linkBuffer, ("DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data())),
-        MacroAssemblerCodePtr());
+    MacroAssemblerCodeRef codeRef = FINALIZE_DFG_CODE(*m_linkBuffer, ("DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data()));
+    m_jitCode->initializeEntryPoints(JITEntryPointsWithRef(codeRef, m_entrypoints));
     
     m_plan.codeBlock->setJITCode(m_jitCode);
     
@@ -69,10 +69,11 @@ bool JITFinalizer::finalize()
 
 bool JITFinalizer::finalizeFunction()
 {
-    RELEASE_ASSERT(!m_withArityCheck.isEmptyValue());
-    m_jitCode->initializeCodeRef(
-        FINALIZE_DFG_CODE(*m_linkBuffer, ("DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data())),
-        m_withArityCheck);
+    RELEASE_ASSERT(!m_entrypoints.entryFor(StackArgsMustCheckArity).isEmptyValue());
+    MacroAssemblerCodeRef codeRef = FINALIZE_DFG_CODE(*m_linkBuffer, ("DFG JIT code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::DFGJIT)).data()));
+
+    m_jitCode->initializeEntryPoints(JITEntryPointsWithRef(codeRef, m_entrypoints));
+
     m_plan.codeBlock->setJITCode(m_jitCode);
     
     finalizeCommon();
index 1b21f8c..8e46a0c 100644
@@ -36,7 +36,7 @@ namespace JSC { namespace DFG {
 
 class JITFinalizer : public Finalizer {
 public:
-    JITFinalizer(Plan&, PassRefPtr<JITCode>, std::unique_ptr<LinkBuffer>, MacroAssemblerCodePtr withArityCheck = MacroAssemblerCodePtr(MacroAssemblerCodePtr::EmptyValue));
+    JITFinalizer(Plan&, PassRefPtr<JITCode>, std::unique_ptr<LinkBuffer>, JITEntryPoints&);
     virtual ~JITFinalizer();
     
     size_t codeSize() override;
@@ -48,7 +48,7 @@ private:
     
     RefPtr<JITCode> m_jitCode;
     std::unique_ptr<LinkBuffer> m_linkBuffer;
-    MacroAssemblerCodePtr m_withArityCheck;
+    JITEntryPoints m_entrypoints;
 };
 
 } } // namespace JSC::DFG
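With the finalizer recording a full JITEntryPoints table, a caller picks the entry that matches both its argument location and how much arity checking it needs. A minimal sketch using the accessors from this patch; entrypoints and argumentsInRegisters are assumed locals:

    // Sketch: choose the entry point for an about-to-be-linked call.
    MacroAssemblerCodePtr entry = argumentsInRegisters
        ? entrypoints.entryFor(RegisterArgsMustCheckArity)
        : entrypoints.entryFor(StackArgsMustCheckArity);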
index db36999..966771f 100644
@@ -101,7 +101,7 @@ public:
         {
             for (unsigned i = 0; i < block->size(); i++) {
                 Node* node = block->at(i);
-                bool isPrimordialSetArgument = node->op() == SetArgument && node->local().isArgument() && node == m_graph.m_arguments[node->local().toArgument()];
+                bool isPrimordialSetArgument = node->op() == SetArgument && node->local().isArgument() && node == m_graph.m_argumentsOnStack[node->local().toArgument()];
                 InlineCallFrame* inlineCallFrame = node->origin.semantic.inlineCallFrame;
                 if (inlineCallFrame)
                     seenInlineCallFrames.add(inlineCallFrame);
index 3e04b0c..5b7f99f 100644
@@ -67,8 +67,8 @@ public:
         {
             for (unsigned i = 0; i < block->size(); i++) {
                 Node* node = block->at(i);
-                bool isPrimordialSetArgument = node->op() == SetArgument && node->local().isArgument() && node == m_graph.m_arguments[node->local().toArgument()];
-                if (node->op() == SetLocal || (node->op() == SetArgument && !isPrimordialSetArgument)) {
+                if ((node->op() == SetArgument || node->op() == SetLocal)
+                    && (!node->local().isArgument() || node != m_graph.m_argumentsOnStack[node->local().toArgument()])) {
                     VirtualRegister operand = node->local();
                     VariableAccessData* flushAccessData = currentBlockAccessData.operand(operand);
                     if (!flushAccessData)
@@ -117,7 +117,6 @@ public:
             if (initialAccessData.operand(operand))
                 continue;
 
-            DFG_ASSERT(m_graph, node, node->op() != SetLocal); // We should have inserted a Flush before this!
             initialAccessData.operand(operand) = node->variableAccessData();
             initialAccessNodes.operand(operand) = node;
         }
index 4627905..5ac36ac 100644
@@ -72,6 +72,7 @@ ExitMode mayExitImpl(Graph& graph, Node* node, StateType& state)
     case GetStack:
     case GetCallee:
     case GetArgumentCountIncludingThis:
+    case GetArgumentRegister:
     case GetRestLength:
     case GetScope:
     case PhantomLocal:
index 80795c2..54b6536 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012-2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -41,6 +41,8 @@ MinifiedNode MinifiedNode::fromNode(Node* node)
     result.m_op = node->op();
     if (hasConstant(node->op()))
         result.m_info = JSValue::encode(node->asJSValue());
+    else if (node->op() == GetArgumentRegister)
+        result.m_info = jsFunctionArgumentForArgumentRegisterIndex(node->argumentRegisterIndex());
     else {
         ASSERT(node->op() == PhantomDirectArguments || node->op() == PhantomClonedArguments);
         result.m_info = bitwise_cast<uintptr_t>(node->origin.semantic.inlineCallFrame);
index 29f6da3..c168d1c 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2014, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2014-2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -43,6 +43,7 @@ inline bool belongsInMinifiedGraph(NodeType type)
     case DoubleConstant:
     case PhantomDirectArguments:
     case PhantomClonedArguments:
+    case GetArgumentRegister:
         return true;
     default:
         return false;
@@ -71,6 +72,10 @@ public:
     {
         return bitwise_cast<InlineCallFrame*>(static_cast<uintptr_t>(m_info));
     }
+
+    bool hasArgumentIndex() const { return hasArgumentIndex(m_op); }
+
+    unsigned argumentIndex() const { return m_info; }
     
     static MinifiedID getID(MinifiedNode* node) { return node->id(); }
     static bool compareByNodeIndex(const MinifiedNode& a, const MinifiedNode& b)
@@ -88,6 +93,11 @@ private:
     {
         return type == PhantomDirectArguments || type == PhantomClonedArguments;
     }
+
+    static bool hasArgumentIndex(NodeType type)
+    {
+        return type == GetArgumentRegister;
+    }
     
     MinifiedID m_id;
     uint64_t m_info;
index 6a94b59..e5065c9 100644
@@ -71,6 +71,7 @@ bool Node::hasVariableAccessData(Graph& graph)
     case GetLocal:
     case SetLocal:
     case SetArgument:
+    case GetArgumentRegister:
     case Flush:
     case PhantomLocal:
         return true;
index b28157f..c880ada 100644
@@ -828,6 +828,9 @@ public:
     bool hasVariableAccessData(Graph&);
     bool accessesStack(Graph& graph)
     {
+        if (op() == GetArgumentRegister)
+            return false;
+
         return hasVariableAccessData(graph);
     }
     
@@ -846,6 +849,11 @@ public:
         return m_opInfo.as<VariableAccessData*>()->find();
     }
     
+    void setVariableAccessData(VariableAccessData* variable)
+    {
+        m_opInfo = variable;
+    }
+    
     VirtualRegister local()
     {
         return variableAccessData()->local();
@@ -1214,6 +1222,17 @@ public:
         return speculationFromJSType(queriedType());
     }
     
+    bool hasArgumentRegisterIndex()
+    {
+        return op() == GetArgumentRegister;
+    }
+    
+    unsigned argumentRegisterIndex()
+    {
+        ASSERT(hasArgumentRegisterIndex());
+        return m_opInfo2.as<unsigned>();
+    }
+    
     bool hasResult()
     {
         return !!result();
index d45e4df..87eecb3 100644
@@ -53,6 +53,7 @@ namespace JSC { namespace DFG {
     macro(CreateThis, NodeResultJS) /* Note this is not MustGenerate since we're returning it anyway. */ \
     macro(GetCallee, NodeResultJS) \
     macro(GetArgumentCountIncludingThis, NodeResultInt32) \
+    macro(GetArgumentRegister, NodeResultJS /* | NodeMustGenerate */) \
     \
     /* Nodes for local variable access. These nodes are linked together using Phi nodes. */\
     /* Any two nodes that are part of the same Phi graph will share the same */\
index 0359846..1a23c37 100644
@@ -144,6 +144,11 @@ void LocalOSRAvailabilityCalculator::executeNode(Node* node)
         break;
     }
 
+    case GetArgumentRegister: {
+        m_availability.m_locals.operand(node->local()).setNode(node);
+        break;
+    }
+
     case MovHint: {
         m_availability.m_locals.operand(node->unlinkedLocal()).setNode(node->child1().node());
         break;
index 5d22edd..1be2799 100644
@@ -112,16 +112,32 @@ public:
         // type checks to here.
         origin = target->at(0)->origin;
         
-        for (int argument = 0; argument < baseline->numParameters(); ++argument) {
+        for (unsigned argument = 0; argument < static_cast<unsigned>(baseline->numParameters()); ++argument) {
             Node* oldNode = target->variablesAtHead.argument(argument);
             if (!oldNode) {
-                // Just for sanity, always have a SetArgument even if it's not needed.
-                oldNode = m_graph.m_arguments[argument];
+                // Just for sanity, always have an argument node even if it's not needed.
+                oldNode = m_graph.m_argumentsForChecking[argument];
             }
-            Node* node = newRoot->appendNode(
-                m_graph, SpecNone, SetArgument, origin,
-                OpInfo(oldNode->variableAccessData()));
-            m_graph.m_arguments[argument] = node;
+            Node* node;
+            Node* stackNode;
+            if (argument < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
+                node = newRoot->appendNode(
+                    m_graph, SpecNone, GetArgumentRegister, origin,
+                    OpInfo(oldNode->variableAccessData()),
+                    OpInfo(argumentRegisterIndexForJSFunctionArgument(argument)));
+                stackNode = newRoot->appendNode(
+                    m_graph, SpecNone, SetLocal, origin,
+                    OpInfo(oldNode->variableAccessData()),
+                    Edge(node));
+            } else {
+                node = newRoot->appendNode(
+                    m_graph, SpecNone, SetArgument, origin,
+                    OpInfo(oldNode->variableAccessData()));
+                stackNode = node;
+            }
+
+            m_graph.m_argumentsForChecking[argument] = node;
+            m_graph.m_argumentsOnStack[argument] = stackNode;
         }
 
         for (int local = 0; local < baseline->m_numCalleeLocals; ++local) {
index c0c9479..b0d1eae 100644
@@ -314,7 +314,9 @@ Plan::CompilationPath Plan::compileInThreadImpl(LongLivedState& longLivedState)
     performCFA(dfg);
     performConstantFolding(dfg);
     bool changed = false;
+    dfg.m_strengthReduceArguments = OptimizeArgumentFlushes;
     changed |= performCFGSimplification(dfg);
+    changed |= performStrengthReduction(dfg);
     changed |= performLocalCSE(dfg);
     
     if (validationEnabled())
index baf0ad9..b2c79ba 100644
@@ -197,7 +197,7 @@ private:
 
             
         default: {
-            // All of the outermost arguments, except this, are definitely read.
+            // All of the outermost stack arguments, except this, are definitely read.
             for (unsigned i = m_graph.m_codeBlock->numParameters(); i-- > 1;)
                 m_read(virtualRegisterForArgument(i));
         
index 27fb903..2b0a70f 100644
@@ -56,7 +56,7 @@ public:
                 if (!profile)
                     continue;
             
-                m_graph.m_arguments[arg]->variableAccessData()->predict(
+                m_graph.m_argumentsForChecking[arg]->variableAccessData()->predict(
                     profile->computeUpdatedPrediction(locker));
             }
         }
@@ -74,7 +74,7 @@ public:
                 Node* node = block->variablesAtHead.operand(operand);
                 if (!node)
                     continue;
-                ASSERT(node->accessesStack(m_graph));
+                ASSERT(node->accessesStack(m_graph) || node->op() == GetArgumentRegister);
                 node->variableAccessData()->predict(
                     speculationFromValue(m_graph.m_plan.mustHandleValues[i]));
             }
index 356ef3e..3fff8c0 100644
@@ -168,6 +168,16 @@ private:
             break;
         }
 
+        case GetArgumentRegister: {
+            VariableAccessData* variable = node->variableAccessData();
+            SpeculatedType prediction = variable->prediction();
+            if (!variable->couldRepresentInt52() && (prediction & SpecInt52Only))
+                prediction = (prediction | SpecAnyIntAsDouble) & ~SpecInt52Only;
+            if (prediction)
+                changed |= mergePrediction(prediction);
+            break;
+        }
+            
         case UInt32ToNumber: {
             if (node->canSpeculateInt32(m_pass))
                 changed |= mergePrediction(SpecInt32Only);
@@ -968,6 +978,7 @@ private:
 
         case GetLocal:
         case SetLocal:
+        case GetArgumentRegister:
         case UInt32ToNumber:
         case ValueAdd:
         case ArithAdd:
index 78162c7..4e28d9d 100644
@@ -147,7 +147,7 @@ public:
             
         } while (changed);
         
-        // All of the arguments should be live at head of root. Note that we may find that some
+        // All of the stack arguments should be live at head of root. Note that we may find that some
         // locals are live at head of root. This seems wrong but isn't. This will happen for example
         // if the function accesses closure variable #42 for some other function and we either don't
         // have variable #42 at all or we haven't set it at root, for whatever reason. Basically this
@@ -157,8 +157,12 @@ public:
         //
         // For our purposes here, the imprecision in the aliasing is harmless. It just means that we
         // may not do as much Phi pruning as we wanted.
-        for (size_t i = liveAtHead.atIndex(0).numberOfArguments(); i--;)
-            DFG_ASSERT(m_graph, nullptr, liveAtHead.atIndex(0).argument(i));
+        for (size_t i = liveAtHead.atIndex(0).numberOfArguments(); i--;) {
+            if (i >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
+                // Stack arguments are live at the head of root.
+                DFG_ASSERT(m_graph, nullptr, liveAtHead.atIndex(0).argument(i));
+            }
+        }
         
         // Next identify where we would want to sink PutStacks to. We say that there is a deferred
         // flush if we had a PutStack with a given FlushFormat but it hasn't been materialized yet.
@@ -358,7 +362,8 @@ public:
             for (Node* node : *block) {
                 switch (node->op()) {
                 case PutStack:
-                    putStacksToSink.add(node);
+                    if (!m_graph.m_argumentsOnStack.contains(node))
+                        putStacksToSink.add(node);
                     ssaCalculator.newDef(
                         operandToVariable.operand(node->stackAccessData()->local),
                         block, node->child1().node());
@@ -483,11 +488,15 @@ public:
                             return;
                         }
                     
+                        Node* incoming = mapping.operand(operand);
+                        // Since we don't delete argument PutStacks, no need to add one back.
+                        if (m_graph.m_argumentsOnStack.contains(incoming))
+                            return;
+
                         // Gotta insert a PutStack.
                         if (verbose)
                             dataLog("Inserting a PutStack for ", operand, " at ", node, "\n");
 
-                        Node* incoming = mapping.operand(operand);
                         DFG_ASSERT(m_graph, node, incoming);
                     
                         insertionSet.insertNode(
@@ -538,6 +547,8 @@ public:
                     Node* incoming;
                     if (isConcrete(deferred.operand(operand))) {
                         incoming = mapping.operand(operand);
+                        if (m_graph.m_argumentsOnStack.contains(incoming))
+                            continue;
                         DFG_ASSERT(m_graph, phiNode, incoming);
                     } else {
                         // Issue a GetStack to get the value. This might introduce some redundancy
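
The three m_argumentsOnStack checks above enforce one rule: PutStacks that spill register-passed arguments at entry are pinned, so the sinking phase must neither sink them nor re-insert replacements for them at Phis. A minimal standalone sketch of filtering a candidate set against a pinned set (illustrative types, not the DFG's):

    #include <unordered_set>
    #include <vector>

    struct StoreNode { unsigned id; };

    // Keep only the stores the phase may move; argument spills stay put.
    std::vector<StoreNode*> sinkCandidates(
        std::vector<StoreNode>& stores,
        const std::unordered_set<unsigned>& pinnedArgumentSpills)
    {
        std::vector<StoreNode*> candidates;
        for (StoreNode& store : stores) {
            if (pinnedArgumentSpills.count(store.id))
                continue; // Pinned argument spill: never sink or duplicate it.
            candidates.push_back(&store);
        }
        return candidates;
    }
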
index a3323e6..5675cce 100644 (file)
@@ -236,6 +236,11 @@ public:
             return m_bank->isLockedAtIndex(m_index);
         }
 
+        void unlock() const
+        {
+            m_bank->unlockAtIndex(m_index);
+        }
+        
         void release() const
         {
             m_bank->releaseAtIndex(m_index);
@@ -298,6 +303,13 @@ private:
         return m_data[index].lockCount;
     }
 
+    void unlockAtIndex(unsigned index)
+    {
+        ASSERT(index < NUM_REGS);
+        ASSERT(m_data[index].lockCount);
+        --m_data[index].lockCount;
+    }
+
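
unlock()/unlockAtIndex() give callers a way to undo a retain-style lock without releasing the register's name binding, which the entry code relies on for argument registers. A small self-contained model of the lock-count discipline (not the actual RegisterBank template):

    #include <cassert>

    class LockCountedRegister {
    public:
        void lock() { ++m_lockCount; }
        void unlock()
        {
            assert(m_lockCount); // Every unlock must pair with a prior lock.
            --m_lockCount;       // The binding survives; the register may now spill.
        }
        bool isLocked() const { return m_lockCount; }

    private:
        unsigned m_lockCount { 0 };
    };
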
     VirtualRegister nameAtIndex(unsigned index) const
     {
         ASSERT(index < NUM_REGS);
index cde0740..bab9593 100644 (file)
@@ -73,7 +73,8 @@ public:
         }
         
         // Find all SetLocals and create Defs for them. We handle SetArgument by creating a
-        // GetLocal, and recording the flush format.
+        // GetStack, and recording the flush format. We handle GetArgumentRegister by directly
+        // adding the node to the m_argumentMapping hash map.
         for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) {
             BasicBlock* block = m_graph.block(blockIndex);
             if (!block)
@@ -83,14 +84,16 @@ public:
             // assignment for every local.
             for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
                 Node* node = block->at(nodeIndex);
-                if (node->op() != SetLocal && node->op() != SetArgument)
+                if (node->op() != SetLocal && node->op() != SetArgument && node->op() != GetArgumentRegister)
                     continue;
                 
                 VariableAccessData* variable = node->variableAccessData();
                 
-                Node* childNode;
+                Node* childNode = nullptr;
                 if (node->op() == SetLocal)
                     childNode = node->child1().node();
+                else if (node->op() == GetArgumentRegister)
+                    m_argumentMapping.add(node, node);
                 else {
                     ASSERT(node->op() == SetArgument);
                     childNode = m_insertionSet.insertNode(
@@ -101,9 +104,11 @@ public:
                         m_argumentGetters.add(childNode);
                     m_argumentMapping.add(node, childNode);
                 }
-                
-                m_calculator.newDef(
-                    m_ssaVariableForVariable.get(variable), block, childNode);
+
+                if (childNode) {
+                    m_calculator.newDef(
+                        m_ssaVariableForVariable.get(variable), block, childNode);
+                }
             }
             
             m_insertionSet.execute(block);
@@ -294,7 +299,13 @@ public:
                     valueForOperand.operand(variable->local()) = child;
                     break;
                 }
-                    
+
+                case GetArgumentRegister: {
+                    VariableAccessData* variable = node->variableAccessData();
+                    valueForOperand.operand(variable->local()) = node;
+                    break;
+                }
+
                 case GetStack: {
                     ASSERT(m_argumentGetters.contains(node));
                     valueForOperand.operand(node->stackAccessData()->local) = node;
@@ -382,17 +393,21 @@ public:
             block->ssa = std::make_unique<BasicBlock::SSAData>(block);
         }
         
-        m_graph.m_argumentFormats.resize(m_graph.m_arguments.size());
-        for (unsigned i = m_graph.m_arguments.size(); i--;) {
+        m_graph.m_argumentFormats.resize(m_graph.m_argumentsForChecking.size());
+        for (unsigned i = m_graph.m_argumentsForChecking.size(); i--;) {
             FlushFormat format = FlushedJSValue;
 
-            Node* node = m_argumentMapping.get(m_graph.m_arguments[i]);
+            Node* node = m_argumentMapping.get(m_graph.m_argumentsForChecking[i]);
             
             RELEASE_ASSERT(node);
-            format = node->stackAccessData()->format;
+            if (node->op() == GetArgumentRegister) {
+                VariableAccessData* variable = node->variableAccessData();
+                format = variable->flushFormat();
+            } else
+                format = node->stackAccessData()->format;
             
             m_graph.m_argumentFormats[i] = format;
-            m_graph.m_arguments[i] = node; // Record the load that loads the arguments for the benefit of exit profiling.
+            m_graph.m_argumentsForChecking[i] = node; // Record the load that loads the arguments for the benefit of exit profiling.
         }
         
         m_graph.m_form = SSA;
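
For m_argumentFormats the rule above is: a register-arriving argument reports the flush format of its variable, while a stack-arriving argument reports the format of its stack access. A hedged sketch of that selection with stand-in types:

    enum class FlushFormat { JSValue, Int32, Cell, Boolean };

    struct CheckedArgument {
        bool arrivesInRegister;     // True for a GetArgumentRegister-style node.
        FlushFormat variableFormat; // From the variable access data.
        FlushFormat stackFormat;    // From the stack access data.
    };

    // Mirrors the branch above: register arguments use the variable's flush
    // format, stack arguments use the stack access data's format.
    FlushFormat argumentFormat(const CheckedArgument& argument)
    {
        return argument.arrivesInRegister ? argument.variableFormat : argument.stackFormat;
    }
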
index f3f7892..3266198 100644 (file)
@@ -147,6 +147,7 @@ bool safeToExecute(AbstractStateType& state, Graph& graph, Node* node)
     case CreateThis:
     case GetCallee:
     case GetArgumentCountIncludingThis:
+    case GetArgumentRegister:
     case GetRestLength:
     case GetLocal:
     case SetLocal:
index 9cbdd86..0e92c2a 100644 (file)
@@ -74,6 +74,7 @@ SpeculativeJIT::SpeculativeJIT(JITCompiler& jit)
     , m_lastGeneratedNode(LastNodeType)
     , m_indexInBlock(0)
     , m_generationInfo(m_jit.graph().frameRegisterCount())
+    , m_argumentGenerationInfo(CallFrameSlot::callee + GPRInfo::numberOfArgumentRegisters)
     , m_state(m_jit.graph())
     , m_interpreter(m_jit.graph(), m_state)
     , m_stream(&jit.jitCode()->variableEventStream)
@@ -407,6 +408,8 @@ void SpeculativeJIT::clearGenerationInfo()
 {
     for (unsigned i = 0; i < m_generationInfo.size(); ++i)
         m_generationInfo[i] = GenerationInfo();
+    for (unsigned i = 0; i < m_argumentGenerationInfo.size(); ++i)
+        m_argumentGenerationInfo[i] = GenerationInfo();
     m_gprs = RegisterBank<GPRInfo>();
     m_fprs = RegisterBank<FPRInfo>();
 }
@@ -1199,6 +1202,25 @@ static const char* dataFormatString(DataFormat format)
     return strings[format];
 }
 
+static void dumpRegisterInfo(GenerationInfo& info, unsigned index)
+{
+    if (info.alive())
+        dataLogF("    % 3d:%s%s", index, dataFormatString(info.registerFormat()), dataFormatString(info.spillFormat()));
+    else
+        dataLogF("    % 3d:[__][__]", index);
+    if (info.registerFormat() == DataFormatDouble)
+        dataLogF(":fpr%d\n", info.fpr());
+    else if (info.registerFormat() != DataFormatNone
+#if USE(JSVALUE32_64)
+        && !(info.registerFormat() & DataFormatJS)
+#endif
+        ) {
+        ASSERT(info.gpr() != InvalidGPRReg);
+        dataLogF(":%s\n", GPRInfo::debugName(info.gpr()));
+    } else
+        dataLogF("\n");
+}
+
 void SpeculativeJIT::dump(const char* label)
 {
     if (label)
@@ -1208,25 +1230,15 @@ void SpeculativeJIT::dump(const char* label)
     m_gprs.dump();
     dataLogF("  fprs:\n");
     m_fprs.dump();
-    dataLogF("  VirtualRegisters:\n");
-    for (unsigned i = 0; i < m_generationInfo.size(); ++i) {
-        GenerationInfo& info = m_generationInfo[i];
-        if (info.alive())
-            dataLogF("    % 3d:%s%s", i, dataFormatString(info.registerFormat()), dataFormatString(info.spillFormat()));
-        else
-            dataLogF("    % 3d:[__][__]", i);
-        if (info.registerFormat() == DataFormatDouble)
-            dataLogF(":fpr%d\n", info.fpr());
-        else if (info.registerFormat() != DataFormatNone
-#if USE(JSVALUE32_64)
-            && !(info.registerFormat() & DataFormatJS)
-#endif
-            ) {
-            ASSERT(info.gpr() != InvalidGPRReg);
-            dataLogF(":%s\n", GPRInfo::debugName(info.gpr()));
-        } else
-            dataLogF("\n");
-    }
+
+    dataLogF("  Argument VirtualRegisters:\n");
+    for (unsigned i = 0; i < m_argumentGenerationInfo.size(); ++i)
+        dumpRegisterInfo(m_argumentGenerationInfo[i], i);
+
+    dataLogF("  Local VirtualRegisters:\n");
+    for (unsigned i = 0; i < m_generationInfo.size(); ++i)
+        dumpRegisterInfo(m_generationInfo[i], i);
+
     if (label)
         dataLogF("</%s>\n", label);
 }
@@ -1677,6 +1689,9 @@ void SpeculativeJIT::compileCurrentBlock()
     
     m_jit.blockHeads()[m_block->index] = m_jit.label();
 
+    if (!m_block->index)
+        checkArgumentTypes();
+
     if (!m_block->intersectionOfCFAHasVisited) {
         // Don't generate code for basic blocks that are unreachable according to CFA.
         // But to be sure that nobody has generated a jump to this block, drop in a
@@ -1687,6 +1702,9 @@ void SpeculativeJIT::compileCurrentBlock()
 
     m_stream->appendAndLog(VariableEvent::reset());
     
+    if (!m_block->index)
+        setupArgumentRegistersForEntry();
+    
     m_jit.jitAssertHasValidCallFrame();
     m_jit.jitAssertTagsInPlace();
     m_jit.jitAssertArgumentCountSane();
@@ -1696,6 +1714,21 @@ void SpeculativeJIT::compileCurrentBlock()
     
     for (size_t i = m_block->variablesAtHead.size(); i--;) {
         int operand = m_block->variablesAtHead.operandForIndex(i);
+        if (!m_block->index && operandIsArgument(operand)) {
+            unsigned argument = m_block->variablesAtHead.argumentForIndex(i);
+            Node* argumentNode = m_jit.graph().m_argumentsForChecking[argument];
+            
+            if (argumentNode && argumentNode->op() == GetArgumentRegister) {
+                if (!argumentNode->refCount())
+                    continue; // No need to record dead GetArgumentRegister's.
+                m_stream->appendAndLog(
+                    VariableEvent::movHint(
+                        MinifiedID(argumentNode),
+                        argumentNode->local()));
+                continue;
+            }
+        }
+
         Node* node = m_block->variablesAtHead[i];
         if (!node)
             continue; // No need to record dead SetLocal's.
@@ -1782,13 +1815,15 @@ void SpeculativeJIT::checkArgumentTypes()
     m_origin = NodeOrigin(CodeOrigin(0), CodeOrigin(0), true);
 
     for (int i = 0; i < m_jit.codeBlock()->numParameters(); ++i) {
-        Node* node = m_jit.graph().m_arguments[i];
+        Node* node = m_jit.graph().m_argumentsForChecking[i];
         if (!node) {
             // The argument is dead. We don't do any checks for such arguments.
             continue;
         }
         
-        ASSERT(node->op() == SetArgument);
+        ASSERT(node->op() == SetArgument
+            || (node->op() == SetLocal && node->child1()->op() == GetArgumentRegister)
+            || node->op() == GetArgumentRegister);
         ASSERT(node->shouldGenerate());
 
         VariableAccessData* variableAccessData = node->variableAccessData();
@@ -1799,23 +1834,44 @@ void SpeculativeJIT::checkArgumentTypes()
         
         VirtualRegister virtualRegister = variableAccessData->local();
 
-        JSValueSource valueSource = JSValueSource(JITCompiler::addressFor(virtualRegister));
-        
+        JSValueSource valueSource;
+
+#if USE(JSVALUE64)
+        GPRReg argumentRegister = InvalidGPRReg;
+
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+        if (static_cast<unsigned>(i) < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
+            argumentRegister = argumentRegisterForFunctionArgument(i);
+            valueSource = JSValueSource(argumentRegister);
+        } else
+#endif
+#endif
+            valueSource = JSValueSource(JITCompiler::addressFor(virtualRegister));
+
 #if USE(JSVALUE64)
         switch (format) {
         case FlushedInt32: {
-            speculationCheck(BadType, valueSource, node, m_jit.branch64(MacroAssembler::Below, JITCompiler::addressFor(virtualRegister), GPRInfo::tagTypeNumberRegister));
+            if (argumentRegister != InvalidGPRReg)
+                speculationCheck(BadType, valueSource, node, m_jit.branch64(MacroAssembler::Below, argumentRegister, GPRInfo::tagTypeNumberRegister));
+            else
+                speculationCheck(BadType, valueSource, node, m_jit.branch64(MacroAssembler::Below, JITCompiler::addressFor(virtualRegister), GPRInfo::tagTypeNumberRegister));
             break;
         }
         case FlushedBoolean: {
             GPRTemporary temp(this);
-            m_jit.load64(JITCompiler::addressFor(virtualRegister), temp.gpr());
+            if (argumentRegister != InvalidGPRReg)
+                m_jit.move(argumentRegister, temp.gpr());
+            else
+                m_jit.load64(JITCompiler::addressFor(virtualRegister), temp.gpr());
             m_jit.xor64(TrustedImm32(static_cast<int32_t>(ValueFalse)), temp.gpr());
             speculationCheck(BadType, valueSource, node, m_jit.branchTest64(MacroAssembler::NonZero, temp.gpr(), TrustedImm32(static_cast<int32_t>(~1))));
             break;
         }
         case FlushedCell: {
-            speculationCheck(BadType, valueSource, node, m_jit.branchTest64(MacroAssembler::NonZero, JITCompiler::addressFor(virtualRegister), GPRInfo::tagMaskRegister));
+            if (argumentRegister != InvalidGPRReg)
+                speculationCheck(BadType, valueSource, node, m_jit.branchTest64(MacroAssembler::NonZero, argumentRegister, GPRInfo::tagMaskRegister));
+            else
+                speculationCheck(BadType, valueSource, node, m_jit.branchTest64(MacroAssembler::NonZero, JITCompiler::addressFor(virtualRegister), GPRInfo::tagMaskRegister));
             break;
         }
         default:
@@ -1846,10 +1902,38 @@ void SpeculativeJIT::checkArgumentTypes()
     m_origin = NodeOrigin();
 }
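
checkArgumentTypes() now checks a register-passed argument directly in its register and falls back to the call-frame slot otherwise. A standalone sketch of the FlushedInt32 case (the tag constant is an assumption about a JSVALUE64-style encoding, not a statement of the exact layout):

    #include <cstdint>

    // Assumed JSVALUE64-style encoding: boxed int32s sit at or above the tag.
    constexpr uint64_t kTagTypeNumber = 0xffff000000000000ull;

    // Mirrors the FlushedInt32 speculation: fail if the value is below the tag.
    static bool passesInt32Check(uint64_t boxedValue)
    {
        return boxedValue >= kTagTypeNumber;
    }

    // Check the register copy when the argument arrived in a register,
    // otherwise the spill slot in the call frame.
    static bool checkInt32Argument(const uint64_t* registerCopy, const uint64_t* frameSlot)
    {
        return passesInt32Check(registerCopy ? *registerCopy : *frameSlot);
    }
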
 
+void SpeculativeJIT::setupArgumentRegistersForEntry()
+{
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    BasicBlock* firstBlock = m_jit.graph().block(0);
+
+    // FIXME: https://bugs.webkit.org/show_bug.cgi?id=165720
+    // We should scan m_argumentsForChecking instead of looking for GetArgumentRegister
+    // nodes in the root block.
+    for (size_t indexInBlock = 0; indexInBlock < firstBlock->size(); ++indexInBlock) {
+        Node* node = firstBlock->at(indexInBlock);
+
+        if (node->op() == GetArgumentRegister) {
+            VirtualRegister virtualRegister = node->virtualRegister();
+            GenerationInfo& info = generationInfoFromVirtualRegister(virtualRegister);
+            GPRReg argumentReg = GPRInfo::toArgumentRegister(node->argumentRegisterIndex());
+            
+            ASSERT(argumentReg != InvalidGPRReg);
+            
+            ASSERT(!m_gprs.isLocked(argumentReg));
+            m_gprs.allocateSpecific(argumentReg);
+            m_gprs.retain(argumentReg, virtualRegister, SpillOrderJS);
+            info.initArgumentRegisterValue(node, node->refCount(), argumentReg, DataFormatJS);
+            info.noticeOSRBirth(*m_stream, node, virtualRegister);
+            // Don't leave argument registers locked.
+            m_gprs.unlock(argumentReg);
+        }
+    }
+#endif
+}
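
The claim-then-unlock sequence above matters: allocateSpecific()/retain() record that the argument register holds a live JSValue on entry, and the final unlock() keeps the register spillable, since a permanently locked register could never be freed for other uses. A compact model under assumed helper semantics:

    #include <array>
    #include <cstddef>

    struct RegisterSlot { bool allocated = false; unsigned lockCount = 0; };

    constexpr std::size_t kArgumentRegisters = 6; // Platform-dependent assumption.

    // Bind an incoming argument register without leaving it locked.
    void claimArgumentRegisterAtEntry(
        std::array<RegisterSlot, kArgumentRegisters>& bank, std::size_t index)
    {
        RegisterSlot& reg = bank[index];
        reg.allocated = true; // allocateSpecific(): nobody else may take it.
        ++reg.lockCount;      // retain(): pin while we record the binding.
        --reg.lockCount;      // unlock(): spillable again, binding intact.
    }
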
+
 bool SpeculativeJIT::compile()
 {
-    checkArgumentTypes();
-    
     ASSERT(!m_currentNode);
     for (BlockIndex blockIndex = 0; blockIndex < m_jit.graph().numBlocks(); ++blockIndex) {
         m_jit.setForBlockIndex(blockIndex);
index de006cc..f215786 100644 (file)
@@ -128,7 +128,7 @@ public:
     }
     
 #if USE(JSVALUE64)
-    GPRReg fillJSValue(Edge);
+    GPRReg fillJSValue(Edge, GPRReg gprToUse = InvalidGPRReg);
 #elif USE(JSVALUE32_64)
     bool fillJSValue(Edge, GPRReg&, GPRReg&, FPRReg&);
 #endif
@@ -200,6 +200,9 @@ public:
 #if ENABLE(DFG_REGISTER_ALLOCATION_VALIDATION)
         m_jit.addRegisterAllocationAtOffset(m_jit.debugOffset());
 #endif
+        if (specific == InvalidGPRReg)
+            return allocate();
+
         VirtualRegister spillMe = m_gprs.allocateSpecific(specific);
         if (spillMe.isValid()) {
 #if USE(JSVALUE32_64)
@@ -315,6 +318,8 @@ public:
 
     void checkArgumentTypes();
 
+    void setupArgumentRegistersForEntry();
+
     void clearGenerationInfo();
 
     // These methods are used when generating 'unexpected'
@@ -485,6 +490,9 @@ public:
     // Spill a VirtualRegister to the JSStack.
     void spill(VirtualRegister spillMe)
     {
+        if (spillMe.isArgument() && m_block->index > 0)
+            return;
+
         GenerationInfo& info = generationInfoFromVirtualRegister(spillMe);
 
 #if USE(JSVALUE32_64)
@@ -2873,7 +2881,10 @@ public:
 
     GenerationInfo& generationInfoFromVirtualRegister(VirtualRegister virtualRegister)
     {
-        return m_generationInfo[virtualRegister.toLocal()];
+        if (virtualRegister.isLocal())
+            return m_generationInfo[virtualRegister.toLocal()];
+        ASSERT(virtualRegister.isArgument());
+        return m_argumentGenerationInfo[virtualRegister.offset()];
     }
     
     GenerationInfo& generationInfo(Node* node)
@@ -2896,6 +2907,7 @@ public:
     unsigned m_indexInBlock;
     // Virtual and physical register maps.
     Vector<GenerationInfo, 32> m_generationInfo;
+    Vector<GenerationInfo, 8> m_argumentGenerationInfo;
     RegisterBank<GPRInfo> m_gprs;
     RegisterBank<FPRInfo> m_fprs;
 
@@ -2994,6 +3006,20 @@ public:
 #endif
     }
 
+#if USE(JSVALUE64)
+    explicit JSValueOperand(SpeculativeJIT* jit, Edge edge, GPRReg regToUse)
+        : m_jit(jit)
+        , m_edge(edge)
+        , m_gprOrInvalid(InvalidGPRReg)
+    {
+        ASSERT(m_jit);
+        if (!edge)
+            return;
+        if (jit->isFilled(node()) || regToUse != InvalidGPRReg)
+            gprUseSpecific(regToUse);
+    }
+#endif
+    
     ~JSValueOperand()
     {
         if (!m_edge)
@@ -3030,6 +3056,12 @@ public:
             m_gprOrInvalid = m_jit->fillJSValue(m_edge);
         return m_gprOrInvalid;
     }
+    GPRReg gprUseSpecific(GPRReg regToUse)
+    {
+        if (m_gprOrInvalid == InvalidGPRReg)
+            m_gprOrInvalid = m_jit->fillJSValue(m_edge, regToUse);
+        return m_gprOrInvalid;
+    }
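
gprUseSpecific() funnels into fillJSValue(Edge, GPRReg): if the value is unfilled, allocate the requested register; if it is already filled elsewhere (or its register is locked), allocate the requested one and copy. A self-contained model of that decision (stand-in allocator and move emitter, not the JIT's):

    #include <optional>

    using Reg = int;
    constexpr Reg kAnyReg = -1;

    struct FilledValue { std::optional<Reg> reg; };

    static Reg allocateReg(Reg wanted) { return wanted == kAnyReg ? 0 : wanted; }
    static void emitMove(Reg /*src*/, Reg /*dst*/) { /* would emit a machine move */ }

    // Fill 'value' into 'wanted' (or any register if kAnyReg was passed).
    static Reg fillInto(FilledValue& value, Reg wanted)
    {
        if (!value.reg) {
            value.reg = allocateReg(wanted);
            return *value.reg;
        }
        if (wanted == kAnyReg || *value.reg == wanted)
            return *value.reg;        // Already where the caller needs it.
        Reg result = allocateReg(wanted);
        emitMove(*value.reg, result); // Copy instead of disturbing the original.
        return result;
    }
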
     JSValueRegs jsValueRegs()
     {
         return JSValueRegs(gpr());
index 73fb368..3834767 100644 (file)
@@ -932,7 +932,7 @@ void SpeculativeJIT::emitCall(Node* node)
     CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(dynamicOrigin, m_stream->size());
     
     CallLinkInfo* info = m_jit.codeBlock()->addCallLinkInfo();
-    info->setUpCall(callType, node->origin.semantic, calleePayloadGPR);
+    info->setUpCall(callType, StackArgs, node->origin.semantic, calleePayloadGPR);
     
     auto setResultAndResetStack = [&] () {
         GPRFlushedCallResult resultPayload(this);
@@ -1081,7 +1081,7 @@ void SpeculativeJIT::emitCall(Node* node)
             m_jit.emitRestoreCalleeSaves();
     }
 
-    m_jit.move(MacroAssembler::TrustedImmPtr(info), GPRInfo::regT2);
+    m_jit.move(MacroAssembler::TrustedImmPtr(info), GPRInfo::nonArgGPR0);
     JITCompiler::Call slowCall = m_jit.nearCall();
 
     done.link(&m_jit);
@@ -5624,6 +5624,7 @@ void SpeculativeJIT::compile(Node* node)
     case KillStack:
     case GetStack:
     case GetMyArgumentByVal:
+    case GetArgumentRegister:
     case GetMyArgumentByValOutOfBounds:
     case PhantomCreateRest:
     case PhantomSpread:
index 975be29..bee7ede 100644 (file)
@@ -80,14 +80,14 @@ void SpeculativeJIT::boxInt52(GPRReg sourceGPR, GPRReg targetGPR, DataFormat for
     unlock(fpr);
 }
 
-GPRReg SpeculativeJIT::fillJSValue(Edge edge)
+GPRReg SpeculativeJIT::fillJSValue(Edge edge, GPRReg gprToUse)
 {
     VirtualRegister virtualRegister = edge->virtualRegister();
     GenerationInfo& info = generationInfoFromVirtualRegister(virtualRegister);
     
     switch (info.registerFormat()) {
     case DataFormatNone: {
-        GPRReg gpr = allocate();
+        GPRReg gpr = allocate(gprToUse);
 
         if (edge->hasConstant()) {
             JSValue jsValue = edge->asJSValue();
@@ -120,7 +120,12 @@ GPRReg SpeculativeJIT::fillJSValue(Edge edge)
         // If the register has already been locked we need to take a copy.
         // If not, we'll zero extend in place, so mark on the info that this is now type DataFormatInt32, not DataFormatJSInt32.
         if (m_gprs.isLocked(gpr)) {
-            GPRReg result = allocate();
+            GPRReg result = allocate(gprToUse);
+            m_jit.or64(GPRInfo::tagTypeNumberRegister, gpr, result);
+            return result;
+        }
+        if (gprToUse != InvalidGPRReg && gpr != gprToUse) {
+            GPRReg result = allocate(gprToUse);
             m_jit.or64(GPRInfo::tagTypeNumberRegister, gpr, result);
             return result;
         }
@@ -138,6 +143,11 @@ GPRReg SpeculativeJIT::fillJSValue(Edge edge)
     case DataFormatJSCell:
     case DataFormatJSBoolean: {
         GPRReg gpr = info.gpr();
+        if (gprToUse != InvalidGPRReg && gpr != gprToUse) {
+            GPRReg result = allocate(gprToUse);
+            m_jit.move(gpr, result);
+            return result;
+        }
         m_gprs.lock(gpr);
         return gpr;
     }
@@ -632,6 +642,7 @@ void SpeculativeJIT::compileMiscStrictEq(Node* node)
 void SpeculativeJIT::emitCall(Node* node)
 {
     CallLinkInfo::CallType callType;
+    ArgumentsLocation argumentsLocation = StackArgs;
     bool isVarargs = false;
     bool isForwardVarargs = false;
     bool isTail = false;
@@ -714,7 +725,11 @@ void SpeculativeJIT::emitCall(Node* node)
 
     GPRReg calleeGPR = InvalidGPRReg;
     CallFrameShuffleData shuffleData;
-    
+    std::optional<JSValueOperand> tailCallee;
+    std::optional<GPRTemporary> calleeGPRTemporary;
+
+    incrementCounter(&m_jit, VM::DFGCaller);
+
     ExecutableBase* executable = nullptr;
     FunctionExecutable* functionExecutable = nullptr;
     if (isDirect) {
@@ -733,6 +748,7 @@ void SpeculativeJIT::emitCall(Node* node)
         GPRReg resultGPR;
         unsigned numUsedStackSlots = m_jit.graph().m_nextMachineLocal;
         
+        incrementCounter(&m_jit, VM::CallVarargs);
         if (isForwardVarargs) {
             flushRegisters();
             if (node->child3())
@@ -841,15 +857,25 @@ void SpeculativeJIT::emitCall(Node* node)
         }
 
         if (isTail) {
+            incrementCounter(&m_jit, VM::TailCall);
             Edge calleeEdge = m_jit.graph().child(node, 0);
-            JSValueOperand callee(this, calleeEdge);
-            calleeGPR = callee.gpr();
+            // We can't ask for a specific register for the callee, since that would just
+            // move it out of whatever register currently holds it.  If we then silently
+            // fill in the slow path, we'd refill the original register and the callee
+            // would no longer be in the right register.  Therefore we allocate a
+            // temporary for the callee and do the move ourselves.
+            tailCallee.emplace(this, calleeEdge);
+            GPRReg tailCalleeGPR = tailCallee->gpr();
+            calleeGPR = argumentRegisterForCallee();
+            if (tailCalleeGPR != calleeGPR)
+                calleeGPRTemporary = GPRTemporary(this, calleeGPR);
             if (!isDirect)
-                callee.use();
+                tailCallee->use();
 
+            argumentsLocation = argumentsLocationFor(numAllocatedArgs);
+            shuffleData.argumentsInRegisters = argumentsLocation != StackArgs;
             shuffleData.tagTypeNumber = GPRInfo::tagTypeNumberRegister;
             shuffleData.numLocals = m_jit.graph().frameRegisterCount();
-            shuffleData.callee = ValueRecovery::inGPR(calleeGPR, DataFormatJS);
+            shuffleData.callee = ValueRecovery::inGPR(tailCalleeGPR, DataFormatJS);
             shuffleData.args.resize(numAllocatedArgs);
 
             for (unsigned i = 0; i < numPassedArgs; ++i) {
@@ -864,7 +890,8 @@ void SpeculativeJIT::emitCall(Node* node)
                 shuffleData.args[i] = ValueRecovery::constant(jsUndefined());
 
             shuffleData.setupCalleeSaveRegisters(m_jit.codeBlock());
-        } else {
+        } else if (node->op() == CallEval) {
+            // CallEval is handled with its arguments on the stack.
             m_jit.store32(MacroAssembler::TrustedImm32(numPassedArgs), JITCompiler::calleeFramePayloadSlot(CallFrameSlot::argumentCount));
 
             for (unsigned i = 0; i < numPassedArgs; i++) {
@@ -878,15 +905,57 @@ void SpeculativeJIT::emitCall(Node* node)
             
             for (unsigned i = numPassedArgs; i < numAllocatedArgs; ++i)
                 m_jit.storeTrustedValue(jsUndefined(), JITCompiler::calleeArgumentSlot(i));
+
+            incrementCounter(&m_jit, VM::CallEval);
+        } else {
+            for (unsigned i = numPassedArgs; i-- > 0;) {
+                GPRReg platformArgGPR = argumentRegisterForFunctionArgument(i);
+                Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
+                JSValueOperand arg(this, argEdge, platformArgGPR);
+                GPRReg argGPR = arg.gpr();
+                ASSERT(argGPR == platformArgGPR || platformArgGPR == InvalidGPRReg);
+
+                // Only release (use) the stack-passed arguments at this point; arguments
+                // passed in registers are released in the loop below.
+                if (platformArgGPR == InvalidGPRReg) {
+                    use(argEdge);
+                    m_jit.store64(argGPR, JITCompiler::calleeArgumentSlot(i));
+                }
+            }
+
+            // Use the argument edges for arguments passed in registers.
+            for (unsigned i = numPassedArgs; i-- > 0;) {
+                GPRReg argGPR = argumentRegisterForFunctionArgument(i);
+                if (argGPR != InvalidGPRReg) {
+                    Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
+                    use(argEdge);
+                }
+            }
+
+            GPRTemporary argCount(this, argumentRegisterForArgumentCount());
+            GPRReg argCountGPR = argCount.gpr();
+            m_jit.move(TrustedImm32(numPassedArgs), argCountGPR);
+            argumentsLocation = argumentsLocationFor(numAllocatedArgs);
+
+            for (unsigned i = numPassedArgs; i < numAllocatedArgs; ++i) {
+                GPRReg platformArgGPR = argumentRegisterForFunctionArgument(i);
+
+                if (platformArgGPR == InvalidGPRReg)
+                    m_jit.storeTrustedValue(jsUndefined(), JITCompiler::calleeArgumentSlot(i));
+                else {
+                    GPRTemporary argumentTemp(this, platformArgGPR);
+                    m_jit.move(TrustedImm64(JSValue::encode(jsUndefined())), argumentTemp.gpr());
+                }
+            }
         }
     }
     
     if (!isTail || isVarargs || isForwardVarargs) {
         Edge calleeEdge = m_jit.graph().child(node, 0);
-        JSValueOperand callee(this, calleeEdge);
+        JSValueOperand callee(this, calleeEdge, argumentRegisterForCallee());
         calleeGPR = callee.gpr();
         callee.use();
-        m_jit.store64(calleeGPR, JITCompiler::calleeFrameSlot(CallFrameSlot::callee));
+        if (argumentsLocation == StackArgs)
+            m_jit.store64(calleeGPR, JITCompiler::calleeFrameSlot(CallFrameSlot::callee));
 
         flushRegisters();
     }
@@ -913,7 +982,7 @@ void SpeculativeJIT::emitCall(Node* node)
     };
     
     CallLinkInfo* callLinkInfo = m_jit.codeBlock()->addCallLinkInfo();
-    callLinkInfo->setUpCall(callType, m_currentNode->origin.semantic, calleeGPR);
+    callLinkInfo->setUpCall(callType, argumentsLocation, m_currentNode->origin.semantic, calleeGPR);
 
     if (node->op() == CallEval) {
         // We want to call operationCallEval but we don't want to overwrite the parameter area in
@@ -954,8 +1023,14 @@ void SpeculativeJIT::emitCall(Node* node)
         if (isTail) {
             RELEASE_ASSERT(node->op() == DirectTailCall);
             
+            if (calleeGPRTemporary != std::nullopt)
+                m_jit.move(tailCallee->gpr(), calleeGPRTemporary->gpr());
+
             JITCompiler::PatchableJump patchableJump = m_jit.patchableJump();
             JITCompiler::Label mainPath = m_jit.label();
+
+            incrementCounter(&m_jit, VM::TailCall);
+            incrementCounter(&m_jit, VM::DirectCall);
             
             m_jit.emitStoreCallSiteIndex(callSite);
             
@@ -971,6 +1046,8 @@ void SpeculativeJIT::emitCall(Node* node)
             callOperation(operationLinkDirectCall, callLinkInfo, calleeGPR);
             silentFillAllRegisters(InvalidGPRReg);
             m_jit.exceptionCheck();
+            if (calleeGPRTemporary != std::nullopt)
+                m_jit.move(tailCallee->gpr(), calleeGPRTemporary->gpr());
             m_jit.jump().linkTo(mainPath, &m_jit);
             
             useChildren(node);
@@ -981,6 +1058,8 @@ void SpeculativeJIT::emitCall(Node* node)
         
         JITCompiler::Label mainPath = m_jit.label();
         
+        incrementCounter(&m_jit, VM::DirectCall);
+
         m_jit.emitStoreCallSiteIndex(callSite);
         
         JITCompiler::Call call = m_jit.nearCall();
@@ -988,20 +1067,25 @@ void SpeculativeJIT::emitCall(Node* node)
         
         JITCompiler::Label slowPath = m_jit.label();
         if (isX86())
-            m_jit.pop(JITCompiler::selectScratchGPR(calleeGPR));
+            m_jit.pop(GPRInfo::nonArgGPR0);
+
+        m_jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0); // Link info needs to be in nonArgGPR0
+        JITCompiler::Call slowCall = m_jit.nearCall();
 
-        callOperation(operationLinkDirectCall, callLinkInfo, calleeGPR);
         m_jit.exceptionCheck();
         m_jit.jump().linkTo(mainPath, &m_jit);
         
         done.link(&m_jit);
         
         setResultAndResetStack();
-        
-        m_jit.addJSDirectCall(call, slowPath, callLinkInfo);
+
+        m_jit.addJSDirectCall(call, slowCall, slowPath, callLinkInfo);
         return;
     }
-    
+
+    if (isTail && calleeGPRTemporary != std::nullopt)
+        m_jit.move(tailCallee->gpr(), calleeGPRTemporary->gpr());
+
     m_jit.emitStoreCallSiteIndex(callSite);
     
     JITCompiler::DataLabelPtr targetToCheck;
@@ -1025,23 +1109,22 @@ void SpeculativeJIT::emitCall(Node* node)
 
     if (node->op() == TailCall) {
         CallFrameShuffler callFrameShuffler(m_jit, shuffleData);
-        callFrameShuffler.setCalleeJSValueRegs(JSValueRegs(GPRInfo::regT0));
+        if (argumentsLocation == StackArgs)
+            callFrameShuffler.setCalleeJSValueRegs(JSValueRegs(argumentRegisterForCallee()));
         callFrameShuffler.prepareForSlowPath();
-    } else {
-        m_jit.move(calleeGPR, GPRInfo::regT0); // Callee needs to be in regT0
-
-        if (isTail)
-            m_jit.emitRestoreCalleeSaves(); // This needs to happen after we moved calleeGPR to regT0
-    }
+    } else if (isTail)
+        m_jit.emitRestoreCalleeSaves();
 
-    m_jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::regT2); // Link info needs to be in regT2
+    m_jit.move(MacroAssembler::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0); // Link info needs to be in nonArgGPR0
     JITCompiler::Call slowCall = m_jit.nearCall();
 
     done.link(&m_jit);
 
-    if (isTail)
+    if (isTail) {
+        tailCallee = std::nullopt;
+        calleeGPRTemporary = std::nullopt;
         m_jit.abortWithReason(JITDidReturnFromTailCall);
-    else
+    } else
         setResultAndResetStack();
 
     m_jit.addJSCall(fastCall, slowCall, targetToCheck, callLinkInfo);
@@ -4166,6 +4249,9 @@ void SpeculativeJIT::compile(Node* node)
         break;
     }
 
+    case GetArgumentRegister:
+        break;
+            
     case GetRestLength: {
         compileGetRestLength(node);
         break;
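
The non-eval, non-tail path in emitCall() splits outgoing arguments by position: arguments with a platform register travel in that register, the rest go to call-frame slots, and slots allocated beyond what the caller passes are filled with undefined. A sketch of that placement plan (the register budget is an assumption):

    #include <cstdio>

    constexpr unsigned kFunctionArgumentRegisters = 4; // Assumed budget ("this" + 3).

    // -1 means the argument travels in its call-frame slot; otherwise the index
    // names a platform register (mirrors argumentRegisterForFunctionArgument()).
    static int argumentRegisterFor(unsigned argumentIndex)
    {
        return argumentIndex < kFunctionArgumentRegisters
            ? static_cast<int>(argumentIndex) : -1;
    }

    static void describeCallPlan(unsigned numPassedArgs, unsigned numAllocatedArgs)
    {
        for (unsigned i = 0; i < numAllocatedArgs; ++i) {
            const char* what = i < numPassedArgs ? "argument" : "undefined filler";
            int reg = argumentRegisterFor(i);
            if (reg >= 0)
                std::printf("slot %u: %s in argument register %d\n", i, what, reg);
            else
                std::printf("slot %u: %s stored to the call frame\n", i, what);
        }
    }
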
index 66c5008..23161fb 100644 (file)
@@ -276,6 +276,9 @@ private:
             Node* setLocal = nullptr;
             VirtualRegister local = m_node->local();
             
+            if (local.isArgument() && m_graph.m_strengthReduceArguments != OptimizeArgumentFlushes)
+                break;
+
             for (unsigned i = m_nodeIndex; i--;) {
                 Node* node = m_block->at(i);
                 if (node->op() == SetLocal && node->local() == local) {
index 38bdd11..2bc9802 100644 (file)
@@ -130,15 +130,30 @@ MacroAssemblerCodeRef osrEntryThunkGenerator(VM* vm)
     jit.store32(GPRInfo::regT3, MacroAssembler::BaseIndex(GPRInfo::callFrameRegister, GPRInfo::regT4, MacroAssembler::TimesEight, -static_cast<intptr_t>(sizeof(Register)) + static_cast<intptr_t>(sizeof(int32_t))));
     jit.branchPtr(MacroAssembler::NotEqual, GPRInfo::regT1, MacroAssembler::TrustedImmPtr(bitwise_cast<void*>(-static_cast<intptr_t>(CallFrame::headerSizeInRegisters)))).linkTo(loop, &jit);
     
-    jit.loadPtr(MacroAssembler::Address(GPRInfo::regT0, offsetOfTargetPC), GPRInfo::regT1);
-    MacroAssembler::Jump ok = jit.branchPtr(MacroAssembler::Above, GPRInfo::regT1, MacroAssembler::TrustedImmPtr(bitwise_cast<void*>(static_cast<intptr_t>(1000))));
+    jit.loadPtr(MacroAssembler::Address(GPRInfo::regT0, offsetOfTargetPC), GPRInfo::nonArgGPR0);
+    MacroAssembler::Jump ok = jit.branchPtr(MacroAssembler::Above, GPRInfo::nonArgGPR0, MacroAssembler::TrustedImmPtr(bitwise_cast<void*>(static_cast<intptr_t>(1000))));
     jit.abortWithReason(DFGUnreasonableOSREntryJumpDestination);
 
     ok.link(&jit);
+
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    // Load argument values into argument registers
+    jit.loadPtr(MacroAssembler::Address(GPRInfo::callFrameRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register))), argumentRegisterForCallee());
+    GPRReg argCountReg = argumentRegisterForArgumentCount();
+    jit.load32(AssemblyHelpers::payloadFor(CallFrameSlot::argumentCount), argCountReg);
+    
+    MacroAssembler::JumpList doneLoadingArgs;
+    
+    for (unsigned argIndex = 0; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++)
+        jit.load64(MacroAssembler::Address(GPRInfo::callFrameRegister, (CallFrameSlot::thisArgument + argIndex) * static_cast<int>(sizeof(Register))), argumentRegisterForFunctionArgument(argIndex));
+    
+    doneLoadingArgs.link(&jit);
+#endif
+    
     jit.restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer();
     jit.emitMaterializeTagCheckRegisters();
 
-    jit.jump(GPRInfo::regT1);
+    jit.jump(GPRInfo::nonArgGPR0);
     
     LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
     return FINALIZE_CODE(patchBuffer, ("DFG OSR entry thunk"));
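
Because DFG code now expects its arguments in registers, the OSR entry thunk reloads the callee, the argument count, and each register-passed argument from the frame it just reconstructed before jumping in. A sketch with the frame modeled as a slot array (the slot indices are assumptions of the sketch, not the real layout):

    #include <array>
    #include <cstdint>

    constexpr unsigned kArgumentRegisters = 6; // Assumed register budget.
    constexpr unsigned kCalleeSlot = 0;        // Assumed slot layout for this
    constexpr unsigned kArgumentCountSlot = 1; // sketch only.
    constexpr unsigned kThisArgumentSlot = 2;

    struct EntryRegisterState {
        uint64_t callee;
        uint32_t argumentCount;
        std::array<uint64_t, kArgumentRegisters> arguments;
    };

    // Re-materialize the register-argument convention from a materialized frame.
    static EntryRegisterState loadEntryRegisters(const uint64_t* frame)
    {
        EntryRegisterState state;
        state.callee = frame[kCalleeSlot];
        state.argumentCount = static_cast<uint32_t>(frame[kArgumentCountSlot]);
        for (unsigned i = 0; i < kArgumentRegisters; ++i)
            state.arguments[i] = frame[kThisArgumentSlot + i];
        return state;
    }
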
index 1809621..f81c010 100644 (file)
@@ -133,8 +133,13 @@ void VariableEventStream::reconstruct(
     if (!index) {
         valueRecoveries = Operands<ValueRecovery>(codeBlock->numParameters(), numVariables);
         for (size_t i = 0; i < valueRecoveries.size(); ++i) {
-            valueRecoveries[i] = ValueRecovery::displacedInJSStack(
-                VirtualRegister(valueRecoveries.operandForIndex(i)), DataFormatJS);
+            if (i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
+                valueRecoveries[i] = ValueRecovery::inGPR(
+                    argumentRegisterForFunctionArgument(i), DataFormatJS);
+            } else {
+                valueRecoveries[i] = ValueRecovery::displacedInJSStack(
+                    VirtualRegister(valueRecoveries.operandForIndex(i)), DataFormatJS);
+            }
         }
         return;
     }
@@ -161,6 +166,12 @@ void VariableEventStream::reconstruct(
             MinifiedGenerationInfo info;
             info.update(event);
             generationInfos.add(event.id(), info);
+            MinifiedNode* node = graph.at(event.id());
+            if (node && node->hasArgumentIndex()) {
+                unsigned argument = node->argumentIndex();
+                VirtualRegister argumentReg = virtualRegisterForArgument(argument);
+                operandSources.setOperand(argumentReg, ValueSource(event.id()));
+            }
             break;
         }
         case Fill:
index f812759..6eca227 100644 (file)
@@ -42,7 +42,33 @@ public:
         : Phase(graph, "virtual register allocation")
     {
     }
-    
+
+    void allocateRegister(ScoreBoard& scoreBoard, Node* node)
+    {
+        // First, call use on all of the current node's children, then
+        // allocate a VirtualRegister for this node. We do so in this
+        // order so that if a child is on its last use, and a
+        // VirtualRegister is freed, then it may be reused for node.
+        if (node->flags() & NodeHasVarArgs) {
+            for (unsigned childIdx = node->firstChild(); childIdx < node->firstChild() + node->numChildren(); childIdx++)
+                scoreBoard.useIfHasResult(m_graph.m_varArgChildren[childIdx]);
+        } else {
+            scoreBoard.useIfHasResult(node->child1());
+            scoreBoard.useIfHasResult(node->child2());
+            scoreBoard.useIfHasResult(node->child3());
+        }
+
+        if (!node->hasResult())
+            return;
+
+        VirtualRegister virtualRegister = scoreBoard.allocate();
+        node->setVirtualRegister(virtualRegister);
+        // 'mustGenerate' nodes have their useCount artificially elevated,
+        // call use now to account for this.
+        if (node->mustGenerate())
+            scoreBoard.use(node);
+    }
+
     bool run()
     {
         DFG_ASSERT(m_graph, nullptr, m_graph.m_form == ThreadedCPS);
@@ -59,6 +85,17 @@ public:
                 // Force usage of highest-numbered virtual registers.
                 scoreBoard.sortFree();
             }
+
+            // Handle GetArgumentRegister nodes first, as their registers are live on
+            // entry to the function and may need to be spilled before any other use.
+            if (!blockIndex) {
+                for (size_t indexInBlock = 0; indexInBlock < block->size(); ++indexInBlock) {
+                    Node* node = block->at(indexInBlock);
+                    if (node->op() == GetArgumentRegister)
+                        allocateRegister(scoreBoard, node);
+                }
+            }
+
             for (size_t indexInBlock = 0; indexInBlock < block->size(); ++indexInBlock) {
                 Node* node = block->at(indexInBlock);
         
@@ -73,32 +110,14 @@ public:
                 case GetLocal:
                     ASSERT(!node->child1()->hasResult());
                     break;
+                case GetArgumentRegister:
+                    ASSERT(!blockIndex);
+                    continue;
                 default:
                     break;
                 }
-                
-                // First, call use on all of the current node's children, then
-                // allocate a VirtualRegister for this node. We do so in this
-                // order so that if a child is on its last use, and a
-                // VirtualRegister is freed, then it may be reused for node.
-                if (node->flags() & NodeHasVarArgs) {
-                    for (unsigned childIdx = node->firstChild(); childIdx < node->firstChild() + node->numChildren(); childIdx++)
-                        scoreBoard.useIfHasResult(m_graph.m_varArgChildren[childIdx]);
-                } else {
-                    scoreBoard.useIfHasResult(node->child1());
-                    scoreBoard.useIfHasResult(node->child2());
-                    scoreBoard.useIfHasResult(node->child3());
-                }
-
-                if (!node->hasResult())
-                    continue;
 
-                VirtualRegister virtualRegister = scoreBoard.allocate();
-                node->setVirtualRegister(virtualRegister);
-                // 'mustGenerate' nodes have their useCount artificially elevated,
-                // call use now to account for this.
-                if (node->mustGenerate())
-                    scoreBoard.use(node);
+                allocateRegister(scoreBoard, node);
             }
             scoreBoard.assertClear();
         }
index d42ff4a..9ab80aa 100644 (file)
@@ -172,6 +172,7 @@ inline CapabilityLevel canCompile(Node* node)
     case GetExecutable:
     case GetScope:
     case GetCallee:
+    case GetArgumentRegister:
     case GetArgumentCountIncludingThis:
     case ToNumber:
     case ToString:
index 1cdb509..63d7415 100644 (file)
@@ -45,7 +45,8 @@ JITCode::~JITCode()
         dataLog("Destroying FTL JIT code at ");
         CommaPrinter comma;
         dataLog(comma, m_b3Code);
-        dataLog(comma, m_arityCheckEntrypoint);
+        dataLog(comma, m_registerArgsPossibleExtraArgsEntryPoint);
+        dataLog(comma, m_registerArgsCheckArityEntryPoint);
         dataLog("\n");
     }
 }
@@ -60,31 +61,30 @@ void JITCode::initializeB3Byproducts(std::unique_ptr<OpaqueByproducts> byproduct
     m_b3Byproducts = WTFMove(byproducts);
 }
 
-void JITCode::initializeAddressForCall(CodePtr address)
+void JITCode::initializeEntrypointThunk(CodeRef entrypointThunk)
 {
-    m_addressForCall = address;
+    m_entrypointThunk = entrypointThunk;
 }
 
-void JITCode::initializeArityCheckEntrypoint(CodeRef entrypoint)
+void JITCode::setEntryFor(EntryPointType type, CodePtr entry)
 {
-    m_arityCheckEntrypoint = entrypoint;
+    m_entrypoints.setEntryFor(type, entry);
 }
-
-JITCode::CodePtr JITCode::addressForCall(ArityCheckMode arityCheck)
+    
+JITCode::CodePtr JITCode::addressForCall(EntryPointType entryType)
 {
-    switch (arityCheck) {
-    case ArityCheckNotRequired:
-        return m_addressForCall;
-    case MustCheckArity:
-        return m_arityCheckEntrypoint.code();
-    }
-    RELEASE_ASSERT_NOT_REACHED();
-    return CodePtr();
+    CodePtr entry = m_entrypoints.entryFor(entryType);
+    RELEASE_ASSERT(entry);
+    return entry;
 }
 
 void* JITCode::executableAddressAtOffset(size_t offset)
 {
-    return reinterpret_cast<char*>(m_addressForCall.executableAddress()) + offset;
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    return reinterpret_cast<char*>(addressForCall(RegisterArgsArityCheckNotRequired).executableAddress()) + offset;
+#else
+    return reinterpret_cast<char*>(addressForCall(StackArgsArityCheckNotRequired).executableAddress()) + offset;
+#endif
 }
 
 void* JITCode::dataAddressAtOffset(size_t)
index 2c2809e..3db70df 100644 (file)
@@ -44,7 +44,7 @@ public:
     JITCode();
     ~JITCode();
 
-    CodePtr addressForCall(ArityCheckMode) override;
+    CodePtr addressForCall(EntryPointType) override;
     void* executableAddressAtOffset(size_t offset) override;
     void* dataAddressAtOffset(size_t offset) override;
     unsigned offsetOf(void* pointerIntoCode) override;
@@ -53,9 +53,9 @@ public:
 
     void initializeB3Code(CodeRef);
     void initializeB3Byproducts(std::unique_ptr<B3::OpaqueByproducts>);
-    void initializeAddressForCall(CodePtr);
-    void initializeArityCheckEntrypoint(CodeRef);
-    
+    void initializeEntrypointThunk(CodeRef);
+    void setEntryFor(EntryPointType, CodePtr);
+
     void validateReferences(const TrackedReferences&) override;
 
     RegisterSet liveRegistersToPreserveAtExceptionHandlingCallSite(CodeBlock*, CallSiteIndex) override;
@@ -77,7 +77,12 @@ private:
     CodePtr m_addressForCall;
     CodeRef m_b3Code;
     std::unique_ptr<B3::OpaqueByproducts> m_b3Byproducts;
-    CodeRef m_arityCheckEntrypoint;
+    CodeRef m_entrypointThunk;
+    JITEntryPoints m_entrypoints;
+    CodePtr m_registerArgsPossibleExtraArgsEntryPoint;
+    CodePtr m_registerArgsCheckArityEntryPoint;
+    CodePtr m_stackArgsArityOKEntryPoint;
+    CodePtr m_stackArgsCheckArityEntrypoint;
 };
 
 } } // namespace JSC::FTL
index dcf3dad..95463c3 100644 (file)
@@ -76,7 +76,7 @@ bool JITFinalizer::finalizeFunction()
             dumpDisassembly, *b3CodeLinkBuffer,
             ("FTL B3 code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::FTLJIT)).data())));
 
-    jitCode->initializeArityCheckEntrypoint(
+    jitCode->initializeEntrypointThunk(
         FINALIZE_CODE_IF(
             dumpDisassembly, *entrypointLinkBuffer,
             ("FTL entrypoint thunk for %s with B3 generated code at %p", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::FTLJIT)).data(), function)));
index d11b2a9..f944984 100644 (file)
@@ -127,14 +127,110 @@ void link(State& state)
     
     switch (graph.m_plan.mode) {
     case FTLMode: {
-        CCallHelpers::JumpList mainPathJumps;
-    
-        jit.load32(
-            frame.withOffset(sizeof(Register) * CallFrameSlot::argumentCount),
-            GPRInfo::regT1);
-        mainPathJumps.append(jit.branch32(
-            CCallHelpers::AboveOrEqual, GPRInfo::regT1,
-            CCallHelpers::TrustedImm32(codeBlock->numParameters())));
+        CCallHelpers::JumpList fillRegistersAndContinueMainPath;
+        CCallHelpers::JumpList toMainPath;
+
+        unsigned numParameters = static_cast<unsigned>(codeBlock->numParameters());
+        unsigned maxRegisterArgumentCount = std::min(numParameters, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS);
+
+        GPRReg argCountReg = argumentRegisterForArgumentCount();
+
+        CCallHelpers::Label registerArgumentsEntrypoints[NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS + 1];
+
+        if (numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
+            // Spill any extra register arguments passed to function onto the stack.
+            for (unsigned argIndex = NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS - 1; argIndex >= numParameters; argIndex--) {
+                registerArgumentsEntrypoints[argIndex + 1] = jit.label();
+                jit.emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex);
+            }
+            incrementCounter(&jit, VM::RegArgsExtra);
+            toMainPath.append(jit.jump());
+        }
+
+        CCallHelpers::JumpList continueToArityFixup;
+
+        CCallHelpers::Label stackArgsCheckArityEntry = jit.label();
+        incrementCounter(&jit, VM::StackArgsArity);
+        jit.load32(frame.withOffset(sizeof(Register) * CallFrameSlot::argumentCount), GPRInfo::regT1);
+        continueToArityFixup.append(jit.branch32(
+            CCallHelpers::Below, GPRInfo::regT1,
+            CCallHelpers::TrustedImm32(numParameters)));
+
+#if ENABLE(VM_COUNTERS)
+        CCallHelpers::Jump continueToStackArityOk = jit.jump();
+#endif
+
+        CCallHelpers::Label stackArgsArityOKEntry = jit.label();
+
+        incrementCounter(&jit, VM::StackArgsArity);
+
+#if ENABLE(VM_COUNTERS)
+        continueToStackArityOk.link(&jit);
+#endif
+
+        // Load argument values into argument registers
+
+        // FIXME: Would like to eliminate these to load, but we currently can't jump into
+        // the B3 compiled code at an arbitrary point from the slow entry where the
+        // registers are stored to the stack.
+        jit.emitGetFromCallFrameHeaderBeforePrologue(CallFrameSlot::callee, argumentRegisterForCallee());
+        jit.emitGetPayloadFromCallFrameHeaderBeforePrologue(CallFrameSlot::argumentCount, argumentRegisterForArgumentCount());
+
+        for (unsigned argIndex = 0; argIndex < maxRegisterArgumentCount; argIndex++)
+            jit.emitGetFromCallFrameArgumentBeforePrologue(argIndex, argumentRegisterForFunctionArgument(argIndex));
+
+        toMainPath.append(jit.jump());
+
+        CCallHelpers::Label registerArgsCheckArityEntry = jit.label();
+        incrementCounter(&jit, VM::RegArgsArity);
+
+        CCallHelpers::JumpList continueToRegisterArityFixup;
+        CCallHelpers::Label checkForExtraRegisterArguments;
+
+        if (numParameters < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
+            toMainPath.append(jit.branch32(
+                CCallHelpers::Equal, argCountReg, CCallHelpers::TrustedImm32(numParameters)));
+            continueToRegisterArityFixup.append(jit.branch32(
+                CCallHelpers::Below, argCountReg, CCallHelpers::TrustedImm32(numParameters)));
+            //  Fall through to the "extra register arity" case.
+
+            checkForExtraRegisterArguments = jit.label();
+            // Spill any extra register arguments passed to function onto the stack.
+            for (unsigned argIndex = numParameters; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) {
+                toMainPath.append(jit.branch32(CCallHelpers::BelowOrEqual, argCountReg, CCallHelpers::TrustedImm32(argIndex)));
+                jit.emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex);
+            }
+
+            incrementCounter(&jit, VM::RegArgsExtra);
+            toMainPath.append(jit.jump());
+        } else
+            toMainPath.append(jit.branch32(
+                CCallHelpers::AboveOrEqual, argCountReg, CCallHelpers::TrustedImm32(numParameters)));
+
+#if ENABLE(VM_COUNTERS)
+        continueToRegisterArityFixup.append(jit.jump());
+#endif
+
+        if (numParameters > 0) {
+            //  There should always be a "this" parameter.
+            CCallHelpers::Label registerArgumentsNeedArityFixup = jit.label();
+
+            for (unsigned argIndex = 1; argIndex < numParameters && argIndex <= maxRegisterArgumentCount; argIndex++)
+                registerArgumentsEntrypoints[argIndex] = registerArgumentsNeedArityFixup;
+        }
+
+#if ENABLE(VM_COUNTERS)
+        incrementCounter(&jit, VM::RegArgsArity);
+#endif
+
+        continueToRegisterArityFixup.link(&jit);
+
+        jit.spillArgumentRegistersToFrameBeforePrologue(maxRegisterArgumentCount);
+
+        continueToArityFixup.link(&jit);
+
+        incrementCounter(&jit, VM::ArityFixupRequired);
+
         jit.emitFunctionPrologue();
         jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
         jit.storePtr(GPRInfo::callFrameRegister, &vm.topCallFrame);
@@ -155,11 +251,20 @@ void link(State& state)
 
         jit.move(GPRInfo::returnValueGPR, GPRInfo::argumentGPR0);
         jit.emitFunctionEpilogue();
-        mainPathJumps.append(jit.branchTest32(CCallHelpers::Zero, GPRInfo::argumentGPR0));
+        fillRegistersAndContinueMainPath.append(jit.branchTest32(CCallHelpers::Zero, GPRInfo::argumentGPR0));
         jit.emitFunctionPrologue();
         CCallHelpers::Call callArityFixup = jit.call();
         jit.emitFunctionEpilogue();
-        mainPathJumps.append(jit.jump());
+
+        fillRegistersAndContinueMainPath.append(jit.jump());
+
+        fillRegistersAndContinueMainPath.linkTo(stackArgsArityOKEntry, &jit);
+
+#if ENABLE(VM_COUNTERS)
+        CCallHelpers::Label registerEntryNoArity = jit.label();
+        incrementCounter(&jit, VM::RegArgsNoArity);
+        toMainPath.append(jit.jump());
+#endif
 
         linkBuffer = std::make_unique<LinkBuffer>(vm, jit, codeBlock, JITCompilationCanFail);
         if (linkBuffer->didFailToAllocate()) {
@@ -169,9 +274,35 @@ void link(State& state)
         linkBuffer->link(callArityCheck, codeBlock->m_isConstructor ? operationConstructArityCheck : operationCallArityCheck);
         linkBuffer->link(callLookupExceptionHandlerFromCallerFrame, lookupExceptionHandlerFromCallerFrame);
         linkBuffer->link(callArityFixup, FunctionPtr((vm.getCTIStub(arityFixupGenerator)).code().executableAddress()));
-        linkBuffer->link(mainPathJumps, CodeLocationLabel(bitwise_cast<void*>(state.generatedFunction)));
+        linkBuffer->link(toMainPath, CodeLocationLabel(bitwise_cast<void*>(state.generatedFunction)));
+
+        state.jitCode->setEntryFor(StackArgsMustCheckArity, linkBuffer->locationOf(stackArgsCheckArityEntry));
+        state.jitCode->setEntryFor(StackArgsArityCheckNotRequired, linkBuffer->locationOf(stackArgsArityOKEntry));
 
-        state.jitCode->initializeAddressForCall(MacroAssemblerCodePtr(bitwise_cast<void*>(state.generatedFunction)));
+#if ENABLE(VM_COUNTERS)
+        MacroAssemblerCodePtr mainEntry = linkBuffer->locationOf(registerEntryNoArity);
+#else
+        MacroAssemblerCodePtr mainEntry = MacroAssemblerCodePtr(bitwise_cast<void*>(state.generatedFunction));
+#endif
+        state.jitCode->setEntryFor(RegisterArgsArityCheckNotRequired, mainEntry);
+
+        if (checkForExtraRegisterArguments.isSet())
+            state.jitCode->setEntryFor(RegisterArgsPossibleExtraArgs, linkBuffer->locationOf(checkForExtraRegisterArguments));
+        else
+            state.jitCode->setEntryFor(RegisterArgsPossibleExtraArgs, mainEntry);
+
+        state.jitCode->setEntryFor(RegisterArgsMustCheckArity, linkBuffer->locationOf(registerArgsCheckArityEntry));
+
+        for (unsigned argCount = 1; argCount <= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argCount++) {
+            MacroAssemblerCodePtr entry;
+            if (argCount == numParameters)
+                entry = mainEntry;
+            else if (registerArgumentsEntrypoints[argCount].isSet())
+                entry = linkBuffer->locationOf(registerArgumentsEntrypoints[argCount]);
+            else
+                entry = linkBuffer->locationOf(registerArgsCheckArityEntry);
+            state.jitCode->setEntryFor(JITEntryPoints::registerEntryTypeForArgumentCount(argCount), entry);
+        }
         break;
     }
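
The per-argument-count loop above picks the cheapest safe entry for each possible register argument count: the main path on an exact parameter match, a recorded specialized label when one exists, and the full arity-check entry otherwise. A hedged sketch of that selection rule:

    #include <array>

    constexpr unsigned kArgumentRegisters = 6; // Assumed register budget.

    struct EntryChoices {
        void* mainEntry;
        void* arityCheckEntry;
        // Labels recorded per argument count; null where none was emitted.
        std::array<void*, kArgumentRegisters + 1> perCountEntry {};
    };

    // Mirrors the selection loop: exact arity takes the main path, a recorded
    // label takes its specialized path, anything else must check arity.
    static void* entryForArgumentCount(
        const EntryChoices& choices, unsigned argCount, unsigned numParameters)
    {
        if (argCount == numParameters)
            return choices.mainEntry;
        if (choices.perCountEntry[argCount])
            return choices.perCountEntry[argCount];
        return choices.arityCheckEntry;
    }
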
         
@@ -181,7 +312,20 @@ void link(State& state)
         // point we've even done the stack check. Basically we just have to make the
         // call to the B3-generated code.
         CCallHelpers::Label start = jit.label();
+
         jit.emitFunctionEpilogue();
+
+        // Load argument values into argument registers
+
+        // FIXME: Would like to eliminate these loads, but we currently can't jump into
+        // the B3 compiled code at an arbitrary point from the slow entry where the
+        // registers are stored to the stack.
+        jit.emitGetFromCallFrameHeaderBeforePrologue(CallFrameSlot::callee, argumentRegisterForCallee());
+        jit.emitGetPayloadFromCallFrameHeaderBeforePrologue(CallFrameSlot::argumentCount, argumentRegisterForArgumentCount());
+
+        for (unsigned argIndex = 0; argIndex < static_cast<unsigned>(codeBlock->numParameters()) && argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++)
+            jit.emitGetFromCallFrameArgumentBeforePrologue(argIndex, argumentRegisterForFunctionArgument(argIndex));
+
         CCallHelpers::Jump mainPathJump = jit.jump();
         
         linkBuffer = std::make_unique<LinkBuffer>(vm, jit, codeBlock, JITCompilationCanFail);
@@ -191,7 +335,7 @@ void link(State& state)
         }
         linkBuffer->link(mainPathJump, CodeLocationLabel(bitwise_cast<void*>(state.generatedFunction)));
 
-        state.jitCode->initializeAddressForCall(linkBuffer->locationOf(start));
+        state.jitCode->setEntryFor(RegisterArgsArityCheckNotRequired, linkBuffer->locationOf(start));
         break;
     }
         
index c8ed9a9..76808c8 100644 (file)
@@ -196,6 +196,10 @@ public:
         m_proc.addFastConstant(m_tagTypeNumber->key());
         m_proc.addFastConstant(m_tagMask->key());
         
+        // Store out callee and argument count for possible OSR exit.
+        m_out.store64(m_out.argumentRegister(argumentRegisterForCallee()), addressFor(CallFrameSlot::callee));
+        m_out.store32(m_out.argumentRegisterInt32(argumentRegisterForArgumentCount()), payloadFor(CallFrameSlot::argumentCount));
+
         m_out.storePtr(m_out.constIntPtr(codeBlock()), addressFor(CallFrameSlot::codeBlock));
 
         // Stack Overflow Check.
@@ -247,20 +251,34 @@ public:
         // Check Arguments.
         availabilityMap().clear();
         availabilityMap().m_locals = Operands<Availability>(codeBlock()->numParameters(), 0);
+
+        Vector<Node*, 8> argumentNodes;
+        Vector<LValue, 8> argumentValues;
+
+        argumentNodes.resize(codeBlock()->numParameters());
+        argumentValues.resize(codeBlock()->numParameters());
+
+        m_highBlock = m_graph.block(0);
+
         for (unsigned i = codeBlock()->numParameters(); i--;) {
-            availabilityMap().m_locals.argument(i) =
-                Availability(FlushedAt(FlushedJSValue, virtualRegisterForArgument(i)));
-        }
-        m_node = nullptr;
-        m_origin = NodeOrigin(CodeOrigin(0), CodeOrigin(0), true);
-        for (unsigned i = codeBlock()->numParameters(); i--;) {
-            Node* node = m_graph.m_arguments[i];
+            Node* node = m_graph.m_argumentsForChecking[i];
             VirtualRegister operand = virtualRegisterForArgument(i);
             
-            LValue jsValue = m_out.load64(addressFor(operand));
-            
+            LValue jsValue = nullptr;
+
             if (node) {
-                DFG_ASSERT(m_graph, node, operand == node->stackAccessData()->machineLocal);
+                if (i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
+                    availabilityMap().m_locals.argument(i) = Availability(node);
+                    jsValue = m_out.argumentRegister(GPRInfo::toArgumentRegister(node->argumentRegisterIndex()));
+
+                    setJSValue(node, jsValue);
+                } else {
+                    availabilityMap().m_locals.argument(i) =
+                        Availability(FlushedAt(FlushedJSValue, operand));
+                    jsValue = m_out.load64(addressFor(virtualRegisterForArgument(i)));
+                }
+            
+                DFG_ASSERT(m_graph, node, node->hasArgumentRegisterIndex() || operand == node->stackAccessData()->machineLocal);
                 
                 // This is a hack, but it's an effective one. It allows us to do CSE on the
                 // primordial load of arguments. This assumes that the GetLocal that got put in
@@ -268,7 +286,21 @@ public:
                 // should hold true.
                 m_loadedArgumentValues.add(node, jsValue);
             }
+
+            argumentNodes[i] = node;
+            argumentValues[i] = jsValue;
+        }
+
+        m_node = nullptr;
+        m_origin = NodeOrigin(CodeOrigin(0), CodeOrigin(0), true);
+        for (unsigned i = codeBlock()->numParameters(); i--;) {
+            Node* node = argumentNodes[i];
             
+            if (!node)
+                continue;
+
+            LValue jsValue = argumentValues[i];
+
             switch (m_graph.m_argumentFormats[i]) {
             case FlushedInt32:
                 speculate(BadType, jsValueValue(jsValue), node, isNotInt32(jsValue));
@@ -813,6 +845,9 @@ private:
         case GetArgumentCountIncludingThis:
             compileGetArgumentCountIncludingThis();
             break;
+        case GetArgumentRegister:
+            compileGetArgumentRegister();
+            break;
         case GetScope:
             compileGetScope();
             break;
@@ -5402,6 +5437,16 @@ private:
         setInt32(m_out.load32(payloadFor(CallFrameSlot::argumentCount)));
     }
     
+    void compileGetArgumentRegister()
+    {
+        // We might already have a value for this node.
+        if (LValue value = m_loadedArgumentValues.get(m_node)) {
+            setJSValue(value);
+            return;
+        }
+        setJSValue(m_out.argumentRegister(GPRInfo::toArgumentRegister(m_node->argumentRegisterIndex())));
+    }
+    
     void compileGetScope()
     {
         setJSValue(m_out.loadPtr(lowCell(m_node->child1()), m_heaps.JSFunction_scope));
@@ -5814,9 +5859,10 @@ private:
         // the call.
         Vector<ConstrainedValue> arguments;
 
-        // Make sure that the callee goes into GPR0 because that's where the slow path thunks expect the
-        // callee to be.
-        arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(GPRInfo::regT0)));
+        // Make sure that the callee goes into argumentRegisterForCallee() because that's where
+        // the slow path thunks expect the callee to be.
+        GPRReg calleeReg = argumentRegisterForCallee();
+        arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(calleeReg)));
 
         auto addArgument = [&] (LValue value, VirtualRegister reg, int offset) {
             intptr_t offsetFromSP =
@@ -5824,10 +5870,16 @@ private:
             arguments.append(ConstrainedValue(value, ValueRep::stackArgument(offsetFromSP)));
         };
 
-        addArgument(jsCallee, VirtualRegister(CallFrameSlot::callee), 0);
-        addArgument(m_out.constInt32(numArgs), VirtualRegister(CallFrameSlot::argumentCount), PayloadOffset);
-        for (unsigned i = 0; i < numArgs; ++i)
-            addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0);
+        ArgumentsLocation argLocation = argumentsLocationFor(numArgs);
+        arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(calleeReg)));
+        arguments.append(ConstrainedValue(m_out.constInt32(numArgs), ValueRep::reg(argumentRegisterForArgumentCount())));
+
+        for (unsigned i = 0; i < numArgs; ++i) {
+            if (i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
+                arguments.append(ConstrainedValue(lowJSValue(m_graph.varArgChild(node, 1 + i)), ValueRep::reg(argumentRegisterForFunctionArgument(i))));
+            else
+                addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0);
+        }
 
         PatchpointValue* patchpoint = m_out.patchpoint(Int64);
         patchpoint->appendVector(arguments);
@@ -5856,9 +5908,11 @@ private:
 
                 CallLinkInfo* callLinkInfo = jit.codeBlock()->addCallLinkInfo();
 
+                incrementCounter(&jit, VM::FTLCaller);
+
                 CCallHelpers::DataLabelPtr targetToCheck;
                 CCallHelpers::Jump slowPath = jit.branchPtrWithPatch(
-                    CCallHelpers::NotEqual, GPRInfo::regT0, targetToCheck,
+                    CCallHelpers::NotEqual, calleeReg, targetToCheck,
                     CCallHelpers::TrustedImmPtr(0));
 
                 CCallHelpers::Call fastCall = jit.nearCall();
@@ -5866,13 +5920,13 @@ private:
 
                 slowPath.link(&jit);
 
-                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2);
+                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0);
                 CCallHelpers::Call slowCall = jit.nearCall();
                 done.link(&jit);
 
                 callLinkInfo->setUpCall(
                     node->op() == Construct ? CallLinkInfo::Construct : CallLinkInfo::Call,
-                    node->origin.semantic, GPRInfo::regT0);
+                    argLocation, node->origin.semantic, argumentRegisterForCallee());
 
                 jit.addPtr(
                     CCallHelpers::TrustedImm32(-params.proc().frameSize()),
@@ -5881,7 +5935,7 @@ private:
                 jit.addLinkTask(
                     [=] (LinkBuffer& linkBuffer) {
                         MacroAssemblerCodePtr linkCall =
-                            linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code();
+                            linkBuffer.vm().getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(callLinkInfo->argumentsLocation());
                         linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
 
                         callLinkInfo->setCallLocations(
@@ -5925,20 +5979,38 @@ private:
         
         Vector<ConstrainedValue> arguments;
         
-        arguments.append(ConstrainedValue(jsCallee, ValueRep::SomeRegister));
+        // Make sure that the callee goes into argumentRegisterForCallee() because that's where
+        // the slow path thunks expect the callee to be.
+        GPRReg calleeReg = argumentRegisterForCallee();
+        arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(calleeReg)));
         if (!isTail) {
             auto addArgument = [&] (LValue value, VirtualRegister reg, int offset) {
                 intptr_t offsetFromSP =
                     (reg.offset() - CallerFrameAndPC::sizeInRegisters) * sizeof(EncodedJSValue) + offset;
                 arguments.append(ConstrainedValue(value, ValueRep::stackArgument(offsetFromSP)));
             };
-            
+
+            arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(calleeReg)));
+#if ENABLE(CALLER_SPILLS_CALLEE)
             addArgument(jsCallee, VirtualRegister(CallFrameSlot::callee), 0);
+#endif
+            arguments.append(ConstrainedValue(m_out.constInt32(numPassedArgs), ValueRep::reg(argumentRegisterForArgumentCount())));
+#if ENABLE(CALLER_SPILLS_ARGCOUNT)
             addArgument(m_out.constInt32(numPassedArgs), VirtualRegister(CallFrameSlot::argumentCount), PayloadOffset);
-            for (unsigned i = 0; i < numPassedArgs; ++i)
-                addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0);
-            for (unsigned i = numPassedArgs; i < numAllocatedArgs; ++i)
-                addArgument(m_out.constInt64(JSValue::encode(jsUndefined())), virtualRegisterForArgument(i), 0);
+#endif
+            
+            for (unsigned i = 0; i < numPassedArgs; ++i) {
+                if (i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
+                    arguments.append(ConstrainedValue(lowJSValue(m_graph.varArgChild(node, 1 + i)), ValueRep::reg(argumentRegisterForFunctionArgument(i))));
+                else
+                    addArgument(lowJSValue(m_graph.varArgChild(node, 1 + i)), virtualRegisterForArgument(i), 0);
+            }
+            for (unsigned i = numPassedArgs; i < numAllocatedArgs; ++i) {
+                if (i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
+                    arguments.append(ConstrainedValue(m_out.constInt64(JSValue::encode(jsUndefined())), ValueRep::reg(argumentRegisterForFunctionArgument(i))));
+                else
+                    addArgument(m_out.constInt64(JSValue::encode(jsUndefined())), virtualRegisterForArgument(i), 0);
+            }
         } else {
             for (unsigned i = 0; i < numPassedArgs; ++i)
                 arguments.append(ConstrainedValue(lowJSValue(m_graph.varArgChild(node, 1 + i)), ValueRep::WarmAny));
@@ -5980,6 +6052,7 @@ private:
                     shuffleData.numLocals = state->jitCode->common.frameRegisterCount;
                     
                     RegisterSet toSave = params.unavailableRegisters();
+                    shuffleData.argumentsInRegisters = true;
                     shuffleData.callee = ValueRecovery::inGPR(calleeGPR, DataFormatCell);
                     toSave.set(calleeGPR);
                     for (unsigned i = 0; i < numPassedArgs; ++i) {
@@ -5998,7 +6071,11 @@ private:
                     
                     CCallHelpers::PatchableJump patchableJump = jit.patchableJump();
                     CCallHelpers::Label mainPath = jit.label();
-                    
+
+                    incrementCounter(&jit, VM::FTLCaller);
+                    incrementCounter(&jit, VM::TailCall);
+                    incrementCounter(&jit, VM::DirectCall);
+
                     jit.store32(
                         CCallHelpers::TrustedImm32(callSiteIndex.bits()),
                         CCallHelpers::tagFor(VirtualRegister(CallFrameSlot::argumentCount)));
@@ -6019,7 +6096,7 @@ private:
                     jit.jump().linkTo(mainPath, &jit);
                     
                     callLinkInfo->setUpCall(
-                        CallLinkInfo::DirectTailCall, node->origin.semantic, InvalidGPRReg);
+                        CallLinkInfo::DirectTailCall, argumentsLocationFor(numPassedArgs), node->origin.semantic, InvalidGPRReg);
                     callLinkInfo->setExecutableDuringCompilation(executable);
                     if (numAllocatedArgs > numPassedArgs)
                         callLinkInfo->setMaxNumArguments(numAllocatedArgs);
@@ -6042,6 +6119,9 @@ private:
                 
                 CCallHelpers::Label mainPath = jit.label();
 
+                incrementCounter(&jit, VM::FTLCaller);
+                incrementCounter(&jit, VM::DirectCall);
+
                 jit.store32(
                     CCallHelpers::TrustedImm32(callSiteIndex.bits()),
                     CCallHelpers::tagFor(VirtualRegister(CallFrameSlot::argumentCount)));
@@ -6053,7 +6133,7 @@ private:
                 
                 callLinkInfo->setUpCall(
                     isConstruct ? CallLinkInfo::DirectConstruct : CallLinkInfo::DirectCall,
-                    node->origin.semantic, InvalidGPRReg);
+                    argumentsLocationFor(numPassedArgs), node->origin.semantic, InvalidGPRReg);
                 callLinkInfo->setExecutableDuringCompilation(executable);
                 if (numAllocatedArgs > numPassedArgs)
                     callLinkInfo->setMaxNumArguments(numAllocatedArgs);
@@ -6064,13 +6144,11 @@ private:
                         
                         CCallHelpers::Label slowPath = jit.label();
                         if (isX86())
-                            jit.pop(CCallHelpers::selectScratchGPR(calleeGPR));
-                        
-                        callOperation(
-                            *state, params.unavailableRegisters(), jit,
-                            node->origin.semantic, exceptions.get(), operationLinkDirectCall,
-                            InvalidGPRReg, CCallHelpers::TrustedImmPtr(callLinkInfo),
-                            calleeGPR).call();
+                            jit.pop(GPRInfo::nonArgGPR0);
+
+                        jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0); // Link info needs to be in nonArgGPR0
+                        CCallHelpers::Call slowCall = jit.nearCall();
+                        exceptions->append(jit.emitExceptionCheck(AssemblyHelpers::NormalExceptionCheck, AssemblyHelpers::FarJumpWidth));
                         jit.jump().linkTo(mainPath, &jit);
                         
                         jit.addLinkTask(
@@ -6079,6 +6157,9 @@ private:
                                 CodeLocationLabel slowPathLocation = linkBuffer.locationOf(slowPath);
                                 
                                 linkBuffer.link(call, slowPathLocation);
+                                MacroAssemblerCodePtr linkCall =
+                                    linkBuffer.vm().getJITCallThunkEntryStub(linkDirectCallThunkGenerator).entryFor(callLinkInfo->argumentsLocation());
+                                linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
                                 
                                 callLinkInfo->setCallLocations(
                                     CodeLocationLabel(),
@@ -6110,7 +6191,8 @@ private:
 
         Vector<ConstrainedValue> arguments;
 
-        arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(GPRInfo::regT0)));
+        GPRReg calleeReg = argumentRegisterForCallee();
+        arguments.append(ConstrainedValue(jsCallee, ValueRep::reg(calleeReg)));
 
         for (unsigned i = 0; i < numArgs; ++i) {
             // Note: we could let the shuffler do boxing for us, but it's not super clear that this
@@ -6144,9 +6226,13 @@ private:
                 AllowMacroScratchRegisterUsage allowScratch(jit);
                 CallSiteIndex callSiteIndex = state->jitCode->common.addUniqueCallSiteIndex(codeOrigin);
 
+                incrementCounter(&jit, VM::FTLCaller);
+                incrementCounter(&jit, VM::TailCall);
+
                 CallFrameShuffleData shuffleData;
+                shuffleData.argumentsInRegisters = true;
                 shuffleData.numLocals = state->jitCode->common.frameRegisterCount;
-                shuffleData.callee = ValueRecovery::inGPR(GPRInfo::regT0, DataFormatJS);
+                shuffleData.callee = ValueRecovery::inGPR(calleeReg, DataFormatJS);
 
                 for (unsigned i = 0; i < numArgs; ++i)
                     shuffleData.args.append(params[1 + i].recoveryForJSValue());
@@ -6157,7 +6243,7 @@ private:
 
                 CCallHelpers::DataLabelPtr targetToCheck;
                 CCallHelpers::Jump slowPath = jit.branchPtrWithPatch(
-                    CCallHelpers::NotEqual, GPRInfo::regT0, targetToCheck,
+                    CCallHelpers::NotEqual, calleeReg, targetToCheck,
                     CCallHelpers::TrustedImmPtr(0));
 
                 callLinkInfo->setFrameShuffleData(shuffleData);
@@ -6175,20 +6261,19 @@ private:
                     CCallHelpers::tagFor(VirtualRegister(CallFrameSlot::argumentCount)));
 
                 CallFrameShuffler slowPathShuffler(jit, shuffleData);
-                slowPathShuffler.setCalleeJSValueRegs(JSValueRegs(GPRInfo::regT0));
                 slowPathShuffler.prepareForSlowPath();
 
-                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2);
+                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0);
                 CCallHelpers::Call slowCall = jit.nearCall();
 
                 jit.abortWithReason(JITDidReturnFromTailCall);
 
-                callLinkInfo->setUpCall(CallLinkInfo::TailCall, codeOrigin, GPRInfo::regT0);
+                callLinkInfo->setUpCall(CallLinkInfo::TailCall, argumentsLocationFor(numArgs), codeOrigin, calleeReg);
 
                 jit.addLinkTask(
                     [=] (LinkBuffer& linkBuffer) {
                         MacroAssemblerCodePtr linkCall =
-                            linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code();
+                            linkBuffer.vm().getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(callLinkInfo->argumentsLocation());
                         linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
 
                         callLinkInfo->setCallLocations(
@@ -6278,6 +6363,7 @@ private:
                     CCallHelpers::tagFor(VirtualRegister(CallFrameSlot::argumentCount)));
 
                 CallLinkInfo* callLinkInfo = jit.codeBlock()->addCallLinkInfo();
+                ArgumentsLocation argumentsLocation = StackArgs;
 
                 RegisterSet usedRegisters = RegisterSet::allRegisters();
                 usedRegisters.exclude(RegisterSet::volatileRegistersForJSCall());
@@ -6427,7 +6513,7 @@ private:
                 if (isTailCall)
                     jit.emitRestoreCalleeSaves();
                 ASSERT(!usedRegisters.get(GPRInfo::regT2));
-                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2);
+                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0);
                 CCallHelpers::Call slowCall = jit.nearCall();
                 
                 if (isTailCall)
@@ -6435,7 +6521,7 @@ private:
                 else
                     done.link(&jit);
                 
-                callLinkInfo->setUpCall(callType, node->origin.semantic, GPRInfo::regT0);
+                callLinkInfo->setUpCall(callType, argumentsLocation, node->origin.semantic, GPRInfo::regT0);
 
                 jit.addPtr(
                     CCallHelpers::TrustedImm32(-originalStackHeight),
@@ -6444,7 +6530,7 @@ private:
                 jit.addLinkTask(
                     [=] (LinkBuffer& linkBuffer) {
                         MacroAssemblerCodePtr linkCall =
-                            linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code();
+                            linkBuffer.vm().getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(StackArgs);
                         linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
                         
                         callLinkInfo->setCallLocations(
@@ -6545,11 +6631,15 @@ private:
 
                 exceptionHandle->scheduleExitCreationForUnwind(params, callSiteIndex);
 
+                incrementCounter(&jit, VM::FTLCaller);
+                incrementCounter(&jit, VM::CallVarargs);
+                
                 jit.store32(
                     CCallHelpers::TrustedImm32(callSiteIndex.bits()),
                     CCallHelpers::tagFor(VirtualRegister(CallFrameSlot::argumentCount)));
 
                 CallLinkInfo* callLinkInfo = jit.codeBlock()->addCallLinkInfo();
+                ArgumentsLocation argumentsLocation = StackArgs;
                 CallVarargsData* data = node->callVarargsData();
 
                 unsigned argIndex = 1;
@@ -6710,7 +6800,7 @@ private:
 
                 if (isTailCall)
                     jit.emitRestoreCalleeSaves();
-                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2);
+                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::nonArgGPR0);
                 CCallHelpers::Call slowCall = jit.nearCall();
                 
                 if (isTailCall)
@@ -6718,7 +6808,7 @@ private:
                 else
                     done.link(&jit);
                 
-                callLinkInfo->setUpCall(callType, node->origin.semantic, GPRInfo::regT0);
+                callLinkInfo->setUpCall(callType, argumentsLocation, node->origin.semantic, GPRInfo::regT0);
                 
                 jit.addPtr(
                     CCallHelpers::TrustedImm32(-originalStackHeight),
@@ -6727,7 +6817,7 @@ private:
                 jit.addLinkTask(
                     [=] (LinkBuffer& linkBuffer) {
                         MacroAssemblerCodePtr linkCall =
-                            linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code();
+                            linkBuffer.vm().getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(StackArgs);
                         linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
                         
                         callLinkInfo->setCallLocations(
@@ -6796,13 +6886,16 @@ private:
                 Box<CCallHelpers::JumpList> exceptions = exceptionHandle->scheduleExitCreation(params)->jumps(jit);
                 
                 exceptionHandle->scheduleExitCreationForUnwind(params, callSiteIndex);
-                
+
+                incrementCounter(&jit, VM::FTLCaller);
+                incrementCounter(&jit, VM::CallEval);
+
                 jit.store32(
                     CCallHelpers::TrustedImm32(callSiteIndex.bits()),
                     CCallHelpers::tagFor(VirtualRegister(CallFrameSlot::argumentCount)));
                 
                 CallLinkInfo* callLinkInfo = jit.codeBlock()->addCallLinkInfo();
-                callLinkInfo->setUpCall(CallLinkInfo::Call, node->origin.semantic, GPRInfo::regT0);
+                callLinkInfo->setUpCall(CallLinkInfo::Call, StackArgs, node->origin.semantic, GPRInfo::regT0);
                 
                 jit.addPtr(CCallHelpers::TrustedImm32(-static_cast<ptrdiff_t>(sizeof(CallerFrameAndPC))), CCallHelpers::stackPointerRegister, GPRInfo::regT1);
                 jit.storePtr(GPRInfo::callFrameRegister, CCallHelpers::Address(GPRInfo::regT1, CallFrame::callerFrameOffset()));
index 9a391e3..02bffe4 100644
@@ -71,7 +71,11 @@ void* prepareOSREntry(
     if (Options::verboseOSR())
         dataLog("    Values at entry: ", values, "\n");
     
-    for (int argument = values.numberOfArguments(); argument--;) {
+    for (unsigned argument = values.numberOfArguments(); argument--;) {
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+        if (argument < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
+            break;
+#endif
         JSValue valueOnStack = exec->r(virtualRegisterForArgument(argument).offset()).asanUnsafeJSValue();
         JSValue reconstructedValue = values.argument(argument);
         if (valueOnStack == reconstructedValue || !argument)
@@ -99,8 +103,12 @@ void* prepareOSREntry(
     }
     
     exec->setCodeBlock(entryCodeBlock);
-    
-    void* result = entryCode->addressForCall(ArityCheckNotRequired).executableAddress();
+
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    void* result = entryCode->addressForCall(RegisterArgsArityCheckNotRequired).executableAddress();
+#else
+    void* result = entryCode->addressForCall(StackArgsArityCheckNotRequired).executableAddress();
+#endif
     if (Options::verboseOSR())
         dataLog("    Entry will succeed, going to address", RawPointer(result), "\n");
     
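
Since the first NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS arguments now arrive in registers, their call-frame slots are not authoritative at OSR entry, so the validation loop above stops before reaching them. The same shape as a standalone sketch (plain C++; the register count is illustrative):

    #include <vector>

    constexpr unsigned kRegisterArgumentCount = 4; // 4 on x86-64 SysV, 6 on ARM64 here

    // Compare reconstructed argument values against their stack slots, skipping
    // the register-passed range, whose stack copies may be stale.
    bool entryValuesMatch(const std::vector<long>& stackSlots, const std::vector<long>& reconstructed)
    {
        for (unsigned argument = static_cast<unsigned>(reconstructed.size()); argument--;) {
            if (argument < kRegisterArgumentCount)
                break; // these arrive in registers; there is no stack copy to check
            if (stackSlots[argument] != reconstructed[argument])
                return false;
        }
        return true;
    }
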
index bf209b0..62e64d7 100644
@@ -89,6 +89,16 @@ void Output::appendTo(LBasicBlock block)
     m_block = block;
 }
 
+LValue Output::argumentRegister(Reg reg)
+{
+    return m_block->appendNew<ArgumentRegValue>(m_proc, origin(), reg);
+}
+
+LValue Output::argumentRegisterInt32(Reg reg)
+{
+    return m_block->appendNew<ArgumentRegValue>(m_proc, origin(), reg, Int32);
+}
+
 LValue Output::framePointer()
 {
     return m_block->appendNew<B3::Value>(m_proc, B3::FramePointer, origin());
index daea7d0..d14072e 100644
@@ -98,6 +98,8 @@ public:
     void setOrigin(DFG::Node* node) { m_origin = node; }
     B3::Origin origin() { return B3::Origin(m_origin); }
 
+    LValue argumentRegister(Reg reg);
+    LValue argumentRegisterInt32(Reg reg);
     LValue framePointer();
 
     B3::SlotBaseValue* lockedStackSlot(size_t bytes);
index 6d12771..056d521 100644
@@ -284,16 +284,24 @@ void ShadowChicken::update(VM&, ExecState* exec)
             bool foundFrame = advanceIndexInLogTo(callFrame, callFrame->jsCallee(), callFrame->callerFrame());
             bool isTailDeleted = false;
             JSScope* scope = nullptr;
+            JSValue thisValue = jsUndefined();
             CodeBlock* codeBlock = callFrame->codeBlock();
-            if (codeBlock && codeBlock->wasCompiledWithDebuggingOpcodes() && codeBlock->scopeRegister().isValid()) {
-                scope = callFrame->scope(codeBlock->scopeRegister().offset());
-                RELEASE_ASSERT(scope->inherits(JSScope::info()));
-            } else if (foundFrame) {
-                scope = m_log[indexInLog].scope;
-                if (scope)
+            if (codeBlock && codeBlock->wasCompiledWithDebuggingOpcodes()) {
+                if (codeBlock->scopeRegister().isValid()) {
+                    scope = callFrame->scope(codeBlock->scopeRegister().offset());
                     RELEASE_ASSERT(scope->inherits(JSScope::info()));
+                }
+                thisValue = callFrame->thisValue();
+            } else if (foundFrame) {
+                if (!scope) {
+                    scope = m_log[indexInLog].scope;
+                    if (scope)
+                        RELEASE_ASSERT(scope->inherits(JSScope::info()));
+                }
+                if (thisValue.isUndefined())
+                    thisValue = m_log[indexInLog].thisValue;
             }
-            toPush.append(Frame(visitor->callee(), callFrame, isTailDeleted, callFrame->thisValue(), scope, codeBlock, callFrame->callSiteIndex()));
+            toPush.append(Frame(visitor->callee(), callFrame, isTailDeleted, thisValue, scope, codeBlock, callFrame->callSiteIndex()));
 
             if (indexInLog < logCursorIndex
                 // This condition protects us from the case where advanceIndexInLogTo didn't find
index 4b4cd8c..c700a0e 100644
@@ -616,13 +616,13 @@ void AssemblyHelpers::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer()
 
 void AssemblyHelpers::emitDumbVirtualCall(CallLinkInfo* info)
 {
-    move(TrustedImmPtr(info), GPRInfo::regT2);
+    move(TrustedImmPtr(info), GPRInfo::nonArgGPR0);
     Call call = nearCall();
     addLinkTask(
         [=] (LinkBuffer& linkBuffer) {
-            MacroAssemblerCodeRef virtualThunk = virtualThunkFor(&linkBuffer.vm(), *info);
-            info->setSlowStub(createJITStubRoutine(virtualThunk, linkBuffer.vm(), nullptr, true));
-            linkBuffer.link(call, CodeLocationLabel(virtualThunk.code()));
+            JITJSCallThunkEntryPointsWithRef virtualThunk = virtualThunkFor(&linkBuffer.vm(), *info);
+            info->setSlowStub(createJITStubRoutine(virtualThunk.codeRef(), linkBuffer.vm(), nullptr, true));
+            linkBuffer.link(call, CodeLocationLabel(virtualThunk.entryFor(StackArgs)));
         });
 }
 
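
Call link thunks are likewise generated with one entry label per arguments location, so emitDumbVirtualCall above links against entryFor(StackArgs) instead of a single code pointer. The shape of that lookup, with a simplified illustrative enum (the real ArgumentsLocation values come from the CallLinkInfo changes in this patch):

    enum ArgumentsLocation { StackArgs, RegisterArgs, NumberOfArgumentsLocations }; // simplified

    struct ThunkEntryPoints {
        void* entries[NumberOfArgumentsLocations] {};
        void* entryFor(ArgumentsLocation location) const { return entries[location]; }
    };
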
index 3318ff8..6aa0995 100644
@@ -414,6 +414,89 @@ public:
 #endif
     }
 
+    enum SpillRegisterType { SpillAll, SpillExactly };
+
+    void spillArgumentRegistersToFrameBeforePrologue(unsigned minimumArgsToSpill = 0, SpillRegisterType spillType = SpillAll)
+    {
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+        JumpList doneStoringArgs;
+
+        emitPutToCallFrameHeaderBeforePrologue(argumentRegisterForCallee(), CallFrameSlot::callee);
+        GPRReg argCountReg = argumentRegisterForArgumentCount();
+        emitPutToCallFrameHeaderBeforePrologue(argCountReg, CallFrameSlot::argumentCount);
+
+        unsigned argIndex = 0;
+        // Always spill "this"
+        minimumArgsToSpill = std::max(minimumArgsToSpill, 1U);
+
+        for (; argIndex < minimumArgsToSpill && argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++)
+            emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex);
+
+        if (spillType == SpillAll) {
+            // Spill extra args passed to function
+            for (; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) {
+                doneStoringArgs.append(branch32(MacroAssembler::BelowOrEqual, argCountReg, MacroAssembler::TrustedImm32(argIndex)));
+                emitPutArgumentToCallFrameBeforePrologue(argumentRegisterForFunctionArgument(argIndex), argIndex);
+            }
+        }
+
+        doneStoringArgs.link(this);
+#else
+        UNUSED_PARAM(minimumArgsToSpill);
+        UNUSED_PARAM(spillType);
+#endif
+    }
+
+    void spillArgumentRegistersToFrame(unsigned minimumArgsToSpill = 0, SpillRegisterType spillType = SpillAll)
+    {
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+        JumpList doneStoringArgs;
+
+        emitPutToCallFrameHeader(argumentRegisterForCallee(), CallFrameSlot::callee);
+        GPRReg argCountReg = argumentRegisterForArgumentCount();
+        emitPutToCallFrameHeader(argCountReg, CallFrameSlot::argumentCount);
+        
+        unsigned argIndex = 0;
+        // Always spill "this"
+        minimumArgsToSpill = std::max(minimumArgsToSpill, 1U);
+        
+        for (; argIndex < minimumArgsToSpill && argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++)
+            emitPutArgumentToCallFrame(argumentRegisterForFunctionArgument(argIndex), argIndex);
+        
+        if (spillType == SpillAll) {
+            // Spill extra args passed to function
+            for (; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) {
+                doneStoringArgs.append(branch32(MacroAssembler::BelowOrEqual, argCountReg, MacroAssembler::TrustedImm32(argIndex)));
+                emitPutArgumentToCallFrame(argumentRegisterForFunctionArgument(argIndex), argIndex);
+            }
+        }
+        
+        doneStoringArgs.link(this);
+#else
+        UNUSED_PARAM(minimumArgsToSpill);
+        UNUSED_PARAM(spillType);
+#endif
+    }
+    
+    void fillArgumentRegistersFromFrameBeforePrologue()
+    {
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+        JumpList doneLoadingArgs;
+
+        emitGetFromCallFrameHeaderBeforePrologue(CallFrameSlot::callee, argumentRegisterForCallee());
+        GPRReg argCountReg = argumentRegisterForArgumentCount();
+        emitGetPayloadFromCallFrameHeaderBeforePrologue(CallFrameSlot::argumentCount, argCountReg);
+        
+        for (unsigned argIndex = 0; argIndex < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argIndex++) {
+            if (argIndex) // Always load "this"
+                doneLoadingArgs.append(branch32(MacroAssembler::BelowOrEqual, argCountReg, MacroAssembler::TrustedImm32(argIndex)));
+            emitGetFromCallFrameArgumentBeforePrologue(argIndex, argumentRegisterForFunctionArgument(argIndex));
+        }
+        
+        doneLoadingArgs.link(this);
+#endif
+    }
+
 #if CPU(X86_64) || CPU(X86)
     static size_t prologueStackPointerDelta()
     {
@@ -624,6 +707,31 @@ public:
     {
         storePtr(from, Address(stackPointerRegister, entry * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta()));
     }
+
+    void emitPutArgumentToCallFrameBeforePrologue(GPRReg from, unsigned argument)
+    {
+        storePtr(from, Address(stackPointerRegister, (CallFrameSlot::thisArgument + argument) * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta()));
+    }
+
+    void emitPutArgumentToCallFrame(GPRReg from, unsigned argument)
+    {
+        emitPutToCallFrameHeader(from, CallFrameSlot::thisArgument + argument);
+    }
+
+    void emitGetFromCallFrameHeaderBeforePrologue(const int entry, GPRReg to)
+    {
+        loadPtr(Address(stackPointerRegister, entry * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta()), to);
+    }
+    
+    void emitGetFromCallFrameArgumentBeforePrologue(unsigned argument, GPRReg to)
+    {
+        loadPtr(Address(stackPointerRegister, (CallFrameSlot::thisArgument + argument) * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta()), to);
+    }
+    
+    void emitGetPayloadFromCallFrameHeaderBeforePrologue(const int entry, GPRReg to)
+    {
+        load32(Address(stackPointerRegister, entry * static_cast<ptrdiff_t>(sizeof(Register)) - prologueStackPointerDelta() + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload)), to);
+    }
 #else
     void emitPutPayloadToCallFrameHeaderBeforePrologue(GPRReg from, int entry)
     {
@@ -1660,7 +1768,14 @@ public:
 #if USE(JSVALUE64)
     void wangsInt64Hash(GPRReg inputAndResult, GPRReg scratch);
 #endif
-    
+
+#if ENABLE(VM_COUNTERS)
+    void incrementCounter(VM::VMCounterType counterType)
+    {
+        addPtr(TrustedImm32(1), AbsoluteAddress(vm()->addressOfCounter(counterType)));
+    }
+#endif
+
 protected:
     VM* m_vm;
     CodeBlock* m_codeBlock;
@@ -1669,6 +1784,12 @@ protected:
     HashMap<CodeBlock*, Vector<BytecodeAndMachineOffset>> m_decodedCodeMaps;
 };
 
+#if ENABLE(VM_COUNTERS)
+#define incrementCounter(jit, counterType) (jit)->incrementCounter(counterType)
+#else
+#define incrementCounter(jit, counterType) ((void)0)
+#endif
+
 } // namespace JSC
 
 #endif // ENABLE(JIT)
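
The spill helpers above store the callee and argument count unconditionally, always spill "this", and guard each remaining register store with a branch on the dynamic argument count. Restated as a plain C++ sketch (names and the register count are illustrative):

    #include <algorithm>

    constexpr unsigned kRegisterArgumentCount = 4; // platform-dependent

    enum SpillRegisterType { SpillAll, SpillExactly };

    void spillArguments(long* frameArgumentSlots, const long* argumentRegisters,
        unsigned argumentCountIncludingThis, unsigned minimumArgsToSpill, SpillRegisterType spillType)
    {
        minimumArgsToSpill = std::max(minimumArgsToSpill, 1u); // always spill "this"
        unsigned argIndex = 0;
        for (; argIndex < minimumArgsToSpill && argIndex < kRegisterArgumentCount; argIndex++)
            frameArgumentSlots[argIndex] = argumentRegisters[argIndex];
        if (spillType == SpillAll) {
            // The JIT emits one conditional branch per slot; here the bound is explicit.
            for (; argIndex < kRegisterArgumentCount && argIndex < argumentCountIncludingThis; argIndex++)
                frameArgumentSlots[argIndex] = argumentRegisters[argIndex];
        }
    }
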
index f4aacc6..6dbef5e 100644
 
 namespace JSC {
 
+void CachedRecovery::addTargetJSValueRegs(JSValueRegs jsValueRegs)
+{
+    ASSERT(m_wantedFPR == InvalidFPRReg);
+    size_t existing = m_gprTargets.find(jsValueRegs);
+    if (existing == WTF::notFound) {
+#if USE(JSVALUE64)
+        if (m_gprTargets.size() > 0 && m_recovery.isSet() && m_recovery.isInGPR()) {
+            // If we are recovering to the same GPR, make that GPR the first target.
+            GPRReg sourceGPR = m_recovery.gpr();
+            if (jsValueRegs.gpr() == sourceGPR) {
+                // Append the current first GPR below.
+                jsValueRegs = JSValueRegs(m_gprTargets[0].gpr());
+                m_gprTargets[0] = JSValueRegs(sourceGPR);
+            }
+        }
+#endif
+        m_gprTargets.append(jsValueRegs);
+    }
+}
+
 // We prefer loading doubles and undetermined JSValues into FPRs
 // because it would otherwise use up GPRs.  Two in JSVALUE32_64.
 bool CachedRecovery::loadsIntoFPR() const
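
addTargetJSValueRegs allows one recovery to fan out to several GPR targets. When the value already lives in one of the wanted registers, that register is promoted to the front of the target list so the primary placement costs nothing, and the remaining targets become register-to-register copies at the end of the shuffle (see prepareAny() in the CallFrameShuffler changes below). The ordering rule in isolation, reduced to bare register ids:

    #include <algorithm>
    #include <vector>

    using GPR = int;

    void addTarget(std::vector<GPR>& targets, GPR wanted, GPR source)
    {
        if (std::find(targets.begin(), targets.end(), wanted) != targets.end())
            return; // already a target
        if (!targets.empty() && wanted == source) {
            // The value is already in `wanted`: make it the primary target and
            // demote the previous primary to a copy target.
            targets.push_back(targets.front());
            targets.front() = source;
            return;
        }
        targets.push_back(wanted);
    }
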
index f627ac9..44e388d 100644
@@ -50,6 +50,7 @@ public:
     CachedRecovery& operator=(CachedRecovery&&) = delete;
 
     const Vector<VirtualRegister, 1>& targets() const { return m_targets; }
+    const Vector<JSValueRegs, 1>& gprTargets() const { return m_gprTargets; }
 
     void addTarget(VirtualRegister reg)
     {
@@ -68,15 +69,11 @@ public:
         m_targets.clear();
     }
 
-    void setWantedJSValueRegs(JSValueRegs jsValueRegs)
-    {
-        ASSERT(m_wantedFPR == InvalidFPRReg);
-        m_wantedJSValueRegs = jsValueRegs;
-    }
+    void addTargetJSValueRegs(JSValueRegs);
 
     void setWantedFPR(FPRReg fpr)
     {
-        ASSERT(!m_wantedJSValueRegs);
+        ASSERT(m_gprTargets.isEmpty());
         m_wantedFPR = fpr;
     }
 
@@ -119,14 +116,20 @@ public:
 
     void setRecovery(ValueRecovery recovery) { m_recovery = recovery; }
 
-    JSValueRegs wantedJSValueRegs() const { return m_wantedJSValueRegs; }
+    JSValueRegs wantedJSValueRegs() const
+    {
+        if (m_gprTargets.isEmpty())
+            return JSValueRegs();
+
+        return m_gprTargets[0];
+    }
 
     FPRReg wantedFPR() const { return m_wantedFPR; }
 private:
     ValueRecovery m_recovery;
-    JSValueRegs m_wantedJSValueRegs;
     FPRReg m_wantedFPR { InvalidFPRReg };
     Vector<VirtualRegister, 1> m_targets;
+    Vector<JSValueRegs, 1> m_gprTargets;
 };
 
 } // namespace JSC
index a987eab..33d9ebc 100644
@@ -39,6 +39,7 @@ public:
     ValueRecovery callee;
     Vector<ValueRecovery> args;
 #if USE(JSVALUE64)
+    bool argumentsInRegisters { false };
     RegisterMap<ValueRecovery> registers;
     GPRReg tagTypeNumber { InvalidGPRReg };
 
index 7209f89..0fbdfd4 100644
@@ -42,6 +42,9 @@ CallFrameShuffler::CallFrameShuffler(CCallHelpers& jit, const CallFrameShuffleDa
         + roundArgumentCountToAlignFrame(jit.codeBlock()->numParameters()))
     , m_alignedNewFrameSize(CallFrame::headerSizeInRegisters
         + roundArgumentCountToAlignFrame(data.args.size()))
+#if USE(JSVALUE64)
+    , m_argumentsInRegisters(data.argumentsInRegisters)
+#endif
     , m_frameDelta(m_alignedNewFrameSize - m_alignedOldFrameSize)
     , m_lockedRegisters(RegisterSet::allRegisters())
 {
@@ -54,11 +57,21 @@ CallFrameShuffler::CallFrameShuffler(CCallHelpers& jit, const CallFrameShuffleDa
     m_lockedRegisters.exclude(RegisterSet::vmCalleeSaveRegisters());
 
     ASSERT(!data.callee.isInJSStack() || data.callee.virtualRegister().isLocal());
-    addNew(VirtualRegister(CallFrameSlot::callee), data.callee);
-
+#if USE(JSVALUE64)
+    if (data.argumentsInRegisters)
+        addNew(JSValueRegs(argumentRegisterForCallee()), data.callee);
+    else
+#endif
+        addNew(VirtualRegister(CallFrameSlot::callee), data.callee);
+    
     for (size_t i = 0; i < data.args.size(); ++i) {
         ASSERT(!data.args[i].isInJSStack() || data.args[i].virtualRegister().isLocal());
-        addNew(virtualRegisterForArgument(i), data.args[i]);
+#if USE(JSVALUE64)
+        if (data.argumentsInRegisters && i < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
+            addNew(JSValueRegs(argumentRegisterForFunctionArgument(i)), data.args[i]);
+        else
+#endif
+            addNew(virtualRegisterForArgument(i), data.args[i]);
     }
 
 #if USE(JSVALUE64)
@@ -185,8 +198,13 @@ void CallFrameShuffler::dump(PrintStream& out) const
             }
         }
 #else
-        if (newCachedRecovery)
+        if (newCachedRecovery) {
             out.print("         ", reg, " <- ", newCachedRecovery->recovery());
+            if (newCachedRecovery->gprTargets().size() > 1) {
+                for (size_t i = 1; i < newCachedRecovery->gprTargets().size(); i++)
+                    out.print(", ", newCachedRecovery->gprTargets()[i].gpr(), " <- ", newCachedRecovery->recovery());
+            }
+        }
 #endif
         out.print("\n");
     }
@@ -496,7 +514,7 @@ bool CallFrameShuffler::tryWrites(CachedRecovery& cachedRecovery)
     ASSERT(cachedRecovery.recovery().isInRegisters()
         || cachedRecovery.recovery().isConstant());
 
-    if (verbose)
+    if (verbose && cachedRecovery.targets().size())
         dataLog("   * Storing ", cachedRecovery.recovery());
     for (size_t i = 0; i < cachedRecovery.targets().size(); ++i) {
         VirtualRegister target { cachedRecovery.targets()[i] };
@@ -505,9 +523,9 @@ bool CallFrameShuffler::tryWrites(CachedRecovery& cachedRecovery)
             dataLog(!i ? " into " : ", and ", "NEW ", target);
         emitStore(cachedRecovery, addressForNew(target));
         setNew(target, nullptr);
+        if (verbose)
+            dataLog("\n");
     }
-    if (verbose)
-        dataLog("\n");
     cachedRecovery.clearTargets();
     if (!cachedRecovery.wantedJSValueRegs() && cachedRecovery.wantedFPR() == InvalidFPRReg)
         clearCachedRecovery(cachedRecovery.recovery());
@@ -606,7 +624,7 @@ void CallFrameShuffler::prepareAny()
 {
     ASSERT(!isUndecided());
 
-    updateDangerFrontier();
+    initDangerFrontier();
 
     // First, we try to store any value that goes above the danger
     // frontier. This will never use more registers since we are only
@@ -702,13 +720,9 @@ void CallFrameShuffler::prepareAny()
         ASSERT_UNUSED(writesOK, writesOK);
     }
 
-#if USE(JSVALUE64)
-    if (m_tagTypeNumber != InvalidGPRReg && m_newRegisters[m_tagTypeNumber])
-        releaseGPR(m_tagTypeNumber);
-#endif
-
     // Handle 2) by loading all registers. We don't have to do any
     // writes, since they have been taken care of above.
+    // Note that we need m_tagTypeNumber to remain locked to box wanted registers.
     if (verbose)
         dataLog("  Loading wanted registers into registers\n");
     for (Reg reg = Reg::first(); reg <= Reg::last(); reg = reg.next()) {
@@ -742,12 +756,19 @@ void CallFrameShuffler::prepareAny()
 
     // We need to handle 4) first because it implies releasing
     // m_newFrameBase, which could be a wanted register.
+    // Note that we delay setting the argument count register as it needs to be released in step 3.
     if (verbose)
         dataLog("   * Storing the argument count into ", VirtualRegister { CallFrameSlot::argumentCount }, "\n");
-    m_jit.store32(MacroAssembler::TrustedImm32(0),
-        addressForNew(VirtualRegister { CallFrameSlot::argumentCount }).withOffset(TagOffset));
-    m_jit.store32(MacroAssembler::TrustedImm32(argCount()),
-        addressForNew(VirtualRegister { CallFrameSlot::argumentCount }).withOffset(PayloadOffset));
+#if USE(JSVALUE64)
+    if (!m_argumentsInRegisters) {
+#endif
+        m_jit.store32(MacroAssembler::TrustedImm32(0),
+            addressForNew(VirtualRegister { CallFrameSlot::argumentCount }).withOffset(TagOffset));
+        m_jit.store32(MacroAssembler::TrustedImm32(argCount()),
+            addressForNew(VirtualRegister { CallFrameSlot::argumentCount }).withOffset(PayloadOffset));
+#if USE(JSVALUE64)
+    }
+#endif
 
     if (!isSlowPath()) {
         ASSERT(m_newFrameBase != MacroAssembler::stackPointerRegister);
@@ -767,6 +788,23 @@ void CallFrameShuffler::prepareAny()
 
         emitDisplace(*cachedRecovery);
     }
+
+#if USE(JSVALUE64)
+    // For recoveries with multiple register targets, copy the contents of the first target to the
+    // remaining targets.
+    for (Reg reg = Reg::first(); reg <= Reg::last(); reg = reg.next()) {
+        CachedRecovery* cachedRecovery { m_newRegisters[reg] };
+        if (!cachedRecovery || cachedRecovery->gprTargets().size() < 2)
+            continue;
+
+        GPRReg sourceGPR = cachedRecovery->gprTargets()[0].gpr();
+        for (size_t i = 1; i < cachedRecovery->gprTargets().size(); i++)
+            m_jit.move(sourceGPR, cachedRecovery->gprTargets()[i].gpr());
+    }
+
+    if (m_argumentsInRegisters)
+        m_jit.move(MacroAssembler::TrustedImm32(argCount()), argumentRegisterForArgumentCount());
+#endif
 }
 
 } // namespace JSC
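
Splitting initDangerFrontier() from updateDangerFrontier() avoids rescanning the whole new frame after every store: initialization scans down from the top of the frame, and each update resumes just below the previous frontier, the only region where the frontier can still move. A standalone sketch of the shared scan (illustrative types):

    #include <vector>

    // A slot is dangerous while it overlaps an old-frame value that is still live.
    // Returns the highest dangerous slot at or below `from`, or -1 if none remain.
    int findDangerFrontierFrom(const std::vector<bool>& overlapsLiveOldValue, int from)
    {
        for (int reg = from; reg >= 0; reg--) {
            if (overlapsLiveOldValue[reg])
                return reg;
        }
        return -1;
    }

    // init:   frontier = findDangerFrontierFrom(liveness, lastNew);
    // update: frontier = findDangerFrontierFrom(liveness, frontier - 1);
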
index f0918fc..b6b22e7 100644
@@ -96,17 +96,37 @@ public:
     // contains information about where the
     // arguments/callee/callee-save registers are by taking into
     // account any spilling that acquireGPR() could have done.
-    CallFrameShuffleData snapshot() const
+    CallFrameShuffleData snapshot(ArgumentsLocation argumentsLocation) const
     {
         ASSERT(isUndecided());
 
         CallFrameShuffleData data;
         data.numLocals = numLocals();
-        data.callee = getNew(VirtualRegister { CallFrameSlot::callee })->recovery();
+#if USE(JSVALUE64)
+        data.argumentsInRegisters = argumentsLocation != StackArgs;
+#endif
+        if (argumentsLocation == StackArgs)
+            data.callee = getNew(VirtualRegister { CallFrameSlot::callee })->recovery();
+        else {
+            Reg reg { argumentRegisterForCallee() };
+            CachedRecovery* cachedRecovery { m_newRegisters[reg] };
+            data.callee = cachedRecovery->recovery();
+        }
         data.args.resize(argCount());
-        for (size_t i = 0; i < argCount(); ++i)
-            data.args[i] = getNew(virtualRegisterForArgument(i))->recovery();
+        for (size_t i = 0; i < argCount(); ++i) {
+            if (argumentsLocation == StackArgs || i >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
+                data.args[i] = getNew(virtualRegisterForArgument(i))->recovery();
+            else {
+                Reg reg { argumentRegisterForFunctionArgument(i) };
+                CachedRecovery* cachedRecovery { m_newRegisters[reg] };
+                data.args[i] = cachedRecovery->recovery();
+            }
+        }
         for (Reg reg = Reg::first(); reg <= Reg::last(); reg = reg.next()) {
+            if (reg.isGPR() && argumentsLocation != StackArgs
+                && GPRInfo::toArgumentIndex(reg.gpr()) < argumentRegisterIndexForJSFunctionArgument(argCount()))
+                continue;
+
             CachedRecovery* cachedRecovery { m_newRegisters[reg] };
             if (!cachedRecovery)
                 continue;
@@ -376,6 +396,9 @@ private:
 
     int m_alignedOldFrameSize;
     int m_alignedNewFrameSize;
+#if USE(JSVALUE64)
+    bool m_argumentsInRegisters;
+#endif
 
     // This is the distance, in slots, between the base of the new
     // frame and the base of the old frame. It could be negative when
@@ -641,9 +664,13 @@ private:
         ASSERT(jsValueRegs && !getNew(jsValueRegs));
         CachedRecovery* cachedRecovery = addCachedRecovery(recovery);
 #if USE(JSVALUE64)
-        if (cachedRecovery->wantedJSValueRegs())
-            m_newRegisters[cachedRecovery->wantedJSValueRegs().gpr()] = nullptr;
-        m_newRegisters[jsValueRegs.gpr()] = cachedRecovery;
+        if (cachedRecovery->wantedJSValueRegs()) {
+            if (recovery.isInGPR() && jsValueRegs.gpr() == recovery.gpr()) {
+                m_newRegisters[cachedRecovery->wantedJSValueRegs().gpr()] = nullptr;
+                m_newRegisters[jsValueRegs.gpr()] = cachedRecovery;
+            }
+        } else
+            m_newRegisters[jsValueRegs.gpr()] = cachedRecovery;
 #else
         if (JSValueRegs oldRegs { cachedRecovery->wantedJSValueRegs() }) {
             if (oldRegs.payloadGPR())
@@ -656,8 +683,7 @@ private:
         if (jsValueRegs.tagGPR() != InvalidGPRReg)
             m_newRegisters[jsValueRegs.tagGPR()] = cachedRecovery;
 #endif
-        ASSERT(!cachedRecovery->wantedJSValueRegs());
-        cachedRecovery->setWantedJSValueRegs(jsValueRegs);
+        cachedRecovery->addTargetJSValueRegs(jsValueRegs);
     }
 
     void addNew(FPRReg fpr, ValueRecovery recovery)
@@ -755,13 +781,23 @@ private:
         return reg <= dangerFrontier();
     }
 
+    void initDangerFrontier()
+    {
+        findDangerFrontierFrom(lastNew());
+    }
+
     void updateDangerFrontier()
     {
+        findDangerFrontierFrom(m_dangerFrontier - 1);
+    }
+
+    void findDangerFrontierFrom(VirtualRegister nextReg)
+    {
         ASSERT(!isUndecided());
 
         m_dangerFrontier = firstNew() - 1;
-        for (VirtualRegister reg = lastNew(); reg >= firstNew(); reg -= 1) {
-            if (!getNew(reg) || !isValidOld(newAsOld(reg)) || !getOld(newAsOld(reg)))
+        for (VirtualRegister reg = nextReg; reg >= firstNew(); reg -= 1) {
+            if (!isValidOld(newAsOld(reg)) || !getOld(newAsOld(reg)))
                 continue;
 
             m_dangerFrontier = reg;
index 2ef6ed1..86a0dde 100644
@@ -323,7 +323,8 @@ void CallFrameShuffler::emitDisplace(CachedRecovery& cachedRecovery)
             m_jit.move(cachedRecovery.recovery().gpr(), wantedReg.gpr());
         else
             m_jit.move64ToDouble(cachedRecovery.recovery().gpr(), wantedReg.fpr());
-        RELEASE_ASSERT(cachedRecovery.recovery().dataFormat() == DataFormatJS);
+        DataFormat format = cachedRecovery.recovery().dataFormat();
+        RELEASE_ASSERT(format == DataFormatJS || format == DataFormatCell);
         updateRecovery(cachedRecovery,
             ValueRecovery::inRegister(wantedReg, DataFormatJS));
     } else {
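
emitDisplace() previously insisted on DataFormatJS; with arguments in registers the displaced value can also be a proven cell (the callee recovery above is DataFormatCell), which on JSVALUE64 is already a validly encoded JSValue, so a plain register move preserves the boxing. A one-line model of the relaxed check (illustrative enum):

    enum DataFormat { DataFormatJS, DataFormatCell, DataFormatDouble };

    bool displaceableAsBoxedValue(DataFormat format)
    {
        // Cells box as themselves on 64-bit; doubles would need boxing first.
        return format == DataFormatJS || format == DataFormatCell;
    }
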
index 3d6a4c6..180f76a 100644
@@ -69,8 +69,8 @@ public:
     bool operator!() const { return m_gpr == InvalidGPRReg; }
     explicit operator bool() const { return m_gpr != InvalidGPRReg; }
 
-    bool operator==(JSValueRegs other) { return m_gpr == other.m_gpr; }
-    bool operator!=(JSValueRegs other) { return !(*this == other); }
+    bool operator==(JSValueRegs other) const { return m_gpr == other.m_gpr; }
+    bool operator!=(JSValueRegs other) const { return !(*this == other); }
     
     GPRReg gpr() const { return m_gpr; }
     GPRReg tagGPR() const { return InvalidGPRReg; }
@@ -331,6 +331,7 @@ private:
 
 #if CPU(X86)
 #define NUMBER_OF_ARGUMENT_REGISTERS 0u
+#define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 0u
 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u
 
 class GPRInfo {
@@ -353,6 +354,7 @@ public:
     static const GPRReg argumentGPR2 = X86Registers::eax; // regT0
     static const GPRReg argumentGPR3 = X86Registers::ebx; // regT3
     static const GPRReg nonArgGPR0 = X86Registers::esi; // regT4
+    static const GPRReg nonArgGPR1 = X86Registers::edi; // regT5
     static const GPRReg returnValueGPR = X86Registers::eax; // regT0
     static const GPRReg returnValueGPR2 = X86Registers::edx; // regT1
     static const GPRReg nonPreservedNonReturnGPR = X86Registers::ecx;
@@ -379,6 +381,14 @@ public:
         return result;
     }
 
+    static unsigned toArgumentIndex(GPRReg reg)
+    {
+        ASSERT(reg != InvalidGPRReg);
+        ASSERT(static_cast<int>(reg) < 8);
+        static const unsigned indexForArgumentRegister[8] = { 2, 0, 1, 3, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex };
+        return indexForArgumentRegister[reg];
+    }
+
     static const char* debugName(GPRReg reg)
     {
         ASSERT(reg != InvalidGPRReg);
@@ -399,9 +409,11 @@ public:
 #if !OS(WINDOWS)
 #define NUMBER_OF_ARGUMENT_REGISTERS 6u
 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 5u
+#define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS (NUMBER_OF_ARGUMENT_REGISTERS - 2u)
 #else
 #define NUMBER_OF_ARGUMENT_REGISTERS 4u
 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 7u
+#define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 0u
 #endif
 
 class GPRInfo {
@@ -464,6 +476,7 @@ public:
     static const GPRReg argumentGPR3 = X86Registers::r9; // regT3
 #endif
     static const GPRReg nonArgGPR0 = X86Registers::r10; // regT5 (regT4 on Windows)
+    static const GPRReg nonArgGPR1 = X86Registers::eax; // regT0
     static const GPRReg returnValueGPR = X86Registers::eax; // regT0
     static const GPRReg returnValueGPR2 = X86Registers::edx; // regT1 or regT2
     static const GPRReg nonPreservedNonReturnGPR = X86Registers::r10; // regT5 (regT4 on Windows)
@@ -508,6 +521,18 @@ public:
         return indexForRegister[reg];
     }
 
+    static unsigned toArgumentIndex(GPRReg reg)
+    {
+        ASSERT(reg != InvalidGPRReg);
+        ASSERT(static_cast<int>(reg) < 16);
+#if !OS(WINDOWS)
+        static const unsigned indexForArgumentRegister[16] = { InvalidIndex, 3, 2, InvalidIndex, InvalidIndex, InvalidIndex, 1, 0, 4, 5, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex };
+#else
+        static const unsigned indexForArgumentRegister[16] = { InvalidIndex, 0, 1, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, 2, 3, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex, InvalidIndex };
+#endif
+        return indexForArgumentRegister[reg];
+    }
+    
     static const char* debugName(GPRReg reg)
     {
         ASSERT(reg != InvalidGPRReg);
@@ -538,6 +563,7 @@ public:
 
 #if CPU(ARM)
 #define NUMBER_OF_ARGUMENT_REGISTERS 4u
+#define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 0u
 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u
 
 class GPRInfo {
@@ -601,6 +627,15 @@ public:
         return result;
     }
 
+    static unsigned toArgumentIndex(GPRReg reg)
+    {
+        ASSERT(reg != InvalidGPRReg);
+        ASSERT(static_cast<int>(reg) < 16);
+        if (reg > argumentGPR3)
+            return InvalidIndex;
+        return (unsigned)reg;
+    }
+    
     static const char* debugName(GPRReg reg)
     {
         ASSERT(reg != InvalidGPRReg);
@@ -621,6 +656,7 @@ public:
 
 #if CPU(ARM64)
 #define NUMBER_OF_ARGUMENT_REGISTERS 8u
+#define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS (NUMBER_OF_ARGUMENT_REGISTERS - 2u)
 // Callee Saves includes x19..x28 and FP registers q8..q15
 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 18u
 
@@ -698,6 +734,7 @@ public:
     COMPILE_ASSERT(ARM64Registers::q13 == 13, q13_is_13);
     COMPILE_ASSERT(ARM64Registers::q14 == 14, q14_is_14);
     COMPILE_ASSERT(ARM64Registers::q15 == 15, q15_is_15);
+
     static GPRReg toRegister(unsigned index)
     {
         return (GPRReg)index;
@@ -715,6 +752,14 @@ public:
         return toRegister(index);
     }
 
+    static unsigned toArgumentIndex(GPRReg reg)
+    {
+        ASSERT(reg != InvalidGPRReg);
+        if (reg > argumentGPR7)
+            return InvalidIndex;
+        return (unsigned)reg;
+    }
+
     static const char* debugName(GPRReg reg)
     {
         ASSERT(reg != InvalidGPRReg);
@@ -746,6 +791,7 @@ public:
 
 #if CPU(MIPS)
 #define NUMBER_OF_ARGUMENT_REGISTERS 4u
+#define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 0u
 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u
 
 class GPRInfo {
@@ -773,6 +819,7 @@ public:
     static const GPRReg argumentGPR2 = MIPSRegisters::a2;
     static const GPRReg argumentGPR3 = MIPSRegisters::a3;
     static const GPRReg nonArgGPR0 = regT4;
+    static const GPRReg nonArgGPR1 = regT5;
     static const GPRReg returnValueGPR = regT0;
     static const GPRReg returnValueGPR2 = regT1;
     static const GPRReg nonPreservedNonReturnGPR = regT2;
@@ -825,6 +872,7 @@ public:
 
 #if CPU(SH4)
 #define NUMBER_OF_ARGUMENT_REGISTERS 4u
+#define NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS 0u
 #define NUMBER_OF_CALLEE_SAVES_REGISTERS 0u
 
 class GPRInfo {
@@ -855,6 +903,7 @@ public:
     static const GPRReg argumentGPR2 = SH4Registers::r6; // regT2
     static const GPRReg argumentGPR3 = SH4Registers::r7; // regT3
     static const GPRReg nonArgGPR0 = regT4;
+    static const GPRReg nonArgGPR1 = regT5;
     static const GPRReg returnValueGPR = regT0;
     static const GPRReg returnValueGPR2 = regT1;
     static const GPRReg nonPreservedNonReturnGPR = regT2;
@@ -891,6 +940,73 @@ public:
 
 #endif // CPU(SH4)
 
+inline GPRReg argumentRegisterFor(unsigned argumentIndex)
+{
+#if NUMBER_OF_ARGUMENT_REGISTERS
+    if (argumentIndex >= NUMBER_OF_ARGUMENT_REGISTERS)
+        return InvalidGPRReg;
+    return GPRInfo::toArgumentRegister(argumentIndex);
+#else
+    UNUSED_PARAM(argumentIndex);
+    RELEASE_ASSERT_NOT_REACHED();
+    return InvalidGPRReg;
+#endif
+}
+
+inline GPRReg argumentRegisterForCallee()
+{
+#if NUMBER_OF_ARGUMENT_REGISTERS
+    return argumentRegisterFor(0);
+#else
+    return GPRInfo::regT0;
+#endif
+}
+
+inline GPRReg argumentRegisterForArgumentCount()
+{
+    return argumentRegisterFor(1);
+}
+
+inline unsigned argumentRegisterIndexForJSFunctionArgument(unsigned argument)
+{
+    return argument + 2;
+}
+
+inline unsigned jsFunctionArgumentForArgumentRegisterIndex(unsigned index)
+{
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS > 0
+    ASSERT(index >= 2);
+    return index - 2;
+#else
+    UNUSED_PARAM(index);
+    RELEASE_ASSERT_NOT_REACHED();
+    return 0;
+#endif
+}
+
+inline unsigned jsFunctionArgumentForArgumentRegister(GPRReg gpr)
+{
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS > 0
+    unsigned argumentRegisterIndex = GPRInfo::toArgumentIndex(gpr);
+    ASSERT(argumentRegisterIndex != GPRInfo::InvalidIndex);
+    return jsFunctionArgumentForArgumentRegisterIndex(argumentRegisterIndex);
+#else
+    UNUSED_PARAM(gpr);
+    RELEASE_ASSERT_NOT_REACHED();
+    return 0;
+#endif
+}
+
+inline GPRReg argumentRegisterForFunctionArgument(unsigned argumentIndex)
+{
+    return argumentRegisterFor(argumentRegisterIndexForJSFunctionArgument(argumentIndex));
+}
+
+inline unsigned numberOfRegisterArgumentsFor(unsigned argumentCount)
+{
+    return std::min(argumentCount, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS);
+}
+
 // The baseline JIT uses "accumulator" style execution with regT0 (for 64-bit)
 // and regT0 + regT1 (for 32-bit) serving as the accumulator register(s) for
 // passing results of one opcode to the next. Hence:
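
The helpers above encode the convention this patch uses: platform argument register 0 carries the callee, register 1 the argument count, and registers 2 and up carry "this" followed by the declared arguments, with overflow in the usual stack slots. On x86-64 (SysV) that maps out as in this standalone sketch (the helper and constant names are illustrative):

    #include <cstdio>

    enum Reg { rdi, rsi, rdx, rcx, r8, r9, invalidReg };
    constexpr Reg kArgumentRegisters[] = { rdi, rsi, rdx, rcx, r8, r9 };
    constexpr unsigned kArgumentRegisterCount = 6;
    // Two registers are reserved for the callee and the argument count, leaving
    // four for "this" and the first three declared arguments.
    constexpr unsigned kJSFunctionArgumentRegisters = kArgumentRegisterCount - 2;

    Reg argumentRegisterForFunctionArgument(unsigned argumentIndex)
    {
        unsigned registerIndex = argumentIndex + 2; // skip callee and argument count
        return registerIndex < kArgumentRegisterCount ? kArgumentRegisters[registerIndex] : invalidReg;
    }

    int main()
    {
        // callee -> rdi, argument count -> rsi, "this" -> rdx, arg1 -> rcx,
        // arg2 -> r8, arg3 -> r9, arg4 and beyond -> stack slots.
        printf("\"this\" uses argument register index %d\n", argumentRegisterForFunctionArgument(0));
        return 0;
    }
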
index c2acfb3..e7cfdc2 100644
@@ -66,14 +66,6 @@ void ctiPatchCallByReturnAddress(ReturnAddressPtr returnAddress, FunctionPtr new
         newCalleeFunction);
 }
 
-JIT::CodeRef JIT::compileCTINativeCall(VM* vm, NativeFunction func)
-{
-    if (!vm->canUseJIT())
-        return CodeRef::createLLIntCodeRef(llint_native_call_trampoline);
-    JIT jit(vm, 0);
-    return jit.privateCompileCTINativeCall(vm, func);
-}
-
 JIT::JIT(VM* vm, CodeBlock* codeBlock)
     : JSInterfaceJIT(vm, codeBlock)
     , m_interpreter(vm->interpreter)
@@ -579,6 +571,20 @@ void JIT::compileWithoutLinking(JITCompilationEffort effort)
     if (m_randomGenerator.getUint32() & 1)
         nop();
 
+#if USE(JSVALUE64)
+    spillArgumentRegistersToFrameBeforePrologue(static_cast<unsigned>(m_codeBlock->numParameters()));
+    incrementCounter(this, VM::RegArgsNoArity);
+#if ENABLE(VM_COUNTERS)
+    Jump continueStackEntry = jump();
+#endif
+#endif
+    m_stackArgsArityOKEntry = label();
+    incrementCounter(this, VM::StackArgsNoArity);
+
+#if USE(JSVALUE64) && ENABLE(VM_COUNTERS)
+    continueStackEntry.link(this);
+#endif
+
     emitFunctionPrologue();
     emitPutToCallFrameHeader(m_codeBlock, CallFrameSlot::codeBlock);
 
@@ -635,7 +641,21 @@ void JIT::compileWithoutLinking(JITCompilationEffort effort)
     callOperationWithCallFrameRollbackOnException(operationThrowStackOverflowError, m_codeBlock);
 
     if (m_codeBlock->codeType() == FunctionCode) {
-        m_arityCheck = label();
+        m_registerArgsWithArityCheck = label();
+
+        incrementCounter(this, VM::RegArgsArity);
+
+        spillArgumentRegistersToFrameBeforePrologue();
+
+#if ENABLE(VM_COUNTERS)
+        Jump continueStackArityEntry = jump();
+#endif
+
+        m_stackArgsWithArityCheck = label();
+        incrementCounter(this, VM::StackArgsArity);
+#if ENABLE(VM_COUNTERS)
+        continueStackArityEntry.link(this);
+#endif
         store8(TrustedImm32(0), &m_codeBlock->m_shouldAlwaysBeInlined);
         emitFunctionPrologue();
         emitPutToCallFrameHeader(m_codeBlock, CallFrameSlot::codeBlock);
@@ -643,6 +663,8 @@ void JIT::compileWithoutLinking(JITCompilationEffort effort)
         load32(payloadFor(CallFrameSlot::argumentCount), regT1);
         branch32(AboveOrEqual, regT1, TrustedImm32(m_codeBlock->m_numParameters)).linkTo(beginLabel, this);
 
+        incrementCounter(this, VM::ArityFixupRequired);
+
         m_bytecodeOffset = 0;
 
         if (maxFrameExtentForSlowPathCall)
@@ -778,9 +800,14 @@ CompilationResult JIT::link()
     }
     m_codeBlock->setJITCodeMap(jitCodeMapEncoder.finish());
 
-    MacroAssemblerCodePtr withArityCheck;
-    if (m_codeBlock->codeType() == FunctionCode)
-        withArityCheck = patchBuffer.locationOf(m_arityCheck);
+    MacroAssemblerCodePtr stackEntryArityOKPtr = patchBuffer.locationOf(m_stackArgsArityOKEntry);
+    
+    MacroAssemblerCodePtr registerEntryWithArityCheckPtr;
+    MacroAssemblerCodePtr stackEntryWithArityCheckPtr;
+    if (m_codeBlock->codeType() == FunctionCode) {
+        registerEntryWithArityCheckPtr = patchBuffer.locationOf(m_registerArgsWithArityCheck);
+        stackEntryWithArityCheckPtr = patchBuffer.locationOf(m_stackArgsWithArityCheck);
+    }
 
     if (Options::dumpDisassembly()) {
         m_disassembler->dump(patchBuffer);
@@ -804,8 +831,20 @@ CompilationResult JIT::link()
         static_cast<double>(m_instructions.size()));
 
     m_codeBlock->shrinkToFit(CodeBlock::LateShrink);
+    JITEntryPoints entryPoints(result.code(), registerEntryWithArityCheckPtr, registerEntryWithArityCheckPtr, stackEntryArityOKPtr, stackEntryWithArityCheckPtr);
+
+    unsigned numParameters = static_cast<unsigned>(m_codeBlock->numParameters());
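+    // A register entry with the code block's exact parameter count can use the
+    // arity-check-free entry; all other counts must take the arity check entry.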
+    for (unsigned argCount = 1; argCount <= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; argCount++) {
+        MacroAssemblerCodePtr entry;
+        if (argCount == numParameters)
+            entry = result.code();
+        else
+            entry = registerEntryWithArityCheckPtr;
+        entryPoints.setEntryFor(JITEntryPoints::registerEntryTypeForArgumentCount(argCount), entry);
+    }
+
     m_codeBlock->setJITCode(
-        adoptRef(new DirectJITCode(result, withArityCheck, JITCode::BaselineJIT)));
+        adoptRef(new DirectJITCode(JITEntryPointsWithRef(result, entryPoints), JITCode::BaselineJIT)));
 
 #if ENABLE(JIT_VERBOSE)
     dataLogF("JIT generated code for %p at [%p, %p).\n", m_codeBlock, result.executableMemory()->start(), result.executableMemory()->end());
index e6aa023..c381026 100644 (file)
@@ -43,6 +43,7 @@
 #include "JITInlineCacheGenerator.h"
 #include "JITMathIC.h"
 #include "JSInterfaceJIT.h"
+#include "LowLevelInterpreter.h"
 #include "PCToCodeOriginMap.h"
 #include "UnusedPointer.h"
 
@@ -246,7 +247,15 @@ namespace JSC {
             jit.privateCompileHasIndexedProperty(byValInfo, returnAddress, arrayMode);
         }
 
-        static CodeRef compileCTINativeCall(VM*, NativeFunction);
+        static JITEntryPointsWithRef compileNativeCallEntryPoints(VM* vm, NativeFunction func)
+        {
+            if (!vm->canUseJIT()) {
+                CodeRef nativeCallRef = CodeRef::createLLIntCodeRef(llint_native_call_trampoline);
+                return JITEntryPointsWithRef(nativeCallRef, nativeCallRef.code(), nativeCallRef.code());
+            }
+            JIT jit(vm, 0);
+            return jit.privateCompileJITEntryNativeCall(vm, func);
+        }
 
         static unsigned frameRegisterCountFor(CodeBlock*);
         static int stackPointerOffsetFor(CodeBlock*);
@@ -266,8 +275,7 @@ namespace JSC {
 
         void privateCompileHasIndexedProperty(ByValInfo*, ReturnAddressPtr, JITArrayMode);
 
-        Label privateCompileCTINativeCall(VM*, bool isConstruct = false);
-        CodeRef privateCompileCTINativeCall(VM*, NativeFunction);
+        JITEntryPointsWithRef privateCompileJITEntryNativeCall(VM*, NativeFunction);
         void privateCompilePatchGetArrayLength(ReturnAddressPtr returnAddress);
 
         // Add a call out from JIT code, without an exception check.
@@ -949,8 +957,10 @@ namespace JSC {
         unsigned m_putByIdIndex;
         unsigned m_byValInstructionIndex;
         unsigned m_callLinkInfoIndex;
-        
-        Label m_arityCheck;
+
+        Label m_stackArgsArityOKEntry;
+        Label m_stackArgsWithArityCheck;
+        Label m_registerArgsWithArityCheck;
         std::unique_ptr<LinkBuffer> m_linkBuffer;
 
         std::unique_ptr<JITDisassembler> m_disassembler;
index 64ed087..7a62d80 100644 (file)
@@ -91,6 +91,8 @@ void JIT::compileSetupVarargsFrame(OpcodeID opcode, Instruction* instruction, Ca
     store64(regT0, Address(regT1, CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register))));
 
     addPtr(TrustedImm32(sizeof(CallerFrameAndPC)), regT1, stackPointerRegister);
+    incrementCounter(this, VM::BaselineCaller);
+    incrementCounter(this, VM::CallVarargs);
 }
 
 void JIT::compileCallEval(Instruction* instruction)
@@ -98,6 +100,9 @@ void JIT::compileCallEval(Instruction* instruction)
     addPtr(TrustedImm32(-static_cast<ptrdiff_t>(sizeof(CallerFrameAndPC))), stackPointerRegister, regT1);
     storePtr(callFrameRegister, Address(regT1, CallFrame::callerFrameOffset()));
 
+    incrementCounter(this, VM::BaselineCaller);
+    incrementCounter(this, VM::CallEval);
+
     addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister);
     checkStackPointerAlignment();
 
@@ -113,7 +118,7 @@ void JIT::compileCallEval(Instruction* instruction)
 void JIT::compileCallEvalSlowCase(Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter)
 {
     CallLinkInfo* info = m_codeBlock->addCallLinkInfo();
-    info->setUpCall(CallLinkInfo::Call, CodeOrigin(m_bytecodeOffset), regT0);
+    info->setUpCall(CallLinkInfo::Call, StackArgs, CodeOrigin(m_bytecodeOffset), regT0);
 
     linkSlowCase(iter);
     int registerOffset = -instruction[4].u.operand;
@@ -154,12 +159,14 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca
     COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_tail_call_forward_arguments), call_and_tail_call_forward_arguments_opcodes_must_be_same_length);
 
     CallLinkInfo* info = nullptr;
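+    // The baseline JIT currently passes all arguments on the stack; argumentsLocation
+    // gates the register-argument path below.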
+    ArgumentsLocation argumentsLocation = StackArgs;
+
     if (opcodeID != op_call_eval)
         info = m_codeBlock->addCallLinkInfo();
     if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs || opcodeID == op_tail_call_varargs || opcodeID == op_tail_call_forward_arguments)
         compileSetupVarargsFrame(opcodeID, instruction, info);
     else {
-        int argCount = instruction[3].u.operand;
+        unsigned argCount = instruction[3].u.unsignedValue;
         int registerOffset = -instruction[4].u.operand;
 
         if (opcodeID == op_call && shouldEmitProfiling()) {
@@ -171,15 +178,25 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca
         }
     
         addPtr(TrustedImm32(registerOffset * sizeof(Register) + sizeof(CallerFrameAndPC)), callFrameRegister, stackPointerRegister);
+        if (argumentsLocation != StackArgs) {
+            move(TrustedImm32(argCount), argumentRegisterForArgumentCount());
+            unsigned registerArgs = std::min(argCount, NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS);
+            for (unsigned arg = 0; arg < registerArgs; arg++)
+                load64(Address(stackPointerRegister, (CallFrameSlot::thisArgument + arg) * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC)), argumentRegisterForFunctionArgument(arg));
+        }
         store32(TrustedImm32(argCount), Address(stackPointerRegister, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset - sizeof(CallerFrameAndPC)));
     } // SP holds newCallFrame + sizeof(CallerFrameAndPC), with ArgumentCount initialized.
+
+    incrementCounter(this, VM::BaselineCaller);
     
     uint32_t bytecodeOffset = instruction - m_codeBlock->instructions().begin();
     uint32_t locationBits = CallSiteIndex(bytecodeOffset).bits();
     store32(TrustedImm32(locationBits), Address(callFrameRegister, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + TagOffset));
 
-    emitGetVirtualRegister(callee, regT0); // regT0 holds callee.
-    store64(regT0, Address(stackPointerRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC)));
+    GPRReg calleeRegister = argumentRegisterForCallee();
+
+    emitGetVirtualRegister(callee, calleeRegister);
+    store64(calleeRegister, Address(stackPointerRegister, CallFrameSlot::callee * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC)));
 
     if (opcodeID == op_call_eval) {
         compileCallEval(instruction);
@@ -187,16 +204,18 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca
     }
 
     DataLabelPtr addressOfLinkedFunctionCheck;
-    Jump slowCase = branchPtrWithPatch(NotEqual, regT0, addressOfLinkedFunctionCheck, TrustedImmPtr(0));
+    Jump slowCase = branchPtrWithPatch(NotEqual, calleeRegister, addressOfLinkedFunctionCheck, TrustedImmPtr(0));
     addSlowCase(slowCase);
 
     ASSERT(m_callCompilationInfo.size() == callLinkInfoIndex);
-    info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), CodeOrigin(m_bytecodeOffset), regT0);
+    info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), argumentsLocation, CodeOrigin(m_bytecodeOffset), calleeRegister);
     m_callCompilationInfo.append(CallCompilationInfo());
     m_callCompilationInfo[callLinkInfoIndex].hotPathBegin = addressOfLinkedFunctionCheck;
     m_callCompilationInfo[callLinkInfoIndex].callLinkInfo = info;
 
     if (opcodeID == op_tail_call) {
+        incrementCounter(this, VM::TailCall);
+
         CallFrameShuffleData shuffleData;
         shuffleData.tagTypeNumber = GPRInfo::tagTypeNumberRegister;
         shuffleData.numLocals =
@@ -209,7 +228,7 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca
                     DataFormatJS);
         }
         shuffleData.callee =
-            ValueRecovery::inGPR(regT0, DataFormatJS);
+            ValueRecovery::inGPR(calleeRegister, DataFormatJS);
         shuffleData.setupCalleeSaveRegisters(m_codeBlock);
         info->setFrameShuffleData(shuffleData);
         CallFrameShuffler(*this, shuffleData).prepareForTailCall();
@@ -246,9 +265,10 @@ void JIT::compileOpCallSlowCase(OpcodeID opcodeID, Instruction* instruction, Vec
     if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs || opcodeID == op_tail_call_forward_arguments)
         emitRestoreCalleeSaves();
 
-    move(TrustedImmPtr(m_callCompilationInfo[callLinkInfoIndex].callLinkInfo), regT2);
+    CallLinkInfo* callLinkInfo = m_callCompilationInfo[callLinkInfoIndex].callLinkInfo;
+    move(TrustedImmPtr(callLinkInfo), nonArgGPR0);
 
-    m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getCTIStub(linkCallThunkGenerator).code());
+    m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(callLinkInfo->argumentsLocation()));
 
     if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) {
         abortWithReason(JITDidReturnFromTailCall);
index 573b062..e61bcd2 100644 (file)
@@ -203,7 +203,7 @@ void JIT::compileCallEval(Instruction* instruction)
 void JIT::compileCallEvalSlowCase(Instruction* instruction, Vector<SlowCaseEntry>::iterator& iter)
 {
     CallLinkInfo* info = m_codeBlock->addCallLinkInfo();
-    info->setUpCall(CallLinkInfo::Call, CodeOrigin(m_bytecodeOffset), regT0);
+    info->setUpCall(CallLinkInfo::Call, StackArgs, CodeOrigin(m_bytecodeOffset), regT0);
 
     linkSlowCase(iter);
 
@@ -211,12 +211,12 @@ void JIT::compileCallEvalSlowCase(Instruction* instruction, Vector<SlowCaseEntry
 
     addPtr(TrustedImm32(registerOffset * sizeof(Register) + sizeof(CallerFrameAndPC)), callFrameRegister, stackPointerRegister);
 
-    move(TrustedImmPtr(info), regT2);
+    move(TrustedImmPtr(info), nonArgGPR0);
 
     emitLoad(CallFrameSlot::callee, regT1, regT0);
-    MacroAssemblerCodeRef virtualThunk = virtualThunkFor(m_vm, *info);
-    info->setSlowStub(createJITStubRoutine(virtualThunk, *m_vm, nullptr, true));
-    emitNakedCall(virtualThunk.code());
+    JITJSCallThunkEntryPointsWithRef virtualThunk = virtualThunkFor(m_vm, *info);
+    info->setSlowStub(createJITStubRoutine(virtualThunk.codeRef(), *m_vm, nullptr, true));
+    emitNakedCall(virtualThunk.entryFor(StackArgs));
     addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, stackPointerRegister);
     checkStackPointerAlignment();
 
@@ -286,7 +286,7 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca
     addSlowCase(slowCase);
 
     ASSERT(m_callCompilationInfo.size() == callLinkInfoIndex);
-    info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), CodeOrigin(m_bytecodeOffset), regT0);
+    info->setUpCall(CallLinkInfo::callTypeFor(opcodeID), StackArgs, CodeOrigin(m_bytecodeOffset), regT0);
     m_callCompilationInfo.append(CallCompilationInfo());
     m_callCompilationInfo[callLinkInfoIndex].hotPathBegin = addressOfLinkedFunctionCheck;
     m_callCompilationInfo[callLinkInfoIndex].callLinkInfo = info;
@@ -317,12 +317,13 @@ void JIT::compileOpCallSlowCase(OpcodeID opcodeID, Instruction* instruction, Vec
     linkSlowCase(iter);
     linkSlowCase(iter);
 
-    move(TrustedImmPtr(m_callCompilationInfo[callLinkInfoIndex].callLinkInfo), regT2);
+    CallLinkInfo* callLinkInfo = m_callCompilationInfo[callLinkInfoIndex].callLinkInfo;
+    move(TrustedImmPtr(callLinkInfo), nonArgGPR0);
 
     if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs)
         emitRestoreCalleeSaves();
 
-    m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getCTIStub(linkCallThunkGenerator).code());
+    m_callCompilationInfo[callLinkInfoIndex].callReturnLocation = emitNakedCall(m_vm->getJITCallThunkEntryStub(linkCallThunkGenerator).entryFor(callLinkInfo->argumentsLocation()));
 
     if (opcodeID == op_tail_call || opcodeID == op_tail_call_varargs) {
         abortWithReason(JITDidReturnFromTailCall);
index 9c92552..653489a 100644 (file)
@@ -75,9 +75,9 @@ JSValue JITCode::execute(VM* vm, ProtoCallFrame* protoCallFrame)
 
     if (!function || !protoCallFrame->needArityCheck()) {
         ASSERT(!protoCallFrame->needArityCheck());
-        entryAddress = executableAddress();
+        entryAddress = addressForCall(StackArgsArityCheckNotRequired).executableAddress();
     } else
-        entryAddress = addressForCall(MustCheckArity).executableAddress();
+        entryAddress = addressForCall(StackArgsMustCheckArity).executableAddress();
     JSValue result = JSValue::decode(vmEntryToJavaScript(entryAddress, vm, protoCallFrame));
     return scope.exception() ? jsNull() : result;
 }
@@ -162,9 +162,9 @@ DirectJITCode::DirectJITCode(JITType jitType)
 {
 }
 
-DirectJITCode::DirectJITCode(JITCode::CodeRef ref, JITCode::CodePtr withArityCheck, JITType jitType)
-    : JITCodeWithCodeRef(ref, jitType)
-    , m_withArityCheck(withArityCheck)
+DirectJITCode::DirectJITCode(JITEntryPointsWithRef entries, JITType jitType)
+    : JITCodeWithCodeRef(entries.codeRef(), jitType)
+    , m_entryPoints(entries)
 {
 }
 
@@ -172,25 +172,16 @@ DirectJITCode::~DirectJITCode()
 {
 }
 
-void DirectJITCode::initializeCodeRef(JITCode::CodeRef ref, JITCode::CodePtr withArityCheck)
+void DirectJITCode::initializeEntryPoints(JITEntryPointsWithRef entries)
 {
     RELEASE_ASSERT(!m_ref);
-    m_ref = ref;
-    m_withArityCheck = withArityCheck;
+    m_ref = entries.codeRef();
+    m_entryPoints = entries;
 }
 
-JITCode::CodePtr DirectJITCode::addressForCall(ArityCheckMode arity)
+JITCode::CodePtr DirectJITCode::addressForCall(EntryPointType type)
 {
-    switch (arity) {
-    case ArityCheckNotRequired:
-        RELEASE_ASSERT(m_ref);
-        return m_ref.code();
-    case MustCheckArity:
-        RELEASE_ASSERT(m_withArityCheck);
-        return m_withArityCheck;
-    }
-    RELEASE_ASSERT_NOT_REACHED();
-    return CodePtr();
+    return m_entryPoints.entryFor(type);
 }
 
 NativeJITCode::NativeJITCode(JITType jitType)
@@ -213,7 +204,7 @@ void NativeJITCode::initializeCodeRef(CodeRef ref)
     m_ref = ref;
 }
 
-JITCode::CodePtr NativeJITCode::addressForCall(ArityCheckMode)
+JITCode::CodePtr NativeJITCode::addressForCall(EntryPointType)
 {
     RELEASE_ASSERT(!!m_ref);
     return m_ref.code();
index 75c70c7..9cd5cfe 100644 (file)
 
 #pragma once
 
-#include "ArityCheckMode.h"
 #include "CallFrame.h"
 #include "CodeOrigin.h"
 #include "Disassembler.h"
+#include "JITEntryPoints.h"
 #include "JSCJSValue.h"
 #include "MacroAssemblerCodeRef.h"
 #include "RegisterSet.h"
@@ -173,9 +173,8 @@ public:
         return jitCode->jitType();
     }
     
-    virtual CodePtr addressForCall(ArityCheckMode) = 0;
+    virtual CodePtr addressForCall(EntryPointType) = 0;
     virtual void* executableAddressAtOffset(size_t offset) = 0;
-    void* executableAddress() { return executableAddressAtOffset(0); }
     virtual void* dataAddressAtOffset(size_t offset) = 0;
     virtual unsigned offsetOf(void* pointerIntoCode) = 0;
     
@@ -224,15 +223,15 @@ protected:
 class DirectJITCode : public JITCodeWithCodeRef {
 public:
     DirectJITCode(JITType);
-    DirectJITCode(CodeRef, CodePtr withArityCheck, JITType);
+    DirectJITCode(JITEntryPointsWithRef, JITType);
     virtual ~DirectJITCode();
     
-    void initializeCodeRef(CodeRef, CodePtr withArityCheck);
+    void initializeEntryPoints(JITEntryPointsWithRef);
 
-    CodePtr addressForCall(ArityCheckMode) override;
+    CodePtr addressForCall(EntryPointType) override;
 
 private:
-    CodePtr m_withArityCheck;
+    JITEntryPoints m_entryPoints;
 };
 
 class NativeJITCode : public JITCodeWithCodeRef {
@@ -243,7 +242,7 @@ public:
     
     void initializeCodeRef(CodeRef);
 
-    CodePtr addressForCall(ArityCheckMode) override;
+    CodePtr addressForCall(EntryPointType) override;
 };
 
 } // namespace JSC
diff --git a/Source/JavaScriptCore/jit/JITEntryPoints.h b/Source/JavaScriptCore/jit/JITEntryPoints.h
new file mode 100644 (file)
index 0000000..90a3f0c
--- /dev/null
+++ b/Source/JavaScriptCore/jit/JITEntryPoints.h
@@ -0,0 +1,363 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#if ENABLE(JIT)
+
+#include "GPRInfo.h"
+#include "MacroAssemblerCodeRef.h"
+
+namespace JSC {
+class VM;
+class MacroAssemblerCodeRef;
+
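+// Where a call site places its arguments: on the stack, entirely in argument
+// registers, or in registers with any extra arguments on the stack.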
+enum ArgumentsLocation : unsigned {
+    StackArgs = 0,
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS >= 4
+    RegisterArgs1InRegisters,
+    RegisterArgs2InRegisters,
+    RegisterArgs3InRegisters,
+    RegisterArgs4InRegisters,
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS == 6
+    RegisterArgs5InRegisters,
+    RegisterArgs6InRegisters,
+#endif
+    RegisterArgsWithExtraOnStack
+#endif
+};
+
+// This enum needs to have the same enumerator ordering as ArgumentsLocation.
+enum ThunkEntryPointType : unsigned {
+    StackArgsEntry = 0,
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS >= 4
+    Register1ArgEntry,
+    Register2ArgsEntry,
+    Register3ArgsEntry,
+    Register4ArgsEntry,
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS == 6
+    Register5ArgsEntry,
+    Register6ArgsEntry,
+#endif
+#endif
+    ThunkEntryPointTypeCount
+};
+
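+// Entry points that compiled JS code exposes. Stack and register argument entries
+// come in arity-checked and check-free variants; the RegisterArgsN entries are
+// specialized for a known argument count.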
+enum EntryPointType {
+    StackArgsArityCheckNotRequired,
+    StackArgsMustCheckArity,
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    RegisterArgsArityCheckNotRequired,
+    RegisterArgsPossibleExtraArgs,
+    RegisterArgsMustCheckArity,
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS >= 4
+    RegisterArgs1,
+    RegisterArgs2,
+    RegisterArgs3,
+    RegisterArgs4,
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS == 6
+    RegisterArgs5,
+    RegisterArgs6,
+#endif
+#endif
+#endif
+    NumberOfEntryPointTypes
+};
+
+class JITEntryPoints {
+public:
+    typedef MacroAssemblerCodePtr CodePtr;
+    static const unsigned numberOfEntryTypes = EntryPointType::NumberOfEntryPointTypes;
+
+    JITEntryPoints()
+    {
+        clearEntries();
+    }
+
+    JITEntryPoints(CodePtr registerArgsNoCheckRequiredEntry, CodePtr registerArgsPossibleExtraArgsEntry,
+        CodePtr registerArgsCheckArityEntry, CodePtr stackArgsArityCheckNotRequiredEntry,
+        CodePtr stackArgsCheckArityEntry)
+    {
+        m_entryPoints[StackArgsArityCheckNotRequired] = stackArgsArityCheckNotRequiredEntry;
+        m_entryPoints[StackArgsMustCheckArity] = stackArgsCheckArityEntry;
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+        m_entryPoints[RegisterArgsArityCheckNotRequired] = registerArgsNoCheckRequiredEntry;
+        m_entryPoints[RegisterArgsPossibleExtraArgs] = registerArgsPossibleExtraArgsEntry;
+        m_entryPoints[RegisterArgsMustCheckArity] = registerArgsCheckArityEntry;
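+        // Until count-specific entries are installed, the per-argument-count
+        // entries fall back to the arity check entry.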
+        for (unsigned i = 1; i <= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS; i++)
+            m_entryPoints[registerEntryTypeForArgumentCount(i)] = registerArgsCheckArityEntry;
+#else
+        UNUSED_PARAM(registerArgsNoCheckRequiredEntry);
+        UNUSED_PARAM(registerArgsPossibleExtraArgsEntry);
+        UNUSED_PARAM(registerArgsCheckArityEntry);
+#endif
+    }
+
+    CodePtr entryFor(EntryPointType type)
+    {
+        return m_entryPoints[type];
+    }
+
+    void setEntryFor(EntryPointType type, CodePtr entry)
+    {
+        ASSERT(type < NumberOfEntryPointTypes);
+        m_entryPoints[type] = entry;
+    }
+
+    static ptrdiff_t offsetOfEntryFor(EntryPointType type)
+    {
+        return offsetof(JITEntryPoints, m_entryPoints[type]);
+    }
+
+    static EntryPointType registerEntryTypeForArgumentCount(unsigned argCount)
+    {
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+        ASSERT(argCount);
+        unsigned registerArgCount = numberOfRegisterArgumentsFor(argCount);
+        if (!registerArgCount || registerArgCount != argCount)
+            return RegisterArgsMustCheckArity;
+
+        return static_cast<EntryPointType>(RegisterArgs1 + registerArgCount - 1);
+#else
+        UNUSED_PARAM(argCount);
+        RELEASE_ASSERT_NOT_REACHED();
+        return StackArgsMustCheckArity;
+#endif
+    }
+
+    static EntryPointType registerEntryTypeForArgumentType(ArgumentsLocation type)
+    {
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+        ASSERT(type != StackArgs);
+        if (type == RegisterArgsWithExtraOnStack)
+            return RegisterArgsMustCheckArity;
+        
+        return static_cast<EntryPointType>(RegisterArgs1 + type - RegisterArgs1InRegisters);
+#else
+        UNUSED_PARAM(type);
+        RELEASE_ASSERT_NOT_REACHED();
+        return StackArgsMustCheckArity;
+#endif
+    }
+
+    void clearEntries()
+    {
+        for (unsigned i = numberOfEntryTypes; i--;)
+            m_entryPoints[i] = MacroAssemblerCodePtr();
+    }
+
+    JITEntryPoints& operator=(const JITEntryPoints& other)
+    {
+        for (unsigned i = numberOfEntryTypes; i--;)
+            m_entryPoints[i] = other.m_entryPoints[i];
+
+        return *this;
+    }
+
+private:
+
+    CodePtr m_entryPoints[numberOfEntryTypes];
+};
+
+class JITEntryPointsWithRef : public JITEntryPoints {
+public:
+    typedef MacroAssemblerCodeRef CodeRef;
+
+    JITEntryPointsWithRef()
+    {
+    }
+
+    JITEntryPointsWithRef(const JITEntryPointsWithRef& other)
+        : JITEntryPoints(other)
+        , m_codeRef(other.m_codeRef)
+    {
+    }
+
+    JITEntryPointsWithRef(CodeRef codeRef, const JITEntryPoints& other)
+        : JITEntryPoints(other)
+        , m_codeRef(codeRef)
+    {
+    }
+    
+    JITEntryPointsWithRef(CodeRef codeRef, CodePtr stackArgsArityCheckNotRequiredEntry,
+        CodePtr stackArgsCheckArityEntry)
+        : JITEntryPoints(CodePtr(), CodePtr(), CodePtr(), stackArgsArityCheckNotRequiredEntry, stackArgsCheckArityEntry)
+        , m_codeRef(codeRef)
+    {
+    }
+
+    JITEntryPointsWithRef(CodeRef codeRef, CodePtr registerArgsNoChecksRequiredEntry,
+        CodePtr registerArgsPossibleExtraArgsEntry, CodePtr registerArgsCheckArityEntry,
+        CodePtr stackArgsArityCheckNotRequiredEntry, CodePtr stackArgsCheckArityEntry)
+        : JITEntryPoints(registerArgsNoChecksRequiredEntry, registerArgsPossibleExtraArgsEntry,
+            registerArgsCheckArityEntry, stackArgsArityCheckNotRequiredEntry,
+            stackArgsCheckArityEntry)
+        , m_codeRef(codeRef)
+    {
+    }
+
+    CodeRef codeRef() { return m_codeRef; }
+
+private:
+    CodeRef m_codeRef;
+};
+
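+// Maps an outgoing argument count to the location the caller will use; counts
+// beyond the register budget collapse to RegisterArgsWithExtraOnStack.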
+inline ArgumentsLocation argumentsLocationFor(unsigned argumentCount)
+{
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    if (!argumentCount)
+        return StackArgs;
+    
+    argumentCount = std::min(NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS + 1, argumentCount);
+    
+    return static_cast<ArgumentsLocation>(ArgumentsLocation::RegisterArgs1InRegisters + argumentCount - 1);
+#else
+    UNUSED_PARAM(argumentCount);
+    return StackArgs;
+#endif
+}
+
+inline EntryPointType registerEntryPointTypeFor(unsigned argumentCount)
+{
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    if (!argumentCount || argumentCount > NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
+        return RegisterArgsMustCheckArity;
+    
+    return static_cast<EntryPointType>(EntryPointType::RegisterArgs1 + argumentCount - 1);
+#else
+    RELEASE_ASSERT_NOT_REACHED();
+    UNUSED_PARAM(argumentCount);
+    return StackArgsMustCheckArity;
+#endif
+}
+
+inline EntryPointType entryPointTypeFor(ArgumentsLocation argumentLocation)
+{
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    if (argumentLocation == StackArgs)
+        return StackArgsMustCheckArity;
+    
+    if (argumentLocation == RegisterArgsWithExtraOnStack)
+        return RegisterArgsMustCheckArity;
+    
+    return static_cast<EntryPointType>(EntryPointType::RegisterArgs1 + static_cast<unsigned>(argumentLocation - RegisterArgs1InRegisters));
+#else
+    UNUSED_PARAM(argumentLocation);
+    return StackArgsMustCheckArity;
+#endif
+}
+
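+// RegisterArgsWithExtraOnStack shares a thunk entry with the maximum
+// register-argument count.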
+inline ThunkEntryPointType thunkEntryPointTypeFor(ArgumentsLocation argumentLocation)
+{
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    unsigned argumentLocationIndex = std::min(RegisterArgsWithExtraOnStack - 1, static_cast<unsigned>(argumentLocation));
+    return static_cast<ThunkEntryPointType>(argumentLocationIndex);
+#else
+    UNUSED_PARAM(argumentLocation);
+    return StackArgsEntry;
+#endif
+}
+
+inline ThunkEntryPointType thunkEntryPointTypeFor(unsigned argumentCount)
+{
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    argumentCount = numberOfRegisterArgumentsFor(argumentCount);
+    
+    return static_cast<ThunkEntryPointType>(ThunkEntryPointType::Register1ArgEntry + argumentCount - 1);
+#else
+    UNUSED_PARAM(argumentCount);
+    return StackArgsEntry;
+#endif
+}
+
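+// The entry points of a JS call thunk (link or virtual), one per
+// ThunkEntryPointType, together with the thunk's code.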
+class JITJSCallThunkEntryPointsWithRef {
+public:
+    typedef MacroAssemblerCodePtr CodePtr;
+    typedef MacroAssemblerCodeRef CodeRef;
+    static const unsigned numberOfEntryTypes = ThunkEntryPointType::ThunkEntryPointTypeCount;
+
+    JITJSCallThunkEntryPointsWithRef()
+    {
+    }
+
+    JITJSCallThunkEntryPointsWithRef(CodeRef codeRef)
+        : m_codeRef(codeRef)
+    {
+    }
+
+    JITJSCallThunkEntryPointsWithRef(const JITJSCallThunkEntryPointsWithRef& other)
+        : m_codeRef(other.m_codeRef)
+    {
+        for (unsigned i = 0; i < numberOfEntryTypes; i++)
+            m_entryPoints[i] = other.m_entryPoints[i];
+    }
+
+    CodePtr entryFor(ThunkEntryPointType type)
+    {
+        return m_entryPoints[type];
+    }
+
+    CodePtr entryFor(ArgumentsLocation argumentsLocation)
+    {
+        return entryFor(thunkEntryPointTypeFor(argumentsLocation));
+    }
+
+    void setEntryFor(ThunkEntryPointType type, CodePtr entry)
+    {
+        m_entryPoints[type] = entry;
+    }
+
+    static ptrdiff_t offsetOfEntryFor(ThunkEntryPointType type)
+    {
+        return offsetof(JITJSCallThunkEntryPointsWithRef, m_entryPoints[type]);
+    }
+
+    void clearEntries()
+    {
+        for (unsigned i = numberOfEntryTypes; i--;)
+            m_entryPoints[i] = MacroAssemblerCodePtr();
+    }
+
+    CodeRef codeRef() { return m_codeRef; }
+
+    JITJSCallThunkEntryPointsWithRef& operator=(const JITJSCallThunkEntryPointsWithRef& other)
+    {
+        m_codeRef = other.m_codeRef;
+        for (unsigned i = numberOfEntryTypes; i--;)
+            m_entryPoints[i] = other.m_entryPoints[i];
+        
+        return *this;
+    }
+
+private:
+    CodeRef m_codeRef;
+    CodePtr m_entryPoints[numberOfEntryTypes];
+};
+
+} // namespace JSC
+
+#endif // ENABLE(JIT)
index 5672fa0..1573afd 100644 (file)
@@ -49,9 +49,9 @@ namespace JSC {
 
 #if USE(JSVALUE64)
 
-JIT::CodeRef JIT::privateCompileCTINativeCall(VM* vm, NativeFunction)
+JITEntryPointsWithRef JIT::privateCompileJITEntryNativeCall(VM* vm, NativeFunction)
 {
-    return vm->getCTIStub(nativeCallGenerator);
+    return vm->getJITEntryStub(nativeCallGenerator);
 }
 
 void JIT::emit_op_mov(Instruction* currentInstruction)
index 5b071ad..c6d2d90 100644 (file)
@@ -46,7 +46,7 @@
 
 namespace JSC {
 
-JIT::CodeRef JIT::privateCompileCTINativeCall(VM* vm, NativeFunction func)
+JITEntryPointsWithRef JIT::privateCompileJITEntryNativeCall(VM* vm, NativeFunction func)
 {
     // FIXME: This should be able to log ShadowChicken prologue packets.
     // https://bugs.webkit.org/show_bug.cgi?id=155689
@@ -129,7 +129,9 @@ JIT::CodeRef JIT::privateCompileCTINativeCall(VM* vm, NativeFunction func)
     LinkBuffer patchBuffer(*m_vm, *this, GLOBAL_THUNK_ID);
 
     patchBuffer.link(nativeCall, FunctionPtr(func));
-    return FINALIZE_CODE(patchBuffer, ("JIT CTI native call"));
+    JIT::CodeRef codeRef = FINALIZE_CODE(patchBuffer, ("JIT CTI native call"));
+    
+    return JITEntryPointsWithRef(codeRef, codeRef.code(), codeRef.code());
 }
 
 void JIT::emit_op_mov(Instruction* currentInstruction)
index 44735d5..7aa6d6c 100644 (file)
@@ -890,10 +890,14 @@ SlowPathReturnType JIT_OPERATION operationLinkCall(ExecState* execCallee, CallLi
     JSScope* scope = callee->scopeUnchecked();
     ExecutableBase* executable = callee->executable();
 
-    MacroAssemblerCodePtr codePtr;
+    MacroAssemblerCodePtr codePtr, codePtrForLinking;
     CodeBlock* codeBlock = 0;
     if (executable->isHostFunction()) {
-        codePtr = executable->entrypointFor(kind, MustCheckArity);
+        codePtr = executable->entrypointFor(kind, StackArgsMustCheckArity);
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+        if (callLinkInfo->argumentsInRegisters())
+            codePtrForLinking = executable->entrypointFor(kind, RegisterArgsMustCheckArity);
+#endif
     } else {
         FunctionExecutable* functionExecutable = static_cast<FunctionExecutable*>(executable);
 
@@ -914,17 +918,41 @@ SlowPathReturnType JIT_OPERATION operationLinkCall(ExecState* execCallee, CallLi
                 reinterpret_cast<void*>(KeepTheFrame));
         }
         codeBlock = *codeBlockSlot;
-        ArityCheckMode arity;
-        if (execCallee->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo->isVarargs())
-            arity = MustCheckArity;
-        else
-            arity = ArityCheckNotRequired;
-        codePtr = functionExecutable->entrypointFor(kind, arity);
+        EntryPointType entryType;
+        size_t callerArgumentCount = execCallee->argumentCountIncludingThis();
+        size_t calleeArgumentCount = static_cast<size_t>(codeBlock->numParameters());
+        if (callerArgumentCount < calleeArgumentCount || callLinkInfo->isVarargs()) {
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+            if (callLinkInfo->argumentsInRegisters()) {
+                codePtrForLinking = functionExecutable->entrypointFor(kind, JITEntryPoints::registerEntryTypeForArgumentCount(callerArgumentCount));
+                if (!codePtrForLinking)
+                    codePtrForLinking = functionExecutable->entrypointFor(kind, RegisterArgsMustCheckArity);
+            }
+#endif
+            entryType = StackArgsMustCheckArity;
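+            // Prepopulate the entry point the virtual thunk might use.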
+            (void) functionExecutable->entrypointFor(kind, entryPointTypeFor(callLinkInfo->argumentsLocation()));
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+        } else if (callLinkInfo->argumentsInRegisters()) {
+            if (callerArgumentCount == calleeArgumentCount || calleeArgumentCount >= NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
+                codePtrForLinking = functionExecutable->entrypointFor(kind, RegisterArgsArityCheckNotRequired);
+            else {
+                codePtrForLinking = functionExecutable->entrypointFor(kind, JITEntryPoints::registerEntryTypeForArgumentCount(callerArgumentCount));
+                if (!codePtrForLinking)
+                    codePtrForLinking = functionExecutable->entrypointFor(kind, RegisterArgsPossibleExtraArgs);
+            }
+            // Prepopulate the entry point the virtual thunk might use.
+            (void) functionExecutable->entrypointFor(kind, entryPointTypeFor(callLinkInfo->argumentsLocation()));
+
+            entryType = StackArgsArityCheckNotRequired;
+#endif
+        } else
+            entryType = StackArgsArityCheckNotRequired;
+        codePtr = functionExecutable->entrypointFor(kind, entryType);
     }
     if (!callLinkInfo->seenOnce())
         callLinkInfo->setSeen();
     else
-        linkFor(execCallee, *callLinkInfo, codeBlock, callee, codePtr);
+        linkFor(execCallee, *callLinkInfo, codeBlock, callee, codePtrForLinking ? codePtrForLinking : codePtr);
     
     return encodeResult(codePtr.executableAddress(), reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? ReuseTheFrame : KeepTheFrame));
 }
@@ -959,7 +987,11 @@ void JIT_OPERATION operationLinkDirectCall(ExecState* exec, CallLinkInfo* callLi
     MacroAssemblerCodePtr codePtr;
     CodeBlock* codeBlock = nullptr;
     if (executable->isHostFunction())
-        codePtr = executable->entrypointFor(kind, MustCheckArity);
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+        codePtr = executable->entrypointFor(kind, callLinkInfo->argumentsInRegisters() ? RegisterArgsMustCheckArity : StackArgsMustCheckArity);
+#else
+        codePtr = executable->entrypointFor(kind, StackArgsMustCheckArity);
+#endif
     else {
         FunctionExecutable* functionExecutable = static_cast<FunctionExecutable*>(executable);
 
@@ -971,13 +1003,29 @@ void JIT_OPERATION operationLinkDirectCall(ExecState* exec, CallLinkInfo* callLi
             throwException(exec, throwScope, error);
             return;
         }
-        ArityCheckMode arity;
+        EntryPointType entryType;
         unsigned argumentStackSlots = callLinkInfo->maxNumArguments();
-        if (argumentStackSlots < static_cast<size_t>(codeBlock->numParameters()))
-            arity = MustCheckArity;
+        size_t codeBlockParameterCount = static_cast<size_t>(codeBlock->numParameters());
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+        if (callLinkInfo->argumentsInRegisters()) {
+            // This logic could probably be simplified!
+            if (argumentStackSlots < codeBlockParameterCount)
+                entryType = entryPointTypeFor(callLinkInfo->argumentsLocation());
+            else if (argumentStackSlots > NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS) {
+                if (codeBlockParameterCount < NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS)
+                    entryType = RegisterArgsPossibleExtraArgs;
+                else
+                    entryType = RegisterArgsArityCheckNotRequired;
+            } else
+                entryType = registerEntryPointTypeFor(argumentStackSlots);
+        } else if (argumentStackSlots < codeBlockParameterCount)
+#else
+        if (argumentStackSlots < codeBlockParameterCount)
+#endif
+            entryType = StackArgsMustCheckArity;
         else
-            arity = ArityCheckNotRequired;
-        codePtr = functionExecutable->entrypointFor(kind, arity);
+            entryType = StackArgsArityCheckNotRequired;
+        codePtr = functionExecutable->entrypointFor(kind, entryType);
     }
     
     linkDirectFor(exec, *callLinkInfo, codeBlock, codePtr);
@@ -1020,8 +1068,17 @@ inline SlowPathReturnType virtualForWithFunction(
                 reinterpret_cast<void*>(KeepTheFrame));
         }
     }
+#if NUMBER_OF_JS_FUNCTION_ARGUMENT_REGISTERS
+    if (callLinkInfo->argumentsInRegisters()) {
+        // If the caller wants a register entry, pull the register arity check entry
+        // into the cache, where the generic virtual call thunk will find it.
+        (void) executable->entrypointFor(kind, RegisterArgsMustCheckArity);
+        (void) executable->entrypointFor(kind, entryPointTypeFor(callLinkInfo->argumentsLocation()));
+    }
+#endif
     return encodeResult(executable->entrypointFor(
-        kind, MustCheckArity).executableAddress(),
+        kind, StackArgsMustCheckArity).executableAddress(),
         reinterpret_cast<void*>(callLinkInfo->callMode() == CallMode::Tail ? ReuseTheFrame : KeepTheFrame));
 }
 
index 40c12ce..64b6d81 100644 (file)
@@ -44,18 +44,22 @@ JITThunks::~JITThunks()
 {
 }
 
-MacroAssemblerCodePtr JITThunks::ctiNativeCall(VM* vm)
+JITEntryPointsWithRef JITThunks::jitEntryNativeCall(VM* vm)
 {
-    if (!vm->canUseJIT())
-        return MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_call_trampoline);
-    return ctiStub(vm, nativeCallGenerator).code();
+    if (!vm->canUseJIT()) {
+        MacroAssemblerCodePtr nativeCallStub = MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_call_trampoline);
+        return JITEntryPointsWithRef(MacroAssemblerCodeRef::createSelfManagedCodeRef(nativeCallStub), nativeCallStub, nativeCallStub);
+    }
+    return jitEntryStub(vm, nativeCallGenerator);
 }
 
-MacroAssemblerCodePtr JITThunks::ctiNativeConstruct(VM* vm)
+JITEntryPointsWithRef JITThunks::jitEntryNativeConstruct(VM* vm)
 {
-    if (!vm->canUseJIT())
-        return MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_construct_trampoline);
-    return ctiStub(vm, nativeConstructGenerator).code();
+    if (!vm->canUseJIT()) {
+        MacroAssemblerCodePtr nativeConstructStub = MacroAssemblerCodePtr::createLLIntCodePtr(llint_native_construct_trampoline);
+        return JITEntryPointsWithRef(MacroAssemblerCodeRef::createSelfManagedCodeRef(nativeConstructStub), nativeConstructStub, nativeConstructStub);
+    }
+    return jitEntryStub(vm, nativeConstructGenerator);
 }
 
 MacroAssemblerCodePtr JITThunks::ctiNativeTailCall(VM* vm)
@@ -82,6 +86,30 @@ MacroAssemblerCodeRef JITThunks::ctiStub(VM* vm, ThunkGenerator generator)
     return entry.iterator->value;
 }
 
+JITEntryPointsWithRef JITThunks::jitEntryStub(VM* vm, JITEntryGenerator generator)
+{
+    LockHolder locker(m_lock);
+    JITEntryStubMap::AddResult entry = m_jitEntryStubMap.add(generator, JITEntryPointsWithRef());
+    if (entry.isNewEntry) {
+        // Compilation thread can only retrieve existing entries.
+        ASSERT(!isCompilationThread());
+        entry.iterator->value = generator(vm);
+    }
+    return entry.iterator->value;
+}
+
+JITJSCallThunkEntryPointsWithRef JITThunks::jitCallThunkEntryStub(VM* vm, JITCallThunkEntryGenerator generator)
+{
+    LockHolder locker(m_lock);
+    JITCallThunkEntryStubMap::AddResult entry = m_jitCallThunkEntryStubMap.add(generator, JITJSCallThunkEntryPointsWithRef());
+    if (entry.isNewEntry) {
+        // Compilation thread can only retrieve existing entries.
+        ASSERT(!isCompilationThread());
+        entry.iterator->value = generator(vm);
+    }
+    return entry.iterator->value;
+}
+
 void JITThunks::finalize(Handle<Unknown> handle, void*)
 {
     auto* nativeExecutable = jsCast<NativeExecutable*>(handle.get().asCell());
@@ -93,7 +121,7 @@ NativeExecutable* JITThunks::hostFunctionStub(VM* vm, NativeFunction function, N
     return hostFunctionStub(vm, function, constructor, nullptr, NoIntrinsic, nullptr, name);
 }
 
-NativeExecutable* JITThunks::hostFunctionStub(VM* vm, NativeFunction function, NativeFunction constructor, ThunkGenerator generator, Intrinsic intrinsic, const DOMJIT::Signature* signature, const String& name)
+NativeExecutable* JITThunks::hostFunctionStub(VM* vm, NativeFunction function, NativeFunction constructor, JITEntryGenerator generator, Intrinsic intrinsic, const DOMJIT::Signature* signature, const String& name)
 {
     ASSERT(!isCompilationThread());    
     ASSERT(vm->canUseJIT());
@@ -103,19 +131,19 @@ NativeExecutable* JITThunks::hostFunctionStub(VM* vm, NativeFunction function, N
 
     RefPtr<JITCode> forCall;
     if (generator) {
-        MacroAssemblerCodeRef entry = generator(vm);
-        forCall = adoptRef(new DirectJITCode(entry, entry.code(), JITCode::HostCallThunk));
+        JITEntryPointsWithRef entry = generator(vm);
+        forCall = adoptRef(new DirectJITCode(entry, JITCode::HostCallThunk));
     } else
-        forCall = adoptRef(new NativeJITCode(JIT::compileCTINativeCall(vm, function), JITCode::HostCallThunk));
+        forCall = adoptRef(new DirectJITCode(JIT::compileNativeCallEntryPoints(vm, function), JITCode::HostCallThunk));
     
-    RefPtr<JITCode> forConstruct = adoptRef(new NativeJITCode(MacroAssemblerCodeRef::createSelfManagedCodeRef(ctiNativeConstruct(vm)), JITCode::HostCallThunk));
+    RefPtr<JITCode> forConstruct = adoptRef(new DirectJITCode(jitEntryNativeConstruct(vm), JITCode::HostCallThunk));
     
     NativeExecutable* nativeExecutable = NativeExecutable::create(*vm, forCall, function, forConstruct, constructor, intrinsic, signature, name);
     weakAdd(*m_hostFunctionStubMap, std::make_tuple(function, constructor, name), Weak<NativeExecutable>(nativeExecutable, this));
     return nativeExecutable;
 }
 
-NativeExecutable* JITThunks::hostFunctionStub(VM* vm, NativeFunction function, ThunkGenerator generator, Intrinsic intrinsic, const String& name)
+NativeExecutable* JITThunks::hostFunctionStub(VM* vm, NativeFunction function, JITEntryGenerator generator, Intrinsic intrinsic, const String& name)
 {
     return hostFunctionStub(vm, function, callHostFunctionAsConstructor, generator, intrinsic, nullptr, name);
 }
index addcf23..b593bb3 100644 (file)
@@ -29,6 +29,7 @@
 
 #include "CallData.h"
 #include "Intrinsic.h"
+#include "JITEntryPoints.h"
 #include "MacroAssemblerCodeRef.h"
 #include "ThunkGenerator.h"
 #include "Weak.h"
@@ -52,16 +53,18 @@ public:
     JITThunks();
     virtual ~JITThunks();
 
-    MacroAssemblerCodePtr ctiNativeCall(VM*);
-    MacroAssemblerCodePtr ctiNativeConstruct(VM*);
+    JITEntryPointsWithRef jitEntryNativeCall(VM*);
+    JITEntryPointsWithRef jitEntryNativeConstruct(VM*);
     MacroAssemblerCodePtr ctiNativeTailCall(VM*);    
     MacroAssemblerCodePtr ctiNativeTailCallWithoutSavedTags(VM*);    
 
     MacroAssemblerCodeRef ctiStub(VM*, ThunkGenerator);
+    JITEntryPointsWithRef jitEntryStub(VM*, JITEntryGenerator);
+    JITJSCallThunkEntryPointsWithRef jitCallThunkEntryStub(VM*, JITCallThunkEntryGenerator);
 
     NativeExecutable* hostFunctionStub(VM*, NativeFunction, NativeFunction constructor, const String& name);
-    NativeExecutable* hostFunctionStub(VM*, NativeFunction, NativeFunction constructor, ThunkGenerator, Intrinsic, const DOMJIT::Signature*, const String& name);
-    NativeExecutable* hostFunctionStub(VM*, NativeFunction, ThunkGenerator, Intrinsic, const String& name);
+    NativeExecutable* hostFunctionStub(VM*, NativeFunction, NativeFunction constructor, JITEntryGenerator, Intrinsic, const DOMJIT::Signature*, const String& name);
+    NativeExecutable* hostFunctionStub(VM*, NativeFunction, JITEntryGenerator, Intrinsic, const String& name);
 
     void clearHostFunctionStubs();
 
@@ -70,6 +73,10 @@ private:
     
     typedef HashMap<ThunkGenerator, MacroAssemblerCodeRef> CTIStubMap;
     CTIStubMap m_ctiStubMap;
+    typedef HashMap<JITEntryGenerator, JITEntryPointsWithRef> JITEntryStubMap;
+    JITEntryStubMap m_jitEntryStubMap;
+    typedef HashMap<JITCallThunkEntryGenerator, JITJSCallThunkEntryPointsWithRef> JITCallThunkEntryStubMap;
+    JITCallThunkEntryStubMap m_jitCallThunkEntryStubMap;
 
     typedef std::tuple<NativeFunction, NativeFunction, String> HostFunctionKey;
 
index dc0cc1a..176a053 100644 (file)
@@ -63,6 +63,7 @@ namespace JSC {
         Jump emitJumpIfNotJSCell(RegisterID);
         Jump emitJumpIfNumber(RegisterID);
         Jump emitJumpIfNotNumber(RegisterID);
+        Jump emitJumpIfNotInt32(RegisterID reg);
         void emitTagInt(RegisterID src, RegisterID dest);
 #endif
 
@@ -163,12 +164,17 @@ namespace JSC {
         return branchTest64(NonZero, dst, tagMaskRegister);
     }
     
+    inline JSInterfaceJIT::Jump JSInterfaceJIT::emitJumpIfNotInt32(RegisterID reg)
+    {
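+        // Taken when reg does not hold a boxed int32; on the fall-through path,
+        // reg is zero-extended to its 32-bit payload.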
+        Jump result = branch64(Below, reg, tagTypeNumberRegister);
+        zeroExtend32ToPtr(reg, reg);
+        return result;
+    }
+
     inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadInt32(unsigned virtualRegisterIndex, RegisterID dst)
     {
         load64(addressFor(virtualRegisterIndex), dst);
-        Jump result = branch64(Below, dst, tagTypeNumberRegister);
-        zeroExtend32ToPtr(dst, dst);
-        return result;
+        return emitJumpIfNotInt32(dst);
     }
 
     inline JSInterfaceJIT::Jump JSInterfaceJIT::emitLoadDouble(unsigned virtualRegisterIndex, FPRegisterID dst, RegisterID scratch)
index 721a4ea..37eb4cc 100644 (file)
@@ -159,6 +159,20 @@ RegisterSet RegisterSet::calleeSaveRegisters()
     return result;
 }
 
+RegisterSet RegisterSet::argumentRegisters()
+{
+    RegisterSet result;
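+    // JS arguments are passed in registers only on 64-bit platforms; elsewhere
+    // this set stays empty.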
+#if USE(JSVALUE64)
+    for (unsigned argumentIndex = 0; argumentIndex < NUMBER_OF_ARGUMENT_REGISTERS; argumentIndex++) {
+        GPRReg argumentReg = argumentRegisterFor(argumentIndex);
+
+        if (argumentReg != InvalidGPRReg)
+            result.set(argumentReg);
+    }
+#endif
+    return result;
+}
+
 RegisterSet RegisterSet::vmCalleeSaveRegisters()
 {
     RegisterSet result;
index 0359066..8d12516 100644 (file)
@@ -49,6 +49,7 @@ public:
     static RegisterSet runtimeRegisters();
     static RegisterSet specialRegisters(); // The union of stack, reserved hardware, and runtime registers.
     JS_EXPORT_PRIVATE static RegisterSet calleeSaveRegisters();
+    static RegisterSet argumentRegisters(); // Registers used to pass arguments when making JS calls.
     static RegisterSet vmCalleeSaveRegisters(); // Callee save registers that might be saved and used by any tier.
     static RegisterSet llintBaselineCalleeSaveRegisters(); // Registers saved and used by the LLInt.
     static RegisterSet dfgCalleeSaveRegisters(); // Registers saved and used by the DFG JIT.
index d4f98f8..285f604 100644 (file)
@@ -540,21 +540,21 @@ void repatchIn(
         ftlThunkAwareRepatchCall(exec->codeBlock(), stubInfo.slowPathCallLocation(), operationIn);
 }
 
-static void linkSlowFor(VM*, CallLinkInfo& callLinkInfo, MacroAssemblerCodeRef codeRef)
+static void linkSlowFor(VM*, CallLinkInfo& callLinkInfo, JITJSCallThunkEntryPointsWithRef thunkEntryPoints)
 {
-    MacroAssembler::repatchNearCall(callLinkInfo.callReturnLocation(), CodeLocationLabel(codeRef.code()));
+    MacroAssembler::repatchNearCall(callLinkInfo.callReturnLocation(), CodeLocationLabel(thunkEntryPoints.entryFor(callLinkInfo.argumentsLocation())));
 }
 
-static void linkSlowFor(VM* vm, CallLinkInfo& callLinkInfo, ThunkGenerator generator)
+static void linkSlowFor(VM* vm, CallLinkInfo& callLinkInfo, JITCallThunkEntryGenerator generator)
 {
-    linkSlowFor(vm, callLinkInfo, vm->getCTIStub(generator));
+    linkSlowFor(vm, callLinkInfo, vm->getJITCallThunkEntryStub(generator));
 }
 
 static void linkSlowFor(VM* vm, CallLinkInfo& callLinkInfo)
 {
-    MacroAssemblerCodeRef virtualThunk = virtualThunkFor(vm, callLinkInfo);
+    JITJSCallThunkEntryPointsWithRef virtualThunk = virtualThunkFor(vm, callLinkInfo);
     linkSlowFor(vm, callLinkInfo, virtualThunk);
-    callLinkInfo.setSlowStub(createJITStubRoutine(virtualThunk, *vm, nullptr, true));
+    callLinkInfo.setSlowStub(createJITStubRoutine(virtualThunk.codeRef(), *vm, nullptr, true));
 }
 
 static bool isWebAssemblyToJSCallee(VM& vm, JSCell* callee)
@@ -644,7 +644,7 @@ void linkSlowFor(
     linkSlowFor(vm, callLinkInfo);
 }
 
-static void revertCall(VM* vm, CallLinkInfo& callLinkInfo, MacroAssemblerCodeRef codeRef)
+static void revertCall(VM* vm, CallLinkInfo& callLinkInfo, JITJSCallThunkEntryPointsWithRef codeRef)
 {
     if (callLinkInfo.isDirect()) {
         callLinkInfo.clearCodeBlock();
@@ -671,7 +671,7 @@ void unlinkFor(VM& vm, CallLinkInfo& callLinkInfo)
     if (Options::dumpDisassembly())
         dataLog("Unlinking call at ", callLinkInfo.hotPathOther(), "\n");
     
-    revertCall(&vm, callLinkInfo, vm.getCTIStub(linkCallThunkGenerator));
+    revertCall(&vm, callLinkInfo, vm.getJITCallThunkEntryStub(linkCallThunkGenerator));
 }
 
 void linkVirtualFor(ExecState* exec, CallLinkInfo& callLinkInfo)
@@ -683,9 +683,9 @@ void linkVirtualFor(ExecState* exec, CallLinkInfo& callLinkInfo)
     if (shouldDumpDisassemblyFor(callerCodeBlock))
         dataLog("Linking virtual call at ", *callerCodeBlock, " ", callerFrame->codeOrigin(), "\n");
 
-    MacroAssemblerCodeRef virtualThunk = virtualThunkFor(&vm, callLinkInfo);
+    JITJSCallThunkEntryPointsWithRef virtualThunk = virtualThunkFor(&vm, callLinkInfo);
     revertCall(&vm, callLinkInfo, virtualThunk);
-    callLinkInfo.setSlowStub(createJITStubRoutine(virtualThunk, vm, nullptr, true));
+    callLinkInfo.setSlowStub(createJITStubRoutine(virtualThunk.codeRef(), vm, nullptr, true));
 }
 
 namespace {
@@ -740,6 +740,7 @@ void linkPolymorphicCall(
         callLinkInfo.setHasSeenClosure();
     
     Vector<PolymorphicCallCase> callCases;
+    size_t callerArgumentCount = exec->argumentCountIncludingThis();
     
     // Figure out what our cases are.
     for (CallVariant variant : list) {
@@ -751,7 +752,7 @@ void linkPolymorphicCall(
             codeBlock = jsCast<FunctionExecutable*>(executable)->codeBlockForCall();
             // If we cannot handle a callee, either because we don't have a CodeBlock or because arity mismatch,
             // assume that it's better for this whole thing to be a virtual call.
-            if (!codeBlock || exec->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo.isVarargs()) {
+            if (!codeBlock || callerArgumentCount < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo.isVarargs()) {
                 linkVirtualFor(exec, callLinkInfo);
                 return;
             }
@@ -775,7 +776,10 @@ void linkPolymorphicCall(
     }
     
     GPRReg calleeGPR = static_cast<GPRReg>(callLinkInfo.calleeGPR());
-    
+
+    ASSERT(!callLinkInfo.argumentsInRegisters() || calleeGPR == argumentRegisterForCallee());
+
     CCallHelpers stubJit(&vm, call