DFG should really support varargs
author    fpizlo@apple.com    Wed, 18 Feb 2015 19:55:47 +0000 (19:55 +0000)
committer fpizlo@apple.com    Wed, 18 Feb 2015 19:55:47 +0000 (19:55 +0000)
https://bugs.webkit.org/show_bug.cgi?id=141332

Reviewed by Oliver Hunt.

Source/JavaScriptCore:

This adds comprehensive vararg call support to the DFG and FTL compilers. Previously, if a
function had a varargs call, then it could only be compiled if that varargs call was just
forwarding arguments and we were inlining the function rather than compiling it directly. Also,
only varargs calls were dealt with; varargs constructs were not.

This lifts all of those restrictions. Every varargs call or construct can now be compiled by both
the DFG and the FTL. Those calls can be inlined, too - provided that profiling gives us a
sensible bound on arguments list length. When we inline a varargs call, the act of loading the
varargs is now made explicit in IR. I believe that we have enough IR machinery in place that we
would be able to do the arguments forwarding optimization as an IR transformation. This patch
doesn't implement that yet, and keeps the old bytecode-based varargs argument forwarding
optimization for now.
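For illustration (sketch, not code from the patch), the two call shapes in question look like:

```javascript
// Sketch of the call shapes involved (illustrative, not code from the patch).
function sum(a, b, c) { return a + b + c; }
function Point(x, y) { this.x = x; this.y = y; }

// A varargs call whose arguments are not simply the caller's own 'arguments':
// previously this kept the enclosing function out of the DFG entirely.
function callVarargs(array) {
    return sum.apply(null, array);
}

// A varargs construct: previously varargs constructs were not compiled at all.
function constructVarargs(array) {
    return new Point(...array);
}
```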

There are three major IR features introduced in this patch:

CallVarargs/ConstructVarargs: these are like Call/Construct except that they take an arguments
array rather than a list of arguments. Currently, they splat this arguments array onto the stack
using the same basic technique as the baseline JIT has always done. Except, these nodes indicate
that we are not interested in doing the non-escaping "arguments" optimization.
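A rough model of the splat, in JavaScript terms (an assumption about runtime behavior only, not the actual emitter code):

```javascript
// Conceptual model of the stack splat done by CallVarargs/ConstructVarargs
// (assumption: this mirrors observable behavior, not the baseline-JIT-style
// emitter itself). Each element of the arguments array is copied into an
// argument slot, then the call proceeds as a normal call.
function splatAndCall(callee, thisValue, argsArray) {
    var slots = [];
    for (var i = 0; i < argsArray.length; ++i)
        slots[i] = argsArray[i];            // copy onto the "stack"
    return callee.apply(thisValue, slots);  // then call as usual
}
```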

CallForwardVarargs: this is a form of CallVarargs that just does the non-escaping "arguments"
optimization, aka forwarding arguments. It's somewhat lazy that this doesn't include
ConstructForwardVarargs, but the reason is that once we eliminate the lazy tear-off for
arguments, this whole thing will have to be tweaked - and for now forwarding on construct is just
not important in benchmarks. ConstructVarargs will still do forwarding, just not inlined.
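The forwarding pattern that CallForwardVarargs targets is (illustrative):

```javascript
// The non-escaping 'arguments' forwarding pattern (sketch): 'arguments'
// never escapes the function, so its elements can be copied straight from
// the caller's frame without materializing an Arguments object.
function target(a, b) { return a + b; }
function forwarder() {
    return target.apply(this, arguments);
}
```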

LoadVarargs: loads all elements out of an array onto the stack in a manner suitable for a varargs
call. This is used only when a varargs call (or construct) was inlined. The bytecode parser will
make room on the stack for the arguments, and will use LoadVarargs to put those arguments into
place.
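Behaviorally, LoadVarargs amounts to something like the following (names and the exact layout here are assumptions for illustration):

```javascript
// Rough model of LoadVarargs (assumption: illustrative only). The bytecode
// parser reserves 'limit' slots on the stack up front; LoadVarargs fills
// them from the arguments array and records the actual argument count.
function loadVarargsModel(argsArray, limit) {
    var slots = new Array(limit);
    var count = Math.min(argsArray.length, limit);
    for (var i = 0; i < count; ++i)
        slots[i] = argsArray[i];
    return { count: count, slots: slots };
}
```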

In the future, we can consider adding strength reductions like:

- If CallVarargs/ConstructVarargs see an array of known size with known elements, turn them into
  Call/Construct.

- If CallVarargs/ConstructVarargs are passed an unmodified, unescaped Arguments object, then
  turn them into CallForwardVarargs/ConstructForwardVarargs.

- If LoadVarargs sees an array of known size, then turn it into a sequence of GetByVals and
  PutLocals.

- If LoadVarargs sees an unmodified, unescaped Arguments object, then turn it into something like
  LoadForwardVarargs.

- If CallVarargs/ConstructVarargs/LoadVarargs see the result of a splice (or other Array
  prototype function), then do the splice and varargs loading in one go (maybe via a new node
  type).
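As a concrete instance of the first reduction above (illustrative):

```javascript
// The argument array has known size and known elements, so the
// CallVarargs here could be rewritten into a direct two-argument Call.
function f(a, b) { return a * b; }
function known() {
    return f.apply(null, [6, 7]);   // candidate for reduction to f(6, 7)
}
```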

* CMakeLists.txt:
* JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
* JavaScriptCore.xcodeproj/project.pbxproj:
* assembler/MacroAssembler.h:
(JSC::MacroAssembler::rshiftPtr):
(JSC::MacroAssembler::urshiftPtr):
* assembler/MacroAssemblerARM64.h:
(JSC::MacroAssemblerARM64::urshift64):
* assembler/MacroAssemblerX86_64.h:
(JSC::MacroAssemblerX86_64::urshift64):
* assembler/X86Assembler.h:
(JSC::X86Assembler::shrq_i8r):
* bytecode/CallLinkInfo.h:
(JSC::CallLinkInfo::CallLinkInfo):
* bytecode/CallLinkStatus.cpp:
(JSC::CallLinkStatus::computeFor):
(JSC::CallLinkStatus::setProvenConstantCallee):
(JSC::CallLinkStatus::dump):
* bytecode/CallLinkStatus.h:
(JSC::CallLinkStatus::maxNumArguments):
(JSC::CallLinkStatus::setIsProved): Deleted.
* bytecode/CodeOrigin.cpp:
(WTF::printInternal):
* bytecode/CodeOrigin.h:
(JSC::InlineCallFrame::varargsKindFor):
(JSC::InlineCallFrame::specializationKindFor):
(JSC::InlineCallFrame::isVarargs):
(JSC::InlineCallFrame::isNormalCall): Deleted.
* bytecode/ExitKind.cpp:
(JSC::exitKindToString):
* bytecode/ExitKind.h:
* bytecode/ValueRecovery.cpp:
(JSC::ValueRecovery::dumpInContext):
* dfg/DFGAbstractInterpreterInlines.h:
(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
* dfg/DFGArgumentsSimplificationPhase.cpp:
(JSC::DFG::ArgumentsSimplificationPhase::run):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::flush):
(JSC::DFG::ByteCodeParser::addCall):
(JSC::DFG::ByteCodeParser::handleCall):
(JSC::DFG::ByteCodeParser::handleVarargsCall):
(JSC::DFG::ByteCodeParser::emitFunctionChecks):
(JSC::DFG::ByteCodeParser::inliningCost):
(JSC::DFG::ByteCodeParser::inlineCall):
(JSC::DFG::ByteCodeParser::attemptToInlineCall):
(JSC::DFG::ByteCodeParser::handleInlining):
(JSC::DFG::ByteCodeParser::handleMinMax):
(JSC::DFG::ByteCodeParser::handleIntrinsic):
(JSC::DFG::ByteCodeParser::handleTypedArrayConstructor):
(JSC::DFG::ByteCodeParser::handleConstantInternalFunction):
(JSC::DFG::ByteCodeParser::parseBlock):
(JSC::DFG::ByteCodeParser::removeLastNodeFromGraph): Deleted.
(JSC::DFG::ByteCodeParser::undoFunctionChecks): Deleted.
* dfg/DFGCapabilities.cpp:
(JSC::DFG::capabilityLevel):
* dfg/DFGCapabilities.h:
(JSC::DFG::functionCapabilityLevel):
(JSC::DFG::mightCompileFunctionFor):
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGCommon.cpp:
(WTF::printInternal):
* dfg/DFGCommon.h:
(JSC::DFG::canInline):
(JSC::DFG::leastUpperBound):
* dfg/DFGDoesGC.cpp:
(JSC::DFG::doesGC):
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::fixupNode):
* dfg/DFGGraph.cpp:
(JSC::DFG::Graph::dump):
(JSC::DFG::Graph::dumpBlockHeader):
(JSC::DFG::Graph::isLiveInBytecode):
(JSC::DFG::Graph::valueProfileFor):
(JSC::DFG::Graph::methodOfGettingAValueProfileFor):
* dfg/DFGGraph.h:
(JSC::DFG::Graph::valueProfileFor): Deleted.
(JSC::DFG::Graph::methodOfGettingAValueProfileFor): Deleted.
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::compileExceptionHandlers):
(JSC::DFG::JITCompiler::link):
* dfg/DFGMayExit.cpp:
(JSC::DFG::mayExit):
* dfg/DFGNode.h:
(JSC::DFG::Node::hasCallVarargsData):
(JSC::DFG::Node::callVarargsData):
(JSC::DFG::Node::hasLoadVarargsData):
(JSC::DFG::Node::loadVarargsData):
(JSC::DFG::Node::hasHeapPrediction):
* dfg/DFGNodeType.h:
* dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
(JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
* dfg/DFGOSRExitCompilerCommon.cpp:
(JSC::DFG::reifyInlinedCallFrames):
* dfg/DFGOperations.cpp:
* dfg/DFGOperations.h:
* dfg/DFGPlan.cpp:
(JSC::DFG::dumpAndVerifyGraph):
(JSC::DFG::Plan::compileInThreadImpl):
* dfg/DFGPreciseLocalClobberize.h:
(JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
(JSC::DFG::PreciseLocalClobberizeAdaptor::writeTop):
* dfg/DFGPredictionPropagationPhase.cpp:
(JSC::DFG::PredictionPropagationPhase::propagate):
* dfg/DFGSSAConversionPhase.cpp:
* dfg/DFGSafeToExecute.h:
(JSC::DFG::safeToExecute):
* dfg/DFGSpeculativeJIT.h:
(JSC::DFG::SpeculativeJIT::isFlushed):
(JSC::DFG::SpeculativeJIT::callOperation):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGStackLayoutPhase.cpp:
(JSC::DFG::StackLayoutPhase::run):
(JSC::DFG::StackLayoutPhase::assign):
* dfg/DFGStrengthReductionPhase.cpp:
(JSC::DFG::StrengthReductionPhase::handleNode):
* dfg/DFGTypeCheckHoistingPhase.cpp:
(JSC::DFG::TypeCheckHoistingPhase::run):
* dfg/DFGValidate.cpp:
(JSC::DFG::Validate::validateCPS):
* ftl/FTLAbbreviations.h:
(JSC::FTL::functionType):
(JSC::FTL::buildCall):
* ftl/FTLCapabilities.cpp:
(JSC::FTL::canCompile):
* ftl/FTLCompile.cpp:
(JSC::FTL::mmAllocateDataSection):
* ftl/FTLInlineCacheSize.cpp:
(JSC::FTL::sizeOfCall):
(JSC::FTL::sizeOfCallVarargs):
(JSC::FTL::sizeOfCallForwardVarargs):
(JSC::FTL::sizeOfConstructVarargs):
(JSC::FTL::sizeOfIn):
(JSC::FTL::sizeOfICFor):
(JSC::FTL::sizeOfCheckIn): Deleted.
* ftl/FTLInlineCacheSize.h:
* ftl/FTLIntrinsicRepository.h:
* ftl/FTLJSCall.cpp:
(JSC::FTL::JSCall::JSCall):
* ftl/FTLJSCallBase.cpp:
* ftl/FTLJSCallBase.h:
* ftl/FTLJSCallVarargs.cpp: Added.
(JSC::FTL::JSCallVarargs::JSCallVarargs):
(JSC::FTL::JSCallVarargs::numSpillSlotsNeeded):
(JSC::FTL::JSCallVarargs::emit):
(JSC::FTL::JSCallVarargs::link):
* ftl/FTLJSCallVarargs.h: Added.
(JSC::FTL::JSCallVarargs::node):
(JSC::FTL::JSCallVarargs::stackmapID):
(JSC::FTL::JSCallVarargs::operator<):
* ftl/FTLLowerDFGToLLVM.cpp:
(JSC::FTL::LowerDFGToLLVM::lower):
(JSC::FTL::LowerDFGToLLVM::compileNode):
(JSC::FTL::LowerDFGToLLVM::compileGetMyArgumentsLength):
(JSC::FTL::LowerDFGToLLVM::compileGetMyArgumentByVal):
(JSC::FTL::LowerDFGToLLVM::compileCallOrConstructVarargs):
(JSC::FTL::LowerDFGToLLVM::compileLoadVarargs):
(JSC::FTL::LowerDFGToLLVM::compileIn):
(JSC::FTL::LowerDFGToLLVM::emitStoreBarrier):
(JSC::FTL::LowerDFGToLLVM::vmCall):
(JSC::FTL::LowerDFGToLLVM::vmCallNoExceptions):
(JSC::FTL::LowerDFGToLLVM::callCheck):
* ftl/FTLOutput.h:
(JSC::FTL::Output::call):
* ftl/FTLState.cpp:
(JSC::FTL::State::State):
* ftl/FTLState.h:
* interpreter/Interpreter.cpp:
(JSC::sizeOfVarargs):
(JSC::sizeFrameForVarargs):
* interpreter/Interpreter.h:
* interpreter/StackVisitor.cpp:
(JSC::StackVisitor::readInlinedFrame):
* jit/AssemblyHelpers.cpp:
(JSC::AssemblyHelpers::emitExceptionCheck):
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::addressFor):
(JSC::AssemblyHelpers::calleeFrameSlot):
(JSC::AssemblyHelpers::calleeArgumentSlot):
(JSC::AssemblyHelpers::calleeFrameTagSlot):
(JSC::AssemblyHelpers::calleeFramePayloadSlot):
(JSC::AssemblyHelpers::calleeArgumentTagSlot):
(JSC::AssemblyHelpers::calleeArgumentPayloadSlot):
(JSC::AssemblyHelpers::calleeFrameCallerFrame):
(JSC::AssemblyHelpers::selectScratchGPR):
* jit/CCallHelpers.h:
(JSC::CCallHelpers::setupArgumentsWithExecState):
* jit/GPRInfo.h:
* jit/JIT.cpp:
(JSC::JIT::privateCompile):
* jit/JIT.h:
* jit/JITCall.cpp:
(JSC::JIT::compileSetupVarargsFrame):
(JSC::JIT::compileOpCall):
* jit/JITCall32_64.cpp:
(JSC::JIT::compileSetupVarargsFrame):
(JSC::JIT::compileOpCall):
* jit/JITOperations.h:
* jit/SetupVarargsFrame.cpp:
(JSC::emitSetupVarargsFrameFastCase):
* jit/SetupVarargsFrame.h:
* runtime/Arguments.h:
(JSC::Arguments::create):
(JSC::Arguments::registerArraySizeInBytes):
(JSC::Arguments::finishCreation):
* runtime/Options.h:
* tests/stress/construct-varargs-inline-smaller-Foo.js: Added.
(Foo):
(bar):
(checkEqual):
(test):
* tests/stress/construct-varargs-inline.js: Added.
(Foo):
(bar):
(checkEqual):
(test):
* tests/stress/construct-varargs-no-inline.js: Added.
(Foo):
(bar):
(checkEqual):
(test):
* tests/stress/get-argument-by-val-in-inlined-varargs-call-out-of-bounds.js: Added.
(foo):
(bar):
* tests/stress/get-argument-by-val-safe-in-inlined-varargs-call-out-of-bounds.js: Added.
(foo):
(bar):
* tests/stress/get-my-argument-by-val-creates-arguments.js: Added.
(blah):
(foo):
(bar):
(checkEqual):
(test):
* tests/stress/load-varargs-then-inlined-call-exit-in-foo.js: Added.
(foo):
(bar):
(checkEqual):
* tests/stress/load-varargs-then-inlined-call-inlined.js: Added.
(foo):
(bar):
(baz):
(checkEqual):
(test):
* tests/stress/load-varargs-then-inlined-call.js: Added.
(foo):
(bar):
(checkEqual):
(test):

LayoutTests:

Adds a version of deltablue that uses rest arguments profusely. This speeds up by 20% with this
patch. I believe that the machinery that this patch puts in place will allow us to ultimately
run deltablue-varargs at the same steady-state performance as normal deltablue.
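The transformation the benchmark applies is mechanical: every call site is routed through a varargs helper (reproduced from the added script below), so each call exercises both the arguments object and spread:

```javascript
// The helper deltablue-varargs wraps around every call site (taken from the
// added script): it copies 'arguments' into a fresh array, and callers then
// spread that array, e.g. this.weaker(...args(s1, s2)).
function args() {
    var array = [];
    for (var i = 0; i < arguments.length; ++i)
        array.push(arguments[i]);
    return array;
}
```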

* js/regress/deltablue-varargs-expected.txt: Added.
* js/regress/deltablue-varargs.html: Added.
* js/regress/script-tests/deltablue-varargs.js: Added.
(args):
(Object.prototype.inheritsFrom):
(OrderedCollection):
(OrderedCollection.prototype.add):
(OrderedCollection.prototype.at):
(OrderedCollection.prototype.size):
(OrderedCollection.prototype.removeFirst):
(OrderedCollection.prototype.remove):
(Strength):
(Strength.stronger):
(Strength.weaker):
(Strength.weakestOf):
(Strength.strongest):
(Strength.prototype.nextWeaker):
(Constraint):
(Constraint.prototype.addConstraint):
(Constraint.prototype.satisfy):
(Constraint.prototype.destroyConstraint):
(Constraint.prototype.isInput):
(UnaryConstraint):
(UnaryConstraint.prototype.addToGraph):
(UnaryConstraint.prototype.chooseMethod):
(UnaryConstraint.prototype.isSatisfied):
(UnaryConstraint.prototype.markInputs):
(UnaryConstraint.prototype.output):
(UnaryConstraint.prototype.recalculate):
(UnaryConstraint.prototype.markUnsatisfied):
(UnaryConstraint.prototype.inputsKnown):
(UnaryConstraint.prototype.removeFromGraph):
(StayConstraint):
(StayConstraint.prototype.execute):
(EditConstraint.prototype.isInput):
(EditConstraint.prototype.execute):
(BinaryConstraint):
(BinaryConstraint.prototype.chooseMethod):
(BinaryConstraint.prototype.addToGraph):
(BinaryConstraint.prototype.isSatisfied):
(BinaryConstraint.prototype.markInputs):
(BinaryConstraint.prototype.input):
(BinaryConstraint.prototype.output):
(BinaryConstraint.prototype.recalculate):
(BinaryConstraint.prototype.markUnsatisfied):
(BinaryConstraint.prototype.inputsKnown):
(BinaryConstraint.prototype.removeFromGraph):
(ScaleConstraint):
(ScaleConstraint.prototype.addToGraph):
(ScaleConstraint.prototype.removeFromGraph):
(ScaleConstraint.prototype.markInputs):
(ScaleConstraint.prototype.execute):
(ScaleConstraint.prototype.recalculate):
(EqualityConstraint):
(EqualityConstraint.prototype.execute):
(Variable):
(Variable.prototype.addConstraint):
(Variable.prototype.removeConstraint):
(Planner):
(Planner.prototype.incrementalAdd):
(Planner.prototype.incrementalRemove):
(Planner.prototype.newMark):
(Planner.prototype.makePlan):
(Planner.prototype.extractPlanFromConstraints):
(Planner.prototype.addPropagate):
(Planner.prototype.removePropagateFrom):
(Planner.prototype.addConstraintsConsumingTo):
(Plan):
(Plan.prototype.addConstraint):
(Plan.prototype.size):
(Plan.prototype.constraintAt):
(Plan.prototype.execute):
(chainTest):
(projectionTest):
(change):
(deltaBlue):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@180279 268f45cc-cd09-0410-ab3c-d52691b4dbfc

92 files changed:
LayoutTests/ChangeLog
LayoutTests/js/regress/deltablue-varargs-expected.txt [new file with mode: 0644]
LayoutTests/js/regress/deltablue-varargs.html [new file with mode: 0644]
LayoutTests/js/regress/script-tests/deltablue-varargs.js [new file with mode: 0644]
Source/JavaScriptCore/CMakeLists.txt
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/assembler/MacroAssembler.h
Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h
Source/JavaScriptCore/assembler/X86Assembler.h
Source/JavaScriptCore/bytecode/CallLinkInfo.h
Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
Source/JavaScriptCore/bytecode/CallLinkStatus.h
Source/JavaScriptCore/bytecode/CodeOrigin.cpp
Source/JavaScriptCore/bytecode/CodeOrigin.h
Source/JavaScriptCore/bytecode/ExitKind.cpp
Source/JavaScriptCore/bytecode/ExitKind.h
Source/JavaScriptCore/bytecode/ValueRecovery.cpp
Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
Source/JavaScriptCore/dfg/DFGArgumentsSimplificationPhase.cpp
Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
Source/JavaScriptCore/dfg/DFGCapabilities.cpp
Source/JavaScriptCore/dfg/DFGCapabilities.h
Source/JavaScriptCore/dfg/DFGClobberize.h
Source/JavaScriptCore/dfg/DFGCommon.cpp
Source/JavaScriptCore/dfg/DFGCommon.h
Source/JavaScriptCore/dfg/DFGDoesGC.cpp
Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
Source/JavaScriptCore/dfg/DFGGraph.cpp
Source/JavaScriptCore/dfg/DFGGraph.h
Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
Source/JavaScriptCore/dfg/DFGMayExit.cpp
Source/JavaScriptCore/dfg/DFGNode.h
Source/JavaScriptCore/dfg/DFGNodeType.h
Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
Source/JavaScriptCore/dfg/DFGOperations.cpp
Source/JavaScriptCore/dfg/DFGOperations.h
Source/JavaScriptCore/dfg/DFGPlan.cpp
Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
Source/JavaScriptCore/dfg/DFGSSAConversionPhase.cpp
Source/JavaScriptCore/dfg/DFGSafeToExecute.h
Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
Source/JavaScriptCore/dfg/DFGStackLayoutPhase.cpp
Source/JavaScriptCore/dfg/DFGStrengthReductionPhase.cpp
Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp
Source/JavaScriptCore/dfg/DFGValidate.cpp
Source/JavaScriptCore/ftl/FTLAbbreviations.h
Source/JavaScriptCore/ftl/FTLCapabilities.cpp
Source/JavaScriptCore/ftl/FTLCompile.cpp
Source/JavaScriptCore/ftl/FTLInlineCacheSize.cpp
Source/JavaScriptCore/ftl/FTLInlineCacheSize.h
Source/JavaScriptCore/ftl/FTLIntrinsicRepository.h
Source/JavaScriptCore/ftl/FTLJSCall.cpp
Source/JavaScriptCore/ftl/FTLJSCallBase.cpp
Source/JavaScriptCore/ftl/FTLJSCallBase.h
Source/JavaScriptCore/ftl/FTLJSCallVarargs.cpp [new file with mode: 0644]
Source/JavaScriptCore/ftl/FTLJSCallVarargs.h [new file with mode: 0644]
Source/JavaScriptCore/ftl/FTLLowerDFGToLLVM.cpp
Source/JavaScriptCore/ftl/FTLOutput.h
Source/JavaScriptCore/ftl/FTLState.cpp
Source/JavaScriptCore/ftl/FTLState.h
Source/JavaScriptCore/interpreter/Interpreter.cpp
Source/JavaScriptCore/interpreter/Interpreter.h
Source/JavaScriptCore/interpreter/StackVisitor.cpp
Source/JavaScriptCore/jit/AssemblyHelpers.cpp
Source/JavaScriptCore/jit/AssemblyHelpers.h
Source/JavaScriptCore/jit/CCallHelpers.h
Source/JavaScriptCore/jit/GPRInfo.h
Source/JavaScriptCore/jit/JIT.cpp
Source/JavaScriptCore/jit/JIT.h
Source/JavaScriptCore/jit/JITCall.cpp
Source/JavaScriptCore/jit/JITCall32_64.cpp
Source/JavaScriptCore/jit/JITOperations.h
Source/JavaScriptCore/jit/SetupVarargsFrame.cpp
Source/JavaScriptCore/jit/SetupVarargsFrame.h
Source/JavaScriptCore/runtime/Arguments.h
Source/JavaScriptCore/runtime/Options.h
Source/JavaScriptCore/tests/stress/construct-varargs-inline-smaller-Foo.js [new file with mode: 0644]
Source/JavaScriptCore/tests/stress/construct-varargs-inline.js [new file with mode: 0644]
Source/JavaScriptCore/tests/stress/construct-varargs-no-inline.js [new file with mode: 0644]
Source/JavaScriptCore/tests/stress/get-argument-by-val-in-inlined-varargs-call-out-of-bounds.js [new file with mode: 0644]
Source/JavaScriptCore/tests/stress/get-argument-by-val-safe-in-inlined-varargs-call-out-of-bounds.js [new file with mode: 0644]
Source/JavaScriptCore/tests/stress/get-my-argument-by-val-creates-arguments.js [new file with mode: 0644]
Source/JavaScriptCore/tests/stress/load-varargs-then-inlined-call-exit-in-foo.js [new file with mode: 0644]
Source/JavaScriptCore/tests/stress/load-varargs-then-inlined-call-inlined.js [new file with mode: 0644]
Source/JavaScriptCore/tests/stress/load-varargs-then-inlined-call.js [new file with mode: 0644]

index 946c642..589a825 100644
@@ -1,3 +1,91 @@
+2015-02-18  Filip Pizlo  <fpizlo@apple.com>
+
+        DFG should really support varargs
+        https://bugs.webkit.org/show_bug.cgi?id=141332
+
+        Reviewed by Oliver Hunt.
+        
+        Adds a version of deltablue that uses rest arguments profusely. This speeds up by 20% with this
+        patch. I believe that the machinery that this patch puts in place will allow us to ultimately
+        run deltablue-varargs at the same steady-state performance as normal deltablue.
+
+        * js/regress/deltablue-varargs-expected.txt: Added.
+        * js/regress/deltablue-varargs.html: Added.
+        * js/regress/script-tests/deltablue-varargs.js: Added.
+        (args):
+        (Object.prototype.inheritsFrom):
+        (OrderedCollection):
+        (OrderedCollection.prototype.add):
+        (OrderedCollection.prototype.at):
+        (OrderedCollection.prototype.size):
+        (OrderedCollection.prototype.removeFirst):
+        (OrderedCollection.prototype.remove):
+        (Strength):
+        (Strength.stronger):
+        (Strength.weaker):
+        (Strength.weakestOf):
+        (Strength.strongest):
+        (Strength.prototype.nextWeaker):
+        (Constraint):
+        (Constraint.prototype.addConstraint):
+        (Constraint.prototype.satisfy):
+        (Constraint.prototype.destroyConstraint):
+        (Constraint.prototype.isInput):
+        (UnaryConstraint):
+        (UnaryConstraint.prototype.addToGraph):
+        (UnaryConstraint.prototype.chooseMethod):
+        (UnaryConstraint.prototype.isSatisfied):
+        (UnaryConstraint.prototype.markInputs):
+        (UnaryConstraint.prototype.output):
+        (UnaryConstraint.prototype.recalculate):
+        (UnaryConstraint.prototype.markUnsatisfied):
+        (UnaryConstraint.prototype.inputsKnown):
+        (UnaryConstraint.prototype.removeFromGraph):
+        (StayConstraint):
+        (StayConstraint.prototype.execute):
+        (EditConstraint.prototype.isInput):
+        (EditConstraint.prototype.execute):
+        (BinaryConstraint):
+        (BinaryConstraint.prototype.chooseMethod):
+        (BinaryConstraint.prototype.addToGraph):
+        (BinaryConstraint.prototype.isSatisfied):
+        (BinaryConstraint.prototype.markInputs):
+        (BinaryConstraint.prototype.input):
+        (BinaryConstraint.prototype.output):
+        (BinaryConstraint.prototype.recalculate):
+        (BinaryConstraint.prototype.markUnsatisfied):
+        (BinaryConstraint.prototype.inputsKnown):
+        (BinaryConstraint.prototype.removeFromGraph):
+        (ScaleConstraint):
+        (ScaleConstraint.prototype.addToGraph):
+        (ScaleConstraint.prototype.removeFromGraph):
+        (ScaleConstraint.prototype.markInputs):
+        (ScaleConstraint.prototype.execute):
+        (ScaleConstraint.prototype.recalculate):
+        (EqualityConstraint):
+        (EqualityConstraint.prototype.execute):
+        (Variable):
+        (Variable.prototype.addConstraint):
+        (Variable.prototype.removeConstraint):
+        (Planner):
+        (Planner.prototype.incrementalAdd):
+        (Planner.prototype.incrementalRemove):
+        (Planner.prototype.newMark):
+        (Planner.prototype.makePlan):
+        (Planner.prototype.extractPlanFromConstraints):
+        (Planner.prototype.addPropagate):
+        (Planner.prototype.removePropagateFrom):
+        (Planner.prototype.addConstraintsConsumingTo):
+        (Plan):
+        (Plan.prototype.addConstraint):
+        (Plan.prototype.size):
+        (Plan.prototype.constraintAt):
+        (Plan.prototype.execute):
+        (chainTest):
+        (projectionTest):
+        (change):
+        (deltaBlue):
+
 2015-02-18  Myles C. Maxfield  <mmaxfield@apple.com>
 
         Justified ruby can cause lines to grow beyond their container
diff --git a/LayoutTests/js/regress/deltablue-varargs-expected.txt b/LayoutTests/js/regress/deltablue-varargs-expected.txt
new file mode 100644
index 0000000..461a76c
--- /dev/null
@@ -0,0 +1,10 @@
+JSRegress/deltablue-varargs
+
+On success, you will see a series of "PASS" messages, followed by "TEST COMPLETE".
+
+
+PASS no exception thrown
+PASS successfullyParsed is true
+
+TEST COMPLETE
+
diff --git a/LayoutTests/js/regress/deltablue-varargs.html b/LayoutTests/js/regress/deltablue-varargs.html
new file mode 100644
index 0000000..7ad283b
--- /dev/null
@@ -0,0 +1,12 @@
+<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
+<html>
+<head>
+<script src="../../resources/js-test-pre.js"></script>
+</head>
+<body>
+<script src="../../resources/regress-pre.js"></script>
+<script src="script-tests/deltablue-varargs.js"></script>
+<script src="../../resources/regress-post.js"></script>
+<script src="../../resources/js-test-post.js"></script>
+</body>
+</html>
diff --git a/LayoutTests/js/regress/script-tests/deltablue-varargs.js b/LayoutTests/js/regress/script-tests/deltablue-varargs.js
new file mode 100644
index 0000000..9b94281
--- /dev/null
@@ -0,0 +1,889 @@
+//@ skip if $architecture == "arm" and $hostOS == "darwin"
+// Copyright 2008 the V8 project authors. All rights reserved.
+// Copyright 1996 John Maloney and Mario Wolczko.
+
+// This program is free software; you can redistribute it and/or modify
+// it under the terms of the GNU General Public License as published by
+// the Free Software Foundation; either version 2 of the License, or
+// (at your option) any later version.
+//
+// This program is distributed in the hope that it will be useful,
+// but WITHOUT ANY WARRANTY; without even the implied warranty of
+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+// GNU General Public License for more details.
+//
+// You should have received a copy of the GNU General Public License
+// along with this program; if not, write to the Free Software
+// Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+
+
+// This implementation of the DeltaBlue benchmark is derived
+// from the Smalltalk implementation by John Maloney and Mario
+// Wolczko. Some parts have been translated directly, whereas
+// others have been modified more aggressively to make it feel
+// more like a JavaScript program.
+
+/**
+ * A JavaScript implementation of the DeltaBlue constraint-solving
+ * algorithm, as described in:
+ *
+ * "The DeltaBlue Algorithm: An Incremental Constraint Hierarchy Solver"
+ *   Bjorn N. Freeman-Benson and John Maloney
+ *   January 1990 Communications of the ACM,
+ *   also available as University of Washington TR 89-08-06.
+ *
+ * Beware: this benchmark is written in a grotesque style where
+ * the constraint model is built by side-effects from constructors.
+ * I've kept it this way to avoid deviating too much from the original
+ * implementation.
+ */
+
+// This thing is an evil hack that we use throughout this benchmark so that we can use this benchmark to
+// stress our varargs implementation. We also have tests for specific features of the varargs code, but
+// having a somewhat large-ish benchmark that uses varargs a lot (even if it's in a silly way) is great
+// for shaking out bugs.
+function args() {
+    var array = [];
+    for (var i = 0; i < arguments.length; ++i)
+        array.push(arguments[i]);
+    return array;
+}
+
+
+/* --- O b j e c t   M o d e l --- */
+
+Object.prototype.inheritsFrom = function (shuper) {
+  function Inheriter() { }
+  Inheriter.prototype = shuper.prototype;
+  this.prototype = new Inheriter(...args());
+  this.superConstructor = shuper;
+}
+
+function OrderedCollection() {
+  this.elms = new Array(...args());
+}
+
+OrderedCollection.prototype.add = function (elm) {
+  this.elms.push(...args(elm));
+}
+
+OrderedCollection.prototype.at = function (index) {
+  return this.elms[index];
+}
+
+OrderedCollection.prototype.size = function () {
+  return this.elms.length;
+}
+
+OrderedCollection.prototype.removeFirst = function () {
+  return this.elms.pop(...args());
+}
+
+OrderedCollection.prototype.remove = function (elm) {
+  var index = 0, skipped = 0;
+  for (var i = 0; i < this.elms.length; i++) {
+    var value = this.elms[i];
+    if (value != elm) {
+      this.elms[index] = value;
+      index++;
+    } else {
+      skipped++;
+    }
+  }
+  for (var i = 0; i < skipped; i++)
+    this.elms.pop(...args());
+}
+
+/* --- *
+ * S t r e n g t h
+ * --- */
+
+/**
+ * Strengths are used to measure the relative importance of constraints.
+ * New strengths may be inserted in the strength hierarchy without
+ * disrupting current constraints.  Strengths cannot be created outside
+ * this class, so pointer comparison can be used for value comparison.
+ */
+function Strength(strengthValue, name) {
+  this.strengthValue = strengthValue;
+  this.name = name;
+}
+
+Strength.stronger = function (s1, s2) {
+  return s1.strengthValue < s2.strengthValue;
+}
+
+Strength.weaker = function (s1, s2) {
+  return s1.strengthValue > s2.strengthValue;
+}
+
+Strength.weakestOf = function (s1, s2) {
+  return this.weaker(...args(s1, s2)) ? s1 : s2;
+}
+
+Strength.strongest = function (s1, s2) {
+  return this.stronger(...args(s1, s2)) ? s1 : s2;
+}
+
+Strength.prototype.nextWeaker = function () {
+  switch (this.strengthValue) {
+    case 0: return Strength.WEAKEST;
+    case 1: return Strength.WEAK_DEFAULT;
+    case 2: return Strength.NORMAL;
+    case 3: return Strength.STRONG_DEFAULT;
+    case 4: return Strength.PREFERRED;
+    case 5: return Strength.REQUIRED;
+  }
+}
+
+// Strength constants.
+Strength.REQUIRED        = new Strength(...args(0, "required"));
+Strength.STONG_PREFERRED = new Strength(...args(1, "strongPreferred"));
+Strength.PREFERRED       = new Strength(...args(2, "preferred"));
+Strength.STRONG_DEFAULT  = new Strength(...args(3, "strongDefault"));
+Strength.NORMAL          = new Strength(...args(4, "normal"));
+Strength.WEAK_DEFAULT    = new Strength(...args(5, "weakDefault"));
+Strength.WEAKEST         = new Strength(...args(6, "weakest"));
+
+/* --- *
+ * C o n s t r a i n t
+ * --- */
+
+/**
+ * An abstract class representing a system-maintainable relationship
+ * (or "constraint") between a set of variables. A constraint supplies
+ * a strength instance variable; concrete subclasses provide a means
+ * of storing the constrained variables and other information required
+ * to represent a constraint.
+ */
+function Constraint(strength) {
+  this.strength = strength;
+}
+
+/**
+ * Activate this constraint and attempt to satisfy it.
+ */
+Constraint.prototype.addConstraint = function () {
+  this.addToGraph(...args());
+  planner.incrementalAdd(...args(this));
+}
+
+/**
+ * Attempt to find a way to enforce this constraint. If successful,
+ * record the solution, perhaps modifying the current dataflow
+ * graph. Answer the constraint that this constraint overrides, if
+ * there is one, or nil, if there isn't.
+ * Assume: I am not already satisfied.
+ */
+Constraint.prototype.satisfy = function (mark) {
+  this.chooseMethod(...args(mark));
+  if (!this.isSatisfied(...args())) {
+    if (this.strength == Strength.REQUIRED)
+      alert(...args("Could not satisfy a required constraint!"));
+    return null;
+  }
+  this.markInputs(...args(mark));
+  var out = this.output(...args());
+  var overridden = out.determinedBy;
+  if (overridden != null) overridden.markUnsatisfied(...args());
+  out.determinedBy = this;
+  if (!planner.addPropagate(...args(this, mark)))
+    alert(...args("Cycle encountered"));
+  out.mark = mark;
+  return overridden;
+}
+
+Constraint.prototype.destroyConstraint = function () {
+  if (this.isSatisfied(...args())) planner.incrementalRemove(...args(this));
+  else this.removeFromGraph(...args());
+}
+
+/**
+ * Normal constraints are not input constraints.  An input constraint
+ * is one that depends on external state, such as the mouse, the
+ * keyboard, a clock, or some arbitrary piece of imperative code.
+ */
+Constraint.prototype.isInput = function () {
+  return false;
+}
+
+/* --- *
+ * U n a r y   C o n s t r a i n t
+ * --- */
+
+/**
+ * Abstract superclass for constraints having a single possible output
+ * variable.
+ */
+function UnaryConstraint(v, strength) {
+  UnaryConstraint.superConstructor.call(this, strength);
+  this.myOutput = v;
+  this.satisfied = false;
+  this.addConstraint(...args());
+}
+
+UnaryConstraint.inheritsFrom(...args(Constraint));
+
+/**
+ * Adds this constraint to the constraint graph
+ */
+UnaryConstraint.prototype.addToGraph = function () {
+  this.myOutput.addConstraint(...args(this));
+  this.satisfied = false;
+}
+
+/**
+ * Decides if this constraint can be satisfied and records that
+ * decision.
+ */
+UnaryConstraint.prototype.chooseMethod = function (mark) {
+  this.satisfied = (this.myOutput.mark != mark)
+    && Strength.stronger(...args(this.strength, this.myOutput.walkStrength));
+}
+
+/**
+ * Returns true if this constraint is satisfied in the current solution.
+ */
+UnaryConstraint.prototype.isSatisfied = function () {
+  return this.satisfied;
+}
+
+UnaryConstraint.prototype.markInputs = function (mark) {
+  // has no inputs
+}
+
+/**
+ * Returns the current output variable.
+ */
+UnaryConstraint.prototype.output = function () {
+  return this.myOutput;
+}
+
+/**
+ * Calculate the walkabout strength, the stay flag, and, if it is
+ * 'stay', the value for the current output of this constraint. Assume
+ * this constraint is satisfied.
+ */
+UnaryConstraint.prototype.recalculate = function () {
+  this.myOutput.walkStrength = this.strength;
+  this.myOutput.stay = !this.isInput(...args());
+  if (this.myOutput.stay) this.execute(...args()); // Stay optimization
+}
+
+/**
+ * Records that this constraint is unsatisfied
+ */
+UnaryConstraint.prototype.markUnsatisfied = function () {
+  this.satisfied = false;
+}
+
+UnaryConstraint.prototype.inputsKnown = function () {
+  return true;
+}
+
+UnaryConstraint.prototype.removeFromGraph = function () {
+  if (this.myOutput != null) this.myOutput.removeConstraint(...args(this));
+  this.satisfied = false;
+}
+
+/* --- *
+ * S t a y   C o n s t r a i n t
+ * --- */
+
+/**
+ * Variables that should, with some level of preference, stay the same.
+ * Planners may exploit the fact that instances, if satisfied, will not
+ * change their output during plan execution.  This is called "stay
+ * optimization".
+ */
+function StayConstraint(v, str) {
+  StayConstraint.superConstructor.call(this, v, str);
+}
+
+StayConstraint.inheritsFrom(...args(UnaryConstraint));
+
+StayConstraint.prototype.execute = function () {
+  // Stay constraints do nothing
+}
+
+/* --- *
+ * E d i t   C o n s t r a i n t
+ * --- */
+
+/**
+ * A unary input constraint used to mark a variable that the client
+ * wishes to change.
+ */
+function EditConstraint(v, str) {
+  EditConstraint.superConstructor.call(this, v, str);
+}
+
+EditConstraint.inheritsFrom(...args(UnaryConstraint));
+
+/**
+ * Edits indicate that a variable is to be changed by imperative code.
+ */
+EditConstraint.prototype.isInput = function () {
+  return true;
+}
+
+EditConstraint.prototype.execute = function () {
+  // Edit constraints do nothing
+}
+
+/* --- *
+ * B i n a r y   C o n s t r a i n t
+ * --- */
+
+var Direction = new Object(...args());
+Direction.NONE     = 0;
+Direction.FORWARD  = 1;
+Direction.BACKWARD = -1;
+
+/**
+ * Abstract superclass for constraints having two possible output
+ * variables.
+ */
+function BinaryConstraint(var1, var2, strength) {
+  BinaryConstraint.superConstructor.call(this, strength);
+  this.v1 = var1;
+  this.v2 = var2;
+  this.direction = Direction.NONE;
+  this.addConstraint(...args());
+}
+
+BinaryConstraint.inheritsFrom(...args(Constraint));
+
+/**
+ * Decides if this constraint can be satisfied and which way it
+ * should flow based on the relative strength of the variables related,
+ * and record that decision.
+ */
+BinaryConstraint.prototype.chooseMethod = function (mark) {
+  if (this.v1.mark == mark) {
+    this.direction = (this.v2.mark != mark && Strength.stronger(...args(this.strength, this.v2.walkStrength)))
+      ? Direction.FORWARD
+      : Direction.NONE;
+  }
+  if (this.v2.mark == mark) {
+    this.direction = (this.v1.mark != mark && Strength.stronger(...args(this.strength, this.v1.walkStrength)))
+      ? Direction.BACKWARD
+      : Direction.NONE;
+  }
+  if (Strength.weaker(...args(this.v1.walkStrength, this.v2.walkStrength))) {
+    this.direction = Strength.stronger(...args(this.strength, this.v1.walkStrength))
+      ? Direction.BACKWARD
+      : Direction.NONE;
+  } else {
+    this.direction = Strength.stronger(...args(this.strength, this.v2.walkStrength))
+      ? Direction.FORWARD
+      : Direction.BACKWARD;
+  }
+}
+
+/**
+ * Add this constraint to the constraint graph
+ */
+BinaryConstraint.prototype.addToGraph = function () {
+  this.v1.addConstraint(...args(this));
+  this.v2.addConstraint(...args(this));
+  this.direction = Direction.NONE;
+}
+
+/**
+ * Answer true if this constraint is satisfied in the current solution.
+ */
+BinaryConstraint.prototype.isSatisfied = function () {
+  return this.direction != Direction.NONE;
+}
+
+/**
+ * Mark the input variable with the given mark.
+ */
+BinaryConstraint.prototype.markInputs = function (mark) {
+  this.input(...args()).mark = mark;
+}
+
+/**
+ * Returns the current input variable
+ */
+BinaryConstraint.prototype.input = function () {
+  return (this.direction == Direction.FORWARD) ? this.v1 : this.v2;
+}
+
+/**
+ * Returns the current output variable
+ */
+BinaryConstraint.prototype.output = function () {
+  return (this.direction == Direction.FORWARD) ? this.v2 : this.v1;
+}
+
+/**
+ * Calculate the walkabout strength, the stay flag, and, if it is
+ * 'stay', the value for the current output of this
+ * constraint. Assume this constraint is satisfied.
+ */
+BinaryConstraint.prototype.recalculate = function () {
+  var ihn = this.input(...args()), out = this.output(...args());
+  out.walkStrength = Strength.weakestOf(...args(this.strength, ihn.walkStrength));
+  out.stay = ihn.stay;
+  if (out.stay) this.execute(...args());
+}
+
+/**
+ * Record the fact that this constraint is unsatisfied.
+ */
+BinaryConstraint.prototype.markUnsatisfied = function () {
+  this.direction = Direction.NONE;
+}
+
+BinaryConstraint.prototype.inputsKnown = function (mark) {
+  var i = this.input(...args());
+  return i.mark == mark || i.stay || i.determinedBy == null;
+}
+
+BinaryConstraint.prototype.removeFromGraph = function () {
+  if (this.v1 != null) this.v1.removeConstraint(...args(this));
+  if (this.v2 != null) this.v2.removeConstraint(...args(this));
+  this.direction = Direction.NONE;
+}
+
+/* --- *
+ * S c a l e   C o n s t r a i n t
+ * --- */
+
+/**
+ * Relates two variables by the linear scaling relationship: "v2 =
+ * (v1 * scale) + offset". Either v1 or v2 may be changed to maintain
+ * this relationship but the scale factor and offset are considered
+ * read-only.
+ */
+function ScaleConstraint(src, scale, offset, dest, strength) {
+  this.direction = Direction.NONE;
+  this.scale = scale;
+  this.offset = offset;
+  ScaleConstraint.superConstructor.call(this, src, dest, strength);
+}
+
+ScaleConstraint.inheritsFrom(...args(BinaryConstraint));
+
+/**
+ * Adds this constraint to the constraint graph.
+ */
+ScaleConstraint.prototype.addToGraph = function () {
+  ScaleConstraint.superConstructor.prototype.addToGraph.call(this);
+  this.scale.addConstraint(...args(this));
+  this.offset.addConstraint(...args(this));
+}
+
+ScaleConstraint.prototype.removeFromGraph = function () {
+  ScaleConstraint.superConstructor.prototype.removeFromGraph.call(this);
+  if (this.scale != null) this.scale.removeConstraint(...args(this));
+  if (this.offset != null) this.offset.removeConstraint(...args(this));
+}
+
+ScaleConstraint.prototype.markInputs = function (mark) {
+  ScaleConstraint.superConstructor.prototype.markInputs.call(this, mark);
+  this.scale.mark = this.offset.mark = mark;
+}
+
+/**
+ * Enforce this constraint. Assume that it is satisfied.
+ */
+ScaleConstraint.prototype.execute = function () {
+  if (this.direction == Direction.FORWARD) {
+    this.v2.value = this.v1.value * this.scale.value + this.offset.value;
+  } else {
+    this.v1.value = (this.v2.value - this.offset.value) / this.scale.value;
+  }
+}
+
+/**
+ * Calculate the walkabout strength, the stay flag, and, if it is
+ * 'stay', the value for the current output of this constraint. Assume
+ * this constraint is satisfied.
+ */
+ScaleConstraint.prototype.recalculate = function () {
+  var ihn = this.input(...args()), out = this.output(...args());
+  out.walkStrength = Strength.weakestOf(...args(this.strength, ihn.walkStrength));
+  out.stay = ihn.stay && this.scale.stay && this.offset.stay;
+  if (out.stay) this.execute(...args());
+}
+
+/* --- *
+ * E q u a l i t y   C o n s t r a i n t
+ * --- */
+
+/**
+ * Constrains two variables to have the same value.
+ */
+function EqualityConstraint(var1, var2, strength) {
+  EqualityConstraint.superConstructor.call(this, var1, var2, strength);
+}
+
+EqualityConstraint.inheritsFrom(...args(BinaryConstraint));
+
+/**
+ * Enforce this constraint. Assume that it is satisfied.
+ */
+EqualityConstraint.prototype.execute = function () {
+  this.output(...args()).value = this.input(...args()).value;
+}
+
+/* --- *
+ * V a r i a b l e
+ * --- */
+
+/**
+ * A constrained variable. In addition to its value, it maintains the
+ * structure of the constraint graph, the current dataflow graph, and
+ * various parameters of interest to the DeltaBlue incremental
+ * constraint solver.
+ **/
+function Variable(name, initialValue) {
+  this.value = initialValue || 0;
+  this.constraints = new OrderedCollection(...args());
+  this.determinedBy = null;
+  this.mark = 0;
+  this.walkStrength = Strength.WEAKEST;
+  this.stay = true;
+  this.name = name;
+}
+
+/**
+ * Add the given constraint to the set of all constraints that refer
+ * to this variable.
+ */
+Variable.prototype.addConstraint = function (c) {
+  this.constraints.add(...args(c));
+}
+
+/**
+ * Removes all traces of c from this variable.
+ */
+Variable.prototype.removeConstraint = function (c) {
+  this.constraints.remove(...args(c));
+  if (this.determinedBy == c) this.determinedBy = null;
+}
+
+/* --- *
+ * P l a n n e r
+ * --- */
+
+/**
+ * The DeltaBlue planner
+ */
+function Planner() {
+  this.currentMark = 0;
+}
+
+/**
+ * Attempt to satisfy the given constraint and, if successful,
+ * incrementally update the dataflow graph.  Details: If satisfying
+ * the constraint is successful, it may override a weaker constraint
+ * on its output. The algorithm attempts to resatisfy that
+ * constraint using some other method. This process is repeated
+ * until either a) it reaches a variable that was not previously
+ * determined by any constraint or b) it reaches a constraint that
+ * is too weak to be satisfied using any of its methods. The
+ * variables of constraints that have been processed are marked with
+ * a unique mark value so that we know where we've been. This allows
+ * the algorithm to avoid getting into an infinite loop even if the
+ * constraint graph has an inadvertent cycle.
+ */
+Planner.prototype.incrementalAdd = function (c) {
+  var mark = this.newMark(...args());
+  var overridden = c.satisfy(...args(mark));
+  while (overridden != null)
+    overridden = overridden.satisfy(...args(mark));
+}
+
+/**
+ * Entry point for retracting a constraint. Remove the given
+ * constraint and incrementally update the dataflow graph.
+ * Details: Retracting the given constraint may allow some currently
+ * unsatisfiable downstream constraint to be satisfied. We therefore collect
+ * a list of unsatisfied downstream constraints and attempt to
+ * satisfy each one in turn. This list is traversed by constraint
+ * strength, strongest first, as a heuristic for avoiding
+ * unnecessarily adding and then overriding weak constraints.
+ * Assume: c is satisfied.
+ */
+Planner.prototype.incrementalRemove = function (c) {
+  var out = c.output(...args());
+  c.markUnsatisfied(...args());
+  c.removeFromGraph(...args());
+  var unsatisfied = this.removePropagateFrom(...args(out));
+  var strength = Strength.REQUIRED;
+  do {
+    for (var i = 0; i < unsatisfied.size(...args()); i++) {
+      var u = unsatisfied.at(...args(i));
+      if (u.strength == strength)
+        this.incrementalAdd(...args(u));
+    }
+    strength = strength.nextWeaker(...args());
+  } while (strength != Strength.WEAKEST);
+}
+
+/**
+ * Select a previously unused mark value.
+ */
+Planner.prototype.newMark = function () {
+  return ++this.currentMark;
+}
+
+/**
+ * Extract a plan for resatisfaction starting from the given source
+ * constraints, usually a set of input constraints. This method
+ * assumes that stay optimization is desired; the plan will contain
+ * only constraints whose output variables are not stay. Constraints
+ * that do no computation, such as stay and edit constraints, are
+ * not included in the plan.
+ * Details: The outputs of a constraint are marked when it is added
+ * to the plan under construction. A constraint may be appended to
+ * the plan when all its input variables are known. A variable is
+ * known if either a) the variable is marked (indicating that it has
+ * been computed by a constraint appearing earlier in the plan), b)
+ * the variable is 'stay' (i.e. it is a constant at plan execution
+ * time), or c) the variable is not determined by any
+ * constraint. The last provision is for past states of history
+ * variables, which are not stay but which are also not computed by
+ * any constraint.
+ * Assume: sources are all satisfied.
+ */
+Planner.prototype.makePlan = function (sources) {
+  var mark = this.newMark(...args());
+  var plan = new Plan(...args());
+  var todo = sources;
+  while (todo.size(...args()) > 0) {
+    var c = todo.removeFirst(...args());
+    if (c.output(...args()).mark != mark && c.inputsKnown(...args(mark))) {
+      plan.addConstraint(...args(c));
+      c.output(...args()).mark = mark;
+      this.addConstraintsConsumingTo(...args(c.output(...args()), todo));
+    }
+  }
+  return plan;
+}
+
+/**
+ * Extract a plan for resatisfying starting from the output of the
+ * given constraints, usually a set of input constraints.
+ */
+Planner.prototype.extractPlanFromConstraints = function (constraints) {
+  var sources = new OrderedCollection(...args());
+  for (var i = 0; i < constraints.size(...args()); i++) {
+    var c = constraints.at(...args(i));
+    if (c.isInput(...args()) && c.isSatisfied(...args()))
+      // not in plan already and eligible for inclusion
+      sources.add(...args(c));
+  }
+  return this.makePlan(...args(sources));
+}
+
+/**
+ * Recompute the walkabout strengths and stay flags of all variables
+ * downstream of the given constraint and recompute the actual
+ * values of all variables whose stay flag is true. If a cycle is
+ * detected, remove the given constraint and answer
+ * false. Otherwise, answer true.
+ * Details: Cycles are detected when a marked variable is
+ * encountered downstream of the given constraint. The sender is
+ * assumed to have marked the inputs of the given constraint with
+ * the given mark. Thus, encountering a marked node downstream of
+ * the output constraint means that there is a path from the
+ * constraint's output to one of its inputs.
+ */
+Planner.prototype.addPropagate = function (c, mark) {
+  var todo = new OrderedCollection(...args());
+  todo.add(...args(c));
+  while (todo.size(...args()) > 0) {
+    var d = todo.removeFirst(...args());
+    if (d.output(...args()).mark == mark) {
+      this.incrementalRemove(...args(c));
+      return false;
+    }
+    d.recalculate(...args());
+    this.addConstraintsConsumingTo(...args(d.output(...args()), todo));
+  }
+  return true;
+}
+
+
+/**
+ * Update the walkabout strengths and stay flags of all variables
+ * downstream of the given constraint. Answer a collection of
+ * unsatisfied constraints sorted in order of decreasing strength.
+ */
+Planner.prototype.removePropagateFrom = function (out) {
+  out.determinedBy = null;
+  out.walkStrength = Strength.WEAKEST;
+  out.stay = true;
+  var unsatisfied = new OrderedCollection(...args());
+  var todo = new OrderedCollection(...args());
+  todo.add(...args(out));
+  while (todo.size(...args()) > 0) {
+    var v = todo.removeFirst(...args());
+    for (var i = 0; i < v.constraints.size(...args()); i++) {
+      var c = v.constraints.at(...args(i));
+      if (!c.isSatisfied(...args()))
+        unsatisfied.add(...args(c));
+    }
+    var determining = v.determinedBy;
+    for (var i = 0; i < v.constraints.size(...args()); i++) {
+      var next = v.constraints.at(...args(i));
+      if (next != determining && next.isSatisfied(...args())) {
+        next.recalculate(...args());
+        todo.add(...args(next.output(...args())));
+      }
+    }
+  }
+  return unsatisfied;
+}
+
+Planner.prototype.addConstraintsConsumingTo = function (v, coll) {
+  var determining = v.determinedBy;
+  var cc = v.constraints;
+  for (var i = 0; i < cc.size(...args()); i++) {
+    var c = cc.at(...args(i));
+    if (c != determining && c.isSatisfied(...args()))
+      coll.add(...args(c));
+  }
+}
+
+/* --- *
+ * P l a n
+ * --- */
+
+/**
+ * A Plan is an ordered list of constraints to be executed in sequence
+ * to resatisfy all currently satisfiable constraints in the face of
+ * one or more changing inputs.
+ */
+function Plan() {
+  this.v = new OrderedCollection(...args());
+}
+
+Plan.prototype.addConstraint = function (c) {
+  this.v.add(...args(c));
+}
+
+Plan.prototype.size = function () {
+  return this.v.size(...args());
+}
+
+Plan.prototype.constraintAt = function (index) {
+  return this.v.at(...args(index));
+}
+
+Plan.prototype.execute = function () {
+  for (var i = 0; i < this.size(...args()); i++) {
+    var c = this.constraintAt(...args(i));
+    c.execute(...args());
+  }
+}
+
+/* --- *
+ * M a i n
+ * --- */
+
+/**
+ * This is the standard DeltaBlue benchmark. A long chain of equality
+ * constraints is constructed with a stay constraint on one end. An
+ * edit constraint is then added to the opposite end and the time is
+ * measured for adding and removing this constraint, and extracting
+ * and executing a constraint satisfaction plan. There are two cases.
+ * In case 1, the added constraint is stronger than the stay
+ * constraint and values must propagate down the entire length of the
+ * chain. In case 2, the added constraint is weaker than the stay
+ * constraint so it cannot be accommodated. The cost in this case is,
+ * of course, very low. Typical situations lie somewhere between these
+ * two extremes.
+ */
+function chainTest(n) {
+  planner = new Planner(...args());
+  var prev = null, first = null, last = null;
+
+  // Build chain of n equality constraints
+  for (var i = 0; i <= n; i++) {
+    var name = "v" + i;
+    var v = new Variable(...args(name));
+    if (prev != null)
+      new EqualityConstraint(...args(prev, v, Strength.REQUIRED));
+    if (i == 0) first = v;
+    if (i == n) last = v;
+    prev = v;
+  }
+
+  new StayConstraint(...args(last, Strength.STRONG_DEFAULT));
+  var edit = new EditConstraint(...args(first, Strength.PREFERRED));
+  var edits = new OrderedCollection(...args());
+  edits.add(...args(edit));
+  var plan = planner.extractPlanFromConstraints(...args(edits));
+  for (var i = 0; i < 100; i++) {
+    first.value = i;
+    plan.execute(...args());
+    if (last.value != i)
+      alert(...args("Chain test failed."));
+  }
+}
+
+/**
+ * This test constructs two sets of variables related to each
+ * other by a simple linear transformation (scale and offset). The
+ * time is measured to change a variable on either side of the
+ * mapping and to change the scale and offset factors.
+ */
+function projectionTest(n) {
+  planner = new Planner(...args());
+  var scale = new Variable(...args("scale", 10));
+  var offset = new Variable(...args("offset", 1000));
+  var src = null, dst = null;
+
+  var dests = new OrderedCollection(...args());
+  for (var i = 0; i < n; i++) {
+    src = new Variable(...args("src" + i, i));
+    dst = new Variable(...args("dst" + i, i));
+    dests.add(...args(dst));
+    new StayConstraint(...args(src, Strength.NORMAL));
+    new ScaleConstraint(...args(src, scale, offset, dst, Strength.REQUIRED));
+  }
+
+  change(...args(src, 17));
+  if (dst.value != 1170) alert(...args("Projection 1 failed"));
+  change(...args(dst, 1050));
+  if (src.value != 5) alert(...args("Projection 2 failed"));
+  change(...args(scale, 5));
+  for (var i = 0; i < n - 1; i++) {
+    if (dests.at(...args(i)).value != i * 5 + 1000)
+      alert(...args("Projection 3 failed"));
+  }
+  change(...args(offset, 2000));
+  for (var i = 0; i < n - 1; i++) {
+    if (dests.at(...args(i)).value != i * 5 + 2000)
+      alert(...args("Projection 4 failed"));
+  }
+}
+
+function change(v, newValue) {
+  var edit = new EditConstraint(...args(v, Strength.PREFERRED));
+  var edits = new OrderedCollection(...args());
+  edits.add(...args(edit));
+  var plan = planner.extractPlanFromConstraints(...args(edits));
+  for (var i = 0; i < 10; i++) {
+    v.value = newValue;
+    plan.execute(...args());
+  }
+  edit.destroyConstraint(...args());
+}
+
+// Global variable holding the current planner.
+var planner = null;
+
+function deltaBlue() {
+  chainTest(...args(100));
+  projectionTest(...args(100));
+}
+
+for (var i = 0; i < 30; ++i)
+    deltaBlue(...args());
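The benchmark above spreads an `args(…)` wrapper around every argument list, which forces each call site through the engine's varargs path (the new CallVarargs/LoadVarargs machinery) without changing what the program computes. The helper itself is defined earlier in the test file and is not shown in this hunk; a hypothetical pass-through sketch of it would look like:

```javascript
// Hypothetical stand-in for the test's `args` helper: return the
// arguments as-is. Since `arguments` is iterable, `f(...args(a, b))`
// computes the same result as `f(a, b)` while exercising varargs calls.
function args() {
  return arguments;
}

var direct = Math.max(1, 5, 3);               // ordinary call
var viaVarargs = Math.max(...args(1, 5, 3));  // same result, varargs path
```

Both calls must agree; the varargs rewrite is purely a change in how the arguments reach the callee.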
index bd18c1e..fda72c9 100644 (file)
@@ -845,6 +845,7 @@ if (ENABLE_FTL_JIT)
         ftl/FTLJITFinalizer.cpp
         ftl/FTLJSCall.cpp
         ftl/FTLJSCallBase.cpp
+        ftl/FTLJSCallVarargs.cpp
         ftl/FTLLink.cpp
         ftl/FTLLocation.cpp
         ftl/FTLLowerDFGToLLVM.cpp
index fd30ebd..0a47b84 100644 (file)
@@ -1,3 +1,314 @@
+2015-02-18  Filip Pizlo  <fpizlo@apple.com>
+
+        DFG should really support varargs
+        https://bugs.webkit.org/show_bug.cgi?id=141332
+
+        Reviewed by Oliver Hunt.
+        
+        This adds comprehensive vararg call support to the DFG and FTL compilers. Previously, if a
+        function had a varargs call, then it could only be compiled if that varargs call was just
+        forwarding arguments and we were inlining the function rather than compiling it directly. Also,
+        only varargs calls were dealt with; varargs constructs were not.
+        
+        This lifts all of those restrictions. Every varargs call or construct can now be compiled by both
+        the DFG and the FTL. Those calls can also be inlined, too - provided that profiling gives us a
+        sensible bound on arguments list length. When we inline a varargs call, the act of loading the
+        varargs is now made explicit in IR. I believe that we have enough IR machinery in place that we
+        would be able to do the arguments forwarding optimization as an IR transformation. This patch
+        doesn't implement that yet, and keeps the old bytecode-based varargs argument forwarding
+        optimization for now.
+        
+        There are three major IR features introduced in this patch:
+        
+        CallVarargs/ConstructVarargs: these are like Call/Construct except that they take an arguments
+        array rather than a list of arguments. Currently, they splat this arguments array onto the stack
+        using the same basic technique as the baseline JIT has always done. Except, these nodes indicate
+        that we are not interested in doing the non-escaping "arguments" optimization.
+        
+        CallForwardVarargs: this is a form of CallVarargs that just does the non-escaping "arguments"
+        optimization, aka forwarding arguments. It's somewhat lazy that this doesn't include
+        ConstructForwardVarargs, but the reason is that once we eliminate the lazy tear-off for
+        arguments, this whole thing will have to be tweaked - and for now forwarding on construct is just
+        not important in benchmarks. ConstructVarargs will still do forwarding, just not inlined.
+        
+        LoadVarargs: loads all elements out of an array onto the stack in a manner suitable for a varargs
+        call. This is used only when a varargs call (or construct) was inlined. The bytecode parser will
+        make room on the stack for the arguments, and will use LoadVarargs to put those arguments into
+        place.
+        
+        In the future, we can consider adding strength reductions like:
+        
+        - If CallVarargs/ConstructVarargs see an array of known size with known elements, turn them into
+          Call/Construct.
+        
+        - If CallVarargs/ConstructVarargs are passed an unmodified, unescaped Arguments object, then
+          turn them into CallForwardVarargs/ConstructForwardVarargs.
+        
+        - If LoadVarargs sees an array of known size, then turn it into a sequence of GetByVals and
+          PutLocals.
+        
+        - If LoadVarargs sees an unmodified, unescaped Arguments object, then turn it into something like
+          LoadForwardVarargs.
+        
+        - If CallVarargs/ConstructVarargs/LoadVarargs see the result of a splice (or other Array
+          prototype function), then do the splice and varargs loading in one go (maybe via a new node
+          type).
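To make the three IR features concrete, here is an illustrative sketch (not code from this patch) of the source-level call shapes that, per the description above, correspond to each node kind:

```javascript
function sum(a, b, c) { return a + b + c; }

// Call/Construct: a plain call with an explicit argument list.
var direct = sum(1, 2, 3);

// CallVarargs/ConstructVarargs: the arguments come from an array whose
// length is only known at runtime; the array is splatted onto the stack.
var list = [1, 2, 3];
var splatted = sum.apply(null, list);

// CallForwardVarargs: an unmodified, unescaped `arguments` object is
// forwarded straight through, which is the case the non-escaping
// "arguments" (forwarding) optimization targets.
function forward() {
  return sum.apply(null, arguments);
}
var forwarded = forward(4, 5, 6);
```

The strength reductions listed above would, for example, turn the `splatted` call into a plain Call when the array's size and elements are provably known.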
+
+        * CMakeLists.txt:
+        * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * assembler/MacroAssembler.h:
+        (JSC::MacroAssembler::rshiftPtr):
+        (JSC::MacroAssembler::urshiftPtr):
+        * assembler/MacroAssemblerARM64.h:
+        (JSC::MacroAssemblerARM64::urshift64):
+        * assembler/MacroAssemblerX86_64.h:
+        (JSC::MacroAssemblerX86_64::urshift64):
+        * assembler/X86Assembler.h:
+        (JSC::X86Assembler::shrq_i8r):
+        * bytecode/CallLinkInfo.h:
+        (JSC::CallLinkInfo::CallLinkInfo):
+        * bytecode/CallLinkStatus.cpp:
+        (JSC::CallLinkStatus::computeFor):
+        (JSC::CallLinkStatus::setProvenConstantCallee):
+        (JSC::CallLinkStatus::dump):
+        * bytecode/CallLinkStatus.h:
+        (JSC::CallLinkStatus::maxNumArguments):
+        (JSC::CallLinkStatus::setIsProved): Deleted.
+        * bytecode/CodeOrigin.cpp:
+        (WTF::printInternal):
+        * bytecode/CodeOrigin.h:
+        (JSC::InlineCallFrame::varargsKindFor):
+        (JSC::InlineCallFrame::specializationKindFor):
+        (JSC::InlineCallFrame::isVarargs):
+        (JSC::InlineCallFrame::isNormalCall): Deleted.
+        * bytecode/ExitKind.cpp:
+        (JSC::exitKindToString):
+        * bytecode/ExitKind.h:
+        * bytecode/ValueRecovery.cpp:
+        (JSC::ValueRecovery::dumpInContext):
+        * dfg/DFGAbstractInterpreterInlines.h:
+        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
+        * dfg/DFGArgumentsSimplificationPhase.cpp:
+        (JSC::DFG::ArgumentsSimplificationPhase::run):
+        * dfg/DFGByteCodeParser.cpp:
+        (JSC::DFG::ByteCodeParser::flush):
+        (JSC::DFG::ByteCodeParser::addCall):
+        (JSC::DFG::ByteCodeParser::handleCall):
+        (JSC::DFG::ByteCodeParser::handleVarargsCall):
+        (JSC::DFG::ByteCodeParser::emitFunctionChecks):
+        (JSC::DFG::ByteCodeParser::inliningCost):
+        (JSC::DFG::ByteCodeParser::inlineCall):
+        (JSC::DFG::ByteCodeParser::attemptToInlineCall):
+        (JSC::DFG::ByteCodeParser::handleInlining):
+        (JSC::DFG::ByteCodeParser::handleMinMax):
+        (JSC::DFG::ByteCodeParser::handleIntrinsic):
+        (JSC::DFG::ByteCodeParser::handleTypedArrayConstructor):
+        (JSC::DFG::ByteCodeParser::handleConstantInternalFunction):
+        (JSC::DFG::ByteCodeParser::parseBlock):
+        (JSC::DFG::ByteCodeParser::removeLastNodeFromGraph): Deleted.
+        (JSC::DFG::ByteCodeParser::undoFunctionChecks): Deleted.
+        * dfg/DFGCapabilities.cpp:
+        (JSC::DFG::capabilityLevel):
+        * dfg/DFGCapabilities.h:
+        (JSC::DFG::functionCapabilityLevel):
+        (JSC::DFG::mightCompileFunctionFor):
+        * dfg/DFGClobberize.h:
+        (JSC::DFG::clobberize):
+        * dfg/DFGCommon.cpp:
+        (WTF::printInternal):
+        * dfg/DFGCommon.h:
+        (JSC::DFG::canInline):
+        (JSC::DFG::leastUpperBound):
+        * dfg/DFGDoesGC.cpp:
+        (JSC::DFG::doesGC):
+        * dfg/DFGFixupPhase.cpp:
+        (JSC::DFG::FixupPhase::fixupNode):
+        * dfg/DFGGraph.cpp:
+        (JSC::DFG::Graph::dump):
+        (JSC::DFG::Graph::dumpBlockHeader):
+        (JSC::DFG::Graph::isLiveInBytecode):
+        (JSC::DFG::Graph::valueProfileFor):
+        (JSC::DFG::Graph::methodOfGettingAValueProfileFor):
+        * dfg/DFGGraph.h:
+        (JSC::DFG::Graph::valueProfileFor): Deleted.
+        (JSC::DFG::Graph::methodOfGettingAValueProfileFor): Deleted.
+        * dfg/DFGJITCompiler.cpp:
+        (JSC::DFG::JITCompiler::compileExceptionHandlers):
+        (JSC::DFG::JITCompiler::link):
+        * dfg/DFGMayExit.cpp:
+        (JSC::DFG::mayExit):
+        * dfg/DFGNode.h:
+        (JSC::DFG::Node::hasCallVarargsData):
+        (JSC::DFG::Node::callVarargsData):
+        (JSC::DFG::Node::hasLoadVarargsData):
+        (JSC::DFG::Node::loadVarargsData):
+        (JSC::DFG::Node::hasHeapPrediction):
+        * dfg/DFGNodeType.h:
+        * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
+        (JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
+        * dfg/DFGOSRExitCompilerCommon.cpp:
+        (JSC::DFG::reifyInlinedCallFrames):
+        * dfg/DFGOperations.cpp:
+        * dfg/DFGOperations.h:
+        * dfg/DFGPlan.cpp:
+        (JSC::DFG::dumpAndVerifyGraph):
+        (JSC::DFG::Plan::compileInThreadImpl):
+        * dfg/DFGPreciseLocalClobberize.h:
+        (JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
+        (JSC::DFG::PreciseLocalClobberizeAdaptor::writeTop):
+        * dfg/DFGPredictionPropagationPhase.cpp:
+        (JSC::DFG::PredictionPropagationPhase::propagate):
+        * dfg/DFGSSAConversionPhase.cpp:
+        * dfg/DFGSafeToExecute.h:
+        (JSC::DFG::safeToExecute):
+        * dfg/DFGSpeculativeJIT.h:
+        (JSC::DFG::SpeculativeJIT::isFlushed):
+        (JSC::DFG::SpeculativeJIT::callOperation):
+        * dfg/DFGSpeculativeJIT32_64.cpp:
+        (JSC::DFG::SpeculativeJIT::emitCall):
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGSpeculativeJIT64.cpp:
+        (JSC::DFG::SpeculativeJIT::emitCall):
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGStackLayoutPhase.cpp:
+        (JSC::DFG::StackLayoutPhase::run):
+        (JSC::DFG::StackLayoutPhase::assign):
+        * dfg/DFGStrengthReductionPhase.cpp:
+        (JSC::DFG::StrengthReductionPhase::handleNode):
+        * dfg/DFGTypeCheckHoistingPhase.cpp:
+        (JSC::DFG::TypeCheckHoistingPhase::run):
+        * dfg/DFGValidate.cpp:
+        (JSC::DFG::Validate::validateCPS):
+        * ftl/FTLAbbreviations.h:
+        (JSC::FTL::functionType):
+        (JSC::FTL::buildCall):
+        * ftl/FTLCapabilities.cpp:
+        (JSC::FTL::canCompile):
+        * ftl/FTLCompile.cpp:
+        (JSC::FTL::mmAllocateDataSection):
+        * ftl/FTLInlineCacheSize.cpp:
+        (JSC::FTL::sizeOfCall):
+        (JSC::FTL::sizeOfCallVarargs):
+        (JSC::FTL::sizeOfCallForwardVarargs):
+        (JSC::FTL::sizeOfConstructVarargs):
+        (JSC::FTL::sizeOfIn):
+        (JSC::FTL::sizeOfICFor):
+        (JSC::FTL::sizeOfCheckIn): Deleted.
+        * ftl/FTLInlineCacheSize.h:
+        * ftl/FTLIntrinsicRepository.h:
+        * ftl/FTLJSCall.cpp:
+        (JSC::FTL::JSCall::JSCall):
+        * ftl/FTLJSCallBase.cpp:
+        * ftl/FTLJSCallBase.h:
+        * ftl/FTLJSCallVarargs.cpp: Added.
+        (JSC::FTL::JSCallVarargs::JSCallVarargs):
+        (JSC::FTL::JSCallVarargs::numSpillSlotsNeeded):
+        (JSC::FTL::JSCallVarargs::emit):
+        (JSC::FTL::JSCallVarargs::link):
+        * ftl/FTLJSCallVarargs.h: Added.
+        (JSC::FTL::JSCallVarargs::node):
+        (JSC::FTL::JSCallVarargs::stackmapID):
+        (JSC::FTL::JSCallVarargs::operator<):
+        * ftl/FTLLowerDFGToLLVM.cpp:
+        (JSC::FTL::LowerDFGToLLVM::lower):
+        (JSC::FTL::LowerDFGToLLVM::compileNode):
+        (JSC::FTL::LowerDFGToLLVM::compileGetMyArgumentsLength):
+        (JSC::FTL::LowerDFGToLLVM::compileGetMyArgumentByVal):
+        (JSC::FTL::LowerDFGToLLVM::compileCallOrConstructVarargs):
+        (JSC::FTL::LowerDFGToLLVM::compileLoadVarargs):
+        (JSC::FTL::LowerDFGToLLVM::compileIn):
+        (JSC::FTL::LowerDFGToLLVM::emitStoreBarrier):
+        (JSC::FTL::LowerDFGToLLVM::vmCall):
+        (JSC::FTL::LowerDFGToLLVM::vmCallNoExceptions):
+        (JSC::FTL::LowerDFGToLLVM::callCheck):
+        * ftl/FTLOutput.h:
+        (JSC::FTL::Output::call):
+        * ftl/FTLState.cpp:
+        (JSC::FTL::State::State):
+        * ftl/FTLState.h:
+        * interpreter/Interpreter.cpp:
+        (JSC::sizeOfVarargs):
+        (JSC::sizeFrameForVarargs):
+        * interpreter/Interpreter.h:
+        * interpreter/StackVisitor.cpp:
+        (JSC::StackVisitor::readInlinedFrame):
+        * jit/AssemblyHelpers.cpp:
+        (JSC::AssemblyHelpers::emitExceptionCheck):
+        * jit/AssemblyHelpers.h:
+        (JSC::AssemblyHelpers::addressFor):
+        (JSC::AssemblyHelpers::calleeFrameSlot):
+        (JSC::AssemblyHelpers::calleeArgumentSlot):
+        (JSC::AssemblyHelpers::calleeFrameTagSlot):
+        (JSC::AssemblyHelpers::calleeFramePayloadSlot):
+        (JSC::AssemblyHelpers::calleeArgumentTagSlot):
+        (JSC::AssemblyHelpers::calleeArgumentPayloadSlot):
+        (JSC::AssemblyHelpers::calleeFrameCallerFrame):
+        (JSC::AssemblyHelpers::selectScratchGPR):
+        * jit/CCallHelpers.h:
+        (JSC::CCallHelpers::setupArgumentsWithExecState):
+        * jit/GPRInfo.h:
+        * jit/JIT.cpp:
+        (JSC::JIT::privateCompile):
+        * jit/JIT.h:
+        * jit/JITCall.cpp:
+        (JSC::JIT::compileSetupVarargsFrame):
+        (JSC::JIT::compileOpCall):
+        * jit/JITCall32_64.cpp:
+        (JSC::JIT::compileSetupVarargsFrame):
+        (JSC::JIT::compileOpCall):
+        * jit/JITOperations.h:
+        * jit/SetupVarargsFrame.cpp:
+        (JSC::emitSetupVarargsFrameFastCase):
+        * jit/SetupVarargsFrame.h:
+        * runtime/Arguments.h:
+        (JSC::Arguments::create):
+        (JSC::Arguments::registerArraySizeInBytes):
+        (JSC::Arguments::finishCreation):
+        * runtime/Options.h:
+        * tests/stress/construct-varargs-inline-smaller-Foo.js: Added.
+        (Foo):
+        (bar):
+        (checkEqual):
+        (test):
+        * tests/stress/construct-varargs-inline.js: Added.
+        (Foo):
+        (bar):
+        (checkEqual):
+        (test):
+        * tests/stress/construct-varargs-no-inline.js: Added.
+        (Foo):
+        (bar):
+        (checkEqual):
+        (test):
+        * tests/stress/get-argument-by-val-in-inlined-varargs-call-out-of-bounds.js: Added.
+        (foo):
+        (bar):
+        * tests/stress/get-argument-by-val-safe-in-inlined-varargs-call-out-of-bounds.js: Added.
+        (foo):
+        (bar):
+        * tests/stress/get-my-argument-by-val-creates-arguments.js: Added.
+        (blah):
+        (foo):
+        (bar):
+        (checkEqual):
+        (test):
+        * tests/stress/load-varargs-then-inlined-call-exit-in-foo.js: Added.
+        (foo):
+        (bar):
+        (checkEqual):
+        * tests/stress/load-varargs-then-inlined-call-inlined.js: Added.
+        (foo):
+        (bar):
+        (baz):
+        (checkEqual):
+        (test):
+        * tests/stress/load-varargs-then-inlined-call.js: Added.
+        (foo):
+        (bar):
+        (checkEqual):
+        (test):
+
 2015-02-17  Michael Saboff  <msaboff@apple.com>
 
         Unreviewed, Restoring the C LOOP insta-crash fix in r180184.
index dd4183f..2a6795b 100644 (file)
     <ClCompile Include="..\ftl\FTLJITFinalizer.cpp" />
     <ClCompile Include="..\ftl\FTLJSCall.cpp" />
     <ClCompile Include="..\ftl\FTLJSCallBase.cpp" />
+    <ClCompile Include="..\ftl\FTLJSCallVarargs.cpp" />
     <ClCompile Include="..\ftl\FTLLink.cpp" />
     <ClCompile Include="..\ftl\FTLLocation.cpp" />
     <ClCompile Include="..\ftl\FTLLowerDFGToLLVM.cpp" />
     <ClInclude Include="..\ftl\FTLJITFinalizer.h" />
     <ClInclude Include="..\ftl\FTLJSCall.h" />
     <ClInclude Include="..\ftl\FTLJSCallBase.h" />
+    <ClInclude Include="..\ftl\FTLJSCallVarargs.h" />
     <ClInclude Include="..\ftl\FTLLink.h" />
     <ClInclude Include="..\ftl\FTLLocation.h" />
     <ClInclude Include="..\ftl\FTLLowerDFGToLLVM.h" />
index 4228724..439012a 100644 (file)
                0FCEFAE0180738C000472CE4 /* FTLLocation.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FCEFADE180738C000472CE4 /* FTLLocation.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FD1202F1A8AED12000F5280 /* FTLJSCallBase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD1202D1A8AED12000F5280 /* FTLJSCallBase.cpp */; };
                0FD120301A8AED12000F5280 /* FTLJSCallBase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD1202E1A8AED12000F5280 /* FTLJSCallBase.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0FD120331A8C85BD000F5280 /* FTLJSCallVarargs.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD120311A8C85BD000F5280 /* FTLJSCallVarargs.cpp */; };
+               0FD120341A8C85BD000F5280 /* FTLJSCallVarargs.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD120321A8C85BD000F5280 /* FTLJSCallVarargs.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FD2C92416D01EE900C7803F /* StructureInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD2C92316D01EE900C7803F /* StructureInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FD3C82614115D4000FD81CB /* DFGDriver.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD3C82014115CF800FD81CB /* DFGDriver.cpp */; };
                0FD3C82814115D4F00FD81CB /* DFGDriver.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD3C82214115D0E00FD81CB /* DFGDriver.h */; };
                0FCEFADE180738C000472CE4 /* FTLLocation.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLLocation.h; path = ftl/FTLLocation.h; sourceTree = "<group>"; };
                0FD1202D1A8AED12000F5280 /* FTLJSCallBase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLJSCallBase.cpp; path = ftl/FTLJSCallBase.cpp; sourceTree = "<group>"; };
                0FD1202E1A8AED12000F5280 /* FTLJSCallBase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLJSCallBase.h; path = ftl/FTLJSCallBase.h; sourceTree = "<group>"; };
+               0FD120311A8C85BD000F5280 /* FTLJSCallVarargs.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLJSCallVarargs.cpp; path = ftl/FTLJSCallVarargs.cpp; sourceTree = "<group>"; };
+               0FD120321A8C85BD000F5280 /* FTLJSCallVarargs.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLJSCallVarargs.h; path = ftl/FTLJSCallVarargs.h; sourceTree = "<group>"; };
                0FD2C92316D01EE900C7803F /* StructureInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StructureInlines.h; sourceTree = "<group>"; };
                0FD3C82014115CF800FD81CB /* DFGDriver.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGDriver.cpp; path = dfg/DFGDriver.cpp; sourceTree = "<group>"; };
                0FD3C82214115D0E00FD81CB /* DFGDriver.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGDriver.h; path = dfg/DFGDriver.h; sourceTree = "<group>"; };
                                0F6B1CB4185FC9E900845D97 /* FTLJSCall.h */,
                                0FD1202D1A8AED12000F5280 /* FTLJSCallBase.cpp */,
                                0FD1202E1A8AED12000F5280 /* FTLJSCallBase.h */,
+                               0FD120311A8C85BD000F5280 /* FTLJSCallVarargs.cpp */,
+                               0FD120321A8C85BD000F5280 /* FTLJSCallVarargs.h */,
                                0F8F2B93172E049E007DBDA5 /* FTLLink.cpp */,
                                0F8F2B94172E049E007DBDA5 /* FTLLink.h */,
                                0FCEFADD180738C000472CE4 /* FTLLocation.cpp */,
                                6514F21918B3E1670098FF8B /* Bytecodes.h in Headers */,
                                65C0285D1717966800351E35 /* ARMv7DOpcode.h in Headers */,
                                0F8335B81639C1EA001443B5 /* ArrayAllocationProfile.h in Headers */,
+                               0FD120341A8C85BD000F5280 /* FTLJSCallVarargs.h in Headers */,
                                A7A8AF3517ADB5F3005AB174 /* ArrayBuffer.h in Headers */,
                                0FFC99D5184EE318009C10AB /* ArrayBufferNeuteringWatchpoint.h in Headers */,
                                A7A8AF3717ADB5F3005AB174 /* ArrayBufferView.h in Headers */,
                                A78A9774179738B8009DF744 /* DFGFailedFinalizer.cpp in Sources */,
                                A78A9776179738B8009DF744 /* DFGFinalizer.cpp in Sources */,
                                0F2BDC15151C5D4D00CD8910 /* DFGFixupPhase.cpp in Sources */,
+                               0FD120331A8C85BD000F5280 /* FTLJSCallVarargs.cpp in Sources */,
                                0F9D339617FFC4E60073C2BC /* DFGFlushedAt.cpp in Sources */,
                                A7D89CF717A0B8CC00773AD8 /* DFGFlushFormat.cpp in Sources */,
                                86EC9DC71328DF82002B2AD7 /* DFGGraph.cpp in Sources */,
index c70f2b7..fd4c5bb 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2008, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2008, 2012-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -471,6 +471,16 @@ public:
     {
         lshift32(trustedImm32ForShift(imm), srcDest);
     }
+    
+    void rshiftPtr(Imm32 imm, RegisterID srcDest)
+    {
+        rshift32(trustedImm32ForShift(imm), srcDest);
+    }
+
+    void urshiftPtr(Imm32 imm, RegisterID srcDest)
+    {
+        urshift32(trustedImm32ForShift(imm), srcDest);
+    }
 
     void negPtr(RegisterID dest)
     {
@@ -750,6 +760,16 @@ public:
         lshift64(trustedImm32ForShift(imm), srcDest);
     }
 
+    void rshiftPtr(Imm32 imm, RegisterID srcDest)
+    {
+        rshift64(trustedImm32ForShift(imm), srcDest);
+    }
+
+    void urshiftPtr(Imm32 imm, RegisterID srcDest)
+    {
+        urshift64(trustedImm32ForShift(imm), srcDest);
+    }
+
     void negPtr(RegisterID dest)
     {
         neg64(dest);
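
The new `rshiftPtr`/`urshiftPtr` helpers differ only in fill behavior: `rshiftPtr` lowers to an arithmetic (sign-propagating) shift, `urshiftPtr` to a logical (zero-filling) one. A minimal standalone C++ sketch of the distinction — illustration only, not WebKit code:

```cpp
#include <cstdint>

// Arithmetic right shift, as rshiftPtr/rshift64 emit: the sign bit is replicated.
// (Implementation-defined pre-C++20, but sign-propagating on the targets JSC supports.)
int64_t arithmeticShiftRight(int64_t value, unsigned amount)
{
    return value >> amount;
}

// Logical right shift, as urshiftPtr/urshift64 emit: vacated bits are zero-filled.
uint64_t logicalShiftRight(uint64_t value, unsigned amount)
{
    return value >> amount;
}
```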
index 0a6dcea..86d34bb 100644 (file)
@@ -689,6 +689,26 @@ public:
         urshift32(dest, imm, dest);
     }
 
+    void urshift64(RegisterID src, RegisterID shiftAmount, RegisterID dest)
+    {
+        m_assembler.lsr<64>(dest, src, shiftAmount);
+    }
+    
+    void urshift64(RegisterID src, TrustedImm32 imm, RegisterID dest)
+    {
+        m_assembler.lsr<64>(dest, src, imm.m_value & 0x3f);
+    }
+
+    void urshift64(RegisterID shiftAmount, RegisterID dest)
+    {
+        urshift64(dest, shiftAmount, dest);
+    }
+    
+    void urshift64(TrustedImm32 imm, RegisterID dest)
+    {
+        urshift64(dest, imm, dest);
+    }
+
     void xor32(RegisterID src, RegisterID dest)
     {
         xor32(dest, src, dest);
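
A note on the shift-amount masking above: for a 64-bit logical shift right, ARM64 honors the low six bits of the amount, so an immediate must be clamped with `0x3f`, not the `0x1f` used for 32-bit shifts — otherwise shifts of 32 through 63 are silently truncated. A small C++ model of the six-bit rule (not WebKit code):

```cpp
#include <cstdint>

// Models the masking a 64-bit lsr performs: only the low six bits (0x3f) of
// the shift amount are honored, unlike the five-bit (0x1f) mask for 32-bit shifts.
uint64_t lsr64(uint64_t value, unsigned amount)
{
    return value >> (amount & 0x3f);
}
```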
index a8243e2..920de74 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2008, 2012, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2008, 2012, 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -334,6 +334,11 @@ public:
         m_assembler.sarq_i8r(imm.m_value, dest);
     }
     
+    void urshift64(TrustedImm32 imm, RegisterID dest)
+    {
+        m_assembler.shrq_i8r(imm.m_value, dest);
+    }
+    
     void mul64(RegisterID src, RegisterID dest)
     {
         m_assembler.imulq_rr(src, dest);
index 7877eb7..e9ba4c5 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2008, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2008, 2012-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -881,6 +881,16 @@ public:
         }
     }
 
+    void shrq_i8r(int imm, RegisterID dst)
+    {
+        if (imm == 1)
+            m_formatter.oneByteOp64(OP_GROUP2_Ev1, GROUP2_OP_SHR, dst);
+        else {
+            m_formatter.oneByteOp64(OP_GROUP2_EvIb, GROUP2_OP_SHR, dst);
+            m_formatter.immediate8(imm);
+        }
+    }
+
     void shlq_i8r(int imm, RegisterID dst)
     {
         if (imm == 1)
index 040cdb6..50c0746 100644 (file)
@@ -61,6 +61,7 @@ struct CallLinkInfo : public BasicRawSentinelNode<CallLinkInfo> {
         , hasSeenShouldRepatch(false)
         , hasSeenClosure(false)
         , callType(None)
+        , maxNumArguments(0)
         , slowPathCount(0)
     {
     }
@@ -91,6 +92,7 @@ struct CallLinkInfo : public BasicRawSentinelNode<CallLinkInfo> {
     bool hasSeenClosure : 1;
     unsigned callType : 5; // CallType
     unsigned calleeGPR : 8;
+    uint8_t maxNumArguments; // Only used for varargs calls.
     uint32_t slowPathCount;
     CodeOrigin codeOrigin;
 
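Since `maxNumArguments` is a single byte, whatever site records observed argument counts has to saturate rather than wrap. That update site is not shown in this hunk; the following is a hypothetical helper (name and placement assumed, not from the patch) illustrating the invariant the field needs:

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical helper, not from the patch: record an observed varargs argument
// count into a byte-sized profile, saturating at 255 so a huge call cannot
// wrap around and report a misleadingly small bound.
void recordVarargsArgumentCount(uint8_t& maxNumArguments, unsigned observed)
{
    unsigned clamped = std::min(observed, 255u);
    maxNumArguments = static_cast<uint8_t>(std::max<unsigned>(maxNumArguments, clamped));
}
```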
index c8271e0..bf4618b 100644 (file)
@@ -129,7 +129,9 @@ CallLinkStatus CallLinkStatus::computeFor(
     // We don't really need this, but anytime we have to debug this code, it becomes indispensable.
     UNUSED_PARAM(profiledBlock);
     
-    return computeFromCallLinkInfo(locker, callLinkInfo);
+    CallLinkStatus result = computeFromCallLinkInfo(locker, callLinkInfo);
+    result.m_maxNumArguments = callLinkInfo.maxNumArguments;
+    return result;
 }
 
 CallLinkStatus CallLinkStatus::computeFromCallLinkInfo(
@@ -291,6 +293,13 @@ CallLinkStatus CallLinkStatus::computeFor(
     return computeFor(profiledBlock, codeOrigin.bytecodeIndex, baselineMap);
 }
 
+void CallLinkStatus::setProvenConstantCallee(CallVariant variant)
+{
+    m_variants = CallVariantList{ variant };
+    m_couldTakeSlowPath = false;
+    m_isProved = true;
+}
+
 bool CallLinkStatus::isClosureCall() const
 {
     for (unsigned i = m_variants.size(); i--;) {
@@ -322,6 +331,9 @@ void CallLinkStatus::dump(PrintStream& out) const
     
     if (!m_variants.isEmpty())
         out.print(comma, listDump(m_variants));
+    
+    if (m_maxNumArguments)
+        out.print(comma, "maxNumArguments = ", m_maxNumArguments);
 }
 
 } // namespace JSC
index 545c1bc..3ae2316 100644 (file)
@@ -68,12 +68,6 @@ public:
     {
     }
     
-    CallLinkStatus& setIsProved(bool isProved)
-    {
-        m_isProved = isProved;
-        return *this;
-    }
-    
     static CallLinkStatus computeFor(
         CodeBlock*, unsigned bytecodeIndex, const CallLinkInfoMap&);
 
@@ -108,6 +102,8 @@ public:
     static CallLinkStatus computeFor(
         CodeBlock*, CodeOrigin, const CallLinkInfoMap&, const ContextMap&);
     
+    void setProvenConstantCallee(CallVariant);
+    
     bool isSet() const { return !m_variants.isEmpty() || m_couldTakeSlowPath; }
     
     bool operator!() const { return !isSet(); }
@@ -123,6 +119,8 @@ public:
     
     bool isClosureCall() const; // Returns true if any callee is a closure call.
     
+    unsigned maxNumArguments() const { return m_maxNumArguments; }
+    
     void dump(PrintStream&) const;
     
 private:
@@ -137,6 +135,7 @@ private:
     CallVariantList m_variants;
     bool m_couldTakeSlowPath;
     bool m_isProved;
+    unsigned m_maxNumArguments;
 };
 
 } // namespace JSC
index 81b1e6a..6e1dd7d 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -206,6 +206,12 @@ void printInternal(PrintStream& out, JSC::InlineCallFrame::Kind kind)
     case JSC::InlineCallFrame::Construct:
         out.print("Construct");
         return;
+    case JSC::InlineCallFrame::CallVarargs:
+        out.print("CallVarargs");
+        return;
+    case JSC::InlineCallFrame::ConstructVarargs:
+        out.print("ConstructVarargs");
+        return;
     case JSC::InlineCallFrame::GetterCall:
         out.print("GetterCall");
         return;
index 03dd781..3a96d67 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -121,6 +121,8 @@ struct InlineCallFrame {
     enum Kind {
         Call,
         Construct,
+        CallVarargs,
+        ConstructVarargs,
         
         // For these, the stackOffset incorporates the argument count plus the true return PC
         // slot.
@@ -140,30 +142,48 @@ struct InlineCallFrame {
         return Call;
     }
     
+    static Kind varargsKindFor(CodeSpecializationKind kind)
+    {
+        switch (kind) {
+        case CodeForCall:
+            return CallVarargs;
+        case CodeForConstruct:
+            return ConstructVarargs;
+        }
+        RELEASE_ASSERT_NOT_REACHED();
+        return Call;
+    }
+    
     static CodeSpecializationKind specializationKindFor(Kind kind)
     {
         switch (kind) {
         case Call:
+        case CallVarargs:
         case GetterCall:
         case SetterCall:
             return CodeForCall;
         case Construct:
+        case ConstructVarargs:
             return CodeForConstruct;
         }
         RELEASE_ASSERT_NOT_REACHED();
         return CodeForCall;
     }
     
-    static bool isNormalCall(Kind kind)
+    static bool isVarargs(Kind kind)
     {
         switch (kind) {
-        case Call:
-        case Construct:
+        case CallVarargs:
+        case ConstructVarargs:
             return true;
         default:
             return false;
         }
     }
+    bool isVarargs() const
+    {
+        return isVarargs(static_cast<Kind>(kind));
+    }
     
     Vector<ValueRecovery> arguments; // Includes 'this'.
     WriteBarrier<ScriptExecutable> executable;
@@ -171,10 +191,11 @@ struct InlineCallFrame {
     CodeOrigin caller;
     BitVector capturedVars; // Indexed by the machine call frame's variable numbering.
 
-    signed stackOffset : 29;
-    unsigned kind : 2; // real type is Kind
+    signed stackOffset : 28;
+    unsigned kind : 3; // real type is Kind
     bool isClosureCall : 1; // If false then we know that callee/scope are constants and the DFG won't treat them as variables, i.e. they have to be recovered manually.
     VirtualRegister argumentsRegister; // This is only set if the code uses arguments. The unmodified arguments register follows the unmodifiedArgumentsRegister() convention (see CodeBlock.h).
+    VirtualRegister argumentCountRegister; // Only set when we inline a varargs call.
     
     // There is really no good notion of a "default" set of values for
     // InlineCallFrame's fields. This constructor is here just to reduce confusion if
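
Widening `kind` from 2 to 3 bits is forced by the two new enumerators: with `CallVarargs` and `ConstructVarargs`, `Kind` has six values, which no longer fit in two bits, so `stackOffset` gives up a bit. A standalone sketch of the packing (reusing the enumerator names from the diff; not the real `InlineCallFrame`):

```cpp
// Standalone illustration: six Kind enumerators need a 3-bit field
// (a 2-bit field only holds values 0-3).
enum Kind { Call, Construct, CallVarargs, ConstructVarargs, GetterCall, SetterCall };

static_assert(SetterCall <= 7, "six kinds fit in a 3-bit field");

struct PackedFrameBits {
    signed stackOffset : 28;  // shrank from 29 to make room
    unsigned kind : 3;        // real type is Kind
    bool isClosureCall : 1;
};
```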
index 87ad2ed..a3f8150 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -66,6 +66,8 @@ const char* exitKindToString(ExitKind kind)
         return "ArgumentsEscaped";
     case NotStringObject:
         return "NotStringObject";
+    case VarargsOverflow:
+        return "VarargsOverflow";
     case Uncountable:
         return "Uncountable";
     case UncountableInvalidation:
index 150135d..855a867 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -45,6 +45,7 @@ enum ExitKind : uint8_t {
     InadequateCoverage, // We exited because we ended up in code that didn't have profiling coverage.
     ArgumentsEscaped, // We exited because arguments escaped but we didn't expect them to.
     NotStringObject, // We exited because we shouldn't have attempted to optimize string object access.
+    VarargsOverflow, // We exited because a varargs call passed more arguments than we expected.
     Uncountable, // We exited for none of the above reasons, and we should not count it. Most uses of this should be viewed as a FIXME.
     UncountableInvalidation, // We exited because the code block was invalidated; this means that we've already counted the reasons why the code block was invalidated.
     WatchdogTimerFired, // We exited because we need to service the watchdog timer.
index b7de34b..29aa56f 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -92,25 +92,25 @@ void ValueRecovery::dumpInContext(PrintStream& out, DumpContext* context) const
         return;
 #endif
     case DisplacedInJSStack:
-        out.printf("*%d", virtualRegister().offset());
+        out.print("*", virtualRegister());
         return;
     case Int32DisplacedInJSStack:
-        out.printf("*int32(%d)", virtualRegister().offset());
+        out.print("*int32(", virtualRegister(), ")");
         return;
     case Int52DisplacedInJSStack:
-        out.printf("*int52(%d)", virtualRegister().offset());
+        out.print("*int52(", virtualRegister(), ")");
         return;
     case StrictInt52DisplacedInJSStack:
-        out.printf("*strictInt52(%d)", virtualRegister().offset());
+        out.print("*strictInt52(", virtualRegister(), ")");
         return;
     case DoubleDisplacedInJSStack:
-        out.printf("*double(%d)", virtualRegister().offset());
+        out.print("*double(", virtualRegister(), ")");
         return;
     case CellDisplacedInJSStack:
-        out.printf("*cell(%d)", virtualRegister().offset());
+        out.print("*cell(", virtualRegister(), ")");
         return;
     case BooleanDisplacedInJSStack:
-        out.printf("*bool(%d)", virtualRegister().offset());
+        out.print("*bool(", virtualRegister(), ")");
         return;
     case ArgumentsThatWereNotCreated:
         out.printf("arguments");
index 267d605..402b5cd 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -195,9 +195,20 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
     }
         
     case SetArgument:
-        // Assert that the state of arguments has been set.
-        ASSERT(!m_state.block()->valuesAtHead.operand(node->local()).isClear());
+        // Assert that the state of arguments has been set. SetArgument means that someone set
+        // the argument values out-of-band, and currently this always means setting to a
+        // non-clear value.
+        ASSERT(!m_state.variables().operand(node->local()).isClear());
         break;
+        
+    case LoadVarargs: {
+        clobberWorld(node->origin.semantic, clobberLimit);
+        LoadVarargsData* data = node->loadVarargsData();
+        m_state.variables().operand(data->count).setType(SpecInt32);
+        for (unsigned i = data->limit - 1; i--;)
+            m_state.variables().operand(data->start.offset() + i).makeHeapTop();
+        break;
+    }
             
     case BitAnd:
     case BitOr:
@@ -1325,7 +1336,8 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
         // the arguments a bit. Note that this is not sufficient to force constant folding
         // of GetMyArgumentsLength, because GetMyArgumentsLength is a clobbering operation.
         // We perform further optimizations on this later on.
-        if (node->origin.semantic.inlineCallFrame) {
+        if (node->origin.semantic.inlineCallFrame
+            && !node->origin.semantic.inlineCallFrame->isVarargs()) {
             setConstant(
                 node, jsNumber(node->origin.semantic.inlineCallFrame->arguments.size() - 1));
             m_state.setDidClobber(true); // Pretend that we clobbered to prevent constant folding.
@@ -1974,6 +1986,9 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
     case Construct:
     case NativeCall:
     case NativeConstruct:
+    case CallVarargs:
+    case CallForwardVarargs:
+    case ConstructVarargs:
         clobberWorld(node->origin.semantic, clobberLimit);
         forNode(node).makeHeapTop();
         break;
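
The `LoadVarargs` rule above clobbers the world, pins the count operand to `SpecInt32`, and makes every potential argument slot heap-top (completely unknown), since the arguments come from an arbitrary array. A toy model of that state transition — not DFG code, just the shape of the rule:

```cpp
#include <map>
#include <string>

// Toy abstract state keyed by operand number; values name abstract types.
using AbstractState = std::map<int, std::string>;

// Mirrors the LoadVarargs rule: the count operand becomes SpecInt32, and the
// limit - 1 argument slots starting at 'start' become HeapTop (anything).
void modelLoadVarargs(AbstractState& state, int countOperand, int start, unsigned limit)
{
    state[countOperand] = "SpecInt32";
    for (unsigned i = limit - 1; i--;) // same bound as the interpreter loop
        state[start + i] = "HeapTop";
}
```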
index 920c466..c98fad5 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2013, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -503,6 +503,8 @@ public:
                     NodeOrigin origin = node->origin;
                     if (!origin.semantic.inlineCallFrame)
                         break;
+                    if (origin.semantic.inlineCallFrame->isVarargs())
+                        break;
                     
                     // We know exactly what this will return. But only after we have checked
                     // that nobody has escaped our arguments.
index 85e2a1c..7ef4e7f 100644 (file)
@@ -170,7 +170,8 @@ private:
     }
 
     // Helper for min and max.
-    bool handleMinMax(int resultOperand, NodeType op, int registerOffset, int argumentCountIncludingThis);
+    template<typename ChecksFunctor>
+    bool handleMinMax(int resultOperand, NodeType op, int registerOffset, int argumentCountIncludingThis, const ChecksFunctor& insertChecks);
     
     // Handle calls. This resolves issues surrounding inlining and intrinsics.
     void handleCall(
@@ -182,20 +183,25 @@ private:
         Node* callTarget, int argCount, int registerOffset, CallLinkStatus);
     void handleCall(int result, NodeType op, CodeSpecializationKind, unsigned instructionSize, int callee, int argCount, int registerOffset);
     void handleCall(Instruction* pc, NodeType op, CodeSpecializationKind);
-    void emitFunctionChecks(CallVariant, Node* callTarget, int registerOffset, CodeSpecializationKind);
-    void undoFunctionChecks(CallVariant);
+    void handleVarargsCall(Instruction* pc, NodeType op, CodeSpecializationKind);
+    void emitFunctionChecks(CallVariant, Node* callTarget, VirtualRegister thisArgument);
     void emitArgumentPhantoms(int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind);
     unsigned inliningCost(CallVariant, int argumentCountIncludingThis, CodeSpecializationKind); // Return UINT_MAX if it's not an inlining candidate. By convention, intrinsics have a cost of 1.
     // Handle inlining. Return true if it succeeded, false if we need to plant a call.
-    bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind, SpeculatedType prediction);
+    bool handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus&, int registerOffset, VirtualRegister thisArgument, VirtualRegister argumentsArgument, unsigned argumentsOffset, int argumentCountIncludingThis, unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind, SpeculatedType prediction);
     enum CallerLinkability { CallerDoesNormalLinking, CallerLinksManually };
-    bool attemptToInlineCall(Node* callTargetNode, int resultOperand, CallVariant, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind, CallerLinkability, SpeculatedType prediction, unsigned& inliningBalance);
-    void inlineCall(Node* callTargetNode, int resultOperand, CallVariant, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind, CallerLinkability);
+    template<typename ChecksFunctor>
+    bool attemptToInlineCall(Node* callTargetNode, int resultOperand, CallVariant, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind, CallerLinkability, SpeculatedType prediction, unsigned& inliningBalance, const ChecksFunctor& insertChecks);
+    template<typename ChecksFunctor>
+    void inlineCall(Node* callTargetNode, int resultOperand, CallVariant, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind, CallerLinkability, const ChecksFunctor& insertChecks);
     void cancelLinkingForBlock(InlineStackEntry*, BasicBlock*); // Only works when the given block is the last one to have been added for that inline stack entry.
     // Handle intrinsic functions. Return true if it succeeded, false if we need to plant a call.
-    bool handleIntrinsic(int resultOperand, Intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction);
-    bool handleTypedArrayConstructor(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, TypedArrayType);
-    bool handleConstantInternalFunction(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind);
+    template<typename ChecksFunctor>
+    bool handleIntrinsic(int resultOperand, Intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, const ChecksFunctor& insertChecks);
+    template<typename ChecksFunctor>
+    bool handleTypedArrayConstructor(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, TypedArrayType, const ChecksFunctor& insertChecks);
+    template<typename ChecksFunctor>
+    bool handleConstantInternalFunction(int resultOperand, InternalFunction*, int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind, const ChecksFunctor& insertChecks);
     Node* handlePutByOffset(Node* base, unsigned identifier, PropertyOffset, Node* value);
     Node* handleGetByOffset(SpeculatedType, Node* base, const StructureSet&, unsigned identifierNumber, PropertyOffset, NodeType op = GetByOffset);
     void handleGetById(
@@ -528,6 +534,8 @@ private:
             numArguments = inlineCallFrame->arguments.size();
             if (inlineCallFrame->isClosureCall)
                 flushDirect(inlineStackEntry->remapOperand(VirtualRegister(JSStack::Callee)));
+            if (inlineCallFrame->isVarargs())
+                flushDirect(inlineStackEntry->remapOperand(VirtualRegister(JSStack::ArgumentCount)));
         } else
             numArguments = inlineStackEntry->m_codeBlock->numParameters();
         for (unsigned argument = numArguments; argument-- > 1;)
@@ -654,13 +662,6 @@ private:
         return result;
     }
     
-    void removeLastNodeFromGraph(NodeType expectedNodeType)
-    {
-        Node* node = m_currentBlock->takeLast();
-        RELEASE_ASSERT(node->op() == expectedNodeType);
-        m_graph.m_allocator.free(node);
-    }
-
     void addVarArgChild(Node* child)
     {
         m_graph.m_varArgChildren.append(Edge(child));
@@ -691,7 +692,7 @@ private:
             op, opInfo, callee, argCount, registerOffset, prediction);
         VirtualRegister resultReg(result);
         if (resultReg.isValid())
-            set(VirtualRegister(result), call);
+            set(resultReg, call);
         return call;
     }
     
@@ -1042,8 +1043,8 @@ void ByteCodeParser::handleCall(
 {
     ASSERT(registerOffset <= 0);
     
-    if (callTarget->hasConstant())
-        callLinkStatus = CallLinkStatus(callTarget->asJSValue()).setIsProved(true);
+    if (callTarget->isCellConstant())
+        callLinkStatus.setProvenConstantCallee(CallVariant(callTarget->asCell()));
     
     if (Options::verboseDFGByteCodeParsing())
         dataLog("    Handling call at ", currentCodeOrigin(), ": ", callLinkStatus, "\n");
@@ -1060,7 +1061,7 @@ void ByteCodeParser::handleCall(
     
     OpInfo callOpInfo;
     
-    if (handleInlining(callTarget, result, callLinkStatus, registerOffset, argumentCountIncludingThis, nextOffset, op, kind, prediction)) {
+    if (handleInlining(callTarget, result, callLinkStatus, registerOffset, virtualRegisterForArgument(0, registerOffset), VirtualRegister(), 0, argumentCountIncludingThis, nextOffset, op, kind, prediction)) {
         if (m_graph.compilation())
             m_graph.compilation()->noticeInlinedCall();
         return;
@@ -1072,7 +1073,7 @@ void ByteCodeParser::handleCall(
         JSFunction* function = callee.function();
         CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
         if (function && function->isHostFunction()) {
-            emitFunctionChecks(callee, callTarget, registerOffset, specializationKind);
+            emitFunctionChecks(callee, callTarget, virtualRegisterForArgument(0, registerOffset));
             callOpInfo = OpInfo(m_graph.freeze(function));
 
             if (op == Call)
@@ -1088,11 +1089,54 @@ void ByteCodeParser::handleCall(
     addCall(result, op, callOpInfo, callTarget, argumentCountIncludingThis, registerOffset, prediction);
 }
 
-void ByteCodeParser::emitFunctionChecks(CallVariant callee, Node* callTarget, int registerOffset, CodeSpecializationKind kind)
+void ByteCodeParser::handleVarargsCall(Instruction* pc, NodeType op, CodeSpecializationKind kind)
 {
-    Node* thisArgument;
+    ASSERT(OPCODE_LENGTH(op_call_varargs) == OPCODE_LENGTH(op_construct_varargs));
+    
+    int result = pc[1].u.operand;
+    int callee = pc[2].u.operand;
+    int thisReg = pc[3].u.operand;
+    int arguments = pc[4].u.operand;
+    int firstFreeReg = pc[5].u.operand;
+    int firstVarArgOffset = pc[6].u.operand;
+    
+    SpeculatedType prediction = getPrediction();
+    
+    Node* callTarget = get(VirtualRegister(callee));
+    
+    CallLinkStatus callLinkStatus = CallLinkStatus::computeFor(
+        m_inlineStackTop->m_profiledBlock, currentCodeOrigin(),
+        m_inlineStackTop->m_callLinkInfos, m_callContextMap);
+    if (callTarget->isCellConstant())
+        callLinkStatus.setProvenConstantCallee(CallVariant(callTarget->asCell()));
+    
+    if (callLinkStatus.canOptimize()
+        && handleInlining(callTarget, result, callLinkStatus, firstFreeReg, VirtualRegister(thisReg), VirtualRegister(arguments), firstVarArgOffset, 0, m_currentIndex + OPCODE_LENGTH(op_call_varargs), op, InlineCallFrame::varargsKindFor(kind), prediction)) {
+        if (m_graph.compilation())
+            m_graph.compilation()->noticeInlinedCall();
+        return;
+    }
+    
+    CallVarargsData* data = m_graph.m_callVarargsData.add();
+    data->firstVarArgOffset = firstVarArgOffset;
+    
+    Node* thisChild;
     if (kind == CodeForCall)
-        thisArgument = get(virtualRegisterForArgument(0, registerOffset));
+        thisChild = get(VirtualRegister(thisReg));
+    else
+        thisChild = nullptr;
+    
+    Node* call = addToGraph(op, OpInfo(data), OpInfo(prediction), callTarget, get(VirtualRegister(arguments)), thisChild);
+    VirtualRegister resultReg(result);
+    if (resultReg.isValid())
+        set(resultReg, call);
+}
+
+void ByteCodeParser::emitFunctionChecks(CallVariant callee, Node* callTarget, VirtualRegister thisArgumentReg)
+{
+    Node* thisArgument;
+    if (thisArgumentReg.isValid())
+        thisArgument = get(thisArgumentReg);
     else
         thisArgument = 0;
 
@@ -1110,13 +1154,6 @@ void ByteCodeParser::emitFunctionChecks(CallVariant callee, Node* callTarget, in
     addToGraph(CheckCell, OpInfo(m_graph.freeze(calleeCell)), callTargetForCheck, thisArgument);
 }
 
-void ByteCodeParser::undoFunctionChecks(CallVariant callee)
-{
-    removeLastNodeFromGraph(CheckCell);
-    if (callee.isClosureCall())
-        removeLastNodeFromGraph(GetExecutable);
-}
-
 void ByteCodeParser::emitArgumentPhantoms(int registerOffset, int argumentCountIncludingThis, CodeSpecializationKind kind)
 {
     for (int i = kind == CodeForCall ? 0 : 1; i < argumentCountIncludingThis; ++i)
@@ -1131,7 +1168,7 @@ unsigned ByteCodeParser::inliningCost(CallVariant callee, int argumentCountInclu
     FunctionExecutable* executable = callee.functionExecutable();
     if (!executable) {
         if (verbose)
-            dataLog("    Failing because there is no function executable.");
+            dataLog("    Failing because there is no function executable.\n");
         return UINT_MAX;
     }
     
@@ -1158,6 +1195,16 @@ unsigned ByteCodeParser::inliningCost(CallVariant callee, int argumentCountInclu
     }
     CapabilityLevel capabilityLevel = inlineFunctionForCapabilityLevel(
         codeBlock, kind, callee.isClosureCall());
+    if (verbose) {
+        dataLog("    Kind: ", kind, "\n");
+        dataLog("    Is closure call: ", callee.isClosureCall(), "\n");
+        dataLog("    Capability level: ", capabilityLevel, "\n");
+        dataLog("    Might inline function: ", mightInlineFunctionFor(codeBlock, kind), "\n");
+        dataLog("    Might compile function: ", mightCompileFunctionFor(codeBlock, kind), "\n");
+        dataLog("    Is supported for inlining: ", isSupportedForInlining(codeBlock), "\n");
+        dataLog("    Needs activation: ", codeBlock->ownerExecutable()->needsActivation(), "\n");
+        dataLog("    Is inlining candidate: ", codeBlock->ownerExecutable()->isInliningCandidate(), "\n");
+    }
     if (!canInline(capabilityLevel)) {
         if (verbose)
             dataLog("    Failing because the function is not inlineable.\n");
@@ -1210,13 +1257,15 @@ unsigned ByteCodeParser::inliningCost(CallVariant callee, int argumentCountInclu
     return codeBlock->instructionCount();
 }
 
-void ByteCodeParser::inlineCall(Node* callTargetNode, int resultOperand, CallVariant callee, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind, CallerLinkability callerLinkability)
+template<typename ChecksFunctor>
+void ByteCodeParser::inlineCall(Node* callTargetNode, int resultOperand, CallVariant callee, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind, CallerLinkability callerLinkability, const ChecksFunctor& insertChecks)
 {
     CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
     
     ASSERT(inliningCost(callee, argumentCountIncludingThis, specializationKind) != UINT_MAX);
     
     CodeBlock* codeBlock = callee.functionExecutable()->baselineCodeBlockFor(specializationKind);
+    insertChecks(codeBlock);
 
     // FIXME: Don't flush constants!
     
@@ -1356,44 +1405,62 @@ void ByteCodeParser::cancelLinkingForBlock(InlineStackEntry* inlineStackEntry, B
     }
 }
 
-bool ByteCodeParser::attemptToInlineCall(Node* callTargetNode, int resultOperand, CallVariant callee, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind, CallerLinkability callerLinkability, SpeculatedType prediction, unsigned& inliningBalance)
+template<typename ChecksFunctor>
+bool ByteCodeParser::attemptToInlineCall(Node* callTargetNode, int resultOperand, CallVariant callee, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, InlineCallFrame::Kind kind, CallerLinkability callerLinkability, SpeculatedType prediction, unsigned& inliningBalance, const ChecksFunctor& insertChecks)
 {
     CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
     
     if (!inliningBalance)
         return false;
     
+    bool didInsertChecks = false;
+    auto insertChecksWithAccounting = [&] () {
+        insertChecks(nullptr);
+        didInsertChecks = true;
+    };
+    
+    if (verbose)
+        dataLog("    Considering callee ", callee, "\n");
+    
     if (InternalFunction* function = callee.internalFunction()) {
-        if (handleConstantInternalFunction(resultOperand, function, registerOffset, argumentCountIncludingThis, specializationKind)) {
+        if (handleConstantInternalFunction(resultOperand, function, registerOffset, argumentCountIncludingThis, specializationKind, insertChecksWithAccounting)) {
+            RELEASE_ASSERT(didInsertChecks);
             addToGraph(Phantom, callTargetNode);
             emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
             inliningBalance--;
             return true;
         }
+        RELEASE_ASSERT(!didInsertChecks);
         return false;
     }
     
     Intrinsic intrinsic = callee.intrinsicFor(specializationKind);
     if (intrinsic != NoIntrinsic) {
-        if (handleIntrinsic(resultOperand, intrinsic, registerOffset, argumentCountIncludingThis, prediction)) {
+        if (handleIntrinsic(resultOperand, intrinsic, registerOffset, argumentCountIncludingThis, prediction, insertChecksWithAccounting)) {
+            RELEASE_ASSERT(didInsertChecks);
             addToGraph(Phantom, callTargetNode);
             emitArgumentPhantoms(registerOffset, argumentCountIncludingThis, specializationKind);
             inliningBalance--;
             return true;
         }
+        RELEASE_ASSERT(!didInsertChecks);
         return false;
     }
     
     unsigned myInliningCost = inliningCost(callee, argumentCountIncludingThis, specializationKind);
     if (myInliningCost > inliningBalance)
         return false;
-    
-    inlineCall(callTargetNode, resultOperand, callee, registerOffset, argumentCountIncludingThis, nextOffset, kind, callerLinkability);
+
+    inlineCall(callTargetNode, resultOperand, callee, registerOffset, argumentCountIncludingThis, nextOffset, kind, callerLinkability, insertChecks);
     inliningBalance -= myInliningCost;
     return true;
 }
 
-bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus, int registerOffset, int argumentCountIncludingThis, unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind kind, SpeculatedType prediction)
+bool ByteCodeParser::handleInlining(
+    Node* callTargetNode, int resultOperand, const CallLinkStatus& callLinkStatus,
+    int registerOffsetOrFirstFreeReg, VirtualRegister thisArgument,
+    VirtualRegister argumentsArgument, unsigned argumentsOffset, int argumentCountIncludingThis,
+    unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind kind, SpeculatedType prediction)
 {
     if (verbose) {
         dataLog("Handling inlining...\n");
@@ -1407,6 +1474,13 @@ bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, con
         return false;
     }
     
+    if (InlineCallFrame::isVarargs(kind)
+        && callLinkStatus.maxNumArguments() > Options::maximumVarargsForInlining()) {
+        if (verbose)
+            dataLog("Bailing inlining because of varargs.\n");
+        return false;
+    }
+        
     unsigned inliningBalance = Options::maximumFunctionForCallInlineCandidateInstructionCount();
     if (specializationKind == CodeForConstruct)
         inliningBalance = std::min(inliningBalance, Options::maximumFunctionForConstructInlineCandidateInstructionCount());
@@ -1417,17 +1491,105 @@ bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, con
     // simplification on the fly and this helps reduce compile times, but we can only leverage
     // this in cases where we don't need control flow diamonds to check the callee.
     if (!callLinkStatus.couldTakeSlowPath() && callLinkStatus.size() == 1) {
-        emitFunctionChecks(
-            callLinkStatus[0], callTargetNode, registerOffset, specializationKind);
+        int registerOffset;
+        
+        // Only used for varargs calls.
+        unsigned mandatoryMinimum = 0;
+        unsigned maxNumArguments = 0;
+
+        if (InlineCallFrame::isVarargs(kind)) {
+            if (FunctionExecutable* functionExecutable = callLinkStatus[0].functionExecutable())
+                mandatoryMinimum = functionExecutable->parameterCount();
+            else
+                mandatoryMinimum = 0;
+            
+            // includes "this"
+            maxNumArguments = std::max(
+                callLinkStatus.maxNumArguments(),
+                mandatoryMinimum + 1);
+            
+            // We sort of pretend that this *is* the number of arguments that were passed.
+            argumentCountIncludingThis = maxNumArguments;
+            
+            registerOffset = registerOffsetOrFirstFreeReg + 1;
+            registerOffset -= maxNumArguments; // includes "this"
+            registerOffset -= JSStack::CallFrameHeaderSize;
+            registerOffset = -WTF::roundUpToMultipleOf(
+                stackAlignmentRegisters(),
+                -registerOffset);
+        } else
+            registerOffset = registerOffsetOrFirstFreeReg;
+        
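The registerOffset arithmetic above uses a negate/round-up/negate trick to keep a negative stack offset on a stack-alignment boundary. A minimal sketch of that arithmetic, with roundUpToMultipleOf reimplemented here for illustration (WTF's real version is templated; this stand-in assumes a positive divisor):

```cpp
#include <cassert>

// Illustrative stand-in for WTF::roundUpToMultipleOf (positive divisor only).
inline int roundUpToMultipleOf(int divisor, int x)
{
    return ((x + divisor - 1) / divisor) * divisor;
}

// Round a negative register offset away from zero so the inlined call frame
// starts on a stack-alignment boundary, as the varargs path above does.
inline int alignedRegisterOffset(int registerOffset, int stackAlignmentRegisters)
{
    return -roundUpToMultipleOf(stackAlignmentRegisters, -registerOffset);
}
```

Because offsets grow downward, rounding the negated value up moves the frame further from the caller, never closer, so alignment can only add padding.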
         bool result = attemptToInlineCall(
             callTargetNode, resultOperand, callLinkStatus[0], registerOffset,
             argumentCountIncludingThis, nextOffset, kind, CallerDoesNormalLinking, prediction,
-            inliningBalance);
-        if (!result && !callLinkStatus.isProved())
-            undoFunctionChecks(callLinkStatus[0]);
+            inliningBalance, [&] (CodeBlock* codeBlock) {
+                emitFunctionChecks(callLinkStatus[0], callTargetNode, specializationKind == CodeForCall ? thisArgument : VirtualRegister());
+
+                // If we have a varargs call, we want to extract the arguments right now.
+                if (InlineCallFrame::isVarargs(kind)) {
+                    int remappedRegisterOffset =
+                        m_inlineStackTop->remapOperand(VirtualRegister(registerOffset)).offset();
+                    
+                    int argumentStart = registerOffset + JSStack::CallFrameHeaderSize;
+                    int remappedArgumentStart =
+                        m_inlineStackTop->remapOperand(VirtualRegister(argumentStart)).offset();
+
+                    LoadVarargsData* data = m_graph.m_loadVarargsData.add();
+                    data->start = VirtualRegister(remappedArgumentStart + 1);
+                    data->count = VirtualRegister(remappedRegisterOffset + JSStack::ArgumentCount);
+                    data->offset = argumentsOffset;
+                    data->limit = maxNumArguments;
+                    data->mandatoryMinimum = mandatoryMinimum;
+            
+                    addToGraph(LoadVarargs, OpInfo(data), get(argumentsArgument));
+            
+                    // In DFG IR before SSA, we cannot insert control flow between the LoadVarargs
+                    // and the last SetArgument. This isn't a problem once we get to DFG
+                    // SSA. Fortunately, we also have other reasons for not inserting control flow
+                    // before SSA.
+            
+                    VariableAccessData* countVariable = newVariableAccessData(
+                        VirtualRegister(remappedRegisterOffset + JSStack::ArgumentCount), false);
+                    // This is pretty lame, but it will force the count to be flushed as an int. This doesn't
+                    // matter very much, since our use of a SetArgument and Flushes for this local slot is
+                    // mostly just a formality.
+                    countVariable->predict(SpecInt32);
+                    countVariable->mergeIsProfitableToUnbox(true);
+                    Node* setArgumentCount = addToGraph(SetArgument, OpInfo(countVariable));
+                    m_currentBlock->variablesAtTail.setOperand(countVariable->local(), setArgumentCount);
+            
+                    if (specializationKind == CodeForCall)
+                        set(VirtualRegister(argumentStart), get(thisArgument), ImmediateNakedSet);
+                    for (unsigned argument = 1; argument < maxNumArguments; ++argument) {
+                        VariableAccessData* variable = newVariableAccessData(
+                            VirtualRegister(remappedArgumentStart + argument), false);
+                        // We currently have nowhere to put the type check on the LoadVarargs.
+                        // LoadVarargs is effectful, so after it finishes, we cannot exit.
+                        variable->mergeShouldNeverUnbox(true);
+                        
+                        // For a while it had been my intention to do things like this inside the
+                        // prediction injection phase. But in this case it's really best to do it here,
+                        // because it's here that we have access to the variable access datas for the
+                        // inlining we're about to do.
+                        //
+                        // Something else that's interesting here is that we'd really love to get
+                        // predictions from the arguments loaded at the callsite, rather than the
+                        // arguments received inside the callee. But that probably won't matter for most
+                        // calls.
+                        if (codeBlock && argument < static_cast<unsigned>(codeBlock->numParameters())) {
+                            ConcurrentJITLocker locker(codeBlock->m_lock);
+                            if (ValueProfile* profile = codeBlock->valueProfileForArgument(argument))
+                                variable->predict(profile->computeUpdatedPrediction(locker));
+                        }
+                        
+                        Node* setArgument = addToGraph(SetArgument, OpInfo(variable));
+                        m_currentBlock->variablesAtTail.setOperand(variable->local(), setArgument);
+                    }
+                }
+            });
         if (verbose) {
             dataLog("Done inlining (simple).\n");
             dataLog("Stack: ", currentCodeOrigin(), "\n");
+            dataLog("Result: ", result, "\n");
         }
         return result;
     }
@@ -1437,11 +1599,9 @@ bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, con
     // do more detailed polyvariant/polymorphic profiling; and second, it reduces compile times in
     // the DFG. And by polyvariant profiling we mean polyvariant profiling of *this* call. Note that
     // we could improve that aspect of this by doing polymorphic inlining but having the profiling
-    // also. Currently we opt against this, but it could be interesting. That would require having a
-    // separate node for call edge profiling.
-    // FIXME: Introduce the notion of a separate call edge profiling node.
-    // https://bugs.webkit.org/show_bug.cgi?id=136033
-    if (!isFTL(m_graph.m_plan.mode) || !Options::enablePolymorphicCallInlining()) {
+    // also.
+    if (!isFTL(m_graph.m_plan.mode) || !Options::enablePolymorphicCallInlining()
+        || InlineCallFrame::isVarargs(kind)) {
         if (verbose) {
             dataLog("Bailing inlining (hard).\n");
             dataLog("Stack: ", currentCodeOrigin(), "\n");
@@ -1482,6 +1642,8 @@ bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, con
         dataLog("Stack: ", currentCodeOrigin(), "\n");
     }
     
+    int registerOffset = registerOffsetOrFirstFreeReg;
+    
     // This makes me wish that we were in SSA all the time. We need to pick a variable into which to
     // store the callee so that it will be accessible to all of the blocks we're about to create. We
     // get away with doing an immediate-set here because we wouldn't have performed any side effects
@@ -1526,7 +1688,7 @@ bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, con
         bool inliningResult = attemptToInlineCall(
             myCallTargetNode, resultOperand, callLinkStatus[i], registerOffset,
             argumentCountIncludingThis, nextOffset, kind, CallerLinksManually, prediction,
-            inliningBalance);
+            inliningBalance, [&] (CodeBlock*) { });
         
         if (!inliningResult) {
             // That failed so we let the block die. Nothing interesting should have been added to
@@ -1610,14 +1772,17 @@ bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, con
     return true;
 }
 
-bool ByteCodeParser::handleMinMax(int resultOperand, NodeType op, int registerOffset, int argumentCountIncludingThis)
+template<typename ChecksFunctor>
+bool ByteCodeParser::handleMinMax(int resultOperand, NodeType op, int registerOffset, int argumentCountIncludingThis, const ChecksFunctor& insertChecks)
 {
     if (argumentCountIncludingThis == 1) { // Math.min()
+        insertChecks();
         set(VirtualRegister(resultOperand), addToGraph(JSConstant, OpInfo(m_constantNaN)));
         return true;
     }
      
     if (argumentCountIncludingThis == 2) { // Math.min(x)
+        insertChecks();
         Node* result = get(VirtualRegister(virtualRegisterForArgument(1, registerOffset)));
         addToGraph(Phantom, Edge(result, NumberUse));
         set(VirtualRegister(resultOperand), result);
@@ -1625,6 +1790,7 @@ bool ByteCodeParser::handleMinMax(int resultOperand, NodeType op, int registerOf
     }
     
     if (argumentCountIncludingThis == 3) { // Math.min(x, y)
+        insertChecks();
         set(VirtualRegister(resultOperand), addToGraph(op, get(virtualRegisterForArgument(1, registerOffset)), get(virtualRegisterForArgument(2, registerOffset))));
         return true;
     }
@@ -1633,11 +1799,13 @@ bool ByteCodeParser::handleMinMax(int resultOperand, NodeType op, int registerOf
     return false;
 }
 
-bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction)
+template<typename ChecksFunctor>
+bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int registerOffset, int argumentCountIncludingThis, SpeculatedType prediction, const ChecksFunctor& insertChecks)
 {
     switch (intrinsic) {
     case AbsIntrinsic: {
         if (argumentCountIncludingThis == 1) { // Math.abs()
+            insertChecks();
             set(VirtualRegister(resultOperand), addToGraph(JSConstant, OpInfo(m_constantNaN)));
             return true;
         }
@@ -1645,6 +1813,7 @@ bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int
         if (!MacroAssembler::supportsFloatingPointAbs())
             return false;
 
+        insertChecks();
         Node* node = addToGraph(ArithAbs, get(virtualRegisterForArgument(1, registerOffset)));
         if (m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, Overflow))
             node->mergeFlags(NodeMayOverflowInDFG);
@@ -1653,29 +1822,33 @@ bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int
     }
 
     case MinIntrinsic:
-        return handleMinMax(resultOperand, ArithMin, registerOffset, argumentCountIncludingThis);
+        return handleMinMax(resultOperand, ArithMin, registerOffset, argumentCountIncludingThis, insertChecks);
         
     case MaxIntrinsic:
-        return handleMinMax(resultOperand, ArithMax, registerOffset, argumentCountIncludingThis);
+        return handleMinMax(resultOperand, ArithMax, registerOffset, argumentCountIncludingThis, insertChecks);
         
     case SqrtIntrinsic:
     case CosIntrinsic:
     case SinIntrinsic: {
         if (argumentCountIncludingThis == 1) {
+            insertChecks();
             set(VirtualRegister(resultOperand), addToGraph(JSConstant, OpInfo(m_constantNaN)));
             return true;
         }
         
         switch (intrinsic) {
         case SqrtIntrinsic:
+            insertChecks();
             set(VirtualRegister(resultOperand), addToGraph(ArithSqrt, get(virtualRegisterForArgument(1, registerOffset))));
             return true;
             
         case CosIntrinsic:
+            insertChecks();
             set(VirtualRegister(resultOperand), addToGraph(ArithCos, get(virtualRegisterForArgument(1, registerOffset))));
             return true;
             
         case SinIntrinsic:
+            insertChecks();
             set(VirtualRegister(resultOperand), addToGraph(ArithSin, get(virtualRegisterForArgument(1, registerOffset))));
             return true;
             
@@ -1688,9 +1861,11 @@ bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int
     case PowIntrinsic: {
         if (argumentCountIncludingThis < 3) {
             // Math.pow() and Math.pow(x) return NaN.
+            insertChecks();
             set(VirtualRegister(resultOperand), addToGraph(JSConstant, OpInfo(m_constantNaN)));
             return true;
         }
+        insertChecks();
         VirtualRegister xOperand = virtualRegisterForArgument(1, registerOffset);
         VirtualRegister yOperand = virtualRegisterForArgument(2, registerOffset);
         set(VirtualRegister(resultOperand), addToGraph(ArithPow, get(xOperand), get(yOperand)));
@@ -1710,6 +1885,7 @@ bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int
         case Array::Double:
         case Array::Contiguous:
         case Array::ArrayStorage: {
+            insertChecks();
             Node* arrayPush = addToGraph(ArrayPush, OpInfo(arrayMode.asWord()), OpInfo(prediction), get(virtualRegisterForArgument(0, registerOffset)), get(virtualRegisterForArgument(1, registerOffset)));
             set(VirtualRegister(resultOperand), arrayPush);
             
@@ -1733,6 +1909,7 @@ bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int
         case Array::Double:
         case Array::Contiguous:
         case Array::ArrayStorage: {
+            insertChecks();
             Node* arrayPop = addToGraph(ArrayPop, OpInfo(arrayMode.asWord()), OpInfo(prediction), get(virtualRegisterForArgument(0, registerOffset)));
             set(VirtualRegister(resultOperand), arrayPop);
             return true;
@@ -1747,6 +1924,7 @@ bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int
         if (argumentCountIncludingThis != 2)
             return false;
 
+        insertChecks();
         VirtualRegister thisOperand = virtualRegisterForArgument(0, registerOffset);
         VirtualRegister indexOperand = virtualRegisterForArgument(1, registerOffset);
         Node* charCode = addToGraph(StringCharCodeAt, OpInfo(ArrayMode(Array::String).asWord()), get(thisOperand), get(indexOperand));
@@ -1759,6 +1937,7 @@ bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int
         if (argumentCountIncludingThis != 2)
             return false;
 
+        insertChecks();
         VirtualRegister thisOperand = virtualRegisterForArgument(0, registerOffset);
         VirtualRegister indexOperand = virtualRegisterForArgument(1, registerOffset);
         Node* charCode = addToGraph(StringCharAt, OpInfo(ArrayMode(Array::String).asWord()), get(thisOperand), get(indexOperand));
@@ -1770,6 +1949,7 @@ bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int
         if (argumentCountIncludingThis != 2)
             return false;
 
+        insertChecks();
         VirtualRegister indexOperand = virtualRegisterForArgument(1, registerOffset);
         Node* charCode = addToGraph(StringFromCharCode, get(indexOperand));
 
@@ -1782,6 +1962,7 @@ bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int
         if (argumentCountIncludingThis != 2)
             return false;
         
+        insertChecks();
         Node* regExpExec = addToGraph(RegExpExec, OpInfo(0), OpInfo(prediction), get(virtualRegisterForArgument(0, registerOffset)), get(virtualRegisterForArgument(1, registerOffset)));
         set(VirtualRegister(resultOperand), regExpExec);
         
@@ -1792,6 +1973,7 @@ bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int
         if (argumentCountIncludingThis != 2)
             return false;
         
+        insertChecks();
         Node* regExpExec = addToGraph(RegExpTest, OpInfo(0), OpInfo(prediction), get(virtualRegisterForArgument(0, registerOffset)), get(virtualRegisterForArgument(1, registerOffset)));
         set(VirtualRegister(resultOperand), regExpExec);
         
@@ -1801,6 +1983,7 @@ bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int
     case IMulIntrinsic: {
         if (argumentCountIncludingThis != 3)
             return false;
+        insertChecks();
         VirtualRegister leftOperand = virtualRegisterForArgument(1, registerOffset);
         VirtualRegister rightOperand = virtualRegisterForArgument(2, registerOffset);
         Node* left = get(leftOperand);
@@ -1812,29 +1995,34 @@ bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int
     case FRoundIntrinsic: {
         if (argumentCountIncludingThis != 2)
             return false;
+        insertChecks();
         VirtualRegister operand = virtualRegisterForArgument(1, registerOffset);
         set(VirtualRegister(resultOperand), addToGraph(ArithFRound, get(operand)));
         return true;
     }
         
     case DFGTrueIntrinsic: {
+        insertChecks();
         set(VirtualRegister(resultOperand), jsConstant(jsBoolean(true)));
         return true;
     }
         
     case OSRExitIntrinsic: {
+        insertChecks();
         addToGraph(ForceOSRExit);
         set(VirtualRegister(resultOperand), addToGraph(JSConstant, OpInfo(m_constantUndefined)));
         return true;
     }
         
     case IsFinalTierIntrinsic: {
+        insertChecks();
         set(VirtualRegister(resultOperand),
             jsConstant(jsBoolean(Options::useFTLJIT() ? isFTL(m_graph.m_plan.mode) : true)));
         return true;
     }
         
     case SetInt32HeapPredictionIntrinsic: {
+        insertChecks();
         for (int i = 1; i < argumentCountIncludingThis; ++i) {
             Node* node = get(virtualRegisterForArgument(i, registerOffset));
             if (node->hasHeapPrediction())
@@ -1847,6 +2035,7 @@ bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int
     case FiatInt52Intrinsic: {
         if (argumentCountIncludingThis != 2)
             return false;
+        insertChecks();
         VirtualRegister operand = virtualRegisterForArgument(1, registerOffset);
         if (enableInt52())
             set(VirtualRegister(resultOperand), addToGraph(FiatInt52, get(operand)));
@@ -1860,9 +2049,10 @@ bool ByteCodeParser::handleIntrinsic(int resultOperand, Intrinsic intrinsic, int
     }
 }
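The insertChecks threading above follows one contract: a handler must invoke insertChecks() exactly when it commits to emitting graph nodes, and must not invoke it on a bail-out path, so that attemptToInlineCall's didInsertChecks accounting (the RELEASE_ASSERTs) holds. A toy sketch of that contract, with hypothetical names that are not JSC API:

```cpp
#include <cassert>
#include <vector>

// Toy intrinsic handler: succeeds only for the one arity it supports, and
// calls insertChecks() before emitting anything -- never on the bail path.
template<typename ChecksFunctor>
bool handleIntrinsicSketch(
    int argumentCountIncludingThis, std::vector<const char*>& graph,
    const ChecksFunctor& insertChecks)
{
    if (argumentCountIncludingThis != 2)
        return false; // Bail: no checks inserted, nothing emitted.
    insertChecks();   // Guards must precede the emitted node.
    graph.push_back("ArithAbs");
    return true;
}
```

The caller can then wrap insertChecks in an accounting lambda and assert that success implies checks were inserted, and failure implies they were not, mirroring insertChecksWithAccounting above.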
 
+template<typename ChecksFunctor>
 bool ByteCodeParser::handleTypedArrayConstructor(
     int resultOperand, InternalFunction* function, int registerOffset,
-    int argumentCountIncludingThis, TypedArrayType type)
+    int argumentCountIncludingThis, TypedArrayType type, const ChecksFunctor& insertChecks)
 {
     if (!isTypedView(type))
         return false;
@@ -1906,16 +2096,21 @@ bool ByteCodeParser::handleTypedArrayConstructor(
     
     if (argumentCountIncludingThis != 2)
         return false;
-    
+
+    insertChecks();
     set(VirtualRegister(resultOperand),
         addToGraph(NewTypedArray, OpInfo(type), get(virtualRegisterForArgument(1, registerOffset))));
     return true;
 }
 
+template<typename ChecksFunctor>
 bool ByteCodeParser::handleConstantInternalFunction(
     int resultOperand, InternalFunction* function, int registerOffset,
-    int argumentCountIncludingThis, CodeSpecializationKind kind)
+    int argumentCountIncludingThis, CodeSpecializationKind kind, const ChecksFunctor& insertChecks)
 {
+    if (verbose)
+        dataLog("    Handling constant internal function ", JSValue(function), "\n");
+    
     // If we ever find that we have a lot of internal functions that we specialize for,
     // then we should probably have some sort of hashtable dispatch, or maybe even
     // dispatch straight through the MethodTable of the InternalFunction. But for now,
@@ -1927,6 +2122,7 @@ bool ByteCodeParser::handleConstantInternalFunction(
         if (function->globalObject() != m_inlineStackTop->m_codeBlock->globalObject())
             return false;
         
+        insertChecks();
         if (argumentCountIncludingThis == 2) {
             set(VirtualRegister(resultOperand),
                 addToGraph(NewArrayWithSize, OpInfo(ArrayWithUndecided), get(virtualRegisterForArgument(1, registerOffset))));
@@ -1941,6 +2137,8 @@ bool ByteCodeParser::handleConstantInternalFunction(
     }
     
     if (function->classInfo() == StringConstructor::info()) {
+        insertChecks();
+        
         Node* result;
         
         if (argumentCountIncludingThis <= 1)
@@ -1958,7 +2156,7 @@ bool ByteCodeParser::handleConstantInternalFunction(
     for (unsigned typeIndex = 0; typeIndex < NUMBER_OF_TYPED_ARRAY_TYPES; ++typeIndex) {
         bool result = handleTypedArrayConstructor(
             resultOperand, function, registerOffset, argumentCountIncludingThis,
-            indexToTypedArrayType(typeIndex));
+            indexToTypedArrayType(typeIndex), insertChecks);
         if (result)
             return true;
     }
@@ -3122,43 +3320,70 @@ bool ByteCodeParser::parseBlock(unsigned limit)
             int thisReg = currentInstruction[3].u.operand;
             int arguments = currentInstruction[4].u.operand;
             int firstFreeReg = currentInstruction[5].u.operand;
+            int firstVarArgOffset = currentInstruction[6].u.operand;
             
-            ASSERT(inlineCallFrame());
-            ASSERT_UNUSED(arguments, arguments == m_inlineStackTop->m_codeBlock->argumentsRegister().offset());
-            ASSERT(!m_inlineStackTop->m_codeBlock->symbolTable()->slowArguments());
-
-            addToGraph(CheckArgumentsNotCreated);
-
-            unsigned argCount = inlineCallFrame()->arguments.size();
+            if (arguments == m_inlineStackTop->m_codeBlock->uncheckedArgumentsRegister().offset()
+                && !m_inlineStackTop->m_codeBlock->symbolTable()->slowArguments()) {
+                if (inlineCallFrame()
+                    && !inlineCallFrame()->isVarargs()
+                    && !firstVarArgOffset) {
+                    addToGraph(CheckArgumentsNotCreated);
+
+                    unsigned argCount = inlineCallFrame()->arguments.size();
             
-            // Let's compute the register offset. We start with the last used register, and
-            // then adjust for the things we want in the call frame.
-            int registerOffset = firstFreeReg + 1;
-            registerOffset -= argCount; // We will be passing some arguments.
-            registerOffset -= JSStack::CallFrameHeaderSize; // We will pretend to have a call frame header.
+                    // Let's compute the register offset. We start with the last used register, and
+                    // then adjust for the things we want in the call frame.
+                    int registerOffset = firstFreeReg + 1;
+                    registerOffset -= argCount; // We will be passing some arguments.
+                    registerOffset -= JSStack::CallFrameHeaderSize; // We will pretend to have a call frame header.
             
-            // Get the alignment right.
-            registerOffset = -WTF::roundUpToMultipleOf(
-                stackAlignmentRegisters(),
-                -registerOffset);
-
-            ensureLocals(
-                m_inlineStackTop->remapOperand(
-                    VirtualRegister(registerOffset)).toLocal());
+                    // Get the alignment right.
+                    registerOffset = -WTF::roundUpToMultipleOf(
+                        stackAlignmentRegisters(),
+                        -registerOffset);
+
+                    ensureLocals(
+                        m_inlineStackTop->remapOperand(
+                            VirtualRegister(registerOffset)).toLocal());
             
-            // The bytecode wouldn't have set up the arguments. But we'll do it and make it
-            // look like the bytecode had done it.
-            int nextRegister = registerOffset + JSStack::CallFrameHeaderSize;
-            set(VirtualRegister(nextRegister++), get(VirtualRegister(thisReg)), ImmediateNakedSet);
-            for (unsigned argument = 1; argument < argCount; ++argument)
-                set(VirtualRegister(nextRegister++), get(virtualRegisterForArgument(argument)), ImmediateNakedSet);
+                    // The bytecode wouldn't have set up the arguments. But we'll do it and make it
+                    // look like the bytecode had done it.
+                    int nextRegister = registerOffset + JSStack::CallFrameHeaderSize;
+                    set(VirtualRegister(nextRegister++), get(VirtualRegister(thisReg)), ImmediateNakedSet);
+                    for (unsigned argument = 1; argument < argCount; ++argument)
+                        set(VirtualRegister(nextRegister++), get(virtualRegisterForArgument(argument)), ImmediateNakedSet);
             
-            handleCall(
-                result, Call, CodeForCall, OPCODE_LENGTH(op_call_varargs),
-                callee, argCount, registerOffset);
+                    handleCall(
+                        result, Call, CodeForCall, OPCODE_LENGTH(op_call_varargs),
+                        callee, argCount, registerOffset);
+                    NEXT_OPCODE(op_call_varargs);
+                }
+                
+                // Emit CallForwardVarargs
+                // FIXME: This means we cannot inline forwarded varargs calls inside a varargs
+                // call frame. We will probably fix that once we finally get rid of the
+                // arguments object special-casing.
+                CallVarargsData* data = m_graph.m_callVarargsData.add();
+                data->firstVarArgOffset = firstVarArgOffset;
+                
+                Node* call = addToGraph(
+                    CallForwardVarargs, OpInfo(data), OpInfo(getPrediction()),
+                    get(VirtualRegister(callee)), get(VirtualRegister(thisReg)));
+                VirtualRegister resultReg(result);
+                if (resultReg.isValid())
+                    set(resultReg, call);
+                NEXT_OPCODE(op_call_varargs);
+            }
+            
+            handleVarargsCall(currentInstruction, CallVarargs, CodeForCall);
             NEXT_OPCODE(op_call_varargs);
         }
             
+        case op_construct_varargs: {
+            handleVarargsCall(currentInstruction, ConstructVarargs, CodeForConstruct);
+            NEXT_OPCODE(op_construct_varargs);
+        }
+            
         case op_jneq_ptr:
             // Statically speculate for now. It makes sense to let speculate-only jneq_ptr
             // support simmer for a while before making it more general, since it's
index 37b38e2..ee4d4d1 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -96,6 +96,8 @@ inline void debugFail(CodeBlock* codeBlock, OpcodeID opcodeID, CapabilityLevel r
 
 CapabilityLevel capabilityLevel(OpcodeID opcodeID, CodeBlock* codeBlock, Instruction* pc)
 {
+    UNUSED_PARAM(codeBlock); // This function does some bytecode parsing. Ordinarily bytecode parsing requires the owning CodeBlock. It's sort of strange that we don't use it here right now.
+    
     switch (opcodeID) {
     case op_enter:
     case op_touch_entry:
@@ -182,6 +184,8 @@ CapabilityLevel capabilityLevel(OpcodeID opcodeID, CodeBlock* codeBlock, Instruc
     case op_throw_static_error:
     case op_call:
     case op_construct:
+    case op_call_varargs:
+    case op_construct_varargs:
     case op_init_lazy_reg:
     case op_create_arguments:
     case op_tear_off_arguments:
@@ -223,14 +227,6 @@ CapabilityLevel capabilityLevel(OpcodeID opcodeID, CodeBlock* codeBlock, Instruc
         return CanCompileAndInline;
     }
 
-    case op_call_varargs:
-        if (codeBlock->usesArguments() && pc[4].u.operand == codeBlock->argumentsRegister().offset()
-            && !pc[6].u.operand)
-            return CanInline;
-        // FIXME: We should handle this.
-        // https://bugs.webkit.org/show_bug.cgi?id=127626
-        return CannotCompile;
-
     case op_new_regexp: 
     case op_create_lexical_environment:
     case op_new_func:
index da0390d..3047ad3 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -86,9 +86,7 @@ inline CapabilityLevel functionCapabilityLevel(bool mightCompile, bool mightInli
         return leastUpperBound(CanCompileAndInline, computedCapabilityLevel);
     if (mightCompile && !mightInline)
         return leastUpperBound(CanCompile, computedCapabilityLevel);
-    if (!mightCompile && mightInline)
-        return leastUpperBound(CanInline, computedCapabilityLevel);
-    if (!mightCompile && !mightInline)
+    if (!mightCompile)
         return CannotCompile;
     RELEASE_ASSERT_NOT_REACHED();
     return CannotCompile;
@@ -142,6 +140,14 @@ inline bool mightInlineFunctionFor(CodeBlock* codeBlock, CodeSpecializationKind
     return mightInlineFunctionForConstruct(codeBlock);
 }
 
+inline bool mightCompileFunctionFor(CodeBlock* codeBlock, CodeSpecializationKind kind)
+{
+    if (kind == CodeForCall)
+        return mightCompileFunctionForCall(codeBlock);
+    ASSERT(kind == CodeForConstruct);
+    return mightCompileFunctionForConstruct(codeBlock);
+}
+
 inline bool mightInlineFunction(CodeBlock* codeBlock)
 {
     return mightInlineFunctionFor(codeBlock, codeBlock->specializationKind());
index 2c35f5c..723edaa 100644 (file)
@@ -366,6 +366,9 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
     case Construct:
     case NativeCall:
     case NativeConstruct:
+    case CallVarargs:
+    case CallForwardVarargs:
+    case ConstructVarargs:
     case ToPrimitive:
     case In:
     case GetMyArgumentsLengthSafe:
@@ -401,6 +404,13 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
         def(HeapLocation(VariableLoc, AbstractHeap(Variables, node->local())), node->child1().node());
         return;
         
+    case LoadVarargs:
+        // This actually writes to local variables as well. But when it reads the array, it does
+        // so in a way that may trigger getters or various traps.
+        read(World);
+        write(World);
+        return;
+        
     case GetLocalUnlinked:
         read(AbstractHeap(Variables, node->unlinkedLocal()));
         def(HeapLocation(VariableLoc, AbstractHeap(Variables, node->unlinkedLocal())), node);
@@ -881,7 +891,7 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
         return;
     }
     
-    RELEASE_ASSERT_NOT_REACHED();
+    DFG_CRASH(graph, node, toCString("Unrecognized node type: ", Graph::opName(node->op())).data());
 }
 
 class NoOpClobberize {
index a11d7b8..69ce603 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 #include "config.h"
 #include "DFGCommon.h"
 
-#if ENABLE(DFG_JIT)
-
 #include "DFGNode.h"
 #include "JSCInlines.h"
+#include <wtf/PrintStream.h>
+
+#if ENABLE(DFG_JIT)
 
 namespace JSC { namespace DFG {
 
@@ -131,3 +132,28 @@ void printInternal(PrintStream& out, ProofStatus status)
 
 #endif // ENABLE(DFG_JIT)
 
+namespace WTF {
+
+using namespace JSC::DFG;
+
+void printInternal(PrintStream& out, CapabilityLevel capabilityLevel)
+{
+    switch (capabilityLevel) {
+    case CannotCompile:
+        out.print("CannotCompile");
+        return;
+    case CanCompile:
+        out.print("CanCompile");
+        return;
+    case CanCompileAndInline:
+        out.print("CanCompileAndInline");
+        return;
+    case CapabilityLevelNotSet:
+        out.print("CapabilityLevelNotSet");
+        return;
+    }
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
+} // namespace WTF
+
index 68e7a41..e91274d 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011-2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -292,7 +292,6 @@ namespace JSC { namespace DFG {
 
 enum CapabilityLevel {
     CannotCompile,
-    CanInline,
     CanCompile,
     CanCompileAndInline,
     CapabilityLevelNotSet
@@ -312,7 +311,6 @@ inline bool canCompile(CapabilityLevel level)
 inline bool canInline(CapabilityLevel level)
 {
     switch (level) {
-    case CanInline:
     case CanCompileAndInline:
         return true;
     default:
@@ -325,14 +323,6 @@ inline CapabilityLevel leastUpperBound(CapabilityLevel a, CapabilityLevel b)
     switch (a) {
     case CannotCompile:
         return CannotCompile;
-    case CanInline:
-        switch (b) {
-        case CanInline:
-        case CanCompileAndInline:
-            return CanInline;
-        default:
-            return CannotCompile;
-        }
     case CanCompile:
         switch (b) {
         case CanCompile:
@@ -364,5 +354,11 @@ inline bool shouldShowDisassembly(CompilationMode mode = DFGMode)
 
 } } // namespace JSC::DFG
 
+namespace WTF {
+
+void printInternal(PrintStream&, JSC::DFG::CapabilityLevel);
+
+} // namespace WTF
+
 #endif // DFGCommon_h
 
index 61aa9cb..b0b7896 100644 (file)
@@ -117,6 +117,10 @@ bool doesGC(Graph& graph, Node* node)
     case CompareStrictEq:
     case Call:
     case Construct:
+    case CallVarargs:
+    case ConstructVarargs:
+    case LoadVarargs:
+    case CallForwardVarargs:
     case NativeCall:
     case NativeConstruct:
     case Breakpoint:
index 0dc3bf2..0c40677 100644 (file)
@@ -1214,6 +1214,10 @@ private:
         case AllocationProfileWatchpoint:
         case Call:
         case Construct:
+        case CallVarargs:
+        case ConstructVarargs:
+        case CallForwardVarargs:
+        case LoadVarargs:
         case ProfileControlFlow:
         case NativeCall:
         case NativeConstruct:
index 8b70cc9..7c9f804 100644 (file)
@@ -313,6 +313,18 @@ void Graph::dump(PrintStream& out, const char* prefix, Node* node, DumpContext*
         out.print(comma, RawPointer(node->storagePointer()));
     if (node->hasObjectMaterializationData())
         out.print(comma, node->objectMaterializationData());
+    if (node->hasCallVarargsData())
+        out.print(comma, "firstVarArgOffset = ", node->callVarargsData()->firstVarArgOffset);
+    if (node->hasLoadVarargsData()) {
+        LoadVarargsData* data = node->loadVarargsData();
+        out.print(comma, "start = ", data->start, ", count = ", data->count);
+        if (data->machineStart.isValid())
+            out.print(", machineStart = ", data->machineStart);
+        if (data->machineCount.isValid())
+            out.print(", machineCount = ", data->machineCount);
+        out.print(", offset = ", data->offset, ", mandatoryMinimum = ", data->mandatoryMinimum);
+        out.print(", limit = ", data->limit);
+    }
     if (node->isConstant())
         out.print(comma, pointerDumpInContext(node->constant(), context));
     if (node->isJump())
@@ -400,7 +412,7 @@ void Graph::dumpBlockHeader(PrintStream& out, const char* prefix, BasicBlock* bl
             Node* phiNode = block->phis[i];
             if (!phiNode->shouldGenerate() && phiNodeDumpMode == DumpLivePhisOnly)
                 continue;
-            out.print(" @", phiNode->index(), "<", phiNode->refCount(), ">->(");
+            out.print(" @", phiNode->index(), "<", phiNode->local(), ",", phiNode->refCount(), ">->(");
             if (phiNode->child1()) {
                 out.print("@", phiNode->child1()->index());
                 if (phiNode->child2()) {
@@ -869,10 +881,12 @@ bool Graph::isLiveInBytecode(VirtualRegister operand, CodeOrigin codeOrigin)
             if (reg.isArgument()) {
                 RELEASE_ASSERT(reg.offset() < JSStack::CallFrameHeaderSize);
                 
-                if (!codeOrigin.inlineCallFrame->isClosureCall)
-                    return false;
+                if (codeOrigin.inlineCallFrame->isClosureCall
+                    && reg.offset() == JSStack::Callee)
+                    return true;
                 
-                if (reg.offset() == JSStack::Callee)
+                if (codeOrigin.inlineCallFrame->isVarargs()
+                    && reg.offset() == JSStack::ArgumentCount)
                     return true;
                 
                 return false;
@@ -1235,6 +1249,51 @@ void Graph::handleAssertionFailure(
     crash(*this, toCString("While handling block ", pointerDump(block), "\n\n"), file, line, function, assertion);
 }
 
+ValueProfile* Graph::valueProfileFor(Node* node)
+{
+    if (!node)
+        return nullptr;
+        
+    CodeBlock* profiledBlock = baselineCodeBlockFor(node->origin.semantic);
+        
+    if (node->hasLocal(*this)) {
+        if (!node->local().isArgument())
+            return nullptr;
+        int argument = node->local().toArgument();
+        Node* argumentNode = m_arguments[argument];
+        if (!argumentNode)
+            return nullptr;
+        if (node->variableAccessData() != argumentNode->variableAccessData())
+            return nullptr;
+        return profiledBlock->valueProfileForArgument(argument);
+    }
+        
+    if (node->hasHeapPrediction())
+        return profiledBlock->valueProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
+        
+    return nullptr;
+}
+
+MethodOfGettingAValueProfile Graph::methodOfGettingAValueProfileFor(Node* node)
+{
+    if (!node)
+        return MethodOfGettingAValueProfile();
+    
+    if (ValueProfile* valueProfile = valueProfileFor(node))
+        return MethodOfGettingAValueProfile(valueProfile);
+    
+    if (node->op() == GetLocal) {
+        CodeBlock* profiledBlock = baselineCodeBlockFor(node->origin.semantic);
+        
+        return MethodOfGettingAValueProfile::fromLazyOperand(
+            profiledBlock,
+            LazyOperandValueProfileKey(
+                node->origin.semantic.bytecodeIndex, node->local()));
+    }
+    
+    return MethodOfGettingAValueProfile();
+}
+
 } } // namespace JSC::DFG
 
 #endif // ENABLE(DFG_JIT)
index 5898b80..ba0934c 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -480,50 +480,8 @@ public:
         return m_profiledBlock->uncheckedActivationRegister();
     }
     
-    ValueProfile* valueProfileFor(Node* node)
-    {
-        if (!node)
-            return nullptr;
-        
-        CodeBlock* profiledBlock = baselineCodeBlockFor(node->origin.semantic);
-        
-        if (node->hasLocal(*this)) {
-            if (!node->local().isArgument())
-                return 0;
-            int argument = node->local().toArgument();
-            Node* argumentNode = m_arguments[argument];
-            if (!argumentNode)
-                return nullptr;
-            if (node->variableAccessData() != argumentNode->variableAccessData())
-                return nullptr;
-            return profiledBlock->valueProfileForArgument(argument);
-        }
-        
-        if (node->hasHeapPrediction())
-            return profiledBlock->valueProfileForBytecodeOffset(node->origin.semantic.bytecodeIndex);
-        
-        return 0;
-    }
-    
-    MethodOfGettingAValueProfile methodOfGettingAValueProfileFor(Node* node)
-    {
-        if (!node)
-            return MethodOfGettingAValueProfile();
-        
-        if (ValueProfile* valueProfile = valueProfileFor(node))
-            return MethodOfGettingAValueProfile(valueProfile);
-        
-        if (node->op() == GetLocal) {
-            CodeBlock* profiledBlock = baselineCodeBlockFor(node->origin.semantic);
-        
-            return MethodOfGettingAValueProfile::fromLazyOperand(
-                profiledBlock,
-                LazyOperandValueProfileKey(
-                    node->origin.semantic.bytecodeIndex, node->local()));
-        }
-        
-        return MethodOfGettingAValueProfile();
-    }
+    ValueProfile* valueProfileFor(Node*);
+    MethodOfGettingAValueProfile methodOfGettingAValueProfileFor(Node*);
     
     bool usesArguments() const
     {
@@ -861,6 +819,8 @@ public:
     Bag<MultiGetByOffsetData> m_multiGetByOffsetData;
     Bag<MultiPutByOffsetData> m_multiPutByOffsetData;
     Bag<ObjectMaterializationData> m_objectMaterializationData;
+    Bag<CallVarargsData> m_callVarargsData;
+    Bag<LoadVarargsData> m_loadVarargsData;
     Vector<InlineVariableData, 4> m_inlineVariableData;
     HashMap<CodeBlock*, std::unique_ptr<FullBytecodeLiveness>> m_bytecodeLiveness;
     bool m_hasArguments;
index 1379e51..fb1cdee 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -122,6 +122,7 @@ void JITCompiler::compileExceptionHandlers()
         // lookupExceptionHandlerFromCallerFrame is passed two arguments, the VM and the exec (the CallFrame*).
         move(TrustedImmPtr(vm()), GPRInfo::argumentGPR0);
         move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
+        addPtr(TrustedImm32(m_graph.stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, stackPointerRegister);
 
 #if CPU(X86)
         // FIXME: should use the call abstraction, but this is currently in the SpeculativeJIT layer!
@@ -247,7 +248,7 @@ void JITCompiler::link(LinkBuffer& linkBuffer)
         JSCallRecord& record = m_jsCalls[i];
         CallLinkInfo& info = *record.m_info;
         ThunkGenerator generator = linkThunkGeneratorFor(
-            info.callType == CallLinkInfo::Construct ? CodeForConstruct : CodeForCall,
+            info.specializationKind(),
             RegisterPreservationNotRequired);
         linkBuffer.link(record.m_slowCall, FunctionPtr(m_vm->getCTIStub(generator).code().executableAddress()));
         info.callReturnLocation = linkBuffer.locationOfNearCall(record.m_slowCall);
index 559fff2..e77b7c3 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -84,6 +84,7 @@ bool mayExit(Graph& graph, Node* node)
     case GetCallee:
     case GetScope:
     case PhantomLocal:
+    case CountExecution:
         break;
         
     default:
index 2718b7b..7d083ce 100644 (file)
@@ -183,6 +183,20 @@ struct SwitchData {
     bool didUseJumpTable;
 };
 
+struct CallVarargsData {
+    int firstVarArgOffset;
+};
+
+struct LoadVarargsData {
+    VirtualRegister start; // Local for the first element.
+    VirtualRegister count; // Local for the count.
+    VirtualRegister machineStart;
+    VirtualRegister machineCount;
+    unsigned offset; // Which array element to start with. Usually this is 0.
+    unsigned mandatoryMinimum; // The number of elements on the stack that must be initialized; if the array is too short then the missing elements must get undefined. Does not include "this".
+    unsigned limit; // Maximum number of elements to load. Includes "this".
+};
+
 // This type used in passing an immediate argument to Node constructor;
 // distinguishes an immediate value (typically an index into a CodeBlock data structure - 
 // a constant index, argument, or identifier) from a Node*.
@@ -895,6 +909,35 @@ struct Node {
         return bitwise_cast<WriteBarrier<Unknown>*>(m_opInfo);
     }
     
+    bool hasCallVarargsData()
+    {
+        switch (op()) {
+        case CallVarargs:
+        case CallForwardVarargs:
+        case ConstructVarargs:
+            return true;
+        default:
+            return false;
+        }
+    }
+    
+    CallVarargsData* callVarargsData()
+    {
+        ASSERT(hasCallVarargsData());
+        return bitwise_cast<CallVarargsData*>(m_opInfo);
+    }
+    
+    bool hasLoadVarargsData()
+    {
+        return op() == LoadVarargs;
+    }
+    
+    LoadVarargsData* loadVarargsData()
+    {
+        ASSERT(hasLoadVarargsData());
+        return bitwise_cast<LoadVarargsData*>(m_opInfo);
+    }
+    
     bool hasResult()
     {
         return !!result();
@@ -1049,6 +1092,9 @@ struct Node {
         case GetMyArgumentByValSafe:
         case Call:
         case Construct:
+        case CallVarargs:
+        case ConstructVarargs:
+        case CallForwardVarargs:
         case NativeCall:
         case NativeConstruct:
         case GetByOffset:
index 4e8e5e6..e003137 100644 (file)
@@ -147,6 +147,7 @@ namespace JSC { namespace DFG {
     /* this must be the directly subsequent property put. Note that PutByVal */\
     /* opcodes use VarArgs because they may have up to 4 children. */\
     macro(GetByVal, NodeResultJS | NodeMustGenerate) \
+    macro(LoadVarargs, NodeMustGenerate) \
     macro(PutByValDirect, NodeMustGenerate | NodeHasVarArgs) \
     macro(PutByVal, NodeMustGenerate | NodeHasVarArgs) \
     macro(PutByValAlias, NodeMustGenerate | NodeHasVarArgs) \
@@ -217,6 +218,9 @@ namespace JSC { namespace DFG {
     /* Calls. */\
     macro(Call, NodeResultJS | NodeMustGenerate | NodeHasVarArgs) \
     macro(Construct, NodeResultJS | NodeMustGenerate | NodeHasVarArgs) \
+    macro(CallVarargs, NodeResultJS | NodeMustGenerate) \
+    macro(CallForwardVarargs, NodeResultJS | NodeMustGenerate) \
+    macro(ConstructVarargs, NodeResultJS | NodeMustGenerate) \
     macro(NativeCall, NodeResultJS | NodeMustGenerate | NodeHasVarArgs) \
     macro(NativeConstruct, NodeResultJS | NodeMustGenerate | NodeHasVarArgs) \
     \
index 85b6f3f..41c6d24 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -156,6 +156,17 @@ void LocalOSRAvailabilityCalculator::executeNode(Node* node)
         break;
     }
         
+    case LoadVarargs: {
+        LoadVarargsData* data = node->loadVarargsData();
+        m_availability.m_locals.operand(data->count) =
+            Availability(FlushedAt(FlushedInt32, data->machineCount));
+        for (unsigned i = data->limit; i--;) {
+            m_availability.m_locals.operand(VirtualRegister(data->start.offset() + i)) =
+                Availability(FlushedAt(FlushedJSValue, VirtualRegister(data->machineStart.offset() + i)));
+        }
+        break;
+    }
+        
     default:
         break;
     }
index be3e2c9..2d690bf 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -152,7 +152,9 @@ void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
         
         switch (inlineCallFrame->kind) {
         case InlineCallFrame::Call:
-        case InlineCallFrame::Construct: {
+        case InlineCallFrame::Construct:
+        case InlineCallFrame::CallVarargs:
+        case InlineCallFrame::ConstructVarargs: {
             CallLinkInfo* callLinkInfo =
                 baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
             RELEASE_ASSERT(callLinkInfo);
@@ -195,12 +197,13 @@ void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
         if (trueReturnPC)
             jit.storePtr(AssemblyHelpers::TrustedImmPtr(trueReturnPC), AssemblyHelpers::addressFor(inlineCallFrame->stackOffset + virtualRegisterForArgument(inlineCallFrame->arguments.size()).offset()));
                          
-#if USE(JSVALUE64)
         jit.storePtr(AssemblyHelpers::TrustedImmPtr(baselineCodeBlock), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::CodeBlock)));
+        if (!inlineCallFrame->isVarargs())
+            jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->arguments.size()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
+#if USE(JSVALUE64)
         jit.store64(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
         uint32_t locationBits = CallFrame::Location::encodeAsBytecodeOffset(codeOrigin.bytecodeIndex);
         jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
-        jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->arguments.size()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
         if (!inlineCallFrame->isClosureCall)
             jit.store64(AssemblyHelpers::TrustedImm64(JSValue::encode(JSValue(inlineCallFrame->calleeConstant()))), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::Callee)));
         
@@ -208,12 +211,10 @@ void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
         if (baselineCodeBlock->usesArguments())
             jit.loadPtr(AssemblyHelpers::addressFor(VirtualRegister(inlineCallFrame->stackOffset + unmodifiedArgumentsRegister(baselineCodeBlock->argumentsRegister()).offset())), GPRInfo::regT3);
 #else // USE(JSVALUE64) // so this is the 32-bit part
-        jit.storePtr(AssemblyHelpers::TrustedImmPtr(baselineCodeBlock), AssemblyHelpers::addressFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::CodeBlock)));
         jit.storePtr(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
         Instruction* instruction = baselineCodeBlock->instructions().begin() + codeOrigin.bytecodeIndex;
         uint32_t locationBits = CallFrame::Location::encodeAsBytecodeInstruction(instruction);
         jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
-        jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->arguments.size()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::ArgumentCount)));
         jit.store32(AssemblyHelpers::TrustedImm32(JSValue::CellTag), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::Callee)));
         if (!inlineCallFrame->isClosureCall)
             jit.storePtr(AssemblyHelpers::TrustedImmPtr(inlineCallFrame->calleeConstant()), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + JSStack::Callee)));
index 33641e9..9c46fc9 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -1018,6 +1018,27 @@ void JIT_OPERATION operationNotifyWrite(ExecState* exec, VariableWatchpointSet*
     set->notifyWrite(vm, value, "Executed NotifyWrite");
 }
 
+int32_t JIT_OPERATION operationSizeOfVarargs(ExecState* exec, EncodedJSValue encodedArguments, int32_t firstVarArgOffset)
+{
+    VM& vm = exec->vm();
+    NativeCallFrameTracer tracer(&vm, exec);
+    JSValue arguments = JSValue::decode(encodedArguments);
+    
+    return sizeOfVarargs(exec, arguments, firstVarArgOffset);
+}
+
+void JIT_OPERATION operationLoadVarargs(ExecState* exec, int32_t firstElementDest, EncodedJSValue encodedArguments, int32_t offset, int32_t length, int32_t mandatoryMinimum)
+{
+    VM& vm = exec->vm();
+    NativeCallFrameTracer tracer(&vm, exec);
+    JSValue arguments = JSValue::decode(encodedArguments);
+    
+    loadVarargs(exec, VirtualRegister(firstElementDest), arguments, offset, length);
+    
+    for (int32_t i = length; i < mandatoryMinimum; ++i)
+        exec->r(firstElementDest + i) = jsUndefined();
+}
+
 double JIT_OPERATION operationFModOnInts(int32_t a, int32_t b)
 {
     return fmod(a, b);
index 2ae4687..78574e1 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -125,6 +125,8 @@ JSCell* JIT_OPERATION operationMakeRope3(ExecState*, JSString*, JSString*, JSStr
 char* JIT_OPERATION operationFindSwitchImmTargetForDouble(ExecState*, EncodedJSValue, size_t tableIndex);
 char* JIT_OPERATION operationSwitchString(ExecState*, size_t tableIndex, JSString*);
 void JIT_OPERATION operationNotifyWrite(ExecState*, VariableWatchpointSet*, EncodedJSValue);
+int32_t JIT_OPERATION operationSizeOfVarargs(ExecState*, EncodedJSValue arguments, int32_t firstVarArgOffset);
+void JIT_OPERATION operationLoadVarargs(ExecState*, int32_t firstElementDest, EncodedJSValue arguments, int32_t offset, int32_t length, int32_t mandatoryMinimum);
 
 int64_t JIT_OPERATION operationConvertBoxedDoubleToInt52(EncodedJSValue);
 int64_t JIT_OPERATION operationConvertDoubleToInt52(double);
index 32deaa1..383128a 100644
 
 namespace JSC { namespace DFG {
 
-static void dumpAndVerifyGraph(Graph& graph, const char* text)
+static void dumpAndVerifyGraph(Graph& graph, const char* text, bool forceDump = false)
 {
     GraphDumpMode modeForFinalValidate = DumpGraph;
-    if (verboseCompilationEnabled(graph.m_plan.mode)) {
+    if (verboseCompilationEnabled(graph.m_plan.mode) || forceDump) {
         dataLog(text, "\n");
         graph.dump();
         modeForFinalValidate = DontDumpGraph;
@@ -369,7 +369,7 @@ Plan::CompilationPath Plan::compileInThreadImpl(LongLivedState& longLivedState)
             return FailPath;
         }
 
-        dumpAndVerifyGraph(dfg, "Graph just before FTL lowering:");
+        dumpAndVerifyGraph(dfg, "Graph just before FTL lowering:", shouldShowDisassembly(mode));
         
         bool haveLLVM;
         Safepoint::Result safepointResult;
index 15f86dc..cbda2c4 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -133,11 +133,25 @@ private:
                 m_read(VirtualRegister(inlineCallFrame->stackOffset + virtualRegisterForArgument(i).offset()));
             if (inlineCallFrame->isClosureCall)
                 m_read(VirtualRegister(inlineCallFrame->stackOffset + JSStack::Callee));
+            if (inlineCallFrame->isVarargs())
+                m_read(VirtualRegister(inlineCallFrame->stackOffset + JSStack::ArgumentCount));
         }
     }
     
     void writeTop()
     {
+        if (m_node->op() == LoadVarargs) {
+            // Make sure we note the writes to the locals that will store the array elements and
+            // count.
+            LoadVarargsData* data = m_node->loadVarargsData();
+            m_write(data->count);
+            for (unsigned i = data->limit; i--;)
+                m_write(VirtualRegister(data->start.offset() + i));
+        }
+        
+        // Note that we don't need to do anything special for CallForwardVarargs, since it reads
+        // our arguments the same way that any effectful thing might.
+        
         if (m_graph.m_codeBlock->usesArguments()) {
             for (unsigned i = m_graph.m_codeBlock->numParameters(); i-- > 1;)
                 m_write(virtualRegisterForArgument(i));
index 5126299..5a5cec5 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -188,6 +188,9 @@ private:
         case GetDirectPname:
         case Call:
         case Construct:
+        case CallVarargs:
+        case ConstructVarargs:
+        case CallForwardVarargs:
         case NativeCall:
         case NativeConstruct:
         case GetGlobalVar:
@@ -635,6 +638,7 @@ private:
         case ConstantStoragePointer:
         case MovHint:
         case ZombieHint:
+        case LoadVarargs:
             break;
             
         // This gets ignored because it only pretends to produce a value.
index dd5835f..cc0a06b 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
index a8c85f0..9199e54 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -189,6 +189,10 @@ bool safeToExecute(AbstractStateType& state, Graph& graph, Node* node)
     case CompareStrictEq:
     case Call:
     case Construct:
+    case CallVarargs:
+    case ConstructVarargs:
+    case LoadVarargs:
+    case CallForwardVarargs:
     case NewObject:
     case NewArray:
     case NewArrayWithSize:
index 33c777e..86c836d 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -578,7 +578,6 @@ public:
         }
     }
 
-#ifndef NDEBUG
     // Used to ASSERT flushRegisters() has been called prior to
     // calling out from JIT code to a C helper function.
     bool isFlushed()
@@ -593,7 +592,6 @@ public:
         }
         return true;
     }
-#endif
 
 #if USE(JSVALUE64)
     static MacroAssembler::Imm64 valueOfJSConstantAsImm64(Node* node)
@@ -1454,6 +1452,26 @@ public:
         return appendCallWithExceptionCheckSetResult(operation, result);
     }
 
+    JITCompiler::Call callOperation(Z_JITOperation_EJZZ operation, GPRReg result, GPRReg arg1, unsigned arg2, unsigned arg3)
+    {
+        m_jit.setupArgumentsWithExecState(arg1, TrustedImm32(arg2), TrustedImm32(arg3));
+        return appendCallWithExceptionCheckSetResult(operation, result);
+    }
+    JITCompiler::Call callOperation(F_JITOperation_EFJZZ operation, GPRReg result, GPRReg arg1, GPRReg arg2, unsigned arg3, GPRReg arg4)
+    {
+        m_jit.setupArgumentsWithExecState(arg1, arg2, TrustedImm32(arg3), arg4);
+        return appendCallWithExceptionCheckSetResult(operation, result);
+    }
+    JITCompiler::Call callOperation(Z_JITOperation_EJZ operation, GPRReg result, GPRReg arg1, unsigned arg2)
+    {
+        m_jit.setupArgumentsWithExecState(arg1, TrustedImm32(arg2));
+        return appendCallWithExceptionCheckSetResult(operation, result);
+    }
+    JITCompiler::Call callOperation(V_JITOperation_EZJZZZ operation, unsigned arg1, GPRReg arg2, unsigned arg3, GPRReg arg4, unsigned arg5)
+    {
+        m_jit.setupArgumentsWithExecState(TrustedImm32(arg1), arg2, TrustedImm32(arg3), arg4, TrustedImm32(arg5));
+        return appendCallWithExceptionCheck(operation);
+    }
 #else // USE(JSVALUE32_64)
 
 // EncodedJSValue in JSVALUE32_64 is a 64-bit integer. When being compiled in ARM EABI, it must be aligned even-numbered register (r0, r2 or [sp]).
@@ -1750,6 +1768,26 @@ public:
         return appendCallWithExceptionCheckSetResult(operation, result);
     }
 
+    JITCompiler::Call callOperation(Z_JITOperation_EJZZ operation, GPRReg result, GPRReg arg1Tag, GPRReg arg1Payload, unsigned arg2, unsigned arg3)
+    {
+        m_jit.setupArgumentsWithExecState(arg1Payload, arg1Tag, TrustedImm32(arg2), TrustedImm32(arg3));
+        return appendCallWithExceptionCheckSetResult(operation, result);
+    }
+    JITCompiler::Call callOperation(F_JITOperation_EFJZZ operation, GPRReg result, GPRReg arg1, GPRReg arg2Tag, GPRReg arg2Payload, unsigned arg3, GPRReg arg4)
+    {
+        m_jit.setupArgumentsWithExecState(arg1, arg2Payload, arg2Tag, TrustedImm32(arg3), arg4);
+        return appendCallWithExceptionCheckSetResult(operation, result);
+    }
+    JITCompiler::Call callOperation(Z_JITOperation_EJZ operation, GPRReg result, GPRReg arg1Tag, GPRReg arg1Payload, unsigned arg2)
+    {
+        m_jit.setupArgumentsWithExecState(arg1Payload, arg1Tag, TrustedImm32(arg2));
+        return appendCallWithExceptionCheckSetResult(operation, result);
+    }
+    JITCompiler::Call callOperation(V_JITOperation_EZJZZZ operation, unsigned arg1, GPRReg arg2Tag, GPRReg arg2Payload, unsigned arg3, GPRReg arg4, unsigned arg5)
+    {
+        m_jit.setupArgumentsWithExecState(TrustedImm32(arg1), arg2Payload, arg2Tag, TrustedImm32(arg3), arg4, TrustedImm32(arg5));
+        return appendCallWithExceptionCheck(operation);
+    }
 #undef EABI_32BIT_DUMMY_ARG
 #undef SH4_32BIT_DUMMY_ARG
     
index 1e720ec..62b1836 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
  * Copyright (C) 2011 Intel Corporation. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -40,6 +40,7 @@
 #include "JSPropertyNameEnumerator.h"
 #include "ObjectPrototype.h"
 #include "JSCInlines.h"
+#include "SetupVarargsFrame.h"
 #include "TypeProfilerLog.h"
 
 namespace JSC { namespace DFG {
@@ -638,35 +639,171 @@ void SpeculativeJIT::compileMiscStrictEq(Node* node)
 
 void SpeculativeJIT::emitCall(Node* node)
 {
-    bool isCall = node->op() == Call;
-    if (!isCall)
-        ASSERT(node->op() == Construct);
-
-    // For constructors, the this argument is not passed but we have to make space
-    // for it.
-    int dummyThisArgument = isCall ? 0 : 1;
-
-    CallLinkInfo::CallType callType = isCall ? CallLinkInfo::Call : CallLinkInfo::Construct;
-
-    Edge calleeEdge = m_jit.graph().m_varArgChildren[node->firstChild()];
+    CallLinkInfo::CallType callType;
+    bool isCall;
+    bool isVarargs;
+    switch (node->op()) {
+    case Call:
+        callType = CallLinkInfo::Call;
+        isCall = true;
+        isVarargs = false;
+        break;
+    case Construct:
+        callType = CallLinkInfo::Construct;
+        isCall = false;
+        isVarargs = false;
+        break;
+    case CallVarargs:
+    case CallForwardVarargs:
+        callType = CallLinkInfo::CallVarargs;
+        isCall = true;
+        isVarargs = true;
+        break;
+    case ConstructVarargs:
+        callType = CallLinkInfo::ConstructVarargs;
+        isCall = false;
+        isVarargs = true;
+        break;
+    default:
+        DFG_CRASH(m_jit.graph(), node, "bad node type");
+        break;
+    }
 
-    // The call instruction's first child is either the function (normal call) or the
-    // receiver (method call). subsequent children are the arguments.
-    int numPassedArgs = node->numChildren() - 1;
+    Edge calleeEdge = m_jit.graph().child(node, 0);
     
-    int numArgs = numPassedArgs + dummyThisArgument;
-
-    m_jit.store32(MacroAssembler::TrustedImm32(numArgs), m_jit.calleeFramePayloadSlot(JSStack::ArgumentCount));
-
-    for (int i = 0; i < numPassedArgs; i++) {
-        Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
-        JSValueOperand arg(this, argEdge);
-        GPRReg argTagGPR = arg.tagGPR();
-        GPRReg argPayloadGPR = arg.payloadGPR();
-        use(argEdge);
+    // Gotta load the arguments somehow. Varargs is trickier.
+    if (isVarargs) {
+        CallVarargsData* data = node->callVarargsData();
+
+        GPRReg argumentsPayloadGPR;
+        GPRReg argumentsTagGPR;
+        GPRReg scratchGPR1;
+        GPRReg scratchGPR2;
+        GPRReg scratchGPR3;
+        
+        if (node->op() == CallForwardVarargs) {
+            // We avoid calling flushRegisters() inside the control flow of CallForwardVarargs.
+            flushRegisters();
+        }
+        
+        auto loadArgumentsGPR = [&] (GPRReg reservedGPR) {
+            if (node->op() == CallForwardVarargs) {
+                argumentsTagGPR = JITCompiler::selectScratchGPR(reservedGPR);
+                argumentsPayloadGPR = JITCompiler::selectScratchGPR(reservedGPR, argumentsTagGPR);
+                m_jit.load32(
+                    JITCompiler::tagFor(
+                        m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic)),
+                    argumentsTagGPR);
+                m_jit.load32(
+                    JITCompiler::payloadFor(
+                        m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic)),
+                    argumentsPayloadGPR);
+            } else {
+                if (reservedGPR != InvalidGPRReg)
+                    lock(reservedGPR);
+                JSValueOperand arguments(this, node->child2());
+                argumentsTagGPR = arguments.tagGPR();
+                argumentsPayloadGPR = arguments.payloadGPR();
+                if (reservedGPR != InvalidGPRReg)
+                    unlock(reservedGPR);
+                flushRegisters();
+            }
+            
+            scratchGPR1 = JITCompiler::selectScratchGPR(argumentsPayloadGPR, argumentsTagGPR, reservedGPR);
+            scratchGPR2 = JITCompiler::selectScratchGPR(argumentsPayloadGPR, argumentsTagGPR, scratchGPR1, reservedGPR);
+            scratchGPR3 = JITCompiler::selectScratchGPR(argumentsPayloadGPR, argumentsTagGPR, scratchGPR1, scratchGPR2, reservedGPR);
+        };
+        
+        loadArgumentsGPR(InvalidGPRReg);
+        
+        // At this point we have the whole register file to ourselves, and the arguments value is
+        // in argumentsTagGPR/argumentsPayloadGPR. Select some scratch registers.
+        
+        // We will use scratchGPR2 to point to our stack frame.
+        
+        unsigned numUsedStackSlots = m_jit.graph().m_nextMachineLocal;
+        
+        JITCompiler::Jump haveArguments;
+        GPRReg resultGPR = GPRInfo::regT0;
+        if (node->op() == CallForwardVarargs) {
+            // Do the horrific foo.apply(this, arguments) optimization.
+            // FIXME: do this optimization at the IR level instead of dynamically by testing the
+            // arguments register. This will happen once we get rid of the arguments lazy creation and
+            // lazy tear-off.
+            
+            JITCompiler::JumpList slowCase;
+            slowCase.append(
+                m_jit.branch32(
+                    JITCompiler::NotEqual,
+                    argumentsTagGPR, TrustedImm32(JSValue::EmptyValueTag)));
+            
+            m_jit.move(TrustedImm32(numUsedStackSlots), scratchGPR2);
+            emitSetupVarargsFrameFastCase(m_jit, scratchGPR2, scratchGPR1, scratchGPR2, scratchGPR3, node->origin.semantic.inlineCallFrame, data->firstVarArgOffset, slowCase);
+            resultGPR = scratchGPR2;
+            
+            haveArguments = m_jit.jump();
+            slowCase.link(&m_jit);
+        }
 
-        m_jit.store32(argTagGPR, m_jit.calleeArgumentTagSlot(i + dummyThisArgument));
-        m_jit.store32(argPayloadGPR, m_jit.calleeArgumentPayloadSlot(i + dummyThisArgument));
+        DFG_ASSERT(m_jit.graph(), node, isFlushed());
+        
+        // Right now, arguments is in argumentsTagGPR/argumentsPayloadGPR and the register file is
+        // flushed.
+        callOperation(operationSizeFrameForVarargs, GPRInfo::returnValueGPR, argumentsTagGPR, argumentsPayloadGPR, numUsedStackSlots, data->firstVarArgOffset);
+        
+        // Now we have the argument count of the callee frame, but we've lost the arguments operand.
+        // Reconstruct the arguments operand while preserving the callee frame.
+        loadArgumentsGPR(GPRInfo::returnValueGPR);
+        m_jit.move(TrustedImm32(numUsedStackSlots), scratchGPR1);
+        emitSetVarargsFrame(m_jit, GPRInfo::returnValueGPR, false, scratchGPR1, scratchGPR1);
+        m_jit.addPtr(TrustedImm32(-(sizeof(CallerFrameAndPC) + WTF::roundUpToMultipleOf(stackAlignmentBytes(), 6 * sizeof(void*)))), scratchGPR1, JITCompiler::stackPointerRegister);
+        
+        callOperation(operationSetupVarargsFrame, GPRInfo::returnValueGPR, scratchGPR1, argumentsTagGPR, argumentsPayloadGPR, data->firstVarArgOffset, GPRInfo::returnValueGPR);
+        m_jit.move(GPRInfo::returnValueGPR, resultGPR);
+        
+        if (node->op() == CallForwardVarargs)
+            haveArguments.link(&m_jit);
+        
+        m_jit.addPtr(TrustedImm32(sizeof(CallerFrameAndPC)), resultGPR, JITCompiler::stackPointerRegister);
+        
+        DFG_ASSERT(m_jit.graph(), node, isFlushed());
+        
+        if (node->op() != CallForwardVarargs)
+            use(node->child2());
+        
+        if (isCall) {
+            // Now set up the "this" argument.
+            JSValueOperand thisArgument(this, node->op() == CallForwardVarargs ? node->child2() : node->child3());
+            GPRReg thisArgumentTagGPR = thisArgument.tagGPR();
+            GPRReg thisArgumentPayloadGPR = thisArgument.payloadGPR();
+            thisArgument.use();
+            
+            m_jit.store32(thisArgumentTagGPR, JITCompiler::calleeArgumentTagSlot(0));
+            m_jit.store32(thisArgumentPayloadGPR, JITCompiler::calleeArgumentPayloadSlot(0));
+        }
+    } else {
+        // For constructors, the this argument is not passed, but we have to make space
+        // for it.
+        int dummyThisArgument = isCall ? 0 : 1;
+        
+        // The call instruction's first child is either the function (normal call) or the
+        // receiver (method call). Subsequent children are the arguments.
+        int numPassedArgs = node->numChildren() - 1;
+        
+        int numArgs = numPassedArgs + dummyThisArgument;
+        
+        m_jit.store32(MacroAssembler::TrustedImm32(numArgs), m_jit.calleeFramePayloadSlot(JSStack::ArgumentCount));
+        
+        for (int i = 0; i < numPassedArgs; i++) {
+            Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
+            JSValueOperand arg(this, argEdge);
+            GPRReg argTagGPR = arg.tagGPR();
+            GPRReg argPayloadGPR = arg.payloadGPR();
+            use(argEdge);
+            
+            m_jit.store32(argTagGPR, m_jit.calleeArgumentTagSlot(i + dummyThisArgument));
+            m_jit.store32(argPayloadGPR, m_jit.calleeArgumentPayloadSlot(i + dummyThisArgument));
+        }
     }
 
     JSValueOperand callee(this, calleeEdge);
@@ -724,6 +861,10 @@ void SpeculativeJIT::emitCall(Node* node)
     info->codeOrigin = node->origin.semantic;
     info->calleeGPR = calleePayloadGPR;
     m_jit.addJSCall(fastCall, slowCall, targetToCheck, info);
+    
+    // If we were varargs, then after the calls are done, we need to reestablish our stack pointer.
+    if (isVarargs)
+        m_jit.addPtr(TrustedImm32(m_jit.graph().stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, JITCompiler::stackPointerRegister);
 }
 
 template<bool strict>
@@ -4156,9 +4297,59 @@ void SpeculativeJIT::compile(Node* node)
 
     case Call:
     case Construct:
+    case CallVarargs:
+    case CallForwardVarargs:
+    case ConstructVarargs:
         emitCall(node);
         break;
 
+    case LoadVarargs: {
+        LoadVarargsData* data = node->loadVarargsData();
+        
+        GPRReg argumentsTagGPR;
+        GPRReg argumentsPayloadGPR;
+        {
+            JSValueOperand arguments(this, node->child1());
+            argumentsTagGPR = arguments.tagGPR();
+            argumentsPayloadGPR = arguments.payloadGPR();
+            flushRegisters();
+        }
+        
+        callOperation(operationSizeOfVarargs, GPRInfo::returnValueGPR, argumentsTagGPR, argumentsPayloadGPR, data->offset);
+        
+        lock(GPRInfo::returnValueGPR);
+        {
+            JSValueOperand arguments(this, node->child1());
+            argumentsTagGPR = arguments.tagGPR();
+            argumentsPayloadGPR = arguments.payloadGPR();
+            flushRegisters();
+        }
+        unlock(GPRInfo::returnValueGPR);
+        
+        // FIXME: There is a chance that we will call an effectful length property twice. This is safe
+        // from the standpoint of the VM's integrity, but it's subtly wrong from a spec compliance
+        // standpoint. The best solution would be one where we can exit *into* the op_call_varargs right
+        // past the sizing.
+        // https://bugs.webkit.org/show_bug.cgi?id=141448
+
+        GPRReg argCountIncludingThisGPR =
+            JITCompiler::selectScratchGPR(GPRInfo::returnValueGPR, argumentsTagGPR, argumentsPayloadGPR);
+        
+        m_jit.add32(TrustedImm32(1), GPRInfo::returnValueGPR, argCountIncludingThisGPR);
+        speculationCheck(
+            VarargsOverflow, JSValueSource(), Edge(), m_jit.branch32(
+                MacroAssembler::Above,
+                argCountIncludingThisGPR,
+                TrustedImm32(data->limit)));
+        
+        m_jit.store32(argCountIncludingThisGPR, JITCompiler::payloadFor(data->machineCount));
+        
+        callOperation(operationLoadVarargs, data->machineStart.offset(), argumentsTagGPR, argumentsPayloadGPR, data->offset, GPRInfo::returnValueGPR, data->mandatoryMinimum);
+        
+        noResult(node);
+        break;
+    }
+        
     case CreateActivation: {
         GPRTemporary result(this);
         GPRReg resultGPR = result.gpr();
@@ -4278,9 +4469,20 @@ void SpeculativeJIT::compile(Node* node)
                     TrustedImm32(JSValue::EmptyValueTag)));
         }
         
-        ASSERT(!node->origin.semantic.inlineCallFrame);
-        m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultGPR);
-        m_jit.sub32(TrustedImm32(1), resultGPR);
+        if (node->origin.semantic.inlineCallFrame
+            && !node->origin.semantic.inlineCallFrame->isVarargs()) {
+            m_jit.move(
+                TrustedImm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1),
+                resultGPR);
+        } else {
+            VirtualRegister argumentCountRegister;
+            if (!node->origin.semantic.inlineCallFrame)
+                argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+            else
+                argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+            m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultGPR);
+            m_jit.sub32(TrustedImm32(1), resultGPR);
+        }
         int32Result(resultGPR, node);
         break;
     }
@@ -4296,14 +4498,21 @@ void SpeculativeJIT::compile(Node* node)
             JITCompiler::tagFor(m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic)),
             TrustedImm32(JSValue::EmptyValueTag));
         
-        if (node->origin.semantic.inlineCallFrame) {
+        if (node->origin.semantic.inlineCallFrame
+            && !node->origin.semantic.inlineCallFrame->isVarargs()) {
             m_jit.move(
-                Imm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1),
+                TrustedImm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1),
                 resultPayloadGPR);
         } else {
-            m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultPayloadGPR);
+            VirtualRegister argumentCountRegister;
+            if (!node->origin.semantic.inlineCallFrame)
+                argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+            else
+                argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+            m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultPayloadGPR);
             m_jit.sub32(TrustedImm32(1), resultPayloadGPR);
         }
+        
         m_jit.move(TrustedImm32(JSValue::Int32Tag), resultTagGPR);
         
         // FIXME: the slow path generator should perform a forward speculation that the
@@ -4339,7 +4548,8 @@ void SpeculativeJIT::compile(Node* node)
                     TrustedImm32(JSValue::EmptyValueTag)));
         }
             
-        if (node->origin.semantic.inlineCallFrame) {
+        if (node->origin.semantic.inlineCallFrame
+            && !node->origin.semantic.inlineCallFrame->isVarargs()) {
             speculationCheck(
                 Uncountable, JSValueRegs(), 0,
                 m_jit.branch32(
@@ -4347,7 +4557,12 @@ void SpeculativeJIT::compile(Node* node)
                     indexGPR,
                     Imm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1)));
         } else {
-            m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultPayloadGPR);
+            VirtualRegister argumentCountRegister;
+            if (!node->origin.semantic.inlineCallFrame)
+                argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+            else
+                argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+            m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultPayloadGPR);
             m_jit.sub32(TrustedImm32(1), resultPayloadGPR);
             speculationCheck(
                 Uncountable, JSValueRegs(), 0,
@@ -4416,14 +4631,20 @@ void SpeculativeJIT::compile(Node* node)
                 JITCompiler::tagFor(m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic)),
                 TrustedImm32(JSValue::EmptyValueTag)));
         
-        if (node->origin.semantic.inlineCallFrame) {
+        if (node->origin.semantic.inlineCallFrame
+            && !node->origin.semantic.inlineCallFrame->isVarargs()) {
             slowPath.append(
                 m_jit.branch32(
                     JITCompiler::AboveOrEqual,
                     indexGPR,
                     Imm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1)));
         } else {
-            m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultPayloadGPR);
+            VirtualRegister argumentCountRegister;
+            if (!node->origin.semantic.inlineCallFrame)
+                argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+            else
+                argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+            m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultPayloadGPR);
             m_jit.sub32(TrustedImm32(1), resultPayloadGPR);
             slowPath.append(
                 m_jit.branch32(JITCompiler::AboveOrEqual, indexGPR, resultPayloadGPR));
index 33c314c..3f18556 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -39,6 +39,7 @@
 #include "JSCInlines.h"
 #include "JSPropertyNameEnumerator.h"
 #include "ObjectPrototype.h"
+#include "SetupVarargsFrame.h"
 #include "SpillRegistersMode.h"
 #include "TypeProfilerLog.h"
 
@@ -624,38 +625,163 @@ void SpeculativeJIT::compileMiscStrictEq(Node* node)
 
 void SpeculativeJIT::emitCall(Node* node)
 {
-    bool isCall = node->op() == Call;
-    if (!isCall)
-        DFG_ASSERT(m_jit.graph(), node, node->op() == Construct);
-    
-    // For constructors, the this argument is not passed but we have to make space
-    // for it.
-    int dummyThisArgument = isCall ? 0 : 1;
+    CallLinkInfo::CallType callType;
+    bool isCall;
+    bool isVarargs;
+    switch (node->op()) {
+    case Call:
+        callType = CallLinkInfo::Call;
+        isCall = true;
+        isVarargs = false;
+        break;
+    case Construct:
+        callType = CallLinkInfo::Construct;
+        isCall = false;
+        isVarargs = false;
+        break;
+    case CallVarargs:
+    case CallForwardVarargs:
+        callType = CallLinkInfo::CallVarargs;
+        isCall = true;
+        isVarargs = true;
+        break;
+    case ConstructVarargs:
+        callType = CallLinkInfo::ConstructVarargs;
+        isCall = false;
+        isVarargs = true;
+        break;
+    default:
+        DFG_CRASH(m_jit.graph(), node, "bad node type");
+        break;
+    }
+
+    Edge calleeEdge = m_jit.graph().child(node, 0);
     
-    CallLinkInfo::CallType callType = isCall ? CallLinkInfo::Call : CallLinkInfo::Construct;
+    // Gotta load the arguments somehow. Varargs is trickier.
+    if (isVarargs) {
+        CallVarargsData* data = node->callVarargsData();
+
+        GPRReg argumentsGPR;
+        GPRReg scratchGPR1;
+        GPRReg scratchGPR2;
+        GPRReg scratchGPR3;
+        
+        if (node->op() == CallForwardVarargs) {
+            // We avoid calling flushRegisters() inside the control flow of CallForwardVarargs.
+            flushRegisters();
+        }
+        
+        auto loadArgumentsGPR = [&] (GPRReg reservedGPR) {
+            if (node->op() == CallForwardVarargs) {
+                argumentsGPR = JITCompiler::selectScratchGPR(reservedGPR);
+                m_jit.load64(
+                    JITCompiler::addressFor(
+                        m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic)),
+                    argumentsGPR);
+            } else {
+                if (reservedGPR != InvalidGPRReg)
+                    lock(reservedGPR);
+                JSValueOperand arguments(this, node->child2());
+                argumentsGPR = arguments.gpr();
+                if (reservedGPR != InvalidGPRReg)
+                    unlock(reservedGPR);
+                flushRegisters();
+            }
+            
+            scratchGPR1 = JITCompiler::selectScratchGPR(argumentsGPR, reservedGPR);
+            scratchGPR2 = JITCompiler::selectScratchGPR(argumentsGPR, scratchGPR1, reservedGPR);
+            scratchGPR3 = JITCompiler::selectScratchGPR(argumentsGPR, scratchGPR1, scratchGPR2, reservedGPR);
+        };
+        
+        loadArgumentsGPR(InvalidGPRReg);
+        
+        // At this point we have the whole register file to ourselves, and argumentsGPR has the
+        // arguments register. Select some scratch registers.
+        
+        // We will use scratchGPR2 to point to our stack frame.
+
+        unsigned numUsedStackSlots = m_jit.graph().m_nextMachineLocal;
+        
+        JITCompiler::Jump haveArguments;
+        GPRReg resultGPR = GPRInfo::regT0;
+        if (node->op() == CallForwardVarargs) {
+            // Do the horrific foo.apply(this, arguments) optimization.
+            // FIXME: do this optimization at the IR level instead of dynamically by testing the
+            // arguments register. This will happen once we get rid of the arguments lazy creation and
+            // lazy tear-off.
+            
+            JITCompiler::JumpList slowCase;
+            slowCase.append(m_jit.branchTest64(JITCompiler::NonZero, argumentsGPR));
+            
+            m_jit.move(TrustedImm32(numUsedStackSlots), scratchGPR2);
+            emitSetupVarargsFrameFastCase(m_jit, scratchGPR2, scratchGPR1, scratchGPR2, scratchGPR3, node->origin.semantic.inlineCallFrame, data->firstVarArgOffset, slowCase);
+            resultGPR = scratchGPR2;
+            
+            haveArguments = m_jit.jump();
+            slowCase.link(&m_jit);
+        }
+
+        DFG_ASSERT(m_jit.graph(), node, isFlushed());
+        
+        // Right now, arguments is in argumentsGPR and the register file is flushed.
+        callOperation(operationSizeFrameForVarargs, GPRInfo::returnValueGPR, argumentsGPR, numUsedStackSlots, data->firstVarArgOffset);
+        
+        // Now we have the argument count of the callee frame, but we've lost the arguments operand.
+        // Reconstruct the arguments operand while preserving the callee frame.
+        loadArgumentsGPR(GPRInfo::returnValueGPR);
+        m_jit.move(TrustedImm32(numUsedStackSlots), scratchGPR1);
+        emitSetVarargsFrame(m_jit, GPRInfo::returnValueGPR, false, scratchGPR1, scratchGPR1);
+        m_jit.addPtr(TrustedImm32(-(sizeof(CallerFrameAndPC) + WTF::roundUpToMultipleOf(stackAlignmentBytes(), 5 * sizeof(void*)))), scratchGPR1, JITCompiler::stackPointerRegister);
+        
+        callOperation(operationSetupVarargsFrame, GPRInfo::returnValueGPR, scratchGPR1, argumentsGPR, data->firstVarArgOffset, GPRInfo::returnValueGPR);
+        m_jit.move(GPRInfo::returnValueGPR, resultGPR);
+        
+        if (node->op() == CallForwardVarargs)
+            haveArguments.link(&m_jit);
+        
+        m_jit.addPtr(TrustedImm32(sizeof(CallerFrameAndPC)), resultGPR, JITCompiler::stackPointerRegister);
+        
+        DFG_ASSERT(m_jit.graph(), node, isFlushed());
+        
+        // We don't need the arguments array anymore.
+        if (node->op() != CallForwardVarargs)
+            use(node->child2());
+        
+        if (isCall) {
+            // Now set up the "this" argument.
+            JSValueOperand thisArgument(this, node->op() == CallForwardVarargs ? node->child2() : node->child3());
+            GPRReg thisArgumentGPR = thisArgument.gpr();
+            thisArgument.use();
+            
+            m_jit.store64(thisArgumentGPR, JITCompiler::calleeArgumentSlot(0));
+        }
+    } else {
+        // For constructors, the "this" argument is not passed, but we still have to
+        // make space for it.
+        int dummyThisArgument = isCall ? 0 : 1;
     
-    Edge calleeEdge = m_jit.graph().m_varArgChildren[node->firstChild()];
-    // The call instruction's first child is the function; the subsequent children are the
-    // arguments.
-    int numPassedArgs = node->numChildren() - 1;
+        // The call instruction's first child is the function; the subsequent children are the
+        // arguments.
+        int numPassedArgs = node->numChildren() - 1;
     
-    int numArgs = numPassedArgs + dummyThisArgument;
+        int numArgs = numPassedArgs + dummyThisArgument;
     
-    m_jit.store32(MacroAssembler::TrustedImm32(numArgs), m_jit.calleeFramePayloadSlot(JSStack::ArgumentCount));
+        m_jit.store32(MacroAssembler::TrustedImm32(numArgs), JITCompiler::calleeFramePayloadSlot(JSStack::ArgumentCount));
     
-    for (int i = 0; i < numPassedArgs; i++) {
-        Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
-        JSValueOperand arg(this, argEdge);
-        GPRReg argGPR = arg.gpr();
-        use(argEdge);
+        for (int i = 0; i < numPassedArgs; i++) {
+            Edge argEdge = m_jit.graph().m_varArgChildren[node->firstChild() + 1 + i];
+            JSValueOperand arg(this, argEdge);
+            GPRReg argGPR = arg.gpr();
+            use(argEdge);
         
-        m_jit.store64(argGPR, m_jit.calleeArgumentSlot(i + dummyThisArgument));
+            m_jit.store64(argGPR, JITCompiler::calleeArgumentSlot(i + dummyThisArgument));
+        }
     }
 
     JSValueOperand callee(this, calleeEdge);
     GPRReg calleeGPR = callee.gpr();
-    use(calleeEdge);
-    m_jit.store64(calleeGPR, m_jit.calleeFrameSlot(JSStack::Callee));
+    callee.use();
+    m_jit.store64(calleeGPR, JITCompiler::calleeFrameSlot(JSStack::Callee));
     
     flushRegisters();
 
@@ -692,6 +818,10 @@ void SpeculativeJIT::emitCall(Node* node)
     callLinkInfo->calleeGPR = calleeGPR;
     
     m_jit.addJSCall(fastCall, slowCall, targetToCheck, callLinkInfo);
+    
+    // If this was a varargs call, then after the calls are done we need to reestablish our stack pointer.
+    if (isVarargs)
+        m_jit.addPtr(TrustedImm32(m_jit.graph().stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, JITCompiler::stackPointerRegister);
 }
 
 // Clang should allow unreachable [[clang::fallthrough]] in template functions if any template expansion uses it
@@ -4221,9 +4351,56 @@ void SpeculativeJIT::compile(Node* node)
 
     case Call:
     case Construct:
+    case CallVarargs:
+    case CallForwardVarargs:
+    case ConstructVarargs:
         emitCall(node);
         break;
         
+    case LoadVarargs: {
+        LoadVarargsData* data = node->loadVarargsData();
+        
+        GPRReg argumentsGPR;
+        {
+            JSValueOperand arguments(this, node->child1());
+            argumentsGPR = arguments.gpr();
+            flushRegisters();
+        }
+        
+        callOperation(operationSizeOfVarargs, GPRInfo::returnValueGPR, argumentsGPR, data->offset);
+        
+        lock(GPRInfo::returnValueGPR);
+        {
+            JSValueOperand arguments(this, node->child1());
+            argumentsGPR = arguments.gpr();
+            flushRegisters();
+        }
+        unlock(GPRInfo::returnValueGPR);
+        
+        // FIXME: There is a chance that we will invoke an effectful "length" getter twice. This is safe
+        // from the standpoint of the VM's integrity, but it's subtly wrong from a spec compliance
+        // standpoint. The best solution would be one where we can exit *into* the op_call_varargs right
+        // past the sizing.
+        // https://bugs.webkit.org/show_bug.cgi?id=141448
+
+        GPRReg argCountIncludingThisGPR =
+            JITCompiler::selectScratchGPR(GPRInfo::returnValueGPR, argumentsGPR);
+        
+        m_jit.add32(TrustedImm32(1), GPRInfo::returnValueGPR, argCountIncludingThisGPR);
+        speculationCheck(
+            VarargsOverflow, JSValueSource(), Edge(), m_jit.branch32(
+                MacroAssembler::Above,
+                argCountIncludingThisGPR,
+                TrustedImm32(data->limit)));
+        
+        m_jit.store32(argCountIncludingThisGPR, JITCompiler::payloadFor(data->machineCount));
+        
+        callOperation(operationLoadVarargs, data->machineStart.offset(), argumentsGPR, data->offset, GPRInfo::returnValueGPR, data->mandatoryMinimum);
+        
+        noResult(node);
+        break;
+    }
+        
     case CreateActivation: {
         DFG_ASSERT(m_jit.graph(), node, !node->origin.semantic.inlineCallFrame);
         
@@ -4328,9 +4505,20 @@ void SpeculativeJIT::compile(Node* node)
                         m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic))));
         }
         
-        DFG_ASSERT(m_jit.graph(), node, !node->origin.semantic.inlineCallFrame);
-        m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultGPR);
-        m_jit.sub32(TrustedImm32(1), resultGPR);
+        if (node->origin.semantic.inlineCallFrame
+            && !node->origin.semantic.inlineCallFrame->isVarargs()) {
+            m_jit.move(
+                TrustedImm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1),
+                resultGPR);
+        } else {
+            VirtualRegister argumentCountRegister;
+            if (!node->origin.semantic.inlineCallFrame)
+                argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+            else
+                argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+            m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultGPR);
+            m_jit.sub32(TrustedImm32(1), resultGPR);
+        }
         int32Result(resultGPR, node);
         break;
     }
@@ -4344,20 +4532,22 @@ void SpeculativeJIT::compile(Node* node)
             JITCompiler::addressFor(
                 m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic)));
         
-        if (node->origin.semantic.inlineCallFrame) {
+        if (node->origin.semantic.inlineCallFrame
+            && !node->origin.semantic.inlineCallFrame->isVarargs()) {
             m_jit.move(
                 Imm64(JSValue::encode(jsNumber(node->origin.semantic.inlineCallFrame->arguments.size() - 1))),
                 resultGPR);
         } else {
-            m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultGPR);
+            VirtualRegister argumentCountRegister;
+            if (!node->origin.semantic.inlineCallFrame)
+                argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+            else
+                argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+            m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultGPR);
             m_jit.sub32(TrustedImm32(1), resultGPR);
             m_jit.or64(GPRInfo::tagTypeNumberRegister, resultGPR);
         }
         
-        // FIXME: the slow path generator should perform a forward speculation that the
-        // result is an integer. For now we postpone the speculation by having this return
-        // a JSValue.
-        
         addSlowPathGenerator(
             slowPathCall(
                 created, this, operationGetArgumentsLength, resultGPR,
@@ -4384,7 +4574,8 @@ void SpeculativeJIT::compile(Node* node)
                         m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic))));
         }
 
-        if (node->origin.semantic.inlineCallFrame) {
+        if (node->origin.semantic.inlineCallFrame
+            && !node->origin.semantic.inlineCallFrame->isVarargs()) {
             speculationCheck(
                 Uncountable, JSValueRegs(), 0,
                 m_jit.branch32(
@@ -4392,7 +4583,12 @@ void SpeculativeJIT::compile(Node* node)
                     indexGPR,
                     Imm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1)));
         } else {
-            m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultGPR);
+            VirtualRegister argumentCountRegister;
+            if (!node->origin.semantic.inlineCallFrame)
+                argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+            else
+                argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+            m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultGPR);
             m_jit.sub32(TrustedImm32(1), resultGPR);
             speculationCheck(
                 Uncountable, JSValueRegs(), 0,
@@ -4449,14 +4645,20 @@ void SpeculativeJIT::compile(Node* node)
                 JITCompiler::addressFor(
                     m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic))));
         
-        if (node->origin.semantic.inlineCallFrame) {
+        if (node->origin.semantic.inlineCallFrame
+            && !node->origin.semantic.inlineCallFrame->isVarargs()) {
             slowPath.append(
                 m_jit.branch32(
                     JITCompiler::AboveOrEqual,
                     resultGPR,
                     Imm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1)));
         } else {
-            m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultGPR);
+            VirtualRegister argumentCountRegister;
+            if (!node->origin.semantic.inlineCallFrame)
+                argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+            else
+                argumentCountRegister = node->origin.semantic.inlineCallFrame->argumentCountRegister;
+            m_jit.load32(JITCompiler::payloadFor(argumentCountRegister), resultGPR);
             m_jit.sub32(TrustedImm32(1), resultGPR);
             slowPath.append(
                 m_jit.branch32(JITCompiler::AboveOrEqual, indexGPR, resultGPR));
index f86e08d..90f0de1 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -56,7 +56,7 @@ public:
         BitVector usedLocals;
         
         // Collect those variables that are used from IR.
-        bool hasGetLocalUnlinked = false;
+        bool hasNodesThatNeedFixup = false;
         for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) {
             BasicBlock* block = m_graph.block(blockIndex);
             if (!block)
@@ -81,7 +81,22 @@ public:
                     if (operand.isArgument())
                         break;
                     usedLocals.set(operand.toLocal());
-                    hasGetLocalUnlinked = true;
+                    hasNodesThatNeedFixup = true;
+                    break;
+                }
+                    
+                case LoadVarargs: {
+                    LoadVarargsData* data = node->loadVarargsData();
+                    if (data->count.isLocal())
+                        usedLocals.set(data->count.toLocal());
+                    if (data->start.isLocal()) {
+                        // This part really relies on the contiguity of stack layout
+                        // assignments.
+                        ASSERT(VirtualRegister(data->start.offset() + data->limit - 1).isLocal());
+                        for (unsigned i = data->limit; i--;)
+                            usedLocals.set(VirtualRegister(data->start.offset() + i).toLocal());
+                    } // The else case (a non-local start) shouldn't happen.
+                    hasNodesThatNeedFixup = true;
                     break;
                 }
                     
@@ -113,6 +128,11 @@ public:
             usedLocals.set(argumentsRegister.toLocal());
             usedLocals.set(unmodifiedArgumentsRegister(argumentsRegister).toLocal());
             
+            if (inlineCallFrame->isVarargs()) {
+                usedLocals.set(VirtualRegister(
+                    JSStack::ArgumentCount + inlineCallFrame->stackOffset).toLocal());
+            }
+            
             for (unsigned argument = inlineCallFrame->arguments.size(); argument-- > 1;) {
                 usedLocals.set(VirtualRegister(
                     virtualRegisterForArgument(argument).offset() +
@@ -148,24 +168,21 @@ public:
             if (allocation[local] == UINT_MAX)
                 continue;
             
-            variable->machineLocal() = virtualRegisterForLocal(
-                allocation[variable->local().toLocal()]);
+            variable->machineLocal() = assign(allocation, variable->local());
         }
         
         if (codeBlock()->usesArguments()) {
-            VirtualRegister argumentsRegister = virtualRegisterForLocal(
-                allocation[codeBlock()->argumentsRegister().toLocal()]);
+            VirtualRegister argumentsRegister =
+                assign(allocation, codeBlock()->argumentsRegister());
             RELEASE_ASSERT(
-                virtualRegisterForLocal(allocation[
-                    unmodifiedArgumentsRegister(
-                        codeBlock()->argumentsRegister()).toLocal()])
+                assign(allocation, unmodifiedArgumentsRegister(codeBlock()->argumentsRegister()))
                 == unmodifiedArgumentsRegister(argumentsRegister));
             codeBlock()->setArgumentsRegister(argumentsRegister);
         }
         
         if (codeBlock()->uncheckedActivationRegister().isValid()) {
             codeBlock()->setActivationRegister(
-                virtualRegisterForLocal(allocation[codeBlock()->activationRegister().toLocal()]));
+                assign(allocation, codeBlock()->activationRegister()));
         }
         
         // This register is never valid for DFG code blocks.
@@ -176,15 +193,19 @@ public:
             InlineCallFrame* inlineCallFrame = data.inlineCallFrame;
             
             if (m_graph.usesArguments(inlineCallFrame)) {
-                inlineCallFrame->argumentsRegister = virtualRegisterForLocal(
-                    allocation[m_graph.argumentsRegisterFor(inlineCallFrame).toLocal()]);
+                inlineCallFrame->argumentsRegister = assign(
+                    allocation, m_graph.argumentsRegisterFor(inlineCallFrame));
 
                 RELEASE_ASSERT(
-                    virtualRegisterForLocal(allocation[unmodifiedArgumentsRegister(
-                        m_graph.argumentsRegisterFor(inlineCallFrame)).toLocal()])
+                    assign(allocation, unmodifiedArgumentsRegister(m_graph.argumentsRegisterFor(inlineCallFrame)))
                     == unmodifiedArgumentsRegister(inlineCallFrame->argumentsRegister));
             }
             
+            if (inlineCallFrame->isVarargs()) {
+                inlineCallFrame->argumentCountRegister = assign(
+                    allocation, VirtualRegister(inlineCallFrame->stackOffset + JSStack::ArgumentCount));
+            }
+            
             for (unsigned argument = inlineCallFrame->arguments.size(); argument-- > 1;) {
                 ArgumentPosition& position = m_graph.m_argumentPositions[
                     data.argumentPositionStart + argument];
@@ -227,9 +248,7 @@ public:
                     symbolTable->parameterCount());
                 for (size_t i = symbolTable->parameterCount(); i--;) {
                     newSlowArguments[i] = slowArguments[i];
-                    VirtualRegister reg = VirtualRegister(slowArguments[i].index);
-                    if (reg.isLocal())
-                        newSlowArguments[i].index = virtualRegisterForLocal(allocation[reg.toLocal()]).offset();
+                    newSlowArguments[i].index = assign(allocation, VirtualRegister(slowArguments[i].index)).offset();
                 }
             
                 m_graph.m_slowArguments = WTF::move(newSlowArguments);
@@ -237,7 +256,7 @@ public:
         }
         
         // Fix GetLocalUnlinked's variable references.
-        if (hasGetLocalUnlinked) {
+        if (hasNodesThatNeedFixup) {
             for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) {
                 BasicBlock* block = m_graph.block(blockIndex);
                 if (!block)
@@ -246,10 +265,14 @@ public:
                     Node* node = block->at(nodeIndex);
                     switch (node->op()) {
                     case GetLocalUnlinked: {
-                        VirtualRegister operand = node->unlinkedLocal();
-                        if (operand.isLocal())
-                            operand = virtualRegisterForLocal(allocation[operand.toLocal()]);
-                        node->setUnlinkedMachineLocal(operand);
+                        node->setUnlinkedMachineLocal(assign(allocation, node->unlinkedLocal()));
+                        break;
+                    }
+                        
+                    case LoadVarargs: {
+                        LoadVarargsData* data = node->loadVarargsData();
+                        data->machineCount = assign(allocation, data->count);
+                        data->machineStart = assign(allocation, data->start);
                         break;
                     }
                         
@@ -262,6 +285,20 @@ public:
         
         return true;
     }
+
+private:
+    VirtualRegister assign(const Vector<unsigned>& allocation, VirtualRegister src)
+    {
+        VirtualRegister result = src;
+        if (result.isLocal()) {
+            unsigned myAllocation = allocation[result.toLocal()];
+            if (myAllocation == UINT_MAX)
+                result = VirtualRegister();
+            else
+                result = virtualRegisterForLocal(myAllocation);
+        }
+        return result;
+    }
 };
 
 bool performStackLayout(Graph& graph)
index 5e30dfe..f214cb9 100644
@@ -271,6 +271,7 @@ private:
                     case GetScope:
                     case PhantomLocal:
                     case GetCallee:
+                    case CountExecution:
                         break;
                 
                     default:
index 5945315..d0875b5 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2013, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -115,7 +115,6 @@ public:
                 // from the node, before doing any appending.
                 switch (node->op()) {
                 case SetArgument: {
-                    ASSERT(!blockIndex);
                     // Insert a GetLocal and a CheckStructure immediately following this
                     // SetArgument, if the variable was a candidate for structure hoisting.
                     // If the basic block previously only had the SetArgument as its
@@ -127,6 +126,9 @@ public:
                     if (!iter->value.m_structure && !iter->value.m_arrayModeIsValid)
                         break;
 
+                    // Currently we should only be doing this hoisting for SetArguments in the prologue.
+                    ASSERT(!blockIndex);
+
                     NodeOrigin origin = node->origin;
                     
                     Node* getLocal = insertionSet.insertNode(
index 7d7b506..56eb756 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012-2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -50,7 +50,7 @@ public:
             startCrashing(); \
             dataLogF("\n\n\nAt "); \
             reportValidationContext context; \
-            dataLogF(": validation %s (%s:%d) failed.\n", #assertion, __FILE__, __LINE__); \
+            dataLogF(": validation failed: %s (%s:%d).\n", #assertion, __FILE__, __LINE__); \
             dumpGraphIfAppropriate(); \
             WTFReportAssertionFailure(__FILE__, __LINE__, WTF_PRETTY_FUNCTION, #assertion); \
             CRASH(); \
@@ -62,11 +62,11 @@ public:
             startCrashing(); \
             dataLogF("\n\n\nAt "); \
             reportValidationContext context; \
-            dataLogF(": validation (%s = ", #left); \
+            dataLogF(": validation failed: (%s = ", #left); \
             dataLog(left); \
             dataLogF(") == (%s = ", #right); \
             dataLog(right); \
-            dataLogF(") (%s:%d) failed.\n", __FILE__, __LINE__); \
+            dataLogF(") (%s:%d).\n", __FILE__, __LINE__); \
             dumpGraphIfAppropriate(); \
             WTFReportAssertionFailure(__FILE__, __LINE__, WTF_PRETTY_FUNCTION, #left " == " #right); \
             CRASH(); \
@@ -456,6 +456,14 @@ private:
                         break;
                     setLocalPositions.operand(node->local()) = i;
                     break;
+                case SetArgument:
+                    if (node->variableAccessData()->isCaptured())
+                        break;
+                    // This acts like a reset. It's ok to have a second GetLocal for a local in the same
+                    // block if we had a SetArgument for that local.
+                    getLocalPositions.operand(node->local()) = notSet;
+                    setLocalPositions.operand(node->local()) = notSet;
+                    break;
                 default:
                     break;
                 }
index fc7cb1b..d3bff4c 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -77,6 +77,9 @@ static inline LType structType(LContext context, LType element1, LType element2,
     return structType(context, elements, 2, packing);
 }
 
+// FIXME: Make the Variadicity argument not be the last argument to functionType() so that this function
+// can use C++11 variadic templates.
+// https://bugs.webkit.org/show_bug.cgi?id=141575
 enum Variadicity { NotVariadic, Variadic };
 static inline LType functionType(LType returnType, const LType* paramTypes, unsigned paramCount, Variadicity variadicity)
 {
@@ -110,6 +113,16 @@ static inline LType functionType(LType returnType, LType param1, LType param2, L
     LType paramTypes[] = { param1, param2, param3, param4 };
     return functionType(returnType, paramTypes, 4, variadicity);
 }
+static inline LType functionType(LType returnType, LType param1, LType param2, LType param3, LType param4, LType param5, Variadicity variadicity = NotVariadic)
+{
+    LType paramTypes[] = { param1, param2, param3, param4, param5 };
+    return functionType(returnType, paramTypes, 5, variadicity);
+}
+static inline LType functionType(LType returnType, LType param1, LType param2, LType param3, LType param4, LType param5, LType param6, Variadicity variadicity = NotVariadic)
+{
+    LType paramTypes[] = { param1, param2, param3, param4, param5, param6 };
+    return functionType(returnType, paramTypes, 6, variadicity);
+}
 
 static inline LType typeOf(LValue value) { return llvm->TypeOf(value); }
 
@@ -298,41 +311,13 @@ static inline LValue buildCall(LBuilder builder, LValue function, LValue arg1)
 {
     return buildCall(builder, function, &arg1, 1);
 }
-static inline LValue buildCall(LBuilder builder, LValue function, LValue arg1, LValue arg2)
-{
-    LValue args[] = { arg1, arg2 };
-    return buildCall(builder, function, args, 2);
-}
-static inline LValue buildCall(LBuilder builder, LValue function, LValue arg1, LValue arg2, LValue arg3)
-{
-    LValue args[] = { arg1, arg2, arg3 };
-    return buildCall(builder, function, args, 3);
-}
-static inline LValue buildCall(LBuilder builder, LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4)
+template<typename... Args>
+LValue buildCall(LBuilder builder, LValue function, LValue arg1, Args... args)
 {
-    LValue args[] = { arg1, arg2, arg3, arg4 };
-    return buildCall(builder, function, args, 4);
-}
-static inline LValue buildCall(LBuilder builder, LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5)
-{
-    LValue args[] = { arg1, arg2, arg3, arg4, arg5 };
-    return buildCall(builder, function, args, 5);
-}
-static inline LValue buildCall(LBuilder builder, LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5, LValue arg6)
-{
-    LValue args[] = { arg1, arg2, arg3, arg4, arg5, arg6 };
-    return buildCall(builder, function, args, 6);
-}
-static inline LValue buildCall(LBuilder builder, LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5, LValue arg6, LValue arg7)
-{
-    LValue args[] = { arg1, arg2, arg3, arg4, arg5, arg6, arg7 };
-    return buildCall(builder, function, args, 7);
-}
-static inline LValue buildCall(LBuilder builder, LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5, LValue arg6, LValue arg7, LValue arg8)
-{
-    LValue args[] = { arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8 };
-    return buildCall(builder, function, args, 8);
+    LValue argsArray[] = { arg1, args... };
+    return buildCall(builder, function, argsArray, sizeof(argsArray) / sizeof(LValue));
 }
+
 static inline void setInstructionCallingConvention(LValue instruction, LCallConv callingConvention) { llvm->SetInstructionCallConv(instruction, callingConvention); }
 static inline LValue buildExtractValue(LBuilder builder, LValue aggVal, unsigned index) { return llvm->BuildExtractValue(builder, aggVal, index, ""); }
 static inline LValue buildSelect(LBuilder builder, LValue condition, LValue taken, LValue notTaken) { return llvm->BuildSelect(builder, condition, taken, notTaken, ""); }
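The collapse of the fixed-arity buildCall() overloads into a single variadic template above relies on expanding the parameter pack into a local array and forwarding to the array-based primitive. A minimal standalone sketch of the same technique, using a hypothetical sum() in place of buildCall() (not part of JSC):

```cpp
#include <cstddef>

// Array-based primitive, standing in for the (args, numArgs) buildCall() overload.
static int sum(const int* args, size_t count)
{
    int total = 0;
    for (size_t i = 0; i < count; ++i)
        total += args[i];
    return total;
}

// The same packing trick as the variadic buildCall() template: expand the pack
// into a local array and forward to the array-based primitive.
template<typename... Args>
int sum(int arg1, Args... args)
{
    int argsArray[] = { arg1, args... };
    return sum(argsArray, sizeof(argsArray) / sizeof(int));
}
```

Because argsArray has at least one element (arg1), the array is never zero-length, which is why the template requires one explicit leading argument; e.g. sum(1, 2, 3, 4) evaluates to 10.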
index c050739..b4313be 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -120,6 +120,10 @@ inline CapabilityLevel canCompile(Node* node)
     case StoreBarrierWithNullCheck:
     case Call:
     case Construct:
+    case CallVarargs:
+    case CallForwardVarargs:
+    case ConstructVarargs:
+    case LoadVarargs:
     case NativeCall:
     case NativeConstruct:
     case ValueToInt32:
index c8612c9..f06efc6 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
  * Copyright (C) 2014 Samsung Electronics
  * Copyright (C) 2014 University of Szeged
  *
@@ -128,6 +128,20 @@ static void dumpDataSection(DataSection* section, const char* prefix)
     }
 }
 
+static int offsetOfStackRegion(StackMaps::RecordMap& recordMap, uint32_t stackmapID)
+{
+    StackMaps::RecordMap::iterator iter = recordMap.find(stackmapID);
+    RELEASE_ASSERT(iter != recordMap.end());
+    RELEASE_ASSERT(iter->value.size() == 1);
+    RELEASE_ASSERT(iter->value[0].locations.size() == 1);
+    Location capturedLocation =
+        Location::forStackmaps(nullptr, iter->value[0].locations[0]);
+    RELEASE_ASSERT(capturedLocation.kind() == Location::Register);
+    RELEASE_ASSERT(capturedLocation.gpr() == GPRInfo::callFrameRegister);
+    RELEASE_ASSERT(!(capturedLocation.addend() % sizeof(Register)));
+    return capturedLocation.addend() / sizeof(Register);
+}
+
 template<typename DescriptorType>
 void generateICFastPath(
     State& state, CodeBlock* codeBlock, GeneratedFunction generatedFunction,
@@ -243,6 +257,32 @@ static RegisterSet usedRegistersFor(const StackMaps::Record& record)
     return RegisterSet(record.usedRegisterSet(), RegisterSet::calleeSaveRegisters());
 }
 
+template<typename CallType>
+void adjustCallICsForStackmaps(Vector<CallType>& calls, StackMaps::RecordMap& recordMap)
+{
+    // Handling JS calls is weird: we need to ensure that we sort them by the PC in the
+    // LLVM-generated code. That implies first pruning the ones that LLVM didn't generate.
+
+    Vector<CallType> oldCalls;
+    oldCalls.swap(calls);
+    
+    for (unsigned i = 0; i < oldCalls.size(); ++i) {
+        CallType& call = oldCalls[i];
+        
+        StackMaps::RecordMap::iterator iter = recordMap.find(call.stackmapID());
+        if (iter == recordMap.end())
+            continue;
+        
+        for (unsigned j = 0; j < iter->value.size(); ++j) {
+            CallType copy = call;
+            copy.m_instructionOffset = iter->value[j].instructionOffset;
+            calls.append(copy);
+        }
+    }
+
+    std::sort(calls.begin(), calls.end());
+}
+
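The prune-duplicate-sort pattern in `adjustCallICsForStackmaps` above can be sketched standalone: drop ICs whose stackmap LLVM optimized out, emit one copy per surviving stackmap record, then sort by instruction offset so patching walks the generated code in PC order. The types below are simplified stand-ins, not JSC's real `JSCall`/`RecordMap`:

```cpp
#include <algorithm>
#include <cstdint>
#include <unordered_map>
#include <vector>

// Stand-in for an FTL call IC: sorts by its offset in the generated code.
struct CallIC {
    uint32_t stackmapID;
    uint32_t instructionOffset;
    bool operator<(const CallIC& other) const
    {
        return instructionOffset < other.instructionOffset;
    }
};

void adjustForStackmaps(
    std::vector<CallIC>& calls,
    const std::unordered_map<uint32_t, std::vector<uint32_t>>& recordOffsets)
{
    std::vector<CallIC> oldCalls;
    oldCalls.swap(calls); // same swap-and-rebuild shape as the patch
    for (const CallIC& call : oldCalls) {
        auto iter = recordOffsets.find(call.stackmapID);
        if (iter == recordOffsets.end())
            continue; // LLVM didn't generate this call; prune it.
        for (uint32_t offset : iter->second) {
            CallIC copy = call;
            copy.instructionOffset = offset;
            calls.push_back(copy);
        }
    }
    std::sort(calls.begin(), calls.end());
}
```

Making this a template in the patch is what lets the same routine service both `state.jsCalls` and the new `state.jsCallVarargses`.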
 static void fixFunctionBasedOnStackMaps(
     State& state, CodeBlock* codeBlock, JITCode* jitCode, GeneratedFunction generatedFunction,
     StackMaps::RecordMap& recordMap, bool didSeeUnwindInfo)
@@ -251,16 +291,14 @@ static void fixFunctionBasedOnStackMaps(
     VM& vm = graph.m_vm;
     StackMaps stackmaps = jitCode->stackmaps;
     
-    StackMaps::RecordMap::iterator iter = recordMap.find(state.capturedStackmapID);
-    RELEASE_ASSERT(iter != recordMap.end());
-    RELEASE_ASSERT(iter->value.size() == 1);
-    RELEASE_ASSERT(iter->value[0].locations.size() == 1);
-    Location capturedLocation =
-        Location::forStackmaps(&jitCode->stackmaps, iter->value[0].locations[0]);
-    RELEASE_ASSERT(capturedLocation.kind() == Location::Register);
-    RELEASE_ASSERT(capturedLocation.gpr() == GPRInfo::callFrameRegister);
-    RELEASE_ASSERT(!(capturedLocation.addend() % sizeof(Register)));
-    int32_t localsOffset = capturedLocation.addend() / sizeof(Register) + graph.m_nextMachineLocal;
+    int localsOffset =
+        offsetOfStackRegion(recordMap, state.capturedStackmapID) + graph.m_nextMachineLocal;
+    
+    int varargsSpillSlotsOffset;
+    if (state.varargsSpillSlotsStackmapID != UINT_MAX)
+        varargsSpillSlotsOffset = offsetOfStackRegion(recordMap, state.varargsSpillSlotsStackmapID);
+    else
+        varargsSpillSlotsOffset = 0;
     
     for (unsigned i = graph.m_inlineVariableData.size(); i--;) {
         InlineCallFrame* inlineCallFrame = graph.m_inlineVariableData[i].inlineCallFrame;
@@ -293,18 +331,12 @@ static void fixFunctionBasedOnStackMaps(
         
         // At this point it's perfectly fair to just blow away all state and restore the
         // JS JIT view of the universe.
-        checkJIT.move(MacroAssembler::TrustedImm64(TagTypeNumber), GPRInfo::tagTypeNumberRegister);
-        checkJIT.move(MacroAssembler::TrustedImm64(TagMask), GPRInfo::tagMaskRegister);
-
         checkJIT.move(MacroAssembler::TrustedImmPtr(&vm), GPRInfo::argumentGPR0);
         checkJIT.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
         MacroAssembler::Call callLookupExceptionHandler = checkJIT.call();
         checkJIT.jumpToExceptionHandler();
 
         stackOverflowException = checkJIT.label();
-        checkJIT.move(MacroAssembler::TrustedImm64(TagTypeNumber), GPRInfo::tagTypeNumberRegister);
-        checkJIT.move(MacroAssembler::TrustedImm64(TagMask), GPRInfo::tagMaskRegister);
-
         checkJIT.move(MacroAssembler::TrustedImmPtr(&vm), GPRInfo::argumentGPR0);
         checkJIT.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
         MacroAssembler::Call callLookupExceptionHandlerFromCallerFrame = checkJIT.call();
@@ -336,7 +368,7 @@ static void fixFunctionBasedOnStackMaps(
             if (verboseCompilationEnabled())
                 dataLog("Handling OSR stackmap #", exit.m_stackmapID, " for ", exit.m_codeOrigin, "\n");
 
-            iter = recordMap.find(exit.m_stackmapID);
+            auto iter = recordMap.find(exit.m_stackmapID);
             if (iter == recordMap.end()) {
                 // It was optimized out.
                 continue;
@@ -375,7 +407,7 @@ static void fixFunctionBasedOnStackMaps(
             if (verboseCompilationEnabled())
                 dataLog("Handling GetById stackmap #", getById.stackmapID(), "\n");
             
-            iter = recordMap.find(getById.stackmapID());
+            auto iter = recordMap.find(getById.stackmapID());
             if (iter == recordMap.end()) {
                 // It was optimized out.
                 continue;
@@ -412,7 +444,7 @@ static void fixFunctionBasedOnStackMaps(
             if (verboseCompilationEnabled())
                 dataLog("Handling PutById stackmap #", putById.stackmapID(), "\n");
             
-            iter = recordMap.find(putById.stackmapID());
+            auto iter = recordMap.find(putById.stackmapID());
             if (iter == recordMap.end()) {
                 // It was optimized out.
                 continue;
@@ -444,14 +476,13 @@ static void fixFunctionBasedOnStackMaps(
             }
         }
 
-
         for (unsigned i = state.checkIns.size(); i--;) {
             CheckInDescriptor& checkIn = state.checkIns[i];
             
             if (verboseCompilationEnabled())
                 dataLog("Handling checkIn stackmap #", checkIn.stackmapID(), "\n");
             
-            iter = recordMap.find(checkIn.stackmapID());
+            auto iter = recordMap.find(checkIn.stackmapID());
             if (iter == recordMap.end()) {
                 // It was optimized out.
                 continue;
@@ -480,7 +511,6 @@ static void fixFunctionBasedOnStackMaps(
                 checkIn.m_generators.append(CheckInGenerator(stubInfo, slowCall, begin));
             }
         }
-
         
         exceptionTarget.link(&slowPathJIT);
         MacroAssembler::Jump exceptionJump = slowPathJIT.jump();
@@ -503,29 +533,11 @@ static void fixFunctionBasedOnStackMaps(
         for (unsigned i = state.checkIns.size(); i--;) {
             generateCheckInICFastPath(
                 state, codeBlock, generatedFunction, recordMap, state.checkIns[i],
-                sizeOfCheckIn()); 
+                sizeOfIn()); 
         } 
     }
     
-    // Handling JS calls is weird: we need to ensure that we sort them by the PC in LLVM
-    // generated code. That implies first pruning the ones that LLVM didn't generate.
-    Vector<JSCall> oldCalls = state.jsCalls;
-    state.jsCalls.resize(0);
-    for (unsigned i = 0; i < oldCalls.size(); ++i) {
-        JSCall& call = oldCalls[i];
-        
-        StackMaps::RecordMap::iterator iter = recordMap.find(call.stackmapID());
-        if (iter == recordMap.end())
-            continue;
-
-        for (unsigned j = 0; j < iter->value.size(); ++j) {
-            JSCall copy = call;
-            copy.m_instructionOffset = iter->value[j].instructionOffset;
-            state.jsCalls.append(copy);
-        }
-    }
-    
-    std::sort(state.jsCalls.begin(), state.jsCalls.end());
+    adjustCallICsForStackmaps(state.jsCalls, recordMap);
     
     for (unsigned i = state.jsCalls.size(); i--;) {
         JSCall& call = state.jsCalls[i];
@@ -547,9 +559,32 @@ static void fixFunctionBasedOnStackMaps(
         call.link(vm, linkBuffer);
     }
     
+    adjustCallICsForStackmaps(state.jsCallVarargses, recordMap);
+    
+    for (unsigned i = state.jsCallVarargses.size(); i--;) {
+        JSCallVarargs& call = state.jsCallVarargses[i];
+        
+        CCallHelpers fastPathJIT(&vm, codeBlock);
+        call.emit(fastPathJIT, graph, varargsSpillSlotsOffset);
+        
+        char* startOfIC = bitwise_cast<char*>(generatedFunction) + call.m_instructionOffset;
+        size_t sizeOfIC = sizeOfICFor(call.node());
+
+        LinkBuffer linkBuffer(vm, fastPathJIT, startOfIC, sizeOfIC);
+        if (!linkBuffer.isValid()) {
+            dataLog("Failed to insert inline cache for varargs call (specifically, ", Graph::opName(call.node()->op()), ") because we thought the size would be ", sizeOfIC, " but it ended up being ", fastPathJIT.m_assembler.codeSize(), " prior to compaction.\n");
+            RELEASE_ASSERT_NOT_REACHED();
+        }
+        
+        MacroAssembler::AssemblerType_T::fillNops(
+            startOfIC + linkBuffer.size(), sizeOfIC - linkBuffer.size());
+        
+        call.link(vm, linkBuffer, state.finalizer->handleExceptionsLinkBuffer->entrypoint());
+    }
+    
     RepatchBuffer repatchBuffer(codeBlock);
 
-    iter = recordMap.find(state.handleStackOverflowExceptionStackmapID);
+    auto iter = recordMap.find(state.handleStackOverflowExceptionStackmapID);
     // It's sort of remotely possible that we won't have an in-band exception handling
     // path, for some kinds of functions.
     if (iter != recordMap.end()) {
index f9cfce4..1d0beec 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 
 #if ENABLE(FTL_JIT)
 
+#include "DFGNode.h"
 #include "JITInlineCacheGenerator.h"
 #include "MacroAssembler.h"
 
 namespace JSC { namespace FTL {
 
+using namespace DFG;
+
 // The default sizes are x86-64-specific, and were found empirically. They have to cover the worst
 // possible combination of registers leading to the largest possible encoding of each instruction in
 // the IC.
@@ -61,25 +64,74 @@ size_t sizeOfPutById()
 #endif
 }
 
-size_t sizeOfCheckIn()
+size_t sizeOfCall()
 {
 #if CPU(ARM64)
-    return 4;
+    return 56;
 #else
-    return 5;
+    return 53;
 #endif
 }
 
+size_t sizeOfCallVarargs()
+{
+#if CPU(ARM64)
+    return 300;
+#else
+    return 275;
+#endif
+}
 
-size_t sizeOfCall()
+size_t sizeOfCallForwardVarargs()
 {
 #if CPU(ARM64)
-    return 56;
+    return 460;
 #else
-    return 53;
+    return 372;
 #endif
 }
 
+size_t sizeOfConstructVarargs()
+{
+#if CPU(ARM64)
+    return 284;
+#else
+    return 253;
+#endif
+}
+
+size_t sizeOfIn()
+{
+#if CPU(ARM64)
+    return 4;
+#else
+    return 5; 
+#endif
+}
+
+size_t sizeOfICFor(Node* node)
+{
+    switch (node->op()) {
+    case GetById:
+        return sizeOfGetById();
+    case PutById:
+        return sizeOfPutById();
+    case Call:
+    case Construct:
+        return sizeOfCall();
+    case CallVarargs:
+        return sizeOfCallVarargs();
+    case CallForwardVarargs:
+        return sizeOfCallForwardVarargs();
+    case ConstructVarargs:
+        return sizeOfConstructVarargs();
+    case In:
+        return sizeOfIn();
+    default:
+        return 0;
+    }
+}
+
 } } // namespace JSC::FTL
 
 #endif // ENABLE(FTL_JIT)
index db76424..6fe9116 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 
 #if ENABLE(FTL_JIT)
 
-namespace JSC { namespace FTL {
+namespace JSC {
+
+namespace DFG {
+struct Node;
+}
+
+namespace FTL {
 
 size_t sizeOfGetById();
 size_t sizeOfPutById();
 size_t sizeOfCall();
-size_t sizeOfCheckIn();
+size_t sizeOfCallVarargs();
+size_t sizeOfCallForwardVarargs();
+size_t sizeOfConstructVarargs();
+size_t sizeOfIn();
+
+size_t sizeOfICFor(DFG::Node*);
 
 } } // namespace JSC::FTL
 
index f284e8d..4f36029 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -99,10 +99,12 @@ namespace JSC { namespace FTL {
     macro(V_JITOperation_EC, functionType(voidType, intPtr, intPtr)) \
     macro(V_JITOperation_ECb, functionType(voidType, intPtr, intPtr)) \
     macro(V_JITOperation_EVwsJ, functionType(voidType, intPtr, intPtr, int64)) \
+    macro(V_JITOperation_EZJZZZ, functionType(voidType, intPtr, int32, int64, int32, int32, int32)) \
     macro(V_JITOperation_J, functionType(voidType, int64)) \
     macro(V_JITOperation_Z, functionType(voidType, int32)) \
     macro(Z_JITOperation_D, functionType(int32, doubleType)) \
-    macro(Z_JITOperation_EC, functionType(int32, intPtr, intPtr))
+    macro(Z_JITOperation_EC, functionType(int32, intPtr, intPtr)) \
+    macro(Z_JITOperation_EJZ, functionType(int32, intPtr, int64, int32))
     
 class IntrinsicRepository : public CommonValues {
 public:
index b06ec03..e6f6bda 100644 (file)
@@ -48,6 +48,7 @@ JSCall::JSCall(unsigned stackmapID, Node* node)
     , m_stackmapID(stackmapID)
     , m_instructionOffset(0)
 {
+    ASSERT(node->op() == Call || node->op() == Construct);
 }
 
 } } // namespace JSC::FTL
index 20367c2..84f4cd0 100644 (file)
@@ -33,6 +33,8 @@
 
 namespace JSC { namespace FTL {
 
+using namespace DFG;
+
 JSCallBase::JSCallBase()
     : m_type(CallLinkInfo::None)
     , m_callLinkInfo(nullptr)
index 9922656..595ac69 100644 (file)
@@ -36,6 +36,10 @@ namespace JSC {
 
 class LinkBuffer;
 
+namespace DFG {
+struct Node;
+}
+
 namespace FTL {
 
 class JSCallBase {
diff --git a/Source/JavaScriptCore/ftl/FTLJSCallVarargs.cpp b/Source/JavaScriptCore/ftl/FTLJSCallVarargs.cpp
new file mode 100644 (file)
index 0000000..b729ff0
--- /dev/null
@@ -0,0 +1,241 @@
+/*
+ * Copyright (C) 2015 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "FTLJSCallVarargs.h"
+
+#if ENABLE(FTL_JIT)
+
+#include "DFGGraph.h"
+#include "DFGNode.h"
+#include "DFGOperations.h"
+#include "JSCInlines.h"
+#include "LinkBuffer.h"
+#include "ScratchRegisterAllocator.h"
+#include "SetupVarargsFrame.h"
+
+namespace JSC { namespace FTL {
+
+using namespace DFG;
+
+JSCallVarargs::JSCallVarargs()
+    : m_stackmapID(UINT_MAX)
+    , m_node(nullptr)
+    , m_instructionOffset(UINT_MAX)
+{
+}
+
+JSCallVarargs::JSCallVarargs(unsigned stackmapID, Node* node)
+    : m_stackmapID(stackmapID)
+    , m_node(node)
+    , m_callBase(
+        node->op() == ConstructVarargs ? CallLinkInfo::ConstructVarargs : CallLinkInfo::CallVarargs,
+        node->origin.semantic)
+    , m_instructionOffset(0)
+{
+    ASSERT(node->op() == CallVarargs || node->op() == CallForwardVarargs || node->op() == ConstructVarargs);
+}
+
+unsigned JSCallVarargs::numSpillSlotsNeeded()
+{
+    return 4;
+}
+
+void JSCallVarargs::emit(CCallHelpers& jit, Graph& graph, int32_t spillSlotsOffset)
+{
+    // We are passed three pieces of information:
+    // - The callee.
+    // - The arguments object.
+    // - The "this" value, if this is a call rather than a construct.
+    
+    bool isCall = m_node->op() != ConstructVarargs;
+    
+    CallVarargsData* data = m_node->callVarargsData();
+    
+    GPRReg calleeGPR = GPRInfo::argumentGPR0;
+    
+    GPRReg argumentsGPR = InvalidGPRReg;
+    GPRReg thisGPR = InvalidGPRReg;
+    bool argumentsOnStack = false;
+    
+    switch (m_node->op()) {
+    case CallVarargs:
+        argumentsGPR = GPRInfo::argumentGPR1;
+        thisGPR = GPRInfo::argumentGPR2;
+        break;
+    case CallForwardVarargs:
+        thisGPR = GPRInfo::argumentGPR1;
+        argumentsOnStack = true;
+        break;
+    case ConstructVarargs:
+        argumentsGPR = GPRInfo::argumentGPR1;
+        break;
+    default:
+        RELEASE_ASSERT_NOT_REACHED();
+        break;
+    }
+    
+    const unsigned calleeSpillSlot = 0;
+    const unsigned argumentsSpillSlot = 1;
+    const unsigned thisSpillSlot = 2;
+    const unsigned stackPointerSpillSlot = 3;
+    
+    // Get some scratch registers.
+    RegisterSet usedRegisters;
+    usedRegisters.merge(RegisterSet::stackRegisters());
+    usedRegisters.merge(RegisterSet::reservedHardwareRegisters());
+    usedRegisters.merge(RegisterSet::calleeSaveRegisters());
+    usedRegisters.set(calleeGPR);
+    if (argumentsGPR != InvalidGPRReg)
+        usedRegisters.set(argumentsGPR);
+    if (thisGPR != InvalidGPRReg)
+        usedRegisters.set(thisGPR);
+    ScratchRegisterAllocator allocator(usedRegisters);
+    GPRReg scratchGPR1 = allocator.allocateScratchGPR();
+    GPRReg scratchGPR2 = allocator.allocateScratchGPR();
+    GPRReg scratchGPR3 = allocator.allocateScratchGPR();
+    if (argumentsOnStack)
+        argumentsGPR = allocator.allocateScratchGPR();
+    RELEASE_ASSERT(!allocator.numberOfReusedRegisters());
+    
+    auto loadArguments = [&] (bool clobbered) {
+        if (argumentsOnStack) {
+            jit.load64(
+                CCallHelpers::addressFor(graph.machineArgumentsRegisterFor(m_node->origin.semantic)),
+                argumentsGPR);
+        } else if (clobbered) {
+            jit.load64(
+                CCallHelpers::addressFor(spillSlotsOffset + argumentsSpillSlot), argumentsGPR);
+        }
+    };
+    
+    auto computeUsedStack = [&] (GPRReg targetGPR, unsigned extra) {
+        if (isARM64()) {
+            // Have to do this the weird way because $sp on ARM64 means zero when used in a subtraction.
+            jit.move(CCallHelpers::stackPointerRegister, targetGPR);
+            jit.negPtr(targetGPR);
+            jit.addPtr(GPRInfo::callFrameRegister, targetGPR);
+        } else {
+            jit.move(GPRInfo::callFrameRegister, targetGPR);
+            jit.subPtr(CCallHelpers::stackPointerRegister, targetGPR);
+        }
+        if (extra)
+            jit.subPtr(CCallHelpers::TrustedImm32(extra), targetGPR);
+        jit.urshiftPtr(CCallHelpers::Imm32(3), targetGPR);
+    };
+    
+    auto callWithExceptionCheck = [&] (void* callee) {
+        jit.move(CCallHelpers::TrustedImmPtr(callee), GPRInfo::nonPreservedNonArgumentGPR);
+        jit.call(GPRInfo::nonPreservedNonArgumentGPR);
+        m_exceptions.append(jit.emitExceptionCheck(AssemblyHelpers::NormalExceptionCheck, AssemblyHelpers::FarJumpWidth));
+    };
+    
+    loadArguments(false);
+
+    if (isARM64()) {
+        jit.move(CCallHelpers::stackPointerRegister, scratchGPR1);
+        jit.storePtr(scratchGPR1, CCallHelpers::addressFor(spillSlotsOffset + stackPointerSpillSlot));
+    } else
+        jit.storePtr(CCallHelpers::stackPointerRegister, CCallHelpers::addressFor(spillSlotsOffset + stackPointerSpillSlot));
+    
+    // Attempt the forwarding fast path, if it's been requested.
+    CCallHelpers::Jump haveArguments;
+    if (m_node->op() == CallForwardVarargs) {
+        // Do the horrific foo.apply(this, arguments) optimization.
+        // FIXME: do this optimization at the IR level.
+        
+        CCallHelpers::JumpList slowCase;
+        slowCase.append(jit.branchTest64(CCallHelpers::NonZero, argumentsGPR));
+        
+        computeUsedStack(scratchGPR2, 0);
+        emitSetupVarargsFrameFastCase(jit, scratchGPR2, scratchGPR1, scratchGPR2, scratchGPR3, m_node->origin.semantic.inlineCallFrame, data->firstVarArgOffset, slowCase);
+        
+        jit.move(calleeGPR, GPRInfo::regT0);
+        haveArguments = jit.jump();
+        slowCase.link(&jit);
+    }
+    
+    // Gotta spill the callee, arguments, and this because we will need them later and we will have some
+    // calls that clobber them.
+    jit.store64(calleeGPR, CCallHelpers::addressFor(spillSlotsOffset + calleeSpillSlot));
+    if (!argumentsOnStack)
+        jit.store64(argumentsGPR, CCallHelpers::addressFor(spillSlotsOffset + argumentsSpillSlot));
+    if (isCall)
+        jit.store64(thisGPR, CCallHelpers::addressFor(spillSlotsOffset + thisSpillSlot));
+    
+    unsigned extraStack = sizeof(CallerFrameAndPC) +
+        WTF::roundUpToMultipleOf(stackAlignmentBytes(), 5 * sizeof(void*));
+    computeUsedStack(scratchGPR1, 0);
+    jit.subPtr(CCallHelpers::TrustedImm32(extraStack), CCallHelpers::stackPointerRegister);
+    jit.setupArgumentsWithExecState(argumentsGPR, scratchGPR1, CCallHelpers::TrustedImm32(data->firstVarArgOffset));
+    callWithExceptionCheck(bitwise_cast<void*>(operationSizeFrameForVarargs));
+    
+    jit.move(GPRInfo::returnValueGPR, scratchGPR1);
+    computeUsedStack(scratchGPR2, extraStack);
+    loadArguments(true);
+    emitSetVarargsFrame(jit, scratchGPR1, false, scratchGPR2, scratchGPR2);
+    jit.addPtr(CCallHelpers::TrustedImm32(-extraStack), scratchGPR2, CCallHelpers::stackPointerRegister);
+    jit.setupArgumentsWithExecState(scratchGPR2, argumentsGPR, CCallHelpers::TrustedImm32(data->firstVarArgOffset), scratchGPR1);
+    callWithExceptionCheck(bitwise_cast<void*>(operationSetupVarargsFrame));
+    
+    jit.move(GPRInfo::returnValueGPR, scratchGPR2);
+    
+    if (isCall)
+        jit.load64(CCallHelpers::addressFor(spillSlotsOffset + thisSpillSlot), thisGPR);
+    jit.load64(CCallHelpers::addressFor(spillSlotsOffset + calleeSpillSlot), GPRInfo::regT0);
+    
+    if (m_node->op() == CallForwardVarargs)
+        haveArguments.link(&jit);
+    
+    jit.addPtr(CCallHelpers::TrustedImm32(sizeof(CallerFrameAndPC)), scratchGPR2, CCallHelpers::stackPointerRegister);
+    
+    if (isCall)
+        jit.store64(thisGPR, CCallHelpers::calleeArgumentSlot(0));
+    
+    // Henceforth we make the call. The base FTL call machinery expects the callee in regT0 and for the
+    // stack frame to already be set up, which it is.
+    jit.store64(GPRInfo::regT0, CCallHelpers::calleeFrameSlot(JSStack::Callee));
+    
+    m_callBase.emit(jit);
+    
+    // Undo the damage we've done.
+    if (isARM64()) {
+        GPRReg scratchGPRAtReturn = CCallHelpers::selectScratchGPR(GPRInfo::returnValueGPR);
+        jit.loadPtr(CCallHelpers::addressFor(spillSlotsOffset + stackPointerSpillSlot), scratchGPRAtReturn);
+        jit.move(scratchGPRAtReturn, CCallHelpers::stackPointerRegister);
+    } else
+        jit.loadPtr(CCallHelpers::addressFor(spillSlotsOffset + stackPointerSpillSlot), CCallHelpers::stackPointerRegister);
+}
+
+void JSCallVarargs::link(VM& vm, LinkBuffer& linkBuffer, CodeLocationLabel exceptionHandler)
+{
+    m_callBase.link(vm, linkBuffer);
+    linkBuffer.link(m_exceptions, exceptionHandler);
+}
+
+} } // namespace JSC::FTL
+
+#endif // ENABLE(FTL_JIT)
+
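The `computeUsedStack` lambda in FTLJSCallVarargs.cpp above computes how many register-sized slots the frame currently occupies: frames grow down, so the count is `(callFrame - stackPointer - extra) >> 3`. A hedged sketch of just that arithmetic (the ARM64 path in the patch negates SP first because SP reads as zero in a subtraction, but the resulting value is the same):

```cpp
#include <cstdint>

// Hypothetical sketch of the used-stack computation: the distance in bytes
// between the frame pointer and the stack pointer, minus any extra scratch
// space, shifted right by 3 to convert 8-byte slots from bytes.
uint64_t usedStackSlots(uint64_t callFrame, uint64_t stackPointer, uint32_t extra)
{
    return (callFrame - stackPointer - extra) >> 3;
}
```

This slot count is what gets handed to `operationSizeFrameForVarargs` so the runtime can size the outgoing varargs frame below everything already in use.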
diff --git a/Source/JavaScriptCore/ftl/FTLJSCallVarargs.h b/Source/JavaScriptCore/ftl/FTLJSCallVarargs.h
new file mode 100644 (file)
index 0000000..cdaefb9
--- /dev/null
@@ -0,0 +1,78 @@
+/*
+ * Copyright (C) 2015 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef FTLJSCallVarargs_h
+#define FTLJSCallVarargs_h
+
+#if ENABLE(FTL_JIT)
+
+#include "FTLJSCallBase.h"
+
+namespace JSC {
+
+class LinkBuffer;
+
+namespace DFG {
+class Graph;
+struct Node;
+}
+
+namespace FTL {
+
+class JSCallVarargs {
+public:
+    JSCallVarargs();
+    JSCallVarargs(unsigned stackmapID, DFG::Node*);
+    
+    DFG::Node* node() const { return m_node; }
+    
+    static unsigned numSpillSlotsNeeded();
+    
+    void emit(CCallHelpers&, DFG::Graph&, int32_t spillSlotsOffset);
+    void link(VM&, LinkBuffer&, CodeLocationLabel exceptionHandler);
+    
+    unsigned stackmapID() const { return m_stackmapID; }
+    
+    bool operator<(const JSCallVarargs& other) const
+    {
+        return m_instructionOffset < other.m_instructionOffset;
+    }
+    
+private:
+    unsigned m_stackmapID;
+    DFG::Node* m_node;
+    JSCallBase m_callBase;
+    CCallHelpers::JumpList m_exceptions;
+
+public:
+    uint32_t m_instructionOffset;
+};
+
+} } // namespace JSC::FTL
+
+#endif // ENABLE(FTL_JIT)
+
+#endif // FTLJSCallVarargs_h
+
index 92bc133..00b1707 100644 (file)
@@ -192,6 +192,30 @@ public:
             m_out.stackmapIntrinsic(), m_out.constInt64(m_ftlState.capturedStackmapID),
             m_out.int32Zero, capturedAlloca);
         
+        // If we have any varargs calls then we need spill slots for them.
+        bool hasVarargs = false;
+        for (BasicBlock* block : preOrder) {
+            for (Node* node : *block) {
+                switch (node->op()) {
+                case CallVarargs:
+                case CallForwardVarargs:
+                case ConstructVarargs:
+                    hasVarargs = true;
+                    break;
+                default:
+                    break;
+                }
+            }
+        }
+        if (hasVarargs) {
+            LValue varargsSpillSlots = m_out.alloca(
+                arrayType(m_out.int64, JSCallVarargs::numSpillSlotsNeeded()));
+            m_ftlState.varargsSpillSlotsStackmapID = m_stackmapIDs++;
+            m_out.call(
+                m_out.stackmapIntrinsic(), m_out.constInt64(m_ftlState.varargsSpillSlotsStackmapID),
+                m_out.int32Zero, varargsSpillSlots);
+        }
+        
         m_callFrame = m_out.ptrToInt(
             m_out.call(m_out.frameAddressIntrinsic(), m_out.int32Zero), m_out.intPtr);
         m_tagTypeNumber = m_out.constInt64(TagTypeNumber);
@@ -698,6 +722,14 @@ private:
         case Construct:
             compileCallOrConstruct();
             break;
+        case CallVarargs:
+        case CallForwardVarargs:
+        case ConstructVarargs:
+            compileCallOrConstructVarargs();
+            break;
+        case LoadVarargs:
+            compileLoadVarargs();
+            break;
 #if ENABLE(FTL_NATIVE_CALL_INLINING)
         case NativeCall:
         case NativeConstruct:
@@ -2082,8 +2114,22 @@ private:
     {
         checkArgumentsNotCreated();
 
-        DFG_ASSERT(m_graph, m_node, !m_node->origin.semantic.inlineCallFrame);
-        setInt32(m_out.add(m_out.load32NonNegative(payloadFor(JSStack::ArgumentCount)), m_out.constInt32(-1)));
+        if (m_node->origin.semantic.inlineCallFrame
+            && !m_node->origin.semantic.inlineCallFrame->isVarargs()) {
+            setInt32(
+                m_out.constInt32(
+                    m_node->origin.semantic.inlineCallFrame->arguments.size() - 1));
+        } else {
+            VirtualRegister argumentCountRegister;
+            if (!m_node->origin.semantic.inlineCallFrame)
+                argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+            else
+                argumentCountRegister = m_node->origin.semantic.inlineCallFrame->argumentCountRegister;
+            setInt32(
+                m_out.add(
+                    m_out.load32NonNegative(payloadFor(argumentCountRegister)),
+                    m_out.constInt32(-1)));
+        }
     }
     
     void compileGetMyArgumentByVal()
@@ -2095,10 +2141,17 @@ private:
         LValue index = lowInt32(m_node->child1());
         
         LValue limit;
-        if (codeOrigin.inlineCallFrame)
+        if (codeOrigin.inlineCallFrame
+            && !codeOrigin.inlineCallFrame->isVarargs())
             limit = m_out.constInt32(codeOrigin.inlineCallFrame->arguments.size() - 1);
-        else
-            limit = m_out.sub(m_out.load32(payloadFor(JSStack::ArgumentCount)), m_out.int32One);
+        else {
+            VirtualRegister argumentCountRegister;
+            if (!codeOrigin.inlineCallFrame)
+                argumentCountRegister = VirtualRegister(JSStack::ArgumentCount);
+            else
+                argumentCountRegister = codeOrigin.inlineCallFrame->argumentCountRegister;
+            limit = m_out.sub(m_out.load32(payloadFor(argumentCountRegister)), m_out.int32One);
+        }
         
         speculate(Uncountable, noValue(), 0, m_out.aboveOrEqual(index, limit));
         
@@ -3811,6 +3864,87 @@ private:
         
         setJSValue(call);
     }
+    
+    void compileCallOrConstructVarargs()
+    {
+        LValue jsCallee = lowJSValue(m_node->child1());
+        
+        LValue jsArguments = nullptr;
+        LValue thisArg = nullptr;
+        
+        switch (m_node->op()) {
+        case CallVarargs:
+            jsArguments = lowJSValue(m_node->child2());
+            thisArg = lowJSValue(m_node->child3());
+            break;
+        case CallForwardVarargs:
+            thisArg = lowJSValue(m_node->child2());
+            break;
+        case ConstructVarargs:
+            jsArguments = lowJSValue(m_node->child2());
+            break;
+        default:
+            DFG_CRASH(m_graph, m_node, "bad node type");
+            break;
+        }
+        
+        unsigned stackmapID = m_stackmapIDs++;
+        
+        Vector<LValue> arguments;
+        arguments.append(m_out.constInt64(stackmapID));
+        arguments.append(m_out.constInt32(sizeOfICFor(m_node)));
+        arguments.append(constNull(m_out.ref8));
+        arguments.append(m_out.constInt32(1 + !!jsArguments + !!thisArg));
+        arguments.append(jsCallee);
+        if (jsArguments)
+            arguments.append(jsArguments);
+        if (thisArg)
+            arguments.append(thisArg);
+        
+        callPreflight();
+        
+        LValue call = m_out.call(m_out.patchpointInt64Intrinsic(), arguments);
+        setInstructionCallingConvention(call, LLVMCCallConv);
+        
+        m_ftlState.jsCallVarargses.append(JSCallVarargs(stackmapID, m_node));
+        
+        setJSValue(call);
+    }
+    
+    void compileLoadVarargs()
+    {
+        LoadVarargsData* data = m_node->loadVarargsData();
+        LValue jsArguments = lowJSValue(m_node->child1());
+        
+        LValue length = vmCall(
+            m_out.operation(operationSizeOfVarargs), m_callFrame, jsArguments,
+            m_out.constInt32(data->offset));
+        
+        // FIXME: There is a chance that we will call an effectful length property twice. This is safe
+        // from the standpoint of the VM's integrity, but it's subtly wrong from a spec compliance
+        // standpoint. The best solution would be one where we can exit *into* the op_call_varargs right
+        // past the sizing.
+        // https://bugs.webkit.org/show_bug.cgi?id=141448
+        
+        LValue lengthIncludingThis = m_out.add(length, m_out.int32One);
+        speculate(
+            VarargsOverflow, noValue(), nullptr,
+            m_out.above(lengthIncludingThis, m_out.constInt32(data->limit)));
+        
+        m_out.store32(lengthIncludingThis, payloadFor(data->machineCount));
+        
+        // FIXME: This computation is rather silly. If operationLoadVarargs just took a pointer instead
+        // of a VirtualRegister, we wouldn't have to do this.
+        // https://bugs.webkit.org/show_bug.cgi?id=141660
+        LValue machineStart = m_out.lShr(
+            m_out.sub(addressFor(data->machineStart.offset()).value(), m_callFrame),
+            m_out.constIntPtr(3));
+        
+        vmCall(
+            m_out.operation(operationLoadVarargs), m_callFrame,
+            m_out.castToInt32(machineStart), jsArguments, m_out.constInt32(data->offset),
+            length, m_out.constInt32(data->mandatoryMinimum));
+    }
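[Editor's note: the speculation in compileLoadVarargs above boils down to a bounds check on the argument count. A minimal standalone sketch of that guard — the function name and the exception used to model `speculate(VarargsOverflow, ...)` are illustrative, not JSC API:]

```cpp
#include <cassert>
#include <cstdint>
#include <stdexcept>

// Models the FTL's VarargsOverflow speculation: the dynamic arguments length,
// plus one slot for `this`, must fit in the statically reserved `limit` slots
// (data->limit above); otherwise we deoptimize (modeled here as a throw).
uint32_t loadVarargsLengthCheck(uint32_t length, uint32_t limit)
{
    uint32_t lengthIncludingThis = length + 1;
    if (lengthIncludingThis > limit)
        throw std::runtime_error("VarargsOverflow: deoptimize"); // stands in for speculate()
    return lengthIncludingThis;
}
```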
 
     void compileJump()
     {
@@ -4115,7 +4249,7 @@ private:
             
                 LValue call = m_out.call(
                     m_out.patchpointInt64Intrinsic(),
-                    m_out.constInt64(stackmapID), m_out.constInt32(sizeOfCheckIn()),
+                    m_out.constInt64(stackmapID), m_out.constInt32(sizeOfIn()),
                     constNull(m_out.ref8), m_out.constInt32(1), cell);
 
                 setInstructionCallingConvention(call, LLVMAnyRegCallConv);
@@ -6510,7 +6644,7 @@ private:
 
         // Buffer is out of space, flush it.
         m_out.appendTo(bufferIsFull, continuation);
-        vmCall(m_out.operation(operationFlushWriteBarrierBuffer), m_callFrame, base, NoExceptions);
+        vmCallNoExceptions(m_out.operation(operationFlushWriteBarrierBuffer), m_callFrame, base);
         m_out.jump(continuation);
 
         m_out.appendTo(continuation, lastNext);
@@ -6519,44 +6653,23 @@ private:
 #endif
     }
 
-    enum ExceptionCheckMode { NoExceptions, CheckExceptions };
-    
-    LValue vmCall(LValue function, ExceptionCheckMode mode = CheckExceptions)
-    {
-        callPreflight();
-        LValue result = m_out.call(function);
-        callCheck(mode);
-        return result;
-    }
-    LValue vmCall(LValue function, LValue arg1, ExceptionCheckMode mode = CheckExceptions)
-    {
-        callPreflight();
-        LValue result = m_out.call(function, arg1);
-        callCheck(mode);
-        return result;
-    }
-    LValue vmCall(LValue function, LValue arg1, LValue arg2, ExceptionCheckMode mode = CheckExceptions)
+    template<typename... Args>
+    LValue vmCall(LValue function, Args... args)
     {
         callPreflight();
-        LValue result = m_out.call(function, arg1, arg2);
-        callCheck(mode);
+        LValue result = m_out.call(function, args...);
+        callCheck();
         return result;
     }
-    LValue vmCall(LValue function, LValue arg1, LValue arg2, LValue arg3, ExceptionCheckMode mode = CheckExceptions)
-    {
-        callPreflight();
-        LValue result = m_out.call(function, arg1, arg2, arg3);
-        callCheck(mode);
-        return result;
-    }
-    LValue vmCall(LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, ExceptionCheckMode mode = CheckExceptions)
+    
+    template<typename... Args>
+    LValue vmCallNoExceptions(LValue function, Args... args)
     {
         callPreflight();
-        LValue result = m_out.call(function, arg1, arg2, arg3, arg4);
-        callCheck(mode);
+        LValue result = m_out.call(function, args...);
         return result;
     }
-    
+
     void callPreflight(CodeOrigin codeOrigin)
     {
         m_out.store32(
@@ -6570,11 +6683,8 @@ private:
         callPreflight(m_node->origin.semantic);
     }
     
-    void callCheck(ExceptionCheckMode mode = CheckExceptions)
+    void callCheck()
     {
-        if (mode == NoExceptions)
-            return;
-        
         if (Options::enableExceptionFuzz())
             m_out.call(m_out.operation(operationExceptionFuzz));
         
index af82cbd..27febd3 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -357,13 +357,8 @@ public:
     LValue call(LValue function, const VectorType& vector) { return buildCall(m_builder, function, vector); }
     LValue call(LValue function) { return buildCall(m_builder, function); }
     LValue call(LValue function, LValue arg1) { return buildCall(m_builder, function, arg1); }
-    LValue call(LValue function, LValue arg1, LValue arg2) { return buildCall(m_builder, function, arg1, arg2); }
-    LValue call(LValue function, LValue arg1, LValue arg2, LValue arg3) { return buildCall(m_builder, function, arg1, arg2, arg3); }
-    LValue call(LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4) { return buildCall(m_builder, function, arg1, arg2, arg3, arg4); }
-    LValue call(LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5) { return buildCall(m_builder, function, arg1, arg2, arg3, arg4, arg5); }
-    LValue call(LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5, LValue arg6) { return buildCall(m_builder, function, arg1, arg2, arg3, arg4, arg5, arg6); }
-    LValue call(LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5, LValue arg6, LValue arg7) { return buildCall(m_builder, function, arg1, arg2, arg3, arg4, arg5, arg6, arg7); }
-    LValue call(LValue function, LValue arg1, LValue arg2, LValue arg3, LValue arg4, LValue arg5, LValue arg6, LValue arg7, LValue arg8) { return buildCall(m_builder, function, arg1, arg2, arg3, arg4, arg5, arg6, arg7, arg8); }
+    template<typename... Args>
+    LValue call(LValue function, LValue arg1, Args... args) { return buildCall(m_builder, function, arg1, args...); }
     
     template<typename FunctionType>
     LValue operation(FunctionType function)
index 62ce058..7937050 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -49,6 +49,10 @@ State::State(Graph& graph)
     , module(0)
     , function(0)
     , generatedFunction(0)
+    , handleStackOverflowExceptionStackmapID(UINT_MAX)
+    , handleExceptionStackmapID(UINT_MAX)
+    , capturedStackmapID(UINT_MAX)
+    , varargsSpillSlotsStackmapID(UINT_MAX)
     , unwindDataSection(0)
     , unwindDataSectionSize(0)
 {
index e986517..56e17a3 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -36,6 +36,7 @@
 #include "FTLJITCode.h"
 #include "FTLJITFinalizer.h"
 #include "FTLJSCall.h"
+#include "FTLJSCallVarargs.h"
 #include "FTLStackMaps.h"
 #include "FTLState.h"
 #include <wtf/Noncopyable.h>
@@ -71,10 +72,12 @@ public:
     unsigned handleStackOverflowExceptionStackmapID;
     unsigned handleExceptionStackmapID;
     unsigned capturedStackmapID;
+    unsigned varargsSpillSlotsStackmapID;
     SegmentedVector<GetByIdDescriptor> getByIds;
     SegmentedVector<PutByIdDescriptor> putByIds;
     SegmentedVector<CheckInDescriptor> checkIns;
     Vector<JSCall> jsCalls;
+    Vector<JSCallVarargs> jsCallVarargses;
     Vector<CString> codeSectionNames;
     Vector<CString> dataSectionNames;
     void* unwindDataSection;
index 96eb2ab..74768c6 100644
@@ -134,7 +134,7 @@ JSValue eval(CallFrame* callFrame)
     return interpreter->execute(eval, callFrame, thisValue, callerScopeChain);
 }
 
-unsigned sizeFrameForVarargs(CallFrame* callFrame, JSStack* stack, JSValue arguments, unsigned numUsedStackSlots, uint32_t firstVarArgOffset)
+unsigned sizeOfVarargs(CallFrame* callFrame, JSValue arguments, uint32_t firstVarArgOffset)
 {
     unsigned length;
     if (!arguments)
@@ -156,6 +156,13 @@ unsigned sizeFrameForVarargs(CallFrame* callFrame, JSStack* stack, JSValue argum
     else
         length = 0;
     
+    return length;
+}
+
+unsigned sizeFrameForVarargs(CallFrame* callFrame, JSStack* stack, JSValue arguments, unsigned numUsedStackSlots, uint32_t firstVarArgOffset)
+{
+    unsigned length = sizeOfVarargs(callFrame, arguments, firstVarArgOffset);
+    
     CallFrame* calleeFrame = calleeFrameForVarargs(callFrame, numUsedStackSlots, length + 1);
     if (length > Arguments::MaxArguments || !stack->ensureCapacityFor(calleeFrame->registers())) {
         throwStackOverflowError(callFrame);
index 7eed235..3bfed01 100644
@@ -308,6 +308,7 @@ namespace JSC {
         return CallFrame::create(callFrame->registers() - paddedCalleeFrameOffset);
     }
 
+    unsigned sizeOfVarargs(CallFrame* exec, JSValue arguments, uint32_t firstVarArgOffset);
     unsigned sizeFrameForVarargs(CallFrame* exec, JSStack*, JSValue arguments, unsigned numUsedStackSlots, uint32_t firstVarArgOffset);
     void loadVarargs(CallFrame* execCaller, VirtualRegister firstElementDest, JSValue source, uint32_t offset, uint32_t length);
     void setupVarargsFrame(CallFrame* execCaller, CallFrame* execCallee, JSValue arguments, uint32_t firstVarArgOffset, uint32_t length);
index b3036b7..c3a088d 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -149,7 +149,10 @@ void StackVisitor::readInlinedFrame(CallFrame* callFrame, CodeOrigin* codeOrigin
 
         m_frame.m_callFrame = callFrame;
         m_frame.m_inlineCallFrame = inlineCallFrame;
-        m_frame.m_argumentCountIncludingThis = inlineCallFrame->arguments.size();
+        if (inlineCallFrame->argumentCountRegister.isValid())
+            m_frame.m_argumentCountIncludingThis = callFrame->r(inlineCallFrame->argumentCountRegister.offset()).unboxedInt32();
+        else
+            m_frame.m_argumentCountIncludingThis = inlineCallFrame->arguments.size();
         m_frame.m_codeBlock = inlineCallFrame->baselineCodeBlock();
         m_frame.m_bytecodeOffset = codeOrigin->bytecodeIndex;
 
index 059e5d9..443cd6c 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -212,15 +212,27 @@ void AssemblyHelpers::callExceptionFuzz()
     addPtr(TrustedImm32(stackAlignmentBytes()), stackPointerRegister);
 }
 
-AssemblyHelpers::Jump AssemblyHelpers::emitExceptionCheck(ExceptionCheckKind kind)
+AssemblyHelpers::Jump AssemblyHelpers::emitExceptionCheck(ExceptionCheckKind kind, ExceptionJumpWidth width)
 {
     callExceptionFuzz();
+
+    if (width == FarJumpWidth)
+        kind = (kind == NormalExceptionCheck ? InvertedExceptionCheck : NormalExceptionCheck);
     
+    Jump result;
 #if USE(JSVALUE64)
-    return branchTest64(kind == NormalExceptionCheck ? NonZero : Zero, AbsoluteAddress(vm()->addressOfException()));
+    result = branchTest64(kind == NormalExceptionCheck ? NonZero : Zero, AbsoluteAddress(vm()->addressOfException()));
 #elif USE(JSVALUE32_64)
-    return branch32(kind == NormalExceptionCheck ? NotEqual : Equal, AbsoluteAddress(reinterpret_cast<char*>(vm()->addressOfException()) + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), TrustedImm32(JSValue::EmptyValueTag));
+    result = branch32(kind == NormalExceptionCheck ? NotEqual : Equal, AbsoluteAddress(reinterpret_cast<char*>(vm()->addressOfException()) + OBJECT_OFFSETOF(JSValue, u.asBits.tag)), TrustedImm32(JSValue::EmptyValueTag));
 #endif
+    
+    if (width == NormalJumpWidth)
+        return result;
+    
+    PatchableJump realJump = patchableJump();
+    result.link(this);
+    
+    return realJump.m_jump;
 }
 
 void AssemblyHelpers::emitStoreStructureWithTypeInfo(AssemblyHelpers& jit, TrustedImmPtr structure, RegisterID dest)
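[Editor's note: the FarJumpWidth path above uses a branch-inversion trick: the check's sense is flipped so the short conditional branch skips over a patchable unconditional far jump, and the exception path goes through that far jump. A minimal sketch of just the kind-flipping logic — the enums mirror the `AssemblyHelpers` names, the rest is omitted:]

```cpp
#include <cassert>

enum ExceptionCheckKind { NormalExceptionCheck, InvertedExceptionCheck };
enum ExceptionJumpWidth { NormalJumpWidth, FarJumpWidth };

// For a far jump, emit the *opposite* short branch: it falls through to a
// patchable far jump on the exception path and hops over it otherwise.
ExceptionCheckKind effectiveKind(ExceptionCheckKind kind, ExceptionJumpWidth width)
{
    if (width == FarJumpWidth)
        return kind == NormalExceptionCheck ? InvertedExceptionCheck : NormalExceptionCheck;
    return kind;
}
```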
index 1a40059..70213ef 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -338,6 +338,10 @@ public:
     }
     static Address addressFor(VirtualRegister virtualRegister)
     {
+        // NB. It's tempting on some architectures to sometimes use an offset from the stack
+        // register because for some offsets that will encode to a smaller instruction. But we
+        // cannot do this. We use this in places where the stack pointer has been moved to some
+        // unpredictable location.
         ASSERT(virtualRegister.isValid());
         return Address(GPRInfo::callFrameRegister, virtualRegister.offset() * sizeof(Register));
     }
@@ -367,39 +371,39 @@ public:
     }
 
     // Access to our fixed callee CallFrame.
-    Address calleeFrameSlot(int slot)
+    static Address calleeFrameSlot(int slot)
     {
         ASSERT(slot >= JSStack::CallerFrameAndPCSize);
-        return MacroAssembler::Address(MacroAssembler::stackPointerRegister, sizeof(Register) * (slot - JSStack::CallerFrameAndPCSize));
+        return Address(stackPointerRegister, sizeof(Register) * (slot - JSStack::CallerFrameAndPCSize));
     }
 
     // Access to our fixed callee CallFrame.
-    Address calleeArgumentSlot(int argument)
+    static Address calleeArgumentSlot(int argument)
     {
         return calleeFrameSlot(virtualRegisterForArgument(argument).offset());
     }
 
-    Address calleeFrameTagSlot(int slot)
+    static Address calleeFrameTagSlot(int slot)
     {
         return calleeFrameSlot(slot).withOffset(TagOffset);
     }
 
-    Address calleeFramePayloadSlot(int slot)
+    static Address calleeFramePayloadSlot(int slot)
     {
         return calleeFrameSlot(slot).withOffset(PayloadOffset);
     }
 
-    Address calleeArgumentTagSlot(int argument)
+    static Address calleeArgumentTagSlot(int argument)
     {
         return calleeArgumentSlot(argument).withOffset(TagOffset);
     }
 
-    Address calleeArgumentPayloadSlot(int argument)
+    static Address calleeArgumentPayloadSlot(int argument)
     {
         return calleeArgumentSlot(argument).withOffset(PayloadOffset);
     }
 
-    Address calleeFrameCallerFrame()
+    static Address calleeFrameCallerFrame()
     {
         return calleeFrameSlot(0).withOffset(CallFrame::callerFrameOffset());
     }
@@ -409,21 +413,24 @@ public:
         return branch8(Below, Address(cellReg, JSCell::typeInfoTypeOffset()), TrustedImm32(ObjectType));
     }
 
-    static GPRReg selectScratchGPR(GPRReg preserve1 = InvalidGPRReg, GPRReg preserve2 = InvalidGPRReg, GPRReg preserve3 = InvalidGPRReg, GPRReg preserve4 = InvalidGPRReg)
+    static GPRReg selectScratchGPR(GPRReg preserve1 = InvalidGPRReg, GPRReg preserve2 = InvalidGPRReg, GPRReg preserve3 = InvalidGPRReg, GPRReg preserve4 = InvalidGPRReg, GPRReg preserve5 = InvalidGPRReg)
     {
-        if (preserve1 != GPRInfo::regT0 && preserve2 != GPRInfo::regT0 && preserve3 != GPRInfo::regT0 && preserve4 != GPRInfo::regT0)
+        if (preserve1 != GPRInfo::regT0 && preserve2 != GPRInfo::regT0 && preserve3 != GPRInfo::regT0 && preserve4 != GPRInfo::regT0 && preserve5 != GPRInfo::regT0)
             return GPRInfo::regT0;
 
-        if (preserve1 != GPRInfo::regT1 && preserve2 != GPRInfo::regT1 && preserve3 != GPRInfo::regT1 && preserve4 != GPRInfo::regT1)
+        if (preserve1 != GPRInfo::regT1 && preserve2 != GPRInfo::regT1 && preserve3 != GPRInfo::regT1 && preserve4 != GPRInfo::regT1 && preserve5 != GPRInfo::regT1)
             return GPRInfo::regT1;
 
-        if (preserve1 != GPRInfo::regT2 && preserve2 != GPRInfo::regT2 && preserve3 != GPRInfo::regT2 && preserve4 != GPRInfo::regT2)
+        if (preserve1 != GPRInfo::regT2 && preserve2 != GPRInfo::regT2 && preserve3 != GPRInfo::regT2 && preserve4 != GPRInfo::regT2 && preserve5 != GPRInfo::regT2)
             return GPRInfo::regT2;
 
-        if (preserve1 != GPRInfo::regT3 && preserve2 != GPRInfo::regT3 && preserve3 != GPRInfo::regT3 && preserve4 != GPRInfo::regT3)
+        if (preserve1 != GPRInfo::regT3 && preserve2 != GPRInfo::regT3 && preserve3 != GPRInfo::regT3 && preserve4 != GPRInfo::regT3 && preserve5 != GPRInfo::regT3)
             return GPRInfo::regT3;
 
-        return GPRInfo::regT4;
+        if (preserve1 != GPRInfo::regT4 && preserve2 != GPRInfo::regT4 && preserve3 != GPRInfo::regT4 && preserve4 != GPRInfo::regT4 && preserve5 != GPRInfo::regT4)
+            return GPRInfo::regT4;
+
+        return GPRInfo::regT5;
     }
 
     // Add a debug call. This call has no effect on JIT code execution state.
@@ -571,7 +578,9 @@ public:
     void callExceptionFuzz();
     
     enum ExceptionCheckKind { NormalExceptionCheck, InvertedExceptionCheck };
-    Jump emitExceptionCheck(ExceptionCheckKind kind = NormalExceptionCheck);
+    enum ExceptionJumpWidth { NormalJumpWidth, FarJumpWidth };
+    Jump emitExceptionCheck(
+        ExceptionCheckKind = NormalExceptionCheck, ExceptionJumpWidth = NormalJumpWidth);
 
 #if ENABLE(SAMPLING_COUNTERS)
     static void emitCount(MacroAssembler& jit, AbstractSamplingCounter& counter, int32_t increment = 1)
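[Editor's note: `selectScratchGPR` above is the "first register not on the preserve list" pattern, now extended to five preserves and a regT5 fallback. A generic sketch with plain ints standing in for register identities — not JSC code:]

```cpp
#include <cassert>
#include <initializer_list>

// Return the first candidate register not named in the preserve list.
// JSC unrolls this by hand over regT0..regT5; a loop expresses the same idea.
int selectScratch(std::initializer_list<int> candidates, std::initializer_list<int> preserve)
{
    for (int c : candidates) {
        bool clobbered = false;
        for (int p : preserve) {
            if (p == c) {
                clobbered = true;
                break;
            }
        }
        if (!clobbered)
            return c;
    }
    return -1; // unreachable when candidates outnumber preserves, as in JSC
}
```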
index b2a795e..685c12a 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -289,6 +289,15 @@ public:
         addCallArgument(arg3);
     }
 
+    ALWAYS_INLINE void setupArgumentsWithExecState(GPRReg arg1, TrustedImm32 arg2, TrustedImm32 arg3)
+    {
+        resetCallArguments();
+        addCallArgument(GPRInfo::callFrameRegister);
+        addCallArgument(arg1);
+        addCallArgument(arg2);
+        addCallArgument(arg3);
+    }
+
     ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImm32 arg1, GPRReg arg2, GPRReg arg3)
     {
         resetCallArguments();
@@ -298,6 +307,29 @@ public:
         addCallArgument(arg3);
     }
 
+    ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImm32 arg1, GPRReg arg2, TrustedImm32 arg3, GPRReg arg4, TrustedImm32 arg5)
+    {
+        resetCallArguments();
+        addCallArgument(GPRInfo::callFrameRegister);
+        addCallArgument(arg1);
+        addCallArgument(arg2);
+        addCallArgument(arg3);
+        addCallArgument(arg4);
+        addCallArgument(arg5);
+    }
+
+    ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImm32 arg1, GPRReg arg2, GPRReg arg3, TrustedImm32 arg4, GPRReg arg5, TrustedImm32 arg6)
+    {
+        resetCallArguments();
+        addCallArgument(GPRInfo::callFrameRegister);
+        addCallArgument(arg1);
+        addCallArgument(arg2);
+        addCallArgument(arg3);
+        addCallArgument(arg4);
+        addCallArgument(arg5);
+        addCallArgument(arg6);
+    }
+
     ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImmPtr arg1, GPRReg arg2, GPRReg arg3)
     {
         resetCallArguments();
@@ -1785,6 +1817,15 @@ public:
         move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
     }
 
+    ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImm32 arg1, GPRReg arg2, TrustedImm32 arg3, GPRReg arg4, TrustedImm32 arg5)
+    {
+        setupTwoStubArgsGPR<GPRInfo::argumentGPR2, GPRInfo::argumentGPR4>(arg2, arg4);
+        move(arg1, GPRInfo::argumentGPR1);
+        move(arg3, GPRInfo::argumentGPR3);
+        move(arg5, GPRInfo::argumentGPR5);
+        move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
+    }
+
     ALWAYS_INLINE void setupArgumentsWithExecState(TrustedImmPtr arg1, GPRReg arg2, GPRReg arg3, TrustedImm32 arg4, TrustedImm32 arg5)
     {
         setupTwoStubArgsGPR<GPRInfo::argumentGPR2, GPRInfo::argumentGPR3>(arg2, arg3);
index 4b770a6..df8c3c0 100644
@@ -402,6 +402,7 @@ public:
     static const GPRReg returnValueGPR = X86Registers::eax; // regT0
     static const GPRReg returnValueGPR2 = X86Registers::edx; // regT1
     static const GPRReg nonPreservedNonReturnGPR = X86Registers::esi;
+    static const GPRReg nonPreservedNonArgumentGPR = X86Registers::r10;
     static const GPRReg patchpointScratchRegister = MacroAssembler::scratchRegister;
 
     static GPRReg toRegister(unsigned index)
@@ -577,6 +578,7 @@ public:
     static const GPRReg returnValueGPR = ARM64Registers::x0; // regT0
     static const GPRReg returnValueGPR2 = ARM64Registers::x1; // regT1
     static const GPRReg nonPreservedNonReturnGPR = ARM64Registers::x2;
+    static const GPRReg nonPreservedNonArgumentGPR = ARM64Registers::x8;
     static const GPRReg patchpointScratchRegister = ARM64Registers::ip0;
 
     // GPRReg mapping is direct, the machine register numbers can
index d80eb84..9abf94b 100644
@@ -464,11 +464,6 @@ CompilationResult JIT::privateCompile(JITCompilationEffort effort)
         m_canBeOptimizedOrInlined = false;
         m_shouldEmitProfiling = false;
         break;
-    case DFG::CanInline:
-        m_canBeOptimized = false;
-        m_canBeOptimizedOrInlined = true;
-        m_shouldEmitProfiling = true;
-        break;
     case DFG::CanCompile:
     case DFG::CanCompileAndInline:
         m_canBeOptimized = true;
index d5bad5e..9b7ac09 100644
@@ -296,7 +296,7 @@ namespace JSC {
 
         void compileOpCall(OpcodeID, Instruction*, unsigned callLinkInfoIndex);
         void compileOpCallSlowCase(OpcodeID, Instruction*, Vector<SlowCaseEntry>::iterator&, unsigned callLinkInfoIndex);
-        void compileSetupVarargsFrame(Instruction*);
+        void compileSetupVarargsFrame(Instruction*, CallLinkInfo*);
         void compileCallEval(Instruction*);
         void compileCallEvalSlowCase(Instruction*, Vector<SlowCaseEntry>::iterator&);
         void emitPutCallResult(Instruction*);
index aaa9a3d..eb037ab 100644
@@ -55,7 +55,7 @@ void JIT::emitPutCallResult(Instruction* instruction)
     emitPutVirtualRegister(dst);
 }
 
-void JIT::compileSetupVarargsFrame(Instruction* instruction)
+void JIT::compileSetupVarargsFrame(Instruction* instruction, CallLinkInfo* info)
 {
     int thisValue = instruction[3].u.operand;
     int arguments = instruction[4].u.operand;
@@ -90,6 +90,16 @@ void JIT::compileSetupVarargsFrame(Instruction* instruction)
     if (canOptimize)
         end.link(this);
     
+    // Profile the argument count.
+    load32(Address(regT1, JSStack::ArgumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset), regT2);
+    load8(&info->maxNumArguments, regT0);
+    Jump notBiggest = branch32(Above, regT0, regT2);
+    Jump notSaturated = branch32(BelowOrEqual, regT2, TrustedImm32(255));
+    move(TrustedImm32(255), regT2);
+    notSaturated.link(this);
+    store8(regT2, &info->maxNumArguments);
+    notBiggest.link(this);
+    
     // Initialize 'this'.
     emitGetVirtualRegister(thisValue, regT0);
     store64(regT0, Address(regT1, CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register))));
@@ -134,6 +144,8 @@ void JIT::compileCallEvalSlowCase(Instruction* instruction, Vector<SlowCaseEntry
 
 void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned callLinkInfoIndex)
 {
+    CallLinkInfo* info = m_codeBlock->addCallLinkInfo();
+
     int callee = instruction[2].u.operand;
 
     /* Caller always:
@@ -152,7 +164,7 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca
     COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_call_varargs), call_and_call_varargs_opcodes_must_be_same_length);
     COMPILE_ASSERT(OPCODE_LENGTH(op_call) == OPCODE_LENGTH(op_construct_varargs), call_and_construct_varargs_opcodes_must_be_same_length);
     if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs)
-        compileSetupVarargsFrame(instruction);
+        compileSetupVarargsFrame(instruction, info);
     else {
         int argCount = instruction[3].u.operand;
         int registerOffset = -instruction[4].u.operand;
@@ -176,8 +188,6 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca
 
     store64(regT0, Address(stackPointerRegister, JSStack::Callee * static_cast<int>(sizeof(Register)) - sizeof(CallerFrameAndPC)));
     
-    CallLinkInfo* info = m_codeBlock->addCallLinkInfo();
-
     if (opcodeID == op_call_eval) {
         compileCallEval(instruction);
         return;
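[Editor's note: the "Profile the argument count" snippet above maintains a saturating 8-bit maximum — the call site records the largest argument count seen in `CallLinkInfo::maxNumArguments`, clamped to 255 so a pathological call cannot wrap the byte-wide store. This is the profile the DFG later uses to bound inlined varargs argument lists. A sketch of the same logic in plain C++:]

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Equivalent of the branch-and-store8 sequence in compileSetupVarargsFrame:
// skip if the recorded maximum is already at least this count (notBiggest),
// otherwise store the count, saturated at 255 (notSaturated / move 255).
void profileArgumentCount(uint8_t& maxNumArguments, uint32_t argumentCount)
{
    if (argumentCount <= maxNumArguments)
        return;
    maxNumArguments = static_cast<uint8_t>(std::min<uint32_t>(argumentCount, 255));
}
```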
index ee5e371..4c4fe7c 100644
@@ -115,7 +115,7 @@ void JIT::emit_op_construct(Instruction* currentInstruction)
     compileOpCall(op_construct, currentInstruction, m_callLinkInfoIndex++);
 }
 
-void JIT::compileSetupVarargsFrame(Instruction* instruction)
+void JIT::compileSetupVarargsFrame(Instruction* instruction, CallLinkInfo* info)
 {
     int thisValue = instruction[3].u.operand;
     int arguments = instruction[4].u.operand;
@@ -150,6 +150,16 @@ void JIT::compileSetupVarargsFrame(Instruction* instruction)
     if (canOptimize)
         end.link(this);
 
+    // Profile the argument count.
+    load32(Address(regT1, JSStack::ArgumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset), regT2);
+    load8(&info->maxNumArguments, regT0);
+    Jump notBiggest = branch32(Above, regT0, regT2);
+    Jump notSaturated = branch32(BelowOrEqual, regT2, TrustedImm32(255));
+    move(TrustedImm32(255), regT2);
+    notSaturated.link(this);
+    store8(regT2, &info->maxNumArguments);
+    notBiggest.link(this);
+    
     // Initialize 'this'.
     emitLoad(thisValue, regT2, regT0);
     store32(regT0, Address(regT1, PayloadOffset + (CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register)))));
@@ -198,6 +208,7 @@ void JIT::compileCallEvalSlowCase(Instruction* instruction, Vector<SlowCaseEntry
 
 void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned callLinkInfoIndex)
 {
+    CallLinkInfo* info = m_codeBlock->addCallLinkInfo();
     int callee = instruction[2].u.operand;
 
     /* Caller always:
@@ -214,7 +225,7 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca
     */
     
     if (opcodeID == op_call_varargs || opcodeID == op_construct_varargs)
-        compileSetupVarargsFrame(instruction);
+        compileSetupVarargsFrame(instruction, info);
     else {
         int argCount = instruction[3].u.operand;
         int registerOffset = -instruction[4].u.operand;
@@ -239,8 +250,6 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca
     store32(regT0, Address(stackPointerRegister, JSStack::Callee * static_cast<int>(sizeof(Register)) + PayloadOffset - sizeof(CallerFrameAndPC)));
     store32(regT1, Address(stackPointerRegister, JSStack::Callee * static_cast<int>(sizeof(Register)) + TagOffset - sizeof(CallerFrameAndPC)));
 
-    CallLinkInfo* info = m_codeBlock->addCallLinkInfo();
-
     if (opcodeID == op_call_eval) {
         compileCallEval(instruction);
         return;
index 767ea5b..763c0dd 100644
@@ -150,6 +150,7 @@ typedef int64_t JIT_OPERATION(*Q_JITOperation_D)(double);
 typedef int32_t JIT_OPERATION (*Z_JITOperation_D)(double);
 typedef int32_t JIT_OPERATION (*Z_JITOperation_E)(ExecState*);
 typedef int32_t JIT_OPERATION (*Z_JITOperation_EC)(ExecState*, JSCell*);
+typedef int32_t JIT_OPERATION (*Z_JITOperation_EJZ)(ExecState*, EncodedJSValue, int32_t);
 typedef int32_t JIT_OPERATION (*Z_JITOperation_EJZZ)(ExecState*, EncodedJSValue, int32_t, int32_t);
 typedef size_t JIT_OPERATION (*S_JITOperation_ECC)(ExecState*, JSCell*, JSCell*);
 typedef size_t JIT_OPERATION (*S_JITOperation_EJ)(ExecState*, EncodedJSValue);
@@ -189,6 +190,7 @@ typedef void JIT_OPERATION (*V_JITOperation_ESsiJJI)(ExecState*, StructureStubIn
 typedef void JIT_OPERATION (*V_JITOperation_EVwsJ)(ExecState*, VariableWatchpointSet*, EncodedJSValue);
 typedef void JIT_OPERATION (*V_JITOperation_EZ)(ExecState*, int32_t);
 typedef void JIT_OPERATION (*V_JITOperation_EZJ)(ExecState*, int32_t, EncodedJSValue);
+typedef void JIT_OPERATION (*V_JITOperation_EZJZZZ)(ExecState*, int32_t, EncodedJSValue, int32_t, int32_t, int32_t);
 typedef void JIT_OPERATION (*V_JITOperation_EVm)(ExecState*, VM*);
 typedef void JIT_OPERATION (*V_JITOperation_J)(EncodedJSValue);
 typedef void JIT_OPERATION (*V_JITOperation_Z)(int32_t);
index 1bdfc84..bb63596 100644
@@ -101,11 +101,31 @@ void emitSetupVarargsFrameFastCase(CCallHelpers& jit, GPRReg numUsedSlotsGPR, GP
 
 void emitSetupVarargsFrameFastCase(CCallHelpers& jit, GPRReg numUsedSlotsGPR, GPRReg scratchGPR1, GPRReg scratchGPR2, GPRReg scratchGPR3, unsigned firstVarArgOffset, CCallHelpers::JumpList& slowCase)
 {
-    emitSetupVarargsFrameFastCase(
-        jit, numUsedSlotsGPR, scratchGPR1, scratchGPR2, scratchGPR3,
-        ValueRecovery::displacedInJSStack(VirtualRegister(JSStack::ArgumentCount), DataFormatInt32),
-        VirtualRegister(CallFrame::argumentOffset(0)),
-        firstVarArgOffset, slowCase);
+    emitSetupVarargsFrameFastCase(jit, numUsedSlotsGPR, scratchGPR1, scratchGPR2, scratchGPR3, nullptr, firstVarArgOffset, slowCase);
+}
+
+void emitSetupVarargsFrameFastCase(CCallHelpers& jit, GPRReg numUsedSlotsGPR, GPRReg scratchGPR1, GPRReg scratchGPR2, GPRReg scratchGPR3, InlineCallFrame* inlineCallFrame, unsigned firstVarArgOffset, CCallHelpers::JumpList& slowCase)
+{
+    ValueRecovery argumentCountRecovery;
+    VirtualRegister firstArgumentReg;
+    if (inlineCallFrame) {
+        if (inlineCallFrame->isVarargs()) {
+            argumentCountRecovery = ValueRecovery::displacedInJSStack(
+                inlineCallFrame->argumentCountRegister, DataFormatInt32);
+        } else {
+            argumentCountRecovery = ValueRecovery::constant(
+                jsNumber(inlineCallFrame->arguments.size()));
+        }
+        if (inlineCallFrame->arguments.size() > 1)
+            firstArgumentReg = inlineCallFrame->arguments[1].virtualRegister();
+        else
+            firstArgumentReg = VirtualRegister(0);
+    } else {
+        argumentCountRecovery = ValueRecovery::displacedInJSStack(
+            VirtualRegister(JSStack::ArgumentCount), DataFormatInt32);
+        firstArgumentReg = VirtualRegister(CallFrame::argumentOffset(0));
+    }
+    emitSetupVarargsFrameFastCase(jit, numUsedSlotsGPR, scratchGPR1, scratchGPR2, scratchGPR3, argumentCountRecovery, firstArgumentReg, firstVarArgOffset, slowCase);
 }
 
 } // namespace JSC
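[Editor's note: the new `emitSetupVarargsFrameFastCase` variant above selects an argument-count recovery in three ways, depending on the frame shape. A sketch of just that decision, with an illustrative enum in place of `ValueRecovery`:]

```cpp
#include <cassert>

enum RecoveryKind {
    CountInMachineFrame,          // not inlined: read JSStack::ArgumentCount
    CountInArgumentCountRegister, // inlined varargs frame: count spilled at runtime
    CountIsConstant               // inlined fixed-arity frame: arity known statically
};

// Mirrors the if/else ladder in the hunk above.
RecoveryKind chooseArgumentCountRecovery(bool isInlined, bool inlinedFrameIsVarargs)
{
    if (!isInlined)
        return CountInMachineFrame;
    if (inlinedFrameIsVarargs)
        return CountInArgumentCountRegister;
    return CountIsConstant;
}
```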
index 4c04587..0e8933a 100644
@@ -42,6 +42,9 @@ void emitSetupVarargsFrameFastCase(CCallHelpers&, GPRReg numUsedSlotsGPR, GPRReg
 // Variant that assumes normal stack frame.
 void emitSetupVarargsFrameFastCase(CCallHelpers&, GPRReg numUsedSlotsGPR, GPRReg scratchGPR1, GPRReg scratchGPR2, GPRReg scratchGPR3, unsigned firstVarArgOffset, CCallHelpers::JumpList& slowCase);
 
+// Variant for potentially inlined stack frames.
+void emitSetupVarargsFrameFastCase(CCallHelpers&, GPRReg numUsedSlotsGPR, GPRReg scratchGPR1, GPRReg scratchGPR2, GPRReg scratchGPR3, InlineCallFrame*, unsigned firstVarArgOffset, CCallHelpers::JumpList& slowCase);
+
 } // namespace JSC
 
 #endif // ENABLE(JIT)
index a20f0fe..87cf721 100644
@@ -56,7 +56,7 @@ public:
         
     static Arguments* create(VM& vm, CallFrame* callFrame, InlineCallFrame* inlineCallFrame, ArgumentsMode mode = NormalArgumentsCreationMode)
     {
-        Arguments* arguments = new (NotNull, allocateCell<Arguments>(vm.heap, offsetOfInlineRegisterArray() + registerArraySizeInBytes(inlineCallFrame))) Arguments(callFrame);
+        Arguments* arguments = new (NotNull, allocateCell<Arguments>(vm.heap, offsetOfInlineRegisterArray() + registerArraySizeInBytes(callFrame, inlineCallFrame))) Arguments(callFrame);
         arguments->finishCreation(callFrame, inlineCallFrame, mode);
         return arguments;
     }
@@ -124,7 +124,15 @@ private:
     void createStrictModeCalleeIfNecessary(ExecState*);
 
     static size_t registerArraySizeInBytes(CallFrame* callFrame) { return sizeof(WriteBarrier<Unknown>) * callFrame->argumentCount(); }
-    static size_t registerArraySizeInBytes(InlineCallFrame* inlineCallFrame) { return sizeof(WriteBarrier<Unknown>) * (inlineCallFrame->arguments.size() - 1); }
+    static size_t registerArraySizeInBytes(CallFrame* callFrame, InlineCallFrame* inlineCallFrame)
+    {
+        unsigned argumentCountIncludingThis;
+        if (inlineCallFrame->argumentCountRegister.isValid())
+            argumentCountIncludingThis = callFrame->r(inlineCallFrame->argumentCountRegister.offset()).unboxedInt32();
+        else
+            argumentCountIncludingThis = inlineCallFrame->arguments.size();
+        return sizeof(WriteBarrier<Unknown>) * (argumentCountIncludingThis - 1);
+    }
     bool isArgument(size_t);
     bool trySetArgument(VM&, size_t argument, JSValue);
     JSValue tryGetArgument(size_t argument);
@@ -340,11 +348,15 @@ inline void Arguments::finishCreation(CallFrame* callFrame, InlineCallFrame* inl
     m_overrodeCallee = false;
     m_overrodeCaller = false;
     m_isStrictMode = jsCast<FunctionExecutable*>(inlineCallFrame->executable.get())->isStrictMode();
-
+    
+    if (inlineCallFrame->argumentCountRegister.isValid())
+        m_numArguments = callFrame->r(inlineCallFrame->argumentCountRegister.offset()).unboxedInt32();
+    else
+        m_numArguments = inlineCallFrame->arguments.size();
+    m_numArguments--;
+    
     switch (mode) {
     case NormalArgumentsCreationMode: {
-        m_numArguments = inlineCallFrame->arguments.size() - 1;
-        
         if (m_numArguments) {
             int offsetForArgumentOne = inlineCallFrame->arguments[1].virtualRegister().offset();
             m_registers = reinterpret_cast<WriteBarrierBase<Unknown>*>(callFrame->registers()) + offsetForArgumentOne - virtualRegisterForArgument(1).offset();
@@ -361,7 +373,6 @@ inline void Arguments::finishCreation(CallFrame* callFrame, InlineCallFrame* inl
     }
         
     case ClonedArgumentsCreationMode: {
-        m_numArguments = inlineCallFrame->arguments.size() - 1;
         if (m_numArguments) {
             int offsetForArgumentOne = inlineCallFrame->arguments[1].virtualRegister().offset();
             m_registers = reinterpret_cast<WriteBarrierBase<Unknown>*>(callFrame->registers()) + offsetForArgumentOne - virtualRegisterForArgument(1).offset();
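The hunks above both hinge on the same decision: for a varargs-inlined frame the argument count is only known at runtime, so it is read out of the stack slot named by `argumentCountRegister`; otherwise the static `arguments.size()` is used. Both counts include `this`, hence the `- 1`. A simplified standalone model of that logic (the `MockInlineFrame` type and these signatures are invented for this example, not JSC's):

```cpp
#include <cassert>
#include <cstdint>
#include <cstddef>
#include <vector>

// Illustrative stand-in for InlineCallFrame: either the count register is
// valid (varargs inlining, count read from a frame slot at runtime) or the
// argument count is known statically.
struct MockInlineFrame {
    int argumentCountRegisterOffset = -1; // -1 models an invalid register
    unsigned staticArgumentCountIncludingThis = 0;
};

unsigned argumentCountIncludingThis(const std::vector<int32_t>& frameSlots,
                                    const MockInlineFrame& frame)
{
    if (frame.argumentCountRegisterOffset >= 0)
        return static_cast<unsigned>(frameSlots[frame.argumentCountRegisterOffset]);
    return frame.staticArgumentCountIncludingThis;
}

size_t registerArraySizeInBytes(const std::vector<int32_t>& frameSlots,
                                const MockInlineFrame& frame)
{
    const size_t slotSize = 8; // stand-in for sizeof(WriteBarrier<Unknown>)
    return slotSize * (argumentCountIncludingThis(frameSlots, frame) - 1);
}
```

This also explains why `registerArraySizeInBytes` gained a `CallFrame*` parameter: the dynamic count lives in the live frame's registers, not in the `InlineCallFrame` metadata.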
index 3f71197..6509603 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -211,6 +211,8 @@ typedef const char* optionString;
     /* from super long compiles that take a lot of memory. */\
     v(unsigned, maximumInliningCallerSize, 10000) \
     \
+    v(unsigned, maximumVarargsForInlining, 100) \
+    \
     v(bool, enablePolyvariantCallInlining, true) \
     v(bool, enablePolyvariantByIdInlining, true) \
     \
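The new `maximumVarargsForInlining` option (default 100) caps the profiled bound on a varargs call's argument-list length: inlining a varargs call is only worthwhile when profiling supplies a sane bound. A hypothetical sketch of that gate (the function name and parameters are invented for illustration; the real decision lives in the DFG bytecode parser):

```cpp
#include <cassert>

// Sketch, assuming a profiled upper bound on the spread arguments' length.
// Without a bound, the call stays an out-of-line CallVarargs/ConstructVarargs.
bool canInlineVarargsCall(bool hasProfiledBound,
                          unsigned profiledMaxArgumentCount,
                          unsigned maximumVarargsForInlining = 100)
{
    if (!hasProfiledBound)
        return false;
    return profiledMaxArgumentCount <= maximumVarargsForInlining;
}
```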
diff --git a/Source/JavaScriptCore/tests/stress/construct-varargs-inline-smaller-Foo.js b/Source/JavaScriptCore/tests/stress/construct-varargs-inline-smaller-Foo.js
new file mode 100644 (file)
index 0000000..77b5b3d
--- /dev/null
+++ b/Source/JavaScriptCore/tests/stress/construct-varargs-inline-smaller-Foo.js
@@ -0,0 +1,35 @@
+function Foo(a, b) {
+    var array = [];
+    for (var i = 0; i < arguments.length; ++i)
+        array.push(arguments[i]);
+    this.f = array;
+}
+
+function bar(array) {
+    return new Foo(...array);
+}
+
+noInline(bar);
+
+function checkEqual(a, b) {
+    if (a.length != b.length)
+        throw "Error: bad value of c, length mismatch: " + a + " versus " + b;
+    for (var i = a.length; i--;) {
+        if (a[i] != b[i])
+            throw "Error: bad value of c, mismatch at i = " + i + ": " + a + " versus " + b;
+    }