FTL should support global and eval code
author    fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Thu, 16 Mar 2017 21:19:23 +0000 (21:19 +0000)
committer fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Thu, 16 Mar 2017 21:19:23 +0000 (21:19 +0000)
https://bugs.webkit.org/show_bug.cgi?id=169656

Reviewed by Geoffrey Garen and Saam Barati.

JSTests:

Added basic performance tests of global and eval code. These tests will run a lot faster with
the FTL because of the object allocation.

* microbenchmarks/eval-code-ftl-reentry.js: Added.
* microbenchmarks/eval-code-ftl.js: Added.
* microbenchmarks/global-code-ftl.js: Added.
* stress/arith-log-on-various-types.js: This was a flaky failure with the concurrent JIT, so I stopped running it with the concurrent JIT. The failure was in its assertion about how many times something gets compiled.

Source/JavaScriptCore:

Turned off the restriction against global and eval code running in the FTL, and then fixed all of
the things that didn't work.
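The restriction itself was a single code-type gate in FTLCapabilities.cpp. Schematically, it amounted to the following (a simplified sketch with hypothetical names, not the actual canCompile logic, which performs many more checks):

```cpp
#include <cassert>

// Hypothetical enums for illustration only; the real types live in
// JavaScriptCore's CodeBlock and FTLCapabilities code.
enum class CodeType { Global, Eval, Function };
enum class CapabilityLevel { CannotCompile, CanCompileAndOSREnter };

CapabilityLevel canCompileSketch(CodeType codeType, bool neverFTLOptimize)
{
    // The check this patch removes looked like:
    //     if (codeType != CodeType::Function)
    //         return CapabilityLevel::CannotCompile;
    // Dropping it lets global and eval code reach the remaining
    // capability checks, such as the neverFTLOptimize flag.
    if (neverFTLOptimize)
        return CapabilityLevel::CannotCompile;
    return CapabilityLevel::CanCompileAndOSREnter;
}
```

With the gate gone, global and eval code is accepted or rejected on the same grounds as function code.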

This is a big speed-up on microbenchmarks that I wrote for this patch. One of the reasons why we
hadn't done this earlier is that we've never seen a benchmark that needed it. Global and eval
code rarely gets FTL-hot. Still, this seems like possibly a small JetStream speed-up.
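One of the fixes is a tier-up policy change for global code: since global code runs once, a replacement compile would never be used, so the trigger defers instead. A minimal sketch of that decision (hypothetical names; the real logic is in triggerFTLReplacementCompile in DFGOperations.cpp):

```cpp
#include <cassert>

// Illustration-only types; not the actual JSC declarations.
enum class CodeKind { Global, Eval, Function };
enum class TierUpAction { DeferForWarmUp, TriggerReplacementCompile };

TierUpAction tierUpActionFor(CodeKind kind)
{
    if (kind == CodeKind::Global) {
        // Global code runs once, so a replacement compile is useless.
        // But don't defer indefinitely: this may have been called from
        // tier-up initiated in a loop, and that loop may later want to
        // run faster code via OSR entry. Deferring for warm-up is the
        // safe middle ground.
        return TierUpAction::DeferForWarmUp;
    }
    return TierUpAction::TriggerReplacementCompile;
}
```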

* dfg/DFGJITCode.cpp:
(JSC::DFG::JITCode::setOSREntryBlock): I outlined this for better debugging.
* dfg/DFGJITCode.h:
(JSC::DFG::JITCode::setOSREntryBlock): Deleted.
* dfg/DFGNode.h:
(JSC::DFG::Node::isSemanticallySkippable): It turns out that global code often has InvalidationPoints before LoopHints. They are also skippable from the standpoint of OSR entrypoint analysis.
* dfg/DFGOperations.cpp: Don't do any normal compiles of global code - just do OSR compiles.
* ftl/FTLCapabilities.cpp: Enable FTL for global and eval code.
(JSC::FTL::canCompile):
* ftl/FTLCompile.cpp: Just debugging clean-ups.
(JSC::FTL::compile):
* ftl/FTLJITFinalizer.cpp: Implement finalize() and ensure that we only do things with the entrypoint buffer if we have one. We won't have one for eval code that we aren't OSR entering into.
(JSC::FTL::JITFinalizer::finalize):
(JSC::FTL::JITFinalizer::finalizeFunction):
(JSC::FTL::JITFinalizer::finalizeCommon):
* ftl/FTLJITFinalizer.h:
* ftl/FTLLink.cpp: When entering a function normally, we need the "entrypoint" to put the arity check code. Global and eval code don't need this.
(JSC::FTL::link):
* ftl/FTLOSREntry.cpp: Fix a dataLog statement.
(JSC::FTL::prepareOSREntry):
* ftl/FTLOSRExitCompiler.cpp: Remove dead code that happened to assert that we're exiting from a function.
(JSC::FTL::compileStub):
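The DFGNode.h change above can be sketched as follows: OSR entrypoint analysis skips semantically neutral nodes while looking for a LoopHint, and InvalidationPoint now counts as neutral. The isSemanticallySkippable body below matches the patch; the opcode enum and scan loop are hypothetical illustrations of how such an analysis uses it:

```cpp
#include <cassert>
#include <vector>

// Hypothetical opcode enum; names mirror the DFG opcodes mentioned in
// the ChangeLog, but this is illustration only.
enum class Op { CountExecution, InvalidationPoint, LoopHint, ArithAdd };

// Matches the patched predicate: these nodes don't affect our ability
// to OSR-enter, so entrypoint analysis may step over them.
bool isSemanticallySkippable(Op op)
{
    return op == Op::CountExecution || op == Op::InvalidationPoint;
}

// Sketch of the analysis: a LoopHint is an entry candidate only if
// every node ahead of it in the block is semantically skippable.
bool loopHintIsEntryCandidate(const std::vector<Op>& block)
{
    for (Op op : block) {
        if (op == Op::LoopHint)
            return true;
        if (!isSemanticallySkippable(op))
            return false;
    }
    return false;
}
```

Before the patch, the InvalidationPoint that global code often emits ahead of a LoopHint would have made the scan give up.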

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@214069 268f45cc-cd09-0410-ab3c-d52691b4dbfc

17 files changed:
JSTests/ChangeLog
JSTests/microbenchmarks/eval-code-ftl-reentry.js [new file with mode: 0644]
JSTests/microbenchmarks/eval-code-ftl.js [new file with mode: 0644]
JSTests/microbenchmarks/global-code-ftl.js [new file with mode: 0644]
JSTests/stress/arith-log-on-various-types.js
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/dfg/DFGJITCode.cpp
Source/JavaScriptCore/dfg/DFGJITCode.h
Source/JavaScriptCore/dfg/DFGNode.h
Source/JavaScriptCore/dfg/DFGOperations.cpp
Source/JavaScriptCore/ftl/FTLCapabilities.cpp
Source/JavaScriptCore/ftl/FTLCompile.cpp
Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp
Source/JavaScriptCore/ftl/FTLJITFinalizer.h
Source/JavaScriptCore/ftl/FTLLink.cpp
Source/JavaScriptCore/ftl/FTLOSREntry.cpp
Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp

index 712b4dc..f5ae05e 100644 (file)
@@ -1,3 +1,18 @@
+2017-03-16  Filip Pizlo  <fpizlo@apple.com>
+
+        FTL should support global and eval code
+        https://bugs.webkit.org/show_bug.cgi?id=169656
+
+        Reviewed by Geoffrey Garen and Saam Barati.
+        
+        Added basic performance tests of global and eval code. These tests will run a lot faster with
+        the FTL because of the object allocation.
+
+        * microbenchmarks/eval-code-ftl-reentry.js: Added.
+        * microbenchmarks/eval-code-ftl.js: Added.
+        * microbenchmarks/global-code-ftl.js: Added.
+        * stress/arith-log-on-various-types.js: This was a flaky failure with the concurrent JIT, so I stopped running it with the concurrent JIT. The failure was in its assertion about how many times something gets compiled.
+
 2017-03-16  Caio Lima  <ticaiolima@gmail.com>
 
         [ESnext] Implement Object Spread
diff --git a/JSTests/microbenchmarks/eval-code-ftl-reentry.js b/JSTests/microbenchmarks/eval-code-ftl-reentry.js
new file mode 100644 (file)
index 0000000..04233c7
--- /dev/null
@@ -0,0 +1,10 @@
+for (var _i = 0; _i < 1000; ++_i) {
+    eval(
+        "var result = 0;\n" +
+        "var n = 15000;\n" + 
+        "for (var i = 0; i < n; ++i)\n" +
+        "    result += {f: 1}.f;\n" +
+        "if (result != n)\n" +
+        "    throw \"Error: bad result: \" + result;\n");
+}
+
diff --git a/JSTests/microbenchmarks/eval-code-ftl.js b/JSTests/microbenchmarks/eval-code-ftl.js
new file mode 100644 (file)
index 0000000..90c7257
--- /dev/null
@@ -0,0 +1,8 @@
+eval(
+    "var result = 0;\n" +
+    "var n = 15000000;\n" + 
+    "for (var i = 0; i < n; ++i)\n" +
+    "    result += {f: 1}.f;\n" +
+    "if (result != n)\n" +
+    "    throw \"Error: bad result: \" + result;\n");
+
diff --git a/JSTests/microbenchmarks/global-code-ftl.js b/JSTests/microbenchmarks/global-code-ftl.js
new file mode 100644 (file)
index 0000000..8cd7306
--- /dev/null
@@ -0,0 +1,7 @@
+var result = 0;
+var n = 15000000;
+for (var i = 0; i < n; ++i)
+    result += {f: 1}.f;
+if (result != n)
+    throw "Error: bad result: " + result;
+
index d42175a..edb5145 100644 (file)
@@ -1,4 +1,5 @@
-//@ defaultNoEagerRun
+//@ runNoCJITValidatePhases
+//@ runFTLNoCJITValidate
 "use strict";
 
 let logOfFour = Math.log(4);
index cf52845..1cd1790 100644 (file)
@@ -1,3 +1,40 @@
+2017-03-16  Filip Pizlo  <fpizlo@apple.com>
+
+        FTL should support global and eval code
+        https://bugs.webkit.org/show_bug.cgi?id=169656
+
+        Reviewed by Geoffrey Garen and Saam Barati.
+        
+        Turned off the restriction against global and eval code running in the FTL, and then fixed all of
+        the things that didn't work.
+        
+        This is a big speed-up on microbenchmarks that I wrote for this patch. One of the reasons why we
+        hadn't done this earlier is that we've never seen a benchmark that needed it. Global and eval
+        code rarely gets FTL-hot. Still, this seems like possibly a small JetStream speed-up.
+
+        * dfg/DFGJITCode.cpp:
+        (JSC::DFG::JITCode::setOSREntryBlock): I outlined this for better debugging.
+        * dfg/DFGJITCode.h:
+        (JSC::DFG::JITCode::setOSREntryBlock): Deleted.
+        * dfg/DFGNode.h:
+        (JSC::DFG::Node::isSemanticallySkippable): It turns out that global code often has InvalidationPoints before LoopHints. They are also skippable from the standpoint of OSR entrypoint analysis.
+        * dfg/DFGOperations.cpp: Don't do any normal compiles of global code - just do OSR compiles.
+        * ftl/FTLCapabilities.cpp: Enable FTL for global and eval code.
+        (JSC::FTL::canCompile):
+        * ftl/FTLCompile.cpp: Just debugging clean-ups.
+        (JSC::FTL::compile):
+        * ftl/FTLJITFinalizer.cpp: Implement finalize() and ensure that we only do things with the entrypoint buffer if we have one. We won't have one for eval code that we aren't OSR entering into.
+        (JSC::FTL::JITFinalizer::finalize):
+        (JSC::FTL::JITFinalizer::finalizeFunction):
+        (JSC::FTL::JITFinalizer::finalizeCommon):
+        * ftl/FTLJITFinalizer.h:
+        * ftl/FTLLink.cpp: When entering a function normally, we need the "entrypoint" to put the arity check code. Global and eval code don't need this.
+        (JSC::FTL::link):
+        * ftl/FTLOSREntry.cpp: Fix a dataLog statement.
+        (JSC::FTL::prepareOSREntry):
+        * ftl/FTLOSRExitCompiler.cpp: Remove dead code that happened to assert that we're exiting from a function.
+        (JSC::FTL::compileStub):
+
 2017-03-16  Michael Saboff  <msaboff@apple.com>
 
         WebAssembly: function-tests/load-offset.js fails on ARM64
index 7fb9f71..f384e44 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -29,6 +29,7 @@
 #if ENABLE(DFG_JIT)
 
 #include "CodeBlock.h"
+#include "FTLForOSREntryJITCode.h"
 #include "JSCInlines.h"
 #include "TrackedReferences.h"
 
@@ -201,6 +202,15 @@ void JITCode::setOptimizationThresholdBasedOnCompilationResult(
     }
     RELEASE_ASSERT_NOT_REACHED();
 }
+
+void JITCode::setOSREntryBlock(VM& vm, const JSCell* owner, CodeBlock* osrEntryBlock)
+{
+    if (Options::verboseOSR()) {
+        dataLog(RawPointer(this), ": Setting OSR entry block to ", RawPointer(osrEntryBlock), "\n");
+        dataLog("OSR entries will go to ", osrEntryBlock->jitCode()->ftlForOSREntry()->addressForCall(ArityCheckNotRequired), "\n");
+    }
+    m_osrEntryBlock.set(vm, owner, osrEntryBlock);
+}
 #endif // ENABLE(FTL_JIT)
 
 void JITCode::validateReferences(const TrackedReferences& trackedReferences)
index e1f6c41..0244e27 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -118,7 +118,7 @@ public:
     RegisterSet liveRegistersToPreserveAtExceptionHandlingCallSite(CodeBlock*, CallSiteIndex) override;
 #if ENABLE(FTL_JIT)
     CodeBlock* osrEntryBlock() { return m_osrEntryBlock.get(); }
-    void setOSREntryBlock(VM& vm, const JSCell* owner, CodeBlock* osrEntryBlock) { m_osrEntryBlock.set(vm, owner, osrEntryBlock); }
+    void setOSREntryBlock(VM&, const JSCell* owner, CodeBlock* osrEntryBlock);
     void clearOSREntryBlock() { m_osrEntryBlock.clear(); }
 #endif
 
index ff54f91..df5c28a 100644 (file)
@@ -1908,9 +1908,11 @@ public:
         return m_refCount;
     }
     
+    // Return true if the execution of this Node does not affect our ability to OSR to the FTL.
+    // FIXME: Isn't this just like checking if the node has effects?
     bool isSemanticallySkippable()
     {
-        return op() == CountExecution;
+        return op() == CountExecution || op() == InvalidationPoint;
     }
 
     unsigned refCount()
index 012d004..6e3a7e0 100644 (file)
@@ -2307,6 +2307,14 @@ static bool shouldTriggerFTLCompile(CodeBlock* codeBlock, JITCode* jitCode)
 
 static void triggerFTLReplacementCompile(VM* vm, CodeBlock* codeBlock, JITCode* jitCode)
 {
+    if (codeBlock->codeType() == GlobalCode) {
+        // Global code runs once, so we don't want to do anything. We don't want to defer indefinitely,
+        // since this may have been spuriously called from tier-up initiated in a loop, and that loop may
+        // later want to run faster code. Deferring for warm-up seems safest.
+        jitCode->optimizeAfterWarmUp(codeBlock);
+        return;
+    }
+    
     Worklist::State worklistState;
     if (Worklist* worklist = existingGlobalFTLWorklistOrNull()) {
         worklistState = worklist->completeAllReadyPlansForVM(
@@ -2456,6 +2464,8 @@ static char* tierUpCommon(ExecState* exec, unsigned originBytecodeIndex, unsigne
     if (canOSRFromHere) {
         unsigned streamIndex = jitCode->bytecodeIndexToStreamIndex.get(originBytecodeIndex);
         if (CodeBlock* entryBlock = jitCode->osrEntryBlock()) {
+            if (Options::verboseOSR())
+                dataLog("OSR entry: From ", RawPointer(jitCode), " got entry block ", RawPointer(entryBlock), "\n");
             if (void* address = FTL::prepareOSREntry(exec, codeBlock, entryBlock, originBytecodeIndex, streamIndex)) {
                 CODEBLOCK_LOG_EVENT(entryBlock, "osrEntry", ("at bc#", originBytecodeIndex));
                 return static_cast<char*>(address);
@@ -2574,6 +2584,8 @@ static char* tierUpCommon(ExecState* exec, unsigned originBytecodeIndex, unsigne
     // It's possible that the for-entry compile already succeeded. In that case OSR
     // entry will succeed unless we ran out of stack. It's not clear what we should do.
     // We signal to try again after a while if that happens.
+    if (Options::verboseOSR())
+        dataLog("Immediate OSR entry: From ", RawPointer(jitCode), " got entry block ", RawPointer(jitCode->osrEntryBlock()), "\n");
     void* address = FTL::prepareOSREntry(
         exec, codeBlock, jitCode->osrEntryBlock(), originBytecodeIndex, streamIndex);
     return static_cast<char*>(address);
index c6f3839..ff8aff0 100644 (file)
@@ -398,12 +398,6 @@ CapabilityLevel canCompile(Graph& graph)
         return CannotCompile;
     }
     
-    if (graph.m_codeBlock->codeType() != FunctionCode) {
-        if (verboseCapabilities())
-            dataLog("FTL rejecting ", *graph.m_codeBlock, " because it doesn't belong to a function.\n");
-        return CannotCompile;
-    }
-
     if (UNLIKELY(graph.m_codeBlock->ownerScriptExecutable()->neverFTLOptimize())) {
         if (verboseCapabilities())
             dataLog("FTL rejecting ", *graph.m_codeBlock, " because it is marked as never FTL compile.\n");
index 974c53a..1cdfc63 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -48,6 +48,7 @@
 #include "LinkBuffer.h"
 #include "PCToCodeOriginMap.h"
 #include "ScratchRegisterAllocator.h"
+#include <wtf/Function.h>
 
 namespace JSC { namespace FTL {
 
@@ -77,10 +78,8 @@ void compile(State& state, Safepoint::Result& safepointResult)
     
     std::unique_ptr<RegisterAtOffsetList> registerOffsets =
         std::make_unique<RegisterAtOffsetList>(state.proc->calleeSaveRegisters());
-    if (shouldDumpDisassembly()) {
-        dataLog("Unwind info for ", CodeBlockWithJITType(state.graph.m_codeBlock, JITCode::FTLJIT), ":\n");
-        dataLog("    ", *registerOffsets, "\n");
-    }
+    if (shouldDumpDisassembly())
+        dataLog("Unwind info for ", CodeBlockWithJITType(state.graph.m_codeBlock, JITCode::FTLJIT), ": ", *registerOffsets, "\n");
     state.graph.m_codeBlock->setCalleeSaveRegisters(WTFMove(registerOffsets));
     ASSERT(!(state.proc->frameSize() % sizeof(EncodedJSValue)));
     state.jitCode->common.frameRegisterCount = state.proc->frameSize() / sizeof(EncodedJSValue);
@@ -160,7 +159,7 @@ void compile(State& state, Safepoint::Result& safepointResult)
     if (B3::Air::Disassembler* disassembler = state.proc->code().disassembler()) {
         PrintStream& out = WTF::dataFile();
 
-        out.print("\nGenerated FTL JIT code for ", CodeBlockWithJITType(state.graph.m_codeBlock, JITCode::FTLJIT), ", instruction count = ", state.graph.m_codeBlock->instructionCount(), ":\n");
+        out.print("Generated ", state.graph.m_plan.mode, " code for ", CodeBlockWithJITType(state.graph.m_codeBlock, JITCode::FTLJIT), ", instruction count = ", state.graph.m_codeBlock->instructionCount(), ":\n");
 
         LinkBuffer& linkBuffer = *state.finalizer->b3CodeLinkBuffer;
         B3::Value* currentB3Value = nullptr;
@@ -182,7 +181,7 @@ void compile(State& state, Safepoint::Result& safepointResult)
                 return;
 
             HashSet<Node*> localPrintedNodes;
-            std::function<void(Node*)> printNodeRecursive = [&] (Node* node) {
+            WTF::Function<void(Node*)> printNodeRecursive = [&] (Node* node) {
                 if (printedNodes.contains(node) || localPrintedNodes.contains(node))
                     return;
 
@@ -207,7 +206,7 @@ void compile(State& state, Safepoint::Result& safepointResult)
             printDFGNode(bitwise_cast<Node*>(value->origin().data()));
 
             HashSet<B3::Value*> localPrintedValues;
-            std::function<void(B3::Value*)> printValueRecursive = [&] (B3::Value* value) {
+            WTF::Function<void(B3::Value*)> printValueRecursive = [&] (B3::Value* value) {
                 if (printedValues.contains(value) || localPrintedValues.contains(value))
                     return;
 
index 00ea651..730c005 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013-2014, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -63,12 +63,16 @@ size_t JITFinalizer::codeSize()
 
 bool JITFinalizer::finalize()
 {
-    RELEASE_ASSERT_NOT_REACHED();
-    return false;
+    return finalizeCommon();
 }
 
 bool JITFinalizer::finalizeFunction()
 {
+    return finalizeCommon();
+}
+
+bool JITFinalizer::finalizeCommon()
+{
     bool dumpDisassembly = shouldDumpDisassembly() || Options::asyncDisassembly();
     
     jitCode->initializeB3Code(
@@ -76,10 +80,12 @@ bool JITFinalizer::finalizeFunction()
             dumpDisassembly, *b3CodeLinkBuffer,
             ("FTL B3 code for %s", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::FTLJIT)).data())));
 
-    jitCode->initializeArityCheckEntrypoint(
-        FINALIZE_CODE_IF(
-            dumpDisassembly, *entrypointLinkBuffer,
-            ("FTL entrypoint thunk for %s with B3 generated code at %p", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::FTLJIT)).data(), function)));
+    if (entrypointLinkBuffer) {
+        jitCode->initializeArityCheckEntrypoint(
+            FINALIZE_CODE_IF(
+                dumpDisassembly, *entrypointLinkBuffer,
+                ("FTL entrypoint thunk for %s with B3 generated code at %p", toCString(CodeBlockWithJITType(m_plan.codeBlock, JITCode::FTLJIT)).data(), function)));
+    }
     
     m_plan.codeBlock->setJITCode(*jitCode);
 
index 630c3b4..77c431b 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -56,6 +56,8 @@ public:
     size_t codeSize() override;
     bool finalize() override;
     bool finalizeFunction() override;
+    
+    bool finalizeCommon();
 
     std::unique_ptr<LinkBuffer> b3CodeLinkBuffer;
 
index d11b2a9..0a56ba1 100644 (file)
@@ -127,50 +127,52 @@ void link(State& state)
     
     switch (graph.m_plan.mode) {
     case FTLMode: {
-        CCallHelpers::JumpList mainPathJumps;
+        if (codeBlock->codeType() == FunctionCode) {
+            CCallHelpers::JumpList mainPathJumps;
     
-        jit.load32(
-            frame.withOffset(sizeof(Register) * CallFrameSlot::argumentCount),
-            GPRInfo::regT1);
-        mainPathJumps.append(jit.branch32(
-            CCallHelpers::AboveOrEqual, GPRInfo::regT1,
-            CCallHelpers::TrustedImm32(codeBlock->numParameters())));
-        jit.emitFunctionPrologue();
-        jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
-        jit.storePtr(GPRInfo::callFrameRegister, &vm.topCallFrame);
-        CCallHelpers::Call callArityCheck = jit.call();
-
-        auto noException = jit.branch32(CCallHelpers::GreaterThanOrEqual, GPRInfo::returnValueGPR, CCallHelpers::TrustedImm32(0));
-        jit.copyCalleeSavesToVMEntryFrameCalleeSavesBuffer();
-        jit.move(CCallHelpers::TrustedImmPtr(jit.vm()), GPRInfo::argumentGPR0);
-        jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
-        CCallHelpers::Call callLookupExceptionHandlerFromCallerFrame = jit.call();
-        jit.jumpToExceptionHandler();
-        noException.link(&jit);
-
-        if (!ASSERT_DISABLED) {
-            jit.load64(vm.addressOfException(), GPRInfo::regT1);
-            jit.jitAssertIsNull(GPRInfo::regT1);
-        }
-
-        jit.move(GPRInfo::returnValueGPR, GPRInfo::argumentGPR0);
-        jit.emitFunctionEpilogue();
-        mainPathJumps.append(jit.branchTest32(CCallHelpers::Zero, GPRInfo::argumentGPR0));
-        jit.emitFunctionPrologue();
-        CCallHelpers::Call callArityFixup = jit.call();
-        jit.emitFunctionEpilogue();
-        mainPathJumps.append(jit.jump());
+            jit.load32(
+                frame.withOffset(sizeof(Register) * CallFrameSlot::argumentCount),
+                GPRInfo::regT1);
+            mainPathJumps.append(jit.branch32(
+                                     CCallHelpers::AboveOrEqual, GPRInfo::regT1,
+                                     CCallHelpers::TrustedImm32(codeBlock->numParameters())));
+            jit.emitFunctionPrologue();
+            jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
+            jit.storePtr(GPRInfo::callFrameRegister, &vm.topCallFrame);
+            CCallHelpers::Call callArityCheck = jit.call();
+
+            auto noException = jit.branch32(CCallHelpers::GreaterThanOrEqual, GPRInfo::returnValueGPR, CCallHelpers::TrustedImm32(0));
+            jit.copyCalleeSavesToVMEntryFrameCalleeSavesBuffer();
+            jit.move(CCallHelpers::TrustedImmPtr(jit.vm()), GPRInfo::argumentGPR0);
+            jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR1);
+            CCallHelpers::Call callLookupExceptionHandlerFromCallerFrame = jit.call();
+            jit.jumpToExceptionHandler();
+            noException.link(&jit);
+
+            if (!ASSERT_DISABLED) {
+                jit.load64(vm.addressOfException(), GPRInfo::regT1);
+                jit.jitAssertIsNull(GPRInfo::regT1);
+            }
 
-        linkBuffer = std::make_unique<LinkBuffer>(vm, jit, codeBlock, JITCompilationCanFail);
-        if (linkBuffer->didFailToAllocate()) {
-            state.allocationFailed = true;
-            return;
+            jit.move(GPRInfo::returnValueGPR, GPRInfo::argumentGPR0);
+            jit.emitFunctionEpilogue();
+            mainPathJumps.append(jit.branchTest32(CCallHelpers::Zero, GPRInfo::argumentGPR0));
+            jit.emitFunctionPrologue();
+            CCallHelpers::Call callArityFixup = jit.call();
+            jit.emitFunctionEpilogue();
+            mainPathJumps.append(jit.jump());
+
+            linkBuffer = std::make_unique<LinkBuffer>(vm, jit, codeBlock, JITCompilationCanFail);
+            if (linkBuffer->didFailToAllocate()) {
+                state.allocationFailed = true;
+                return;
+            }
+            linkBuffer->link(callArityCheck, codeBlock->m_isConstructor ? operationConstructArityCheck : operationCallArityCheck);
+            linkBuffer->link(callLookupExceptionHandlerFromCallerFrame, lookupExceptionHandlerFromCallerFrame);
+            linkBuffer->link(callArityFixup, FunctionPtr((vm.getCTIStub(arityFixupGenerator)).code().executableAddress()));
+            linkBuffer->link(mainPathJumps, CodeLocationLabel(bitwise_cast<void*>(state.generatedFunction)));
         }
-        linkBuffer->link(callArityCheck, codeBlock->m_isConstructor ? operationConstructArityCheck : operationCallArityCheck);
-        linkBuffer->link(callLookupExceptionHandlerFromCallerFrame, lookupExceptionHandlerFromCallerFrame);
-        linkBuffer->link(callArityFixup, FunctionPtr((vm.getCTIStub(arityFixupGenerator)).code().executableAddress()));
-        linkBuffer->link(mainPathJumps, CodeLocationLabel(bitwise_cast<void*>(state.generatedFunction)));
-
+        
         state.jitCode->initializeAddressForCall(MacroAssemblerCodePtr(bitwise_cast<void*>(state.generatedFunction)));
         break;
     }
index 9a391e3..b32a536 100644 (file)
@@ -102,7 +102,7 @@ void* prepareOSREntry(
     
     void* result = entryCode->addressForCall(ArityCheckNotRequired).executableAddress();
     if (Options::verboseOSR())
-        dataLog("    Entry will succeed, going to address", RawPointer(result), "\n");
+        dataLog("    Entry will succeed, going to address ", RawPointer(result), "\n");
     
     return result;
 }
index 9919e71..0aae5c8 100644 (file)
@@ -394,36 +394,6 @@ static void compileStub(
         jit.store64(GPRInfo::regT0, unwindScratch + i);
     }
     
-    jit.load32(CCallHelpers::payloadFor(CallFrameSlot::argumentCount), GPRInfo::regT2);
-    
-    // Let's say that the FTL function had failed its arity check. In that case, the stack will
-    // contain some extra stuff.
-    //
-    // We compute the padded stack space:
-    //
-    //     paddedStackSpace = roundUp(codeBlock->numParameters - regT2 + 1)
-    //
-    // The stack will have regT2 + CallFrameHeaderSize stuff.
-    // We want to make the stack look like this, from higher addresses down:
-    //
-    //     - argument padding
-    //     - actual arguments
-    //     - call frame header
-
-    // This code assumes that we're dealing with FunctionCode.
-    RELEASE_ASSERT(codeBlock->codeType() == FunctionCode);
-    
-    jit.add32(
-        MacroAssembler::TrustedImm32(-codeBlock->numParameters()), GPRInfo::regT2,
-        GPRInfo::regT3);
-    MacroAssembler::Jump arityIntact = jit.branch32(
-        MacroAssembler::GreaterThanOrEqual, GPRInfo::regT3, MacroAssembler::TrustedImm32(0));
-    jit.neg32(GPRInfo::regT3);
-    jit.add32(MacroAssembler::TrustedImm32(1 + stackAlignmentRegisters() - 1), GPRInfo::regT3);
-    jit.and32(MacroAssembler::TrustedImm32(-stackAlignmentRegisters()), GPRInfo::regT3);
-    jit.add32(GPRInfo::regT3, GPRInfo::regT2);
-    arityIntact.link(&jit);
-
     CodeBlock* baselineCodeBlock = jit.baselineCodeBlockFor(exit.m_codeOrigin);
 
     // First set up SP so that our data doesn't get clobbered by signals.