OSR exits that are exception handlers should emit less code eagerly in the thunk...
author:    sbarati@apple.com <sbarati@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Sat, 5 Dec 2015 00:04:01 +0000 (00:04 +0000)
committer: sbarati@apple.com <sbarati@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Sat, 5 Dec 2015 00:04:01 +0000 (00:04 +0000)
https://bugs.webkit.org/show_bug.cgi?id=151406

Reviewed by Filip Pizlo.

We no longer emit any extra code eagerly for an OSRExit that is an
exception handler; all of its code is now generated lazily in the
exit itself. One interesting consequence is that the C call that
compiles the exit goes through an OSR exit generation thunk, which
must now reset the call frame and the stack pointer to their proper
values before making the compileOSRExit C call. This matters in the
FTL because the FTL does a pushToSaveImmediateWithoutTouchingRegisters
with the OSR exit index; we must take care to preserve that exit
index when we reset the stack pointer, by re-pushing it onto the
stack.

* bytecode/CodeBlock.h:
(JSC::CodeBlock::setJITCode):
(JSC::CodeBlock::jitCode):
(JSC::CodeBlock::jitCodeOffset):
(JSC::CodeBlock::jitType):
* dfg/DFGCommonData.h:
(JSC::DFG::CommonData::frameRegisterCountOffset):
* dfg/DFGJITCode.h:
(JSC::DFG::JITCode::setOSREntryBlock):
(JSC::DFG::JITCode::clearOSREntryBlock):
(JSC::DFG::JITCode::commonDataOffset):
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::linkOSRExits):
* dfg/DFGOSRExitCompiler.cpp:
* dfg/DFGOSRExitCompilerCommon.h:
(JSC::DFG::adjustFrameAndStackInOSRExitCompilerThunk):
* dfg/DFGThunks.cpp:
(JSC::DFG::osrExitGenerationThunkGenerator):
* ftl/FTLCompile.cpp:
(JSC::FTL::mmAllocateDataSection):
* ftl/FTLExitThunkGenerator.cpp:
(JSC::FTL::ExitThunkGenerator::~ExitThunkGenerator):
(JSC::FTL::ExitThunkGenerator::emitThunk):
(JSC::FTL::ExitThunkGenerator::emitThunks):
* ftl/FTLExitThunkGenerator.h:
(JSC::FTL::ExitThunkGenerator::didThings):
* ftl/FTLJITCode.h:
(JSC::FTL::JITCode::commonDataOffset):
* ftl/FTLOSRExitCompiler.cpp:
(JSC::FTL::compileStub):
(JSC::FTL::compileFTLOSRExit):
* ftl/FTLThunks.cpp:
(JSC::FTL::genericGenerationThunkGenerator):
(JSC::FTL::osrExitGenerationThunkGenerator):
(JSC::FTL::lazySlowPathGenerationThunkGenerator):
(JSC::FTL::registerClobberCheck):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@193485 268f45cc-cd09-0410-ab3c-d52691b4dbfc

14 files changed:
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/bytecode/CodeBlock.h
Source/JavaScriptCore/dfg/DFGCommonData.h
Source/JavaScriptCore/dfg/DFGJITCode.h
Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
Source/JavaScriptCore/dfg/DFGOSRExitCompiler.cpp
Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.h
Source/JavaScriptCore/dfg/DFGThunks.cpp
Source/JavaScriptCore/ftl/FTLCompile.cpp
Source/JavaScriptCore/ftl/FTLExitThunkGenerator.cpp
Source/JavaScriptCore/ftl/FTLExitThunkGenerator.h
Source/JavaScriptCore/ftl/FTLJITCode.h
Source/JavaScriptCore/ftl/FTLOSRExitCompiler.cpp
Source/JavaScriptCore/ftl/FTLThunks.cpp

index b25328a..5604645 100644 (file)
@@ -1,3 +1,58 @@
+2015-12-04  Saam barati  <sbarati@apple.com>
+
+        OSR exits that are exception handlers should emit less code eagerly in the thunk generator, and instead, should defer as much code generation as possible to be lazily generated in the exit itself
+        https://bugs.webkit.org/show_bug.cgi?id=151406
+
+        Reviewed by Filip Pizlo.
+
+        We no longer emit any extra code eagerly for an OSRExit that is an
+        exception handler; all of its code is now generated lazily in the
+        exit itself. One interesting consequence is that the C call that
+        compiles the exit goes through an OSR exit generation thunk, which
+        must now reset the call frame and the stack pointer to their proper
+        values before making the compileOSRExit C call. This matters in the
+        FTL because the FTL does a pushToSaveImmediateWithoutTouchingRegisters
+        with the OSR exit index; we must take care to preserve that exit
+        index when we reset the stack pointer, by re-pushing it onto the
+        stack.
+
+        * bytecode/CodeBlock.h:
+        (JSC::CodeBlock::setJITCode):
+        (JSC::CodeBlock::jitCode):
+        (JSC::CodeBlock::jitCodeOffset):
+        (JSC::CodeBlock::jitType):
+        * dfg/DFGCommonData.h:
+        (JSC::DFG::CommonData::frameRegisterCountOffset):
+        * dfg/DFGJITCode.h:
+        (JSC::DFG::JITCode::setOSREntryBlock):
+        (JSC::DFG::JITCode::clearOSREntryBlock):
+        (JSC::DFG::JITCode::commonDataOffset):
+        * dfg/DFGJITCompiler.cpp:
+        (JSC::DFG::JITCompiler::linkOSRExits):
+        * dfg/DFGOSRExitCompiler.cpp:
+        * dfg/DFGOSRExitCompilerCommon.h:
+        (JSC::DFG::adjustFrameAndStackInOSRExitCompilerThunk):
+        * dfg/DFGThunks.cpp:
+        (JSC::DFG::osrExitGenerationThunkGenerator):
+        * ftl/FTLCompile.cpp:
+        (JSC::FTL::mmAllocateDataSection):
+        * ftl/FTLExitThunkGenerator.cpp:
+        (JSC::FTL::ExitThunkGenerator::~ExitThunkGenerator):
+        (JSC::FTL::ExitThunkGenerator::emitThunk):
+        (JSC::FTL::ExitThunkGenerator::emitThunks):
+        * ftl/FTLExitThunkGenerator.h:
+        (JSC::FTL::ExitThunkGenerator::didThings):
+        * ftl/FTLJITCode.h:
+        (JSC::FTL::JITCode::commonDataOffset):
+        * ftl/FTLOSRExitCompiler.cpp:
+        (JSC::FTL::compileStub):
+        (JSC::FTL::compileFTLOSRExit):
+        * ftl/FTLThunks.cpp:
+        (JSC::FTL::genericGenerationThunkGenerator):
+        (JSC::FTL::osrExitGenerationThunkGenerator):
+        (JSC::FTL::lazySlowPathGenerationThunkGenerator):
+        (JSC::FTL::registerClobberCheck):
+
 2015-12-04  Filip Pizlo  <fpizlo@apple.com>
 
         Having a bad time has a really awful time when it runs at the same time as the JIT
index 3cbe75c..19ff962 100644 (file)
@@ -302,6 +302,7 @@ public:
         m_jitCode = code;
     }
     PassRefPtr<JITCode> jitCode() { return m_jitCode; }
+    static ptrdiff_t jitCodeOffset() { return OBJECT_OFFSETOF(CodeBlock, m_jitCode); }
     JITCode::JITType jitType() const
     {
         JITCode* jitCode = m_jitCode.get();
index 2aa6456..bf4f94f 100644 (file)
@@ -95,6 +95,8 @@ public:
     
     void validateReferences(const TrackedReferences&);
 
+    static ptrdiff_t frameRegisterCountOffset() { return OBJECT_OFFSETOF(CommonData, frameRegisterCount); }
+
     RefPtr<InlineCallFrameSet> inlineCallFrames;
     Vector<CodeOrigin, 0, UnsafeVectorOverflow> codeOrigins;
     
index de23428..6939215 100644 (file)
@@ -122,6 +122,8 @@ public:
     void setOSREntryBlock(VM& vm, const JSCell* owner, CodeBlock* osrEntryBlock) { m_osrEntryBlock.set(vm, owner, osrEntryBlock); }
     void clearOSREntryBlock() { m_osrEntryBlock.clear(); }
 #endif
+
+    static ptrdiff_t commonDataOffset() { return OBJECT_OFFSETOF(JITCode, common); }
     
 private:
     friend class JITCompiler; // Allow JITCompiler to call setCodeRef().
index 534e4a6..68198c5 100644 (file)
@@ -86,14 +86,6 @@ void JITCompiler::linkOSRExits()
         else
             info.m_replacementDestination = label();
 
-        if (exit.m_willArriveAtOSRExitFromGenericUnwind) {
-            // We are acting as a defacto op_catch because we arrive here from genericUnwind().
-            // So, we must restore our call frame and stack pointer.
-            restoreCalleeSavesFromVMCalleeSavesBuffer();
-            loadPtr(vm()->addressOfCallFrameForCatch(), GPRInfo::callFrameRegister);
-            addPtr(TrustedImm32(graph().stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, stackPointerRegister);
-        }
-
         jitAssertHasValidCallFrame();
         store32(TrustedImm32(i), &vm()->osrExitIndex);
         exit.setPatchableCodeOffset(patchableJump());
index 76a0807..5f94132 100644 (file)
@@ -113,6 +113,9 @@ extern "C" {
 void compileOSRExit(ExecState* exec)
 {
     SamplingRegion samplingRegion("DFG OSR Exit Compilation");
+
+    if (exec->vm().callFrameForCatch)
+        RELEASE_ASSERT(exec->vm().callFrameForCatch == exec);
     
     CodeBlock* codeBlock = exec->codeBlock();
     ASSERT(codeBlock);
@@ -147,6 +150,15 @@ void compileOSRExit(ExecState* exec)
         CCallHelpers jit(vm, codeBlock);
         OSRExitCompiler exitCompiler(jit);
 
+        if (exit.m_willArriveAtOSRExitFromGenericUnwind) {
+            // We are acting as a de facto op_catch because we arrive here from genericUnwind().
+            // So, we must restore our call frame and stack pointer.
+            jit.restoreCalleeSavesFromVMCalleeSavesBuffer();
+            jit.loadPtr(vm->addressOfCallFrameForCatch(), GPRInfo::callFrameRegister);
+            jit.addPtr(CCallHelpers::TrustedImm32(codeBlock->stackPointerOffset() * sizeof(Register)),
+                GPRInfo::callFrameRegister, CCallHelpers::stackPointerRegister);
+        }
+
         jit.jitAssertHasValidCallFrame();
         
         if (vm->m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation) {
index 46b426d..02159ed 100644 (file)
@@ -30,6 +30,9 @@
 
 #include "CCallHelpers.h"
 #include "DFGOSRExit.h"
+#include "DFGCommonData.h"
+#include "DFGJITCode.h"
+#include "FTLJITCode.h"
 
 namespace JSC { namespace DFG {
 
@@ -37,6 +40,62 @@ void handleExitCounts(CCallHelpers&, const OSRExitBase&);
 void reifyInlinedCallFrames(CCallHelpers&, const OSRExitBase&);
 void adjustAndJumpToTarget(CCallHelpers&, const OSRExitBase&, bool isExitingToOpCatch);
 
+template <typename JITCodeType>
+void adjustFrameAndStackInOSRExitCompilerThunk(MacroAssembler& jit, VM* vm, JITCode::JITType jitType)
+{
+    ASSERT(jitType == JITCode::DFGJIT || jitType == JITCode::FTLJIT);
+    size_t scratchSize = sizeof(void*);
+    bool isFTLOSRExit = jitType == JITCode::FTLJIT;
+    if (isFTLOSRExit)
+        scratchSize += sizeof(void*);
+
+    ScratchBuffer* scratchBuffer = vm->scratchBufferForSize(scratchSize);
+    char* buffer = static_cast<char*>(scratchBuffer->dataBuffer());
+    jit.storePtr(GPRInfo::regT0, buffer);
+
+    if (isFTLOSRExit) {
+        // FTL OSRExits are entered via the code that FTLExitThunkGenerator emits, which does
+        // pushToSaveImmediateWithoutTouchingRegisters with the OSR exit index. We need to load
+        // that top value and then push it back when we reset our SP.
+        jit.peek(GPRInfo::regT0);
+        jit.storePtr(GPRInfo::regT0, buffer + sizeof(void*));
+    }
+
+    // We need to reset FP in the case of an exception.
+    jit.loadPtr(vm->addressOfCallFrameForCatch(), GPRInfo::regT0);
+    MacroAssembler::Jump didNotHaveException = jit.branchTestPtr(MacroAssembler::Zero, GPRInfo::regT0);
+    jit.move(GPRInfo::regT0, GPRInfo::callFrameRegister);
+    didNotHaveException.link(&jit);
+    // We need to make sure SP is correct in case of an exception.
+    jit.loadPtr(MacroAssembler::Address(GPRInfo::callFrameRegister, JSStack::CodeBlock * static_cast<int>(sizeof(Register))), GPRInfo::regT0);
+    jit.loadPtr(MacroAssembler::Address(GPRInfo::regT0, CodeBlock::jitCodeOffset()), GPRInfo::regT0);
+    jit.addPtr(MacroAssembler::TrustedImm32(JITCodeType::commonDataOffset()), GPRInfo::regT0);
+    jit.load32(MacroAssembler::Address(GPRInfo::regT0, CommonData::frameRegisterCountOffset()), GPRInfo::regT0);
+    // This does virtualRegisterForLocal(frameRegisterCount - 1)*sizeof(Register) where:
+    // virtualRegisterForLocal(frameRegisterCount - 1)
+    //     = VirtualRegister::localToOperand(frameRegisterCount - 1)
+    //     = -1 - (frameRegisterCount - 1)
+    //     = -frameRegisterCount
+    jit.neg32(GPRInfo::regT0);
+    jit.mul32(MacroAssembler::TrustedImm32(sizeof(Register)), GPRInfo::regT0, GPRInfo::regT0);
+#if USE(JSVALUE64)
+    jit.signExtend32ToPtr(GPRInfo::regT0, GPRInfo::regT0);
+#endif
+    jit.addPtr(GPRInfo::callFrameRegister, GPRInfo::regT0);
+    jit.move(GPRInfo::regT0, MacroAssembler::stackPointerRegister);
+
+    if (isFTLOSRExit) {
+        // FTL OSRExits are entered via FTLExitThunkGenerator code, which does
+        // pushToSaveImmediateWithoutTouchingRegisters. We need to load that top
+        // value and then push it back once our SP is back to a safe value.
+        jit.loadPtr(buffer + sizeof(void*), GPRInfo::regT0);
+        jit.pushToSave(GPRInfo::regT0);
+    }
+
+    jit.loadPtr(buffer, GPRInfo::regT0);
+}
+
+
 } } // namespace JSC::DFG
 
 #endif // ENABLE(DFG_JIT)
index 9b002ee..6d9d2ce 100644 (file)
 
 #include "CCallHelpers.h"
 #include "DFGOSRExitCompiler.h"
+#include "DFGJITCode.h"
 #include "FPRInfo.h"
 #include "GPRInfo.h"
 #include "LinkBuffer.h"
 #include "MacroAssembler.h"
 #include "JSCInlines.h"
+#include "DFGOSRExitCompilerCommon.h"
 
 namespace JSC { namespace DFG {
 
 MacroAssemblerCodeRef osrExitGenerationThunkGenerator(VM* vm)
 {
     MacroAssembler jit;
+
+    // This must happen before we allocate our scratch buffer below, because
+    // adjustFrameAndStackInOSRExitCompilerThunk also uses the VM's scratch buffer.
+    adjustFrameAndStackInOSRExitCompilerThunk<DFG::JITCode>(jit, vm, JITCode::DFGJIT);
     
     size_t scratchSize = sizeof(EncodedJSValue) * (GPRInfo::numberOfRegisters + FPRInfo::numberOfRegisters);
     ScratchBuffer* scratchBuffer = vm->scratchBufferForSize(scratchSize);
index 024aabe..c0125c8 100644 (file)
@@ -427,7 +427,8 @@ static void fixFunctionBasedOnStackMaps(
     
     int localsOffset = offsetOfStackRegion(recordMap, state.capturedStackmapID) + graph.m_nextMachineLocal;
     int varargsSpillSlotsOffset = offsetOfStackRegion(recordMap, state.varargsSpillSlotsStackmapID);
-    int jsCallThatMightThrowSpillOffset = offsetOfStackRegion(recordMap, state.exceptionHandlingSpillSlotStackmapID);
+    int osrExitFromGenericUnwindStackSpillSlot = offsetOfStackRegion(recordMap, state.exceptionHandlingSpillSlotStackmapID);
+    jitCode->osrExitFromGenericUnwindStackSpillSlot = osrExitFromGenericUnwindStackSpillSlot;
     
     for (unsigned i = graph.m_inlineVariableData.size(); i--;) {
         InlineCallFrame* inlineCallFrame = graph.m_inlineVariableData[i].inlineCallFrame;
@@ -571,7 +572,7 @@ static void fixFunctionBasedOnStackMaps(
         }
     }
     ExitThunkGenerator exitThunkGenerator(state);
-    exitThunkGenerator.emitThunks(jsCallThatMightThrowSpillOffset);
+    exitThunkGenerator.emitThunks();
     if (exitThunkGenerator.didThings()) {
         RELEASE_ASSERT(state.finalizer->osrExit.size());
         
@@ -675,7 +676,7 @@ static void fixFunctionBasedOnStackMaps(
                     // taking place by ensuring we spill the original base value and then recover it from
                     // the spill slot as the first step in OSR exit.
                     if (OSRExit* exit = exceptionHandlerManager.callOperationOSRExit(iter->value[i].index))
-                        exit->spillRegistersToSpillSlot(slowPathJIT, jsCallThatMightThrowSpillOffset);
+                        exit->spillRegistersToSpillSlot(slowPathJIT, osrExitFromGenericUnwindStackSpillSlot);
                 }
                 MacroAssembler::Call call = callOperation(
                     state, usedRegisters, slowPathJIT, codeOrigin, addedUniqueExceptionJump ? &exceptionJumpsToLink.last().first : &exceptionTarget,
@@ -794,7 +795,7 @@ static void fixFunctionBasedOnStackMaps(
                     // This situation has a really interesting register preservation story.
                     // See comment above for GetByIds.
                     if (OSRExit* exit = exceptionHandlerManager.callOperationOSRExit(iter->value[i].index))
-                        exit->spillRegistersToSpillSlot(slowPathJIT, jsCallThatMightThrowSpillOffset);
+                        exit->spillRegistersToSpillSlot(slowPathJIT, osrExitFromGenericUnwindStackSpillSlot);
                 }
 
                 callOperation(state, usedRegisters, slowPathJIT, codeOrigin, addedUniqueExceptionJump ? &exceptionJumpsToLink.last().first : &exceptionTarget,
@@ -916,7 +917,7 @@ static void fixFunctionBasedOnStackMaps(
         JSCall& call = state.jsCalls[i];
 
         CCallHelpers fastPathJIT(&vm, codeBlock);
-        call.emit(fastPathJIT, state, jsCallThatMightThrowSpillOffset);
+        call.emit(fastPathJIT, state, osrExitFromGenericUnwindStackSpillSlot);
 
         char* startOfIC = bitwise_cast<char*>(generatedFunction) + call.m_instructionOffset;
 
@@ -931,7 +932,7 @@ static void fixFunctionBasedOnStackMaps(
         JSCallVarargs& call = state.jsCallVarargses[i];
         
         CCallHelpers fastPathJIT(&vm, codeBlock);
-        call.emit(fastPathJIT, state, varargsSpillSlotsOffset, jsCallThatMightThrowSpillOffset);
+        call.emit(fastPathJIT, state, varargsSpillSlotsOffset, osrExitFromGenericUnwindStackSpillSlot);
 
         char* startOfIC = bitwise_cast<char*>(generatedFunction) + call.m_instructionOffset;
         size_t sizeOfIC = sizeOfICFor(call.node());
index f04f703..0778268 100644 (file)
@@ -46,35 +46,24 @@ ExitThunkGenerator::~ExitThunkGenerator()
 {
 }
 
-void ExitThunkGenerator::emitThunk(unsigned index, int32_t osrExitFromGenericUnwindStackSpillSlot)
+void ExitThunkGenerator::emitThunk(unsigned index)
 {
-    OSRExitCompilationInfo& info = m_state.finalizer->osrExit[index];
     OSRExit& exit = m_state.jitCode->osrExit[index];
+    ASSERT_UNUSED(exit, !(exit.willArriveAtOSRExitFromGenericUnwind() && exit.willArriveAtOSRExitFromCallOperation()));
     
+    OSRExitCompilationInfo& info = m_state.finalizer->osrExit[index];
     info.m_thunkLabel = label();
 
-    ASSERT(!(exit.willArriveAtOSRExitFromGenericUnwind() && exit.willArriveAtOSRExitFromCallOperation()));
-    if (exit.willArriveAtOSRExitFromGenericUnwind()) {
-        restoreCalleeSavesFromVMCalleeSavesBuffer();
-        loadPtr(vm()->addressOfCallFrameForCatch(), framePointerRegister);
-        addPtr(TrustedImm32(- static_cast<int64_t>(m_state.jitCode->stackmaps.stackSizeForLocals())), 
-            framePointerRegister, stackPointerRegister);
-
-        if (exit.needsRegisterRecoveryOnGenericUnwindOSRExitPath())
-            exit.recoverRegistersFromSpillSlot(*this, osrExitFromGenericUnwindStackSpillSlot);
-    } else if (exit.willArriveAtOSRExitFromCallOperation())
-        exit.recoverRegistersFromSpillSlot(*this, osrExitFromGenericUnwindStackSpillSlot);
-    
     pushToSaveImmediateWithoutTouchingRegisters(TrustedImm32(index));
     info.m_thunkJump = patchableJump();
     
     m_didThings = true;
 }
 
-void ExitThunkGenerator::emitThunks(int32_t osrExitFromGenericUnwindStackSpillSlot)
+void ExitThunkGenerator::emitThunks()
 {
     for (unsigned i = 0; i < m_state.finalizer->osrExit.size(); ++i)
-        emitThunk(i, osrExitFromGenericUnwindStackSpillSlot);
+        emitThunk(i);
 }
 
 } } // namespace JSC::FTL
index d49d344..7452f4c 100644 (file)
@@ -42,8 +42,8 @@ public:
     ExitThunkGenerator(State& state);
     ~ExitThunkGenerator();
     
-    void emitThunk(unsigned index, int32_t osrExitFromGenericUnwindStackSpillSlot);
-    void emitThunks(int32_t osrExitFromGenericUnwindStackSpillSlot);
+    void emitThunk(unsigned index);
+    void emitThunks();
     
     bool didThings() const { return m_didThings; }
 
index 69f76e6..f95686e 100644 (file)
@@ -93,6 +93,7 @@ public:
     
     JITCode* ftl() override;
     DFG::CommonData* dfgCommon() override;
+    static ptrdiff_t commonDataOffset() { return OBJECT_OFFSETOF(JITCode, common); }
     
     DFG::CommonData common;
     SegmentedVector<OSRExit, 8> osrExit;
@@ -101,6 +102,7 @@ public:
     StackMaps stackmaps;
 #endif // !FTL_USES_B3
     Vector<std::unique_ptr<LazySlowPath>> lazySlowPaths;
+    int32_t osrExitFromGenericUnwindStackSpillSlot;
     
 private:
     CodePtr m_addressForCall;
index 8ceab6f..ac5cf0e 100644 (file)
@@ -206,6 +206,26 @@ static void compileStub(
 
     CCallHelpers jit(vm, codeBlock);
 
+    // The first thing we need to do is re-establish our frame in the case of an exception.
+    if (exit.willArriveAtOSRExitFromGenericUnwind()) {
+        RELEASE_ASSERT(vm->callFrameForCatch); // Every time we hit this exit, including the first, this field should be non-null.
+        jit.restoreCalleeSavesFromVMCalleeSavesBuffer();
+        jit.loadPtr(vm->addressOfCallFrameForCatch(), MacroAssembler::framePointerRegister);
+        jit.addPtr(CCallHelpers::TrustedImm32(codeBlock->stackPointerOffset() * sizeof(Register)),
+            MacroAssembler::framePointerRegister, CCallHelpers::stackPointerRegister);
+
+        if (exit.needsRegisterRecoveryOnGenericUnwindOSRExitPath())
+            exit.recoverRegistersFromSpillSlot(jit, jitCode->osrExitFromGenericUnwindStackSpillSlot);
+
+        // Do a pushToSave because the exit compiler below expects the stack to look
+        // the way it does in the non-exception OSR case, where a pushToSave is the
+        // last thing the ExitThunkGenerator does. The code below doesn't actually use
+        // the value that was pushed, but it does rely on that general stack shape.
+        jit.pushToSaveImmediateWithoutTouchingRegisters(CCallHelpers::TrustedImm32(0xbadbeef));
+    } else if (exit.willArriveAtOSRExitFromCallOperation())
+        exit.recoverRegistersFromSpillSlot(jit, jitCode->osrExitFromGenericUnwindStackSpillSlot);
+    
+
     // We need scratch space to save all registers, to build up the JS stack, to deal with unwind
     // fixup, pointers to all of the objects we materialize, and the elements inside those objects
     // that we materialize.
@@ -560,6 +580,9 @@ extern "C" void* compileFTLOSRExit(ExecState* exec, unsigned exitID)
 
     if (shouldDumpDisassembly() || Options::verboseOSR() || Options::verboseFTLOSRExit())
         dataLog("Compiling OSR exit with exitID = ", exitID, "\n");
+
+    if (exec->vm().callFrameForCatch)
+        RELEASE_ASSERT(exec->vm().callFrameForCatch == exec);
     
     CodeBlock* codeBlock = exec->codeBlock();
     
index cda81b5..cfbc75b 100644 (file)
@@ -29,6 +29,7 @@
 #if ENABLE(FTL_JIT)
 
 #include "AssemblyHelpers.h"
+#include "DFGOSRExitCompilerCommon.h"
 #include "FPRInfo.h"
 #include "FTLOSRExitCompiler.h"
 #include "FTLOperations.h"
@@ -40,10 +41,20 @@ namespace JSC { namespace FTL {
 
 using namespace DFG;
 
+enum class FrameAndStackAdjustmentRequirement {
+    Needed, 
+    NotNeeded 
+};
+
 static MacroAssemblerCodeRef genericGenerationThunkGenerator(
-    VM* vm, FunctionPtr generationFunction, const char* name, unsigned extraPopsToRestore)
+    VM* vm, FunctionPtr generationFunction, const char* name, unsigned extraPopsToRestore, FrameAndStackAdjustmentRequirement frameAndStackAdjustmentRequirement)
 {
     AssemblyHelpers jit(vm, 0);
+
+    if (frameAndStackAdjustmentRequirement == FrameAndStackAdjustmentRequirement::Needed) {
+        // This must happen before we allocate our scratch buffer below, because
+        // adjustFrameAndStackInOSRExitCompilerThunk also uses the VM's scratch buffer.
+        adjustFrameAndStackInOSRExitCompilerThunk<FTL::JITCode>(jit, vm, JITCode::FTLJIT);
+    }
     
     // Note that the "return address" will be the ID that we pass to the generation function.
     
@@ -115,14 +126,14 @@ MacroAssemblerCodeRef osrExitGenerationThunkGenerator(VM* vm)
 {
     unsigned extraPopsToRestore = 0;
     return genericGenerationThunkGenerator(
-        vm, compileFTLOSRExit, "FTL OSR exit generation thunk", extraPopsToRestore);
+        vm, compileFTLOSRExit, "FTL OSR exit generation thunk", extraPopsToRestore, FrameAndStackAdjustmentRequirement::Needed);
 }
 
 MacroAssemblerCodeRef lazySlowPathGenerationThunkGenerator(VM* vm)
 {
     unsigned extraPopsToRestore = 1;
     return genericGenerationThunkGenerator(
-        vm, compileFTLLazySlowPath, "FTL lazy slow path generation thunk", extraPopsToRestore);
+        vm, compileFTLLazySlowPath, "FTL lazy slow path generation thunk", extraPopsToRestore, FrameAndStackAdjustmentRequirement::NotNeeded);
 }
 
 static void registerClobberCheck(AssemblyHelpers& jit, RegisterSet dontClobber)