[Re-landing] Use JIT probes for DFG OSR exit.
author     mark.lam@apple.com <mark.lam@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Sun, 10 Sep 2017 00:21:55 +0000 (00:21 +0000)
committer  mark.lam@apple.com <mark.lam@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Sun, 10 Sep 2017 00:21:55 +0000 (00:21 +0000)
https://bugs.webkit.org/show_bug.cgi?id=175144
<rdar://problem/33437050>

Not reviewed.  Original patch reviewed by Saam Barati.

JSTests:

Disable these tests for debug builds because they run too slowly with the new OSR exit.

* stress/op_mod-ConstVar.js:
* stress/op_mod-VarConst.js:
* stress/op_mod-VarVar.js:

Source/JavaScriptCore:

Relanding r221774.

* JavaScriptCore.xcodeproj/project.pbxproj:
* assembler/MacroAssembler.cpp:
(JSC::stdFunctionCallback):
* assembler/MacroAssemblerPrinter.cpp:
(JSC::Printer::printCallback):
* assembler/ProbeContext.h:
(JSC::Probe::CPUState::gpr const):
(JSC::Probe::CPUState::spr const):
(JSC::Probe::Context::Context):
(JSC::Probe::Context::arg):
(JSC::Probe::Context::gpr):
(JSC::Probe::Context::spr):
(JSC::Probe::Context::fpr):
(JSC::Probe::Context::gprName):
(JSC::Probe::Context::sprName):
(JSC::Probe::Context::fprName):
(JSC::Probe::Context::gpr const):
(JSC::Probe::Context::spr const):
(JSC::Probe::Context::fpr const):
(JSC::Probe::Context::pc):
(JSC::Probe::Context::fp):
(JSC::Probe::Context::sp):
(JSC::Probe:: const): Deleted.
* assembler/ProbeFrame.h: Added.
* assembler/ProbeStack.cpp:
(JSC::Probe::Page::Page):
* assembler/ProbeStack.h:
(JSC::Probe::Page::get):
(JSC::Probe::Page::set):
(JSC::Probe::Page::physicalAddressFor):
(JSC::Probe::Stack::lowWatermark):
(JSC::Probe::Stack::get):
(JSC::Probe::Stack::set):
* bytecode/ArithProfile.cpp:
* bytecode/ArithProfile.h:
* bytecode/ArrayProfile.h:
(JSC::ArrayProfile::observeArrayMode):
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize):
* bytecode/CodeBlock.h:
(JSC::CodeBlock::addressOfOSRExitCounter): Deleted.
* bytecode/ExecutionCounter.h:
(JSC::ExecutionCounter::hasCrossedThreshold const):
(JSC::ExecutionCounter::setNewThresholdForOSRExit):
* bytecode/MethodOfGettingAValueProfile.cpp:
(JSC::MethodOfGettingAValueProfile::reportValue):
* bytecode/MethodOfGettingAValueProfile.h:
* dfg/DFGDriver.cpp:
(JSC::DFG::compileImpl):
* dfg/DFGJITCode.cpp:
(JSC::DFG::JITCode::findPC): Deleted.
* dfg/DFGJITCode.h:
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::linkOSRExits):
(JSC::DFG::JITCompiler::link):
* dfg/DFGOSRExit.cpp:
(JSC::DFG::jsValueFor):
(JSC::DFG::restoreCalleeSavesFor):
(JSC::DFG::saveCalleeSavesFor):
(JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer):
(JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer):
(JSC::DFG::saveOrCopyCalleeSavesFor):
(JSC::DFG::createDirectArgumentsDuringExit):
(JSC::DFG::createClonedArgumentsDuringExit):
(JSC::DFG::OSRExit::OSRExit):
(JSC::DFG::emitRestoreArguments):
(JSC::DFG::OSRExit::executeOSRExit):
(JSC::DFG::reifyInlinedCallFrames):
(JSC::DFG::adjustAndJumpToTarget):
(JSC::DFG::printOSRExit):
(JSC::DFG::OSRExit::setPatchableCodeOffset): Deleted.
(JSC::DFG::OSRExit::getPatchableCodeOffsetAsJump const): Deleted.
(JSC::DFG::OSRExit::codeLocationForRepatch const): Deleted.
(JSC::DFG::OSRExit::correctJump): Deleted.
(JSC::DFG::OSRExit::emitRestoreArguments): Deleted.
(JSC::DFG::OSRExit::compileOSRExit): Deleted.
(JSC::DFG::OSRExit::compileExit): Deleted.
(JSC::DFG::OSRExit::debugOperationPrintSpeculationFailure): Deleted.
* dfg/DFGOSRExit.h:
(JSC::DFG::OSRExitState::OSRExitState):
(JSC::DFG::OSRExit::considerAddingAsFrequentExitSite):
* dfg/DFGOSRExitCompilerCommon.cpp:
* dfg/DFGOSRExitCompilerCommon.h:
* dfg/DFGOperations.cpp:
* dfg/DFGOperations.h:
* dfg/DFGThunks.cpp:
(JSC::DFG::osrExitThunkGenerator):
(JSC::DFG::osrExitGenerationThunkGenerator): Deleted.
* dfg/DFGThunks.h:
* jit/AssemblyHelpers.cpp:
(JSC::AssemblyHelpers::debugCall): Deleted.
* jit/AssemblyHelpers.h:
* jit/JITOperations.cpp:
* jit/JITOperations.h:
* profiler/ProfilerOSRExit.h:
(JSC::Profiler::OSRExit::incCount):
* runtime/JSCJSValue.h:
* runtime/JSCJSValueInlines.h:
* runtime/VM.h:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@221832 268f45cc-cd09-0410-ab3c-d52691b4dbfc

40 files changed:
JSTests/ChangeLog
JSTests/stress/op_mod-ConstVar.js
JSTests/stress/op_mod-VarConst.js
JSTests/stress/op_mod-VarVar.js
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/assembler/MacroAssembler.cpp
Source/JavaScriptCore/assembler/MacroAssemblerPrinter.cpp
Source/JavaScriptCore/assembler/ProbeContext.h
Source/JavaScriptCore/assembler/ProbeFrame.h [new file with mode: 0644]
Source/JavaScriptCore/assembler/ProbeStack.cpp
Source/JavaScriptCore/assembler/ProbeStack.h
Source/JavaScriptCore/bytecode/ArithProfile.cpp
Source/JavaScriptCore/bytecode/ArithProfile.h
Source/JavaScriptCore/bytecode/ArrayProfile.h
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/bytecode/CodeBlock.h
Source/JavaScriptCore/bytecode/ExecutionCounter.h
Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.cpp
Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.h
Source/JavaScriptCore/dfg/DFGDriver.cpp
Source/JavaScriptCore/dfg/DFGJITCode.cpp
Source/JavaScriptCore/dfg/DFGJITCode.h
Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
Source/JavaScriptCore/dfg/DFGOSRExit.cpp
Source/JavaScriptCore/dfg/DFGOSRExit.h
Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.h
Source/JavaScriptCore/dfg/DFGOperations.cpp
Source/JavaScriptCore/dfg/DFGOperations.h
Source/JavaScriptCore/dfg/DFGThunks.cpp
Source/JavaScriptCore/dfg/DFGThunks.h
Source/JavaScriptCore/jit/AssemblyHelpers.cpp
Source/JavaScriptCore/jit/AssemblyHelpers.h
Source/JavaScriptCore/jit/JITOperations.cpp
Source/JavaScriptCore/jit/JITOperations.h
Source/JavaScriptCore/profiler/ProfilerOSRExit.h
Source/JavaScriptCore/runtime/JSCJSValue.h
Source/JavaScriptCore/runtime/JSCJSValueInlines.h
Source/JavaScriptCore/runtime/VM.h

index 4ae1d5b..265ba37 100644
@@ -1,3 +1,17 @@
+2017-09-09  Mark Lam  <mark.lam@apple.com>
+
+        [Re-landing] Use JIT probes for DFG OSR exit.
+        https://bugs.webkit.org/show_bug.cgi?id=175144
+        <rdar://problem/33437050>
+
+        Not reviewed.  Original patch reviewed by Saam Barati.
+
+        Disable these tests for debug builds because they run too slowly with the new OSR exit.
+
+        * stress/op_mod-ConstVar.js:
+        * stress/op_mod-VarConst.js:
+        * stress/op_mod-VarVar.js:
+
 2017-09-08  Yusuke Suzuki  <utatane.tea@gmail.com>
 
         [DFG] NewArrayWithSize(size)'s size does not care negative zero
index 489188c..794ef05 100644
@@ -1,4 +1,4 @@
-//@ runFTLNoCJIT("--timeoutMultiplier=1.5")
+//@ if $buildType == "release" then runFTLNoCJIT("--timeoutMultiplier=1.5") else skip end
 
 // If all goes well, this test module will terminate silently. If not, it will print
 // errors. See binary-op-test.js for debugging options if needed.
index f03a4d4..406e0e5 100644
@@ -1,4 +1,4 @@
-//@ runFTLNoCJIT("--timeoutMultiplier=1.5")
+//@ if $buildType == "release" then runFTLNoCJIT("--timeoutMultiplier=1.5") else skip end
 
 // If all goes well, this test module will terminate silently. If not, it will print
 // errors. See binary-op-test.js for debugging options if needed.
index 13436a9..3110733 100644
@@ -1,4 +1,4 @@
-//@ runFTLNoCJIT("--timeoutMultiplier=1.5")
+//@ if $buildType == "release" then runFTLNoCJIT("--timeoutMultiplier=1.5") else skip end
 
 // If all goes well, this test module will terminate silently. If not, it will print
 // errors. See binary-op-test.js for debugging options if needed.
index 0d617d0..7bd189b 100644
@@ -1,3 +1,113 @@
+2017-09-09  Mark Lam  <mark.lam@apple.com>
+
+        [Re-landing] Use JIT probes for DFG OSR exit.
+        https://bugs.webkit.org/show_bug.cgi?id=175144
+        <rdar://problem/33437050>
+
+        Not reviewed.  Original patch reviewed by Saam Barati.
+
+        Relanding r221774.
+
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * assembler/MacroAssembler.cpp:
+        (JSC::stdFunctionCallback):
+        * assembler/MacroAssemblerPrinter.cpp:
+        (JSC::Printer::printCallback):
+        * assembler/ProbeContext.h:
+        (JSC::Probe::CPUState::gpr const):
+        (JSC::Probe::CPUState::spr const):
+        (JSC::Probe::Context::Context):
+        (JSC::Probe::Context::arg):
+        (JSC::Probe::Context::gpr):
+        (JSC::Probe::Context::spr):
+        (JSC::Probe::Context::fpr):
+        (JSC::Probe::Context::gprName):
+        (JSC::Probe::Context::sprName):
+        (JSC::Probe::Context::fprName):
+        (JSC::Probe::Context::gpr const):
+        (JSC::Probe::Context::spr const):
+        (JSC::Probe::Context::fpr const):
+        (JSC::Probe::Context::pc):
+        (JSC::Probe::Context::fp):
+        (JSC::Probe::Context::sp):
+        (JSC::Probe:: const): Deleted.
+        * assembler/ProbeFrame.h: Added.
+        * assembler/ProbeStack.cpp:
+        (JSC::Probe::Page::Page):
+        * assembler/ProbeStack.h:
+        (JSC::Probe::Page::get):
+        (JSC::Probe::Page::set):
+        (JSC::Probe::Page::physicalAddressFor):
+        (JSC::Probe::Stack::lowWatermark):
+        (JSC::Probe::Stack::get):
+        (JSC::Probe::Stack::set):
+        * bytecode/ArithProfile.cpp:
+        * bytecode/ArithProfile.h:
+        * bytecode/ArrayProfile.h:
+        (JSC::ArrayProfile::observeArrayMode):
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize):
+        * bytecode/CodeBlock.h:
+        (JSC::CodeBlock::addressOfOSRExitCounter): Deleted.
+        * bytecode/ExecutionCounter.h:
+        (JSC::ExecutionCounter::hasCrossedThreshold const):
+        (JSC::ExecutionCounter::setNewThresholdForOSRExit):
+        * bytecode/MethodOfGettingAValueProfile.cpp:
+        (JSC::MethodOfGettingAValueProfile::reportValue):
+        * bytecode/MethodOfGettingAValueProfile.h:
+        * dfg/DFGDriver.cpp:
+        (JSC::DFG::compileImpl):
+        * dfg/DFGJITCode.cpp:
+        (JSC::DFG::JITCode::findPC): Deleted.
+        * dfg/DFGJITCode.h:
+        * dfg/DFGJITCompiler.cpp:
+        (JSC::DFG::JITCompiler::linkOSRExits):
+        (JSC::DFG::JITCompiler::link):
+        * dfg/DFGOSRExit.cpp:
+        (JSC::DFG::jsValueFor):
+        (JSC::DFG::restoreCalleeSavesFor):
+        (JSC::DFG::saveCalleeSavesFor):
+        (JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer):
+        (JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer):
+        (JSC::DFG::saveOrCopyCalleeSavesFor):
+        (JSC::DFG::createDirectArgumentsDuringExit):
+        (JSC::DFG::createClonedArgumentsDuringExit):
+        (JSC::DFG::OSRExit::OSRExit):
+        (JSC::DFG::emitRestoreArguments):
+        (JSC::DFG::OSRExit::executeOSRExit):
+        (JSC::DFG::reifyInlinedCallFrames):
+        (JSC::DFG::adjustAndJumpToTarget):
+        (JSC::DFG::printOSRExit):
+        (JSC::DFG::OSRExit::setPatchableCodeOffset): Deleted.
+        (JSC::DFG::OSRExit::getPatchableCodeOffsetAsJump const): Deleted.
+        (JSC::DFG::OSRExit::codeLocationForRepatch const): Deleted.
+        (JSC::DFG::OSRExit::correctJump): Deleted.
+        (JSC::DFG::OSRExit::emitRestoreArguments): Deleted.
+        (JSC::DFG::OSRExit::compileOSRExit): Deleted.
+        (JSC::DFG::OSRExit::compileExit): Deleted.
+        (JSC::DFG::OSRExit::debugOperationPrintSpeculationFailure): Deleted.
+        * dfg/DFGOSRExit.h:
+        (JSC::DFG::OSRExitState::OSRExitState):
+        (JSC::DFG::OSRExit::considerAddingAsFrequentExitSite):
+        * dfg/DFGOSRExitCompilerCommon.cpp:
+        * dfg/DFGOSRExitCompilerCommon.h:
+        * dfg/DFGOperations.cpp:
+        * dfg/DFGOperations.h:
+        * dfg/DFGThunks.cpp:
+        (JSC::DFG::osrExitThunkGenerator):
+        (JSC::DFG::osrExitGenerationThunkGenerator): Deleted.
+        * dfg/DFGThunks.h:
+        * jit/AssemblyHelpers.cpp:
+        (JSC::AssemblyHelpers::debugCall): Deleted.
+        * jit/AssemblyHelpers.h:
+        * jit/JITOperations.cpp:
+        * jit/JITOperations.h:
+        * profiler/ProfilerOSRExit.h:
+        (JSC::Profiler::OSRExit::incCount):
+        * runtime/JSCJSValue.h:
+        * runtime/JSCJSValueInlines.h:
+        * runtime/VM.h:
+
 2017-09-09  Ryan Haddad  <ryanhaddad@apple.com>
 
         Unreviewed, rolling out r221774.
index 3df622c..8e350c3 100644
                FE10AAEC1F44D545009DEDC5 /* ProbeStack.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE10AAE91F44D510009DEDC5 /* ProbeStack.cpp */; };
                FE10AAEE1F44D954009DEDC5 /* ProbeContext.h in Headers */ = {isa = PBXBuildFile; fileRef = FE10AAED1F44D946009DEDC5 /* ProbeContext.h */; settings = {ATTRIBUTES = (Private, ); }; };
                FE10AAF41F468396009DEDC5 /* ProbeContext.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */; };
+               FE10AAFF1F4E38E5009DEDC5 /* ProbeFrame.h in Headers */ = {isa = PBXBuildFile; fileRef = FE10AAFE1F4E38DA009DEDC5 /* ProbeFrame.h */; };
                FE1220271BE7F58C0039E6F2 /* JITAddGenerator.h in Headers */ = {isa = PBXBuildFile; fileRef = FE1220261BE7F5640039E6F2 /* JITAddGenerator.h */; };
                FE1220281BE7F5910039E6F2 /* JITAddGenerator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE1220251BE7F5640039E6F2 /* JITAddGenerator.cpp */; };
                FE187A011BFBE55E0038BBCA /* JITMulGenerator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE1879FF1BFBC73C0038BBCA /* JITMulGenerator.cpp */; };
                FE10AAEA1F44D512009DEDC5 /* ProbeStack.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProbeStack.h; sourceTree = "<group>"; };
                FE10AAED1F44D946009DEDC5 /* ProbeContext.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProbeContext.h; sourceTree = "<group>"; };
                FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ProbeContext.cpp; sourceTree = "<group>"; };
+               FE10AAFE1F4E38DA009DEDC5 /* ProbeFrame.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProbeFrame.h; sourceTree = "<group>"; };
                FE1220251BE7F5640039E6F2 /* JITAddGenerator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITAddGenerator.cpp; sourceTree = "<group>"; };
                FE1220261BE7F5640039E6F2 /* JITAddGenerator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITAddGenerator.h; sourceTree = "<group>"; };
                FE1879FF1BFBC73C0038BBCA /* JITMulGenerator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITMulGenerator.cpp; sourceTree = "<group>"; };
                                FE63DD531EA9B60E00103A69 /* Printer.h */,
                                FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */,
                                FE10AAED1F44D946009DEDC5 /* ProbeContext.h */,
+                               FE10AAFE1F4E38DA009DEDC5 /* ProbeFrame.h */,
                                FE10AAE91F44D510009DEDC5 /* ProbeStack.cpp */,
                                FE10AAEA1F44D512009DEDC5 /* ProbeStack.h */,
                                FE533CA01F217C310016A1FE /* testmasm.cpp */,
                                AD2FCC1D1DB59CB200B3E736 /* WebAssemblyModulePrototype.lut.h in Headers */,
                                AD4937C81DDD0AAE0077C807 /* WebAssemblyModuleRecord.h in Headers */,
                                AD2FCC2D1DB838FD00B3E736 /* WebAssemblyPrototype.h in Headers */,
+                               FE10AAFF1F4E38E5009DEDC5 /* ProbeFrame.h in Headers */,
                                AD2FCBF91DB58DAD00B3E736 /* WebAssemblyRuntimeErrorConstructor.h in Headers */,
                                AD2FCC1E1DB59CB200B3E736 /* WebAssemblyRuntimeErrorConstructor.lut.h in Headers */,
                                AD2FCBFB1DB58DAD00B3E736 /* WebAssemblyRuntimeErrorPrototype.h in Headers */,
index d19b6c5..82b25c8 100644
@@ -38,7 +38,7 @@ const double MacroAssembler::twoToThe32 = (double)0x100000000ull;
 #if ENABLE(MASM_PROBE)
 static void stdFunctionCallback(Probe::Context& context)
 {
-    auto func = static_cast<const std::function<void(Probe::Context&)>*>(context.arg);
+    auto func = context.arg<const std::function<void(Probe::Context&)>*>();
     (*func)(context);
 }
     
index 443f77f..57ed63e 100644
@@ -175,7 +175,7 @@ void printMemory(PrintStream& out, Context& context)
 void printCallback(Probe::Context& probeContext)
 {
     auto& out = WTF::dataFile();
-    PrintRecordList& list = *reinterpret_cast<PrintRecordList*>(probeContext.arg);
+    PrintRecordList& list = *probeContext.arg<PrintRecordList*>();
     for (size_t i = 0; i < list.size(); i++) {
         auto& record = list[i];
         Context context(probeContext, record.data);
index caa52ba..0e52034 100644
@@ -45,14 +45,8 @@ struct CPUState {
     inline uintptr_t& spr(SPRegisterID);
     inline double& fpr(FPRegisterID);
 
-    template<typename T, typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
-    T gpr(RegisterID) const;
-    template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type* = nullptr>
-    T gpr(RegisterID) const;
-    template<typename T, typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
-    T spr(SPRegisterID) const;
-    template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type* = nullptr>
-    T spr(SPRegisterID) const;
+    template<typename T> T gpr(RegisterID) const;
+    template<typename T> T spr(SPRegisterID) const;
     template<typename T> T fpr(FPRegisterID) const;
 
     void*& pc();
@@ -85,32 +79,24 @@ inline double& CPUState::fpr(FPRegisterID id)
     return fprs[id];
 }
 
-template<typename T, typename std::enable_if<std::is_integral<T>::value>::type*>
-T CPUState::gpr(RegisterID id) const
-{
-    CPUState* cpu = const_cast<CPUState*>(this);
-    return static_cast<T>(cpu->gpr(id));
-}
-
-template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type*>
+template<typename T>
 T CPUState::gpr(RegisterID id) const
 {
     CPUState* cpu = const_cast<CPUState*>(this);
-    return reinterpret_cast<T>(cpu->gpr(id));
+    auto& from = cpu->gpr(id);
+    typename std::remove_const<T>::type to { };
+    std::memcpy(&to, &from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
+    return to;
 }
 
-template<typename T, typename std::enable_if<std::is_integral<T>::value>::type*>
-T CPUState::spr(SPRegisterID id) const
-{
-    CPUState* cpu = const_cast<CPUState*>(this);
-    return static_cast<T>(cpu->spr(id));
-}
-
-template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type*>
+template<typename T>
 T CPUState::spr(SPRegisterID id) const
 {
     CPUState* cpu = const_cast<CPUState*>(this);
-    return reinterpret_cast<T>(cpu->spr(id));
+    auto& from = cpu->spr(id);
+    typename std::remove_const<T>::type to { };
+    std::memcpy(&to, &from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
+    return to;
 }
 
 template<typename T>
@@ -205,25 +191,31 @@ public:
     using FPRegisterID = MacroAssembler::FPRegisterID;
 
     Context(State* state)
-        : m_state(state)
-        , arg(state->arg)
-        , cpu(state->cpu)
+        : cpu(state->cpu)
+        , m_state(state)
     { }
 
-    uintptr_t& gpr(RegisterID id) { return m_state->cpu.gpr(id); }
-    uintptr_t& spr(SPRegisterID id) { return m_state->cpu.spr(id); }
-    double& fpr(FPRegisterID id) { return m_state->cpu.fpr(id); }
-    const char* gprName(RegisterID id) { return m_state->cpu.gprName(id); }
-    const char* sprName(SPRegisterID id) { return m_state->cpu.sprName(id); }
-    const char* fprName(FPRegisterID id) { return m_state->cpu.fprName(id); }
+    template<typename T>
+    T arg() { return reinterpret_cast<T>(m_state->arg); }
+
+    uintptr_t& gpr(RegisterID id) { return cpu.gpr(id); }
+    uintptr_t& spr(SPRegisterID id) { return cpu.spr(id); }
+    double& fpr(FPRegisterID id) { return cpu.fpr(id); }
+    const char* gprName(RegisterID id) { return cpu.gprName(id); }
+    const char* sprName(SPRegisterID id) { return cpu.sprName(id); }
+    const char* fprName(FPRegisterID id) { return cpu.fprName(id); }
 
-    void*& pc() { return m_state->cpu.pc(); }
-    void*& fp() { return m_state->cpu.fp(); }
-    void*& sp() { return m_state->cpu.sp(); }
+    template<typename T> T gpr(RegisterID id) const { return cpu.gpr<T>(id); }
+    template<typename T> T spr(SPRegisterID id) const { return cpu.spr<T>(id); }
+    template<typename T> T fpr(FPRegisterID id) const { return cpu.fpr<T>(id); }
 
-    template<typename T> T pc() { return m_state->cpu.pc<T>(); }
-    template<typename T> T fp() { return m_state->cpu.fp<T>(); }
-    template<typename T> T sp() { return m_state->cpu.sp<T>(); }
+    void*& pc() { return cpu.pc(); }
+    void*& fp() { return cpu.fp(); }
+    void*& sp() { return cpu.sp(); }
+
+    template<typename T> T pc() { return cpu.pc<T>(); }
+    template<typename T> T fp() { return cpu.fp<T>(); }
+    template<typename T> T sp() { return cpu.sp<T>(); }
 
     Stack& stack()
     {
@@ -234,13 +226,10 @@ public:
     bool hasWritesToFlush() { return m_stack.hasWritesToFlush(); }
     Stack* releaseStack() { return new Stack(WTFMove(m_stack)); }
 
-private:
-    State* m_state;
-public:
-    void* arg;
     CPUState& cpu;
 
 private:
+    State* m_state;
     Stack m_stack;
 
     friend JS_EXPORT_PRIVATE void* probeStateForContext(Context&); // Not for general use. This should only be for writing tests.
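
With the enable_if overloads collapsed into single templated accessors, probe
callbacks read machine state through memcpy-based typed views, and arg() gives
a typed view of the probe's void* argument. A minimal sketch of the idiom (the
register choice here is illustrative, not part of this patch):

    static void inspectState(Probe::Context& context)
    {
        VM& vm = *context.arg<VM*>();               // typed view of the probe's void* arg
        ExecState* exec = context.fp<ExecState*>(); // frame pointer, reinterpreted per T
        // gpr<T>() memcpy's the register bits into T, avoiding strict aliasing issues.
        EncodedJSValue encoded = context.gpr<EncodedJSValue>(GPRInfo::regT0);
        dataLogLn("vm = ", RawPointer(&vm), ", exec = ", RawPointer(exec), ", regT0 = ", JSValue::decode(encoded));
    }
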
diff --git a/Source/JavaScriptCore/assembler/ProbeFrame.h b/Source/JavaScriptCore/assembler/ProbeFrame.h
new file mode 100644
index 0000000..cab368d
--- /dev/null
@@ -0,0 +1,94 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#if ENABLE(MASM_PROBE)
+
+#include "CallFrame.h"
+#include "ProbeStack.h"
+
+namespace JSC {
+namespace Probe {
+
+class Frame {
+public:
+    Frame(void* frameBase, Stack& stack)
+        : m_frameBase { reinterpret_cast<uint8_t*>(frameBase) }
+        , m_stack { stack }
+    { }
+
+    template<typename T = JSValue>
+    T argument(int argument)
+    {
+        return get<T>(CallFrame::argumentOffset(argument) * sizeof(Register));
+    }
+    template<typename T = JSValue>
+    T operand(int operand)
+    {
+        return get<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register));
+    }
+    template<typename T = JSValue>
+    T operand(int operand, ptrdiff_t offset)
+    {
+        return get<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register) + offset);
+    }
+
+    template<typename T>
+    void setArgument(int argument, T value)
+    {
+        return set<T>(CallFrame::argumentOffset(argument) * sizeof(Register), value);
+    }
+    template<typename T>
+    void setOperand(int operand, T value)
+    {
+        set<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register), value);
+    }
+    template<typename T>
+    void setOperand(int operand, ptrdiff_t offset, T value)
+    {
+        set<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register) + offset, value);
+    }
+
+    template<typename T = JSValue>
+    T get(ptrdiff_t offset)
+    {
+        return m_stack.get<T>(m_frameBase + offset);
+    }
+    template<typename T>
+    void set(ptrdiff_t offset, T value)
+    {
+        m_stack.set<T>(m_frameBase + offset, value);
+    }
+
+private:
+    uint8_t* m_frameBase;
+    Stack& m_stack;
+};
+
+} // namespace Probe
+} // namespace JSC
+
+#endif // ENABLE(MASM_PROBE)
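
Probe::Frame is a convenience wrapper pairing a frame base pointer with the
buffered Probe::Stack, so exit code can address call-frame slots by argument or
operand index instead of raw byte offsets. A hypothetical use (operand stands
for a made-up bytecode operand; writes stay buffered until the stack flushes):

    Probe::Frame frame(context.fp(), context.stack());
    JSValue argument0 = frame.argument(0);    // reads the slot through the buffered stack pages
    frame.setOperand(operand, jsNumber(42));  // buffered write; written back when the probe flushes
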
index 37484b3..da7b239 100644
@@ -35,6 +35,7 @@ namespace Probe {
 
 Page::Page(void* baseAddress)
     : m_baseLogicalAddress(baseAddress)
+    , m_physicalAddressOffset(reinterpret_cast<uint8_t*>(&m_buffer) - reinterpret_cast<uint8_t*>(baseAddress))
 {
     memcpy(&m_buffer, baseAddress, s_pageSize);
 }
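
Caching m_physicalAddressOffset at construction turns the old mask-and-add
lookup into a single addition: for any logical address L inside the page,
physical(L) = L + (&m_buffer - baseAddress). A worked example with made-up
addresses (s_pageSize is 1024):

    // baseAddress             = 0x10000000 (page-aligned)
    // &m_buffer               = 0x20000400
    // m_physicalAddressOffset = 0x20000400 - 0x10000000 = 0x10000400
    // logical 0x10000028  ->  physical 0x10000028 + 0x10000400 = 0x20000428
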
index 593da33..8ff8277 100644
@@ -56,14 +56,28 @@ public:
     template<typename T>
     T get(void* logicalAddress)
     {
-        return *physicalAddressFor<T*>(logicalAddress);
+        void* from = physicalAddressFor(logicalAddress);
+        typename std::remove_const<T>::type to { };
+        std::memcpy(&to, from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
+        return to;
+    }
+    template<typename T>
+    T get(void* logicalBaseAddress, ptrdiff_t offset)
+    {
+        return get<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset);
     }
 
     template<typename T>
     void set(void* logicalAddress, T value)
     {
         m_dirtyBits |= dirtyBitFor(logicalAddress);
-        *physicalAddressFor<T*>(logicalAddress) = value;
+        void* to = physicalAddressFor(logicalAddress);
+        std::memcpy(to, &value, sizeof(T)); // Use std::memcpy to avoid strict aliasing issues.
+    }
+    template<typename T>
+    void set(void* logicalBaseAddress, ptrdiff_t offset, T value)
+    {
+        set<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset, value);
     }
 
     bool hasWritesToFlush() const { return !!m_dirtyBits; }
@@ -80,18 +94,16 @@ private:
         return static_cast<uintptr_t>(1) << (offset >> s_chunkSizeShift);
     }
 
-    template<typename T, typename = typename std::enable_if<std::is_pointer<T>::value>::type>
-    T physicalAddressFor(void* logicalAddress)
+    void* physicalAddressFor(void* logicalAddress)
     {
-        uintptr_t offset = reinterpret_cast<uintptr_t>(logicalAddress) & s_pageMask;
-        void* physicalAddress = reinterpret_cast<uint8_t*>(&m_buffer) + offset;
-        return reinterpret_cast<T>(physicalAddress);
+        return reinterpret_cast<uint8_t*>(logicalAddress) + m_physicalAddressOffset;
     }
 
     void flushWrites();
 
     void* m_baseLogicalAddress { nullptr };
     uintptr_t m_dirtyBits { 0 };
+    ptrdiff_t m_physicalAddressOffset;
 
     static constexpr size_t s_pageSize = 1024;
     static constexpr uintptr_t s_pageMask = s_pageSize - 1;
@@ -120,40 +132,39 @@ public:
     { }
     Stack(Stack&& other);
 
-    void* lowWatermark() { return m_lowWatermark; }
+    void* lowWatermark()
+    {
+        // We use the chunkAddress for the low watermark because we'll be doing write backs
+        // to the stack in increments of chunks. Hence, we'll treat the lowest address of
+        // the chunk as the low watermark of any given set address.
+        return Page::chunkAddressFor(m_lowWatermark);
+    }
 
     template<typename T>
-    typename std::enable_if<!std::is_same<double, typename std::remove_cv<T>::type>::value, T>::type get(void* address)
+    T get(void* address)
     {
         Page* page = pageFor(address);
         return page->get<T>(address);
     }
+    template<typename T>
+    T get(void* logicalBaseAddress, ptrdiff_t offset)
+    {
+        return get<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset);
+    }
 
-    template<typename T, typename = typename std::enable_if<!std::is_same<double, typename std::remove_cv<T>::type>::value>::type>
+    template<typename T>
     void set(void* address, T value)
     {
         Page* page = pageFor(address);
         page->set<T>(address, value);
 
-        // We use the chunkAddress for the low watermark because we'll be doing write backs
-        // to the stack in increments of chunks. Hence, we'll treat the lowest address of
-        // the chunk as the low watermark of any given set address.
-        void* chunkAddress = Page::chunkAddressFor(address);
-        if (chunkAddress < m_lowWatermark)
-            m_lowWatermark = chunkAddress;
+        if (address < m_lowWatermark)
+            m_lowWatermark = address;
     }
-
     template<typename T>
-    typename std::enable_if<std::is_same<double, typename std::remove_cv<T>::type>::value, T>::type get(void* address)
-    {
-        Page* page = pageFor(address);
-        return bitwise_cast<double>(page->get<uint64_t>(address));
-    }
-
-    template<typename T, typename = typename std::enable_if<std::is_same<double, typename std::remove_cv<T>::type>::value>::type>
-    void set(void* address, double value)
+    void set(void* logicalBaseAddress, ptrdiff_t offset, T value)
     {
-        set<uint64_t>(address, bitwise_cast<uint64_t>(value));
+        set<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset, value);
     }
 
     JS_EXPORT_PRIVATE Page* ensurePageFor(void* address);
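
All probe-side stack access now funnels through these typed get/set helpers;
writes are buffered per page and flushed when the probe returns, and the low
watermark is rounded down to a chunk boundary only when queried, since
write-back happens in chunk increments. The callee-save spill in DFGOSRExit.cpp
below uses exactly this base+offset form:

    Probe::Stack& stack = context.stack();
    // Buffered write of a register value into a frame slot; flushed back in chunks later.
    stack.set(context.fp(), entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
    // Typed read back through the same page buffer.
    uintptr_t saved = stack.get<uintptr_t>(context.fp(), entry.offset());
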
index 1fa7c79..f36505a 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -32,6 +32,8 @@
 namespace JSC {
 
 #if ENABLE(JIT)
+// FIXME: This is being supplanted by observeResult(). Remove this once
+// https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
 void ArithProfile::emitObserveResult(CCallHelpers& jit, JSValueRegs regs, TagRegistersMode mode)
 {
     if (!shouldEmitSetDouble() && !shouldEmitSetNonNumber())
index 40fad1b..6213e79 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -211,6 +211,8 @@ public:
 #if ENABLE(JIT)    
     // Sets (Int32Overflow | Int52Overflow | NonNegZeroDouble | NegZeroDouble) if it sees a
     // double. Sets NonNumber if it sees a non-number.
+    // FIXME: This is being supplanted by observeResult(). Remove this once
+    // https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
     void emitObserveResult(CCallHelpers&, JSValueRegs, TagRegistersMode = HaveTagRegisters);
     
     // Sets (Int32Overflow | Int52Overflow | NonNegZeroDouble | NegZeroDouble).
index 68c11a5..c10c5e2 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -218,6 +218,7 @@ public:
     void computeUpdatedPrediction(const ConcurrentJSLocker&, CodeBlock*);
     void computeUpdatedPrediction(const ConcurrentJSLocker&, CodeBlock*, Structure* lastSeenStructure);
     
+    void observeArrayMode(ArrayModes mode) { m_observedArrayModes |= mode; }
     ArrayModes observedArrayModes(const ConcurrentJSLocker&) const { return m_observedArrayModes; }
     bool mayInterceptIndexedAccesses(const ConcurrentJSLocker&) const { return m_mayInterceptIndexedAccesses; }
     
index 8fe20a4..3702ab3 100644
@@ -2315,6 +2315,53 @@ bool CodeBlock::checkIfOptimizationThresholdReached()
     return m_jitExecuteCounter.checkIfThresholdCrossedAndSet(this);
 }
 
+auto CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize(DFG::OSRExitState& exitState) -> OptimizeAction
+{
+    DFG::OSRExitBase& exit = exitState.exit;
+    if (!exitKindMayJettison(exit.m_kind)) {
+        // FIXME: We may want to notice that we're frequently exiting
+        // at an op_catch that we didn't compile an entrypoint for, and
+        // then trigger a reoptimization of this CodeBlock:
+        // https://bugs.webkit.org/show_bug.cgi?id=175842
+        return OptimizeAction::None;
+    }
+
+    exit.m_count++;
+    m_osrExitCounter++;
+
+    CodeBlock* baselineCodeBlock = exitState.baselineCodeBlock;
+    ASSERT(baselineCodeBlock == baselineAlternative());
+    if (UNLIKELY(baselineCodeBlock->jitExecuteCounter().hasCrossedThreshold()))
+        return OptimizeAction::ReoptimizeNow;
+
+    // We want to figure out if there's a possibility that we're in a loop. For the outermost
+    // code block in the inline stack, we handle this appropriately by having the loop OSR trigger
+    // check the exit count of the replacement of the CodeBlock from which we are OSRing. The
+    // problem is the inlined functions, which might also have loops, but whose baseline versions
+    // don't know where to look for the exit count. Figure out if those loops are severe enough
+    // that we had tried to OSR enter. If so, then we should use the loop reoptimization trigger.
+    // Otherwise, we should use the normal reoptimization trigger.
+
+    bool didTryToEnterInLoop = false;
+    for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
+        if (inlineCallFrame->baselineCodeBlock->ownerScriptExecutable()->didTryToEnterInLoop()) {
+            didTryToEnterInLoop = true;
+            break;
+        }
+    }
+
+    uint32_t exitCountThreshold = didTryToEnterInLoop
+        ? exitCountThresholdForReoptimizationFromLoop()
+        : exitCountThresholdForReoptimization();
+
+    if (m_osrExitCounter > exitCountThreshold)
+        return OptimizeAction::ReoptimizeNow;
+
+    // Too few fails. Adjust the execution counter such that the target is to only optimize after a while.
+    baselineCodeBlock->m_jitExecuteCounter.setNewThresholdForOSRExit(exitState.activeThreshold, exitState.memoryUsageAdjustedThreshold);
+    return OptimizeAction::None;
+}
+
 void CodeBlock::optimizeNextInvocation()
 {
     if (Options::verboseOSR())
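
This hoists the reoptimization decision out of JIT-emitted counting code and
into C++ run under the exit probe. A sketch of the expected call site in the
new exit path (triggerReoptimizationNow() is the existing helper from
DFGOSRExitCompilerCommon; the exact call shape here is an assumption):

    if (UNLIKELY(codeBlock->updateOSRExitCounterAndCheckIfNeedToReoptimize(exitState)
            == CodeBlock::OptimizeAction::ReoptimizeNow))
        triggerReoptimizationNow(baselineCodeBlock, &exit);
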
index eb04699..65a9613 100644
 
 namespace JSC {
 
+namespace DFG {
+struct OSRExitState;
+} // namespace DFG
+
 class BytecodeLivenessAnalysis;
 class CodeBlockSet;
 class ExecState;
@@ -762,8 +766,10 @@ public:
 
     void countOSRExit() { m_osrExitCounter++; }
 
-    uint32_t* addressOfOSRExitCounter() { return &m_osrExitCounter; }
+    enum class OptimizeAction { None, ReoptimizeNow };
+    OptimizeAction updateOSRExitCounterAndCheckIfNeedToReoptimize(DFG::OSRExitState&);
 
+    // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
     static ptrdiff_t offsetOfOSRExitCounter() { return OBJECT_OFFSETOF(CodeBlock, m_osrExitCounter); }
 
     uint32_t adjustedExitCountThreshold(uint32_t desiredThreshold);
index f78a912..c971f0a 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -41,6 +41,7 @@ enum CountingVariant {
 double applyMemoryUsageHeuristics(int32_t value, CodeBlock*);
 int32_t applyMemoryUsageHeuristicsAndConvertToInt(int32_t value, CodeBlock*);
 
+// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 inline int32_t formattedTotalExecutionCount(float value)
 {
     union {
@@ -57,11 +58,19 @@ public:
     ExecutionCounter();
     void forceSlowPathConcurrently(); // If you use this, checkIfThresholdCrossedAndSet() may still return false.
     bool checkIfThresholdCrossedAndSet(CodeBlock*);
+    bool hasCrossedThreshold() const { return m_counter >= 0; }
     void setNewThreshold(int32_t threshold, CodeBlock*);
     void deferIndefinitely();
     double count() const { return static_cast<double>(m_totalCount) + m_counter; }
     void dump(PrintStream&) const;
     
+    void setNewThresholdForOSRExit(uint32_t activeThreshold, double memoryUsageAdjustedThreshold)
+    {
+        m_activeThreshold = activeThreshold;
+        m_counter = static_cast<int32_t>(-memoryUsageAdjustedThreshold);
+        m_totalCount = memoryUsageAdjustedThreshold;
+    }
+
     static int32_t maximumExecutionCountsBetweenCheckpoints()
     {
         switch (countingVariant) {
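
The counter counts up from a negative value toward zero, so hasCrossedThreshold()
is a plain sign test and setNewThresholdForOSRExit() simply re-arms it from the
exit ramp. A worked example with an adjusted threshold of 1000:

    counter.setNewThresholdForOSRExit(1000, 1000.0); // m_counter = -1000, m_totalCount = 1000
    // after 999 more executions:  m_counter == -1  ->  hasCrossedThreshold() == false
    // after 1000 more executions: m_counter == 0   ->  hasCrossedThreshold() == true,
    // and updateOSRExitCounterAndCheckIfNeedToReoptimize() returns ReoptimizeNow
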
index f479e5f..acd3078 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2013, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -46,6 +46,8 @@ MethodOfGettingAValueProfile MethodOfGettingAValueProfile::fromLazyOperand(
     return result;
 }
 
+// FIXME: This is being supplanted by reportValue(). Remove this once
+// https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
 void MethodOfGettingAValueProfile::emitReportValue(CCallHelpers& jit, JSValueRegs regs) const
 {
     switch (m_kind) {
@@ -74,6 +76,34 @@ void MethodOfGettingAValueProfile::emitReportValue(CCallHelpers& jit, JSValueReg
     RELEASE_ASSERT_NOT_REACHED();
 }
 
+void MethodOfGettingAValueProfile::reportValue(JSValue value)
+{
+    switch (m_kind) {
+    case None:
+        return;
+
+    case Ready:
+        *u.profile->specFailBucket(0) = JSValue::encode(value);
+        return;
+
+    case LazyOperand: {
+        LazyOperandValueProfileKey key(u.lazyOperand.bytecodeOffset, VirtualRegister(u.lazyOperand.operand));
+
+        ConcurrentJSLocker locker(u.lazyOperand.codeBlock->m_lock);
+        LazyOperandValueProfile* profile =
+            u.lazyOperand.codeBlock->lazyOperandValueProfiles().add(locker, key);
+        *profile->specFailBucket(0) = JSValue::encode(value);
+        return;
+    }
+
+    case ArithProfileReady: {
+        u.arithProfile->observeResult(value);
+        return;
+    } }
+
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
 } // namespace JSC
 
 #endif // ENABLE(DFG_JIT)
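
reportValue() is the C++ analog of emitReportValue(): rather than emitting JIT
code that stores into the profile bucket, the exit probe stores the exiting
value directly. Presumably the exit path drives it roughly like this (a sketch;
jsValueFor() is the recovery helper added in DFGOSRExit.cpp below):

    JSValue profiledValue = jsValueFor(context.cpu, exit.m_jsValueSource);
    if (exit.m_valueProfile)
        exit.m_valueProfile.reportValue(profiledValue);
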
index 6ed743e..f475dad 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -70,9 +70,12 @@ public:
         CodeBlock*, const LazyOperandValueProfileKey&);
     
     explicit operator bool() const { return m_kind != None; }
-    
+
+    // FIXME: emitReportValue is being supplanted by reportValue(). Remove this once
+    // https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
     void emitReportValue(CCallHelpers&, JSValueRegs) const;
-    
+    void reportValue(JSValue);
+
 private:
     enum Kind {
         None,
index 2149e6c..7b6d4d6 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011-2014, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -89,7 +89,7 @@ static CompilationResult compileImpl(
     
     // Make sure that any stubs that the DFG is going to use are initialized. We want to
     // make sure that all JIT code generation does finalization on the main thread.
-    vm.getCTIStub(osrExitGenerationThunkGenerator);
+    vm.getCTIStub(osrExitThunkGenerator);
     vm.getCTIStub(throwExceptionFromCallSlowPathGenerator);
     vm.getCTIStub(linkCallThunkGenerator);
     vm.getCTIStub(linkPolymorphicCallThunkGenerator);
index 67c33f0..c02cd0d 100644
@@ -225,18 +225,6 @@ void JITCode::validateReferences(const TrackedReferences& trackedReferences)
     minifiedDFG.validateReferences(trackedReferences);
 }
 
-std::optional<CodeOrigin> JITCode::findPC(CodeBlock*, void* pc)
-{
-    for (OSRExit& exit : osrExit) {
-        if (ExecutableMemoryHandle* handle = exit.m_code.executableMemory()) {
-            if (handle->start() <= pc && pc < handle->end())
-                return std::optional<CodeOrigin>(exit.m_codeOriginForExitProfile);
-        }
-    }
-
-    return std::nullopt;
-}
-
 void JITCode::finalizeOSREntrypoints()
 {
     auto comparator = [] (const auto& a, const auto& b) {
index 5507a8a..4143461 100644
@@ -126,8 +126,6 @@ public:
 
     static ptrdiff_t commonDataOffset() { return OBJECT_OFFSETOF(JITCode, common); }
 
-    std::optional<CodeOrigin> findPC(CodeBlock*, void* pc) override;
-    
 private:
     friend class JITCompiler; // Allow JITCompiler to call setCodeRef().
 
index d19ddb7..7a31925 100644
@@ -85,8 +85,9 @@ void JITCompiler::linkOSRExits()
         }
     }
     
+    MacroAssemblerCodeRef osrExitThunk = vm()->getCTIStub(osrExitThunkGenerator);
+    CodeLocationLabel osrExitThunkLabel = CodeLocationLabel(osrExitThunk.code());
     for (unsigned i = 0; i < m_jitCode->osrExit.size(); ++i) {
-        OSRExit& exit = m_jitCode->osrExit[i];
         OSRExitCompilationInfo& info = m_exitCompilationInfo[i];
         JumpList& failureJumps = info.m_failureJumps;
         if (!failureJumps.empty())
@@ -96,7 +97,10 @@ void JITCompiler::linkOSRExits()
 
         jitAssertHasValidCallFrame();
         store32(TrustedImm32(i), &vm()->osrExitIndex);
-        exit.setPatchableCodeOffset(patchableJump());
+        Jump target = jump();
+        addLinkTask([target, osrExitThunkLabel] (LinkBuffer& linkBuffer) {
+            linkBuffer.link(target, osrExitThunkLabel);
+        });
     }
 }
 
@@ -303,13 +307,8 @@ void JITCompiler::link(LinkBuffer& linkBuffer)
             linkBuffer.locationOfNearCall(record.call));
     }
     
-    MacroAssemblerCodeRef osrExitThunk = vm()->getCTIStub(osrExitGenerationThunkGenerator);
-    CodeLocationLabel target = CodeLocationLabel(osrExitThunk.code());
     for (unsigned i = 0; i < m_jitCode->osrExit.size(); ++i) {
-        OSRExit& exit = m_jitCode->osrExit[i];
         OSRExitCompilationInfo& info = m_exitCompilationInfo[i];
-        linkBuffer.link(exit.getPatchableCodeOffsetAsJump(), target);
-        exit.correctJump(linkBuffer);
         if (info.m_replacementSource.isSet()) {
             m_jitCode->common.jumpReplacements.append(JumpReplacement(
                 linkBuffer.locationOf(info.m_replacementSource),
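
Exit sites no longer carry patchable jumps that get repatched to lazily
compiled per-exit stubs; each site stores its index into vm->osrExitIndex and
jumps to a single shared thunk. That thunk (see DFGThunks.cpp) presumably
reduces to one probe call, with all the exit work done in C++ by
OSRExit::executeOSRExit:

    MacroAssemblerCodeRef osrExitThunkGenerator(VM* vm)
    {
        MacroAssembler jit;
        jit.probe(OSRExit::executeOSRExit, vm); // run the whole exit ramp as a probe
        LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
        return FINALIZE_CODE(patchBuffer, ("DFG OSR exit thunk"));
    }
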
index 3b73a12..06f8234 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 #if ENABLE(DFG_JIT)
 
 #include "AssemblyHelpers.h"
+#include "ClonedArguments.h"
 #include "DFGGraph.h"
 #include "DFGMayExit.h"
-#include "DFGOSRExitCompilerCommon.h"
 #include "DFGOSRExitPreparation.h"
 #include "DFGOperations.h"
 #include "DFGSpeculativeJIT.h"
-#include "FrameTracers.h"
+#include "DirectArguments.h"
+#include "InlineCallFrame.h"
 #include "JSCInlines.h"
+#include "JSCJSValue.h"
 #include "OperandsInlines.h"
+#include "ProbeContext.h"
+#include "ProbeFrame.h"
 
 namespace JSC { namespace DFG {
 
-OSRExit::OSRExit(ExitKind kind, JSValueSource jsValueSource, MethodOfGettingAValueProfile valueProfile, SpeculativeJIT* jit, unsigned streamIndex, unsigned recoveryIndex)
-    : OSRExitBase(kind, jit->m_origin.forExit, jit->m_origin.semantic, jit->m_origin.wasHoisted)
-    , m_jsValueSource(jsValueSource)
-    , m_valueProfile(valueProfile)
-    , m_recoveryIndex(recoveryIndex)
-    , m_streamIndex(streamIndex)
+using CPUState = Probe::CPUState;
+using Context = Probe::Context;
+using Frame = Probe::Frame;
+
+static void reifyInlinedCallFrames(Probe::Context&, CodeBlock* baselineCodeBlock, const OSRExitBase&);
+static void adjustAndJumpToTarget(Probe::Context&, VM&, CodeBlock*, CodeBlock* baselineCodeBlock, OSRExit&);
+static void printOSRExit(Context&, uint32_t osrExitIndex, const OSRExit&);
+
+static JSValue jsValueFor(CPUState& cpu, JSValueSource source)
 {
-    bool canExit = jit->m_origin.exitOK;
-    if (!canExit && jit->m_currentNode) {
-        ExitMode exitMode = mayExit(jit->m_jit.graph(), jit->m_currentNode);
-        canExit = exitMode == ExitMode::Exits || exitMode == ExitMode::ExitsForExceptions;
+    if (source.isAddress()) {
+        JSValue result;
+        std::memcpy(&result, cpu.gpr<uint8_t*>(source.base()) + source.offset(), sizeof(JSValue));
+        return result;
     }
-    DFG_ASSERT(jit->m_jit.graph(), jit->m_currentNode, canExit);
+#if USE(JSVALUE64)
+    return JSValue::decode(cpu.gpr<EncodedJSValue>(source.gpr()));
+#else
+    if (source.hasKnownTag())
+        return JSValue(source.tag(), cpu.gpr<int32_t>(source.payloadGPR()));
+    return JSValue(cpu.gpr<int32_t>(source.tagGPR()), cpu.gpr<int32_t>(source.payloadGPR()));
+#endif
 }
 
-void OSRExit::setPatchableCodeOffset(MacroAssembler::PatchableJump check)
+#if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+
+static_assert(is64Bit(), "we only support callee save registers on 64-bit");
+
+// Based on AssemblyHelpers::emitRestoreCalleeSavesFor().
+static void restoreCalleeSavesFor(Context& context, CodeBlock* codeBlock)
 {
-    m_patchableCodeOffset = check.m_jump.m_label.m_offset;
+    ASSERT(codeBlock);
+
+    RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
+    RegisterSet dontRestoreRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
+    unsigned registerCount = calleeSaves->size();
+
+    uintptr_t* physicalStackFrame = context.fp<uintptr_t*>();
+    for (unsigned i = 0; i < registerCount; i++) {
+        RegisterAtOffset entry = calleeSaves->at(i);
+        if (dontRestoreRegisters.get(entry.reg()))
+            continue;
+        // The callee saved values come from the original stack, not the recovered stack.
+        // Hence, we read the values directly from the physical stack memory instead of
+        // going through context.stack().
+        ASSERT(!(entry.offset() % sizeof(uintptr_t)));
+        context.gpr(entry.reg().gpr()) = physicalStackFrame[entry.offset() / sizeof(uintptr_t)];
+    }
 }
 
-MacroAssembler::Jump OSRExit::getPatchableCodeOffsetAsJump() const
+// Based on AssemblyHelpers::emitSaveCalleeSavesFor().
+static void saveCalleeSavesFor(Context& context, CodeBlock* codeBlock)
 {
-    return MacroAssembler::Jump(AssemblerLabel(m_patchableCodeOffset));
+    auto& stack = context.stack();
+    ASSERT(codeBlock);
+
+    RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
+    RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
+    unsigned registerCount = calleeSaves->size();
+
+    for (unsigned i = 0; i < registerCount; i++) {
+        RegisterAtOffset entry = calleeSaves->at(i);
+        if (dontSaveRegisters.get(entry.reg()))
+            continue;
+        stack.set(context.fp(), entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
+    }
 }
 
-CodeLocationJump OSRExit::codeLocationForRepatch(CodeBlock* dfgCodeBlock) const
+// Based on AssemblyHelpers::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer().
+static void restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(Context& context)
 {
-    return CodeLocationJump(dfgCodeBlock->jitCode()->dataAddressAtOffset(m_patchableCodeOffset));
+    VM& vm = *context.arg<VM*>();
+
+    RegisterAtOffsetList* allCalleeSaves = VM::getAllCalleeSaveRegisterOffsets();
+    RegisterSet dontRestoreRegisters = RegisterSet::stackRegisters();
+    unsigned registerCount = allCalleeSaves->size();
+
+    VMEntryRecord* entryRecord = vmEntryRecord(vm.topVMEntryFrame);
+    uintptr_t* calleeSaveBuffer = reinterpret_cast<uintptr_t*>(entryRecord->calleeSaveRegistersBuffer);
+
+    // Restore all callee saves.
+    for (unsigned i = 0; i < registerCount; i++) {
+        RegisterAtOffset entry = allCalleeSaves->at(i);
+        if (dontRestoreRegisters.get(entry.reg()))
+            continue;
+        size_t uintptrOffset = entry.offset() / sizeof(uintptr_t);
+        if (entry.reg().isGPR())
+            context.gpr(entry.reg().gpr()) = calleeSaveBuffer[uintptrOffset];
+        else
+            context.fpr(entry.reg().fpr()) = bitwise_cast<double>(calleeSaveBuffer[uintptrOffset]);
+    }
+}
+
+// Based on AssemblyHelpers::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer().
+static void copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(Context& context)
+{
+    VM& vm = *context.arg<VM*>();
+    auto& stack = context.stack();
+
+    VMEntryRecord* entryRecord = vmEntryRecord(vm.topVMEntryFrame);
+    void* calleeSaveBuffer = entryRecord->calleeSaveRegistersBuffer;
+
+    RegisterAtOffsetList* allCalleeSaves = VM::getAllCalleeSaveRegisterOffsets();
+    RegisterSet dontCopyRegisters = RegisterSet::stackRegisters();
+    unsigned registerCount = allCalleeSaves->size();
+
+    for (unsigned i = 0; i < registerCount; i++) {
+        RegisterAtOffset entry = allCalleeSaves->at(i);
+        if (dontCopyRegisters.get(entry.reg()))
+            continue;
+        if (entry.reg().isGPR())
+            stack.set(calleeSaveBuffer, entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
+        else
+            stack.set(calleeSaveBuffer, entry.offset(), context.fpr<uintptr_t>(entry.reg().fpr()));
+    }
+}
+
+// Based on AssemblyHelpers::emitSaveOrCopyCalleeSavesFor().
+static void saveOrCopyCalleeSavesFor(Context& context, CodeBlock* codeBlock, VirtualRegister offsetVirtualRegister, bool wasCalledViaTailCall)
+{
+    Frame frame(context.fp(), context.stack());
+    ASSERT(codeBlock);
+
+    RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
+    RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
+    unsigned registerCount = calleeSaves->size();
+
+    RegisterSet baselineCalleeSaves = RegisterSet::llintBaselineCalleeSaveRegisters();
+
+    for (unsigned i = 0; i < registerCount; i++) {
+        RegisterAtOffset entry = calleeSaves->at(i);
+        if (dontSaveRegisters.get(entry.reg()))
+            continue;
+
+        uintptr_t savedRegisterValue;
+
+        if (wasCalledViaTailCall && baselineCalleeSaves.get(entry.reg()))
+            savedRegisterValue = frame.get<uintptr_t>(entry.offset());
+        else
+            savedRegisterValue = context.gpr(entry.reg().gpr());
+
+        frame.set(offsetVirtualRegister.offsetInBytes() + entry.offset(), savedRegisterValue);
+    }
+}
+#else // not NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+
+static void restoreCalleeSavesFor(Context&, CodeBlock*) { }
+static void saveCalleeSavesFor(Context&, CodeBlock*) { }
+static void restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(Context&) { }
+static void copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(Context&) { }
+static void saveOrCopyCalleeSavesFor(Context&, CodeBlock*, VirtualRegister, bool) { }
+
+#endif // NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+
+static JSCell* createDirectArgumentsDuringExit(Context& context, CodeBlock* codeBlock, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
+{
+    VM& vm = *context.arg<VM*>();
+
+    ASSERT(vm.heap.isDeferred());
+
+    if (inlineCallFrame)
+        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
+
+    unsigned length = argumentCount - 1;
+    unsigned capacity = std::max(length, static_cast<unsigned>(codeBlock->numParameters() - 1));
+    DirectArguments* result = DirectArguments::create(
+        vm, codeBlock->globalObject()->directArgumentsStructure(), length, capacity);
+
+    result->callee().set(vm, result, callee);
+
+    void* frameBase = context.fp<Register*>() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0);
+    Frame frame(frameBase, context.stack());
+    for (unsigned i = length; i--;)
+        result->setIndexQuickly(vm, i, frame.argument(i));
+
+    return result;
+}
+
+static JSCell* createClonedArgumentsDuringExit(Context& context, CodeBlock* codeBlock, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
+{
+    VM& vm = *context.arg<VM*>();
+    ExecState* exec = context.fp<ExecState*>();
+
+    ASSERT(vm.heap.isDeferred());
+
+    if (inlineCallFrame)
+        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
+
+    unsigned length = argumentCount - 1;
+    ClonedArguments* result = ClonedArguments::createEmpty(
+        vm, codeBlock->globalObject()->clonedArgumentsStructure(), callee, length);
+
+    void* frameBase = context.fp<Register*>() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0);
+    Frame frame(frameBase, context.stack());
+    for (unsigned i = length; i--;)
+        result->putDirectIndex(exec, i, frame.argument(i));
+    return result;
 }
 
-void OSRExit::correctJump(LinkBuffer& linkBuffer)
+OSRExit::OSRExit(ExitKind kind, JSValueSource jsValueSource, MethodOfGettingAValueProfile valueProfile, SpeculativeJIT* jit, unsigned streamIndex, unsigned recoveryIndex)
+    : OSRExitBase(kind, jit->m_origin.forExit, jit->m_origin.semantic, jit->m_origin.wasHoisted)
+    , m_jsValueSource(jsValueSource)
+    , m_valueProfile(valueProfile)
+    , m_recoveryIndex(recoveryIndex)
+    , m_streamIndex(streamIndex)
 {
-    MacroAssembler::Label label;
-    label.m_label.m_offset = m_patchableCodeOffset;
-    m_patchableCodeOffset = linkBuffer.offsetOf(label);
+    bool canExit = jit->m_origin.exitOK;
+    if (!canExit && jit->m_currentNode) {
+        ExitMode exitMode = mayExit(jit->m_jit.graph(), jit->m_currentNode);
+        canExit = exitMode == ExitMode::Exits || exitMode == ExitMode::ExitsForExceptions;
+    }
+    DFG_ASSERT(jit->m_jit.graph(), jit->m_currentNode, canExit);
 }
 
-void OSRExit::emitRestoreArguments(CCallHelpers& jit, const Operands<ValueRecovery>& operands)
+static void emitRestoreArguments(Context& context, CodeBlock* codeBlock, DFG::JITCode* dfgJITCode, const Operands<ValueRecovery>& operands)
 {
+    Frame frame(context.fp(), context.stack());
+
     HashMap<MinifiedID, int> alreadyAllocatedArguments; // Maps phantom arguments node ID to operand.
     for (size_t index = 0; index < operands.size(); ++index) {
         const ValueRecovery& recovery = operands[index];
@@ -92,14 +275,12 @@ void OSRExit::emitRestoreArguments(CCallHelpers& jit, const Operands<ValueRecove
         MinifiedID id = recovery.nodeID();
         auto iter = alreadyAllocatedArguments.find(id);
         if (iter != alreadyAllocatedArguments.end()) {
-            JSValueRegs regs = JSValueRegs::withTwoAvailableRegs(GPRInfo::regT0, GPRInfo::regT1);
-            jit.loadValue(CCallHelpers::addressFor(iter->value), regs);
-            jit.storeValue(regs, CCallHelpers::addressFor(operand));
+            frame.setOperand(operand, frame.operand(iter->value));
             continue;
         }
 
         InlineCallFrame* inlineCallFrame =
-            jit.codeBlock()->jitCode()->dfg()->minifiedDFG.at(id)->inlineCallFrame();
+            dfgJITCode->minifiedDFG.at(id)->inlineCallFrame();
 
         int stackOffset;
         if (inlineCallFrame)
@@ -107,53 +288,48 @@ void OSRExit::emitRestoreArguments(CCallHelpers& jit, const Operands<ValueRecove
         else
             stackOffset = 0;
 
-        if (!inlineCallFrame || inlineCallFrame->isClosureCall) {
-            jit.loadPtr(
-                AssemblyHelpers::addressFor(stackOffset + CallFrameSlot::callee),
-                GPRInfo::regT0);
-        } else {
-            jit.move(
-                AssemblyHelpers::TrustedImmPtr(inlineCallFrame->calleeRecovery.constant().asCell()),
-                GPRInfo::regT0);
-        }
+        JSFunction* callee;
+        if (!inlineCallFrame || inlineCallFrame->isClosureCall)
+            callee = jsCast<JSFunction*>(frame.operand(stackOffset + CallFrameSlot::callee).asCell());
+        else
+            callee = jsCast<JSFunction*>(inlineCallFrame->calleeRecovery.constant().asCell());
 
-        if (!inlineCallFrame || inlineCallFrame->isVarargs()) {
-            jit.load32(
-                AssemblyHelpers::payloadFor(stackOffset + CallFrameSlot::argumentCount),
-                GPRInfo::regT1);
-        } else {
-            jit.move(
-                AssemblyHelpers::TrustedImm32(inlineCallFrame->argumentCountIncludingThis),
-                GPRInfo::regT1);
-        }
+        int32_t argumentCount;
+        if (!inlineCallFrame || inlineCallFrame->isVarargs())
+            argumentCount = frame.operand<int32_t>(stackOffset + CallFrameSlot::argumentCount, PayloadOffset);
+        else
+            argumentCount = inlineCallFrame->argumentCountIncludingThis;
 
-        jit.setupArgumentsWithExecState(
-            AssemblyHelpers::TrustedImmPtr(inlineCallFrame), GPRInfo::regT0, GPRInfo::regT1);
+        JSCell* argumentsObject;
         switch (recovery.technique()) {
         case DirectArgumentsThatWereNotCreated:
-            jit.move(AssemblyHelpers::TrustedImmPtr(bitwise_cast<void*>(operationCreateDirectArgumentsDuringExit)), GPRInfo::nonArgGPR0);
+            argumentsObject = createDirectArgumentsDuringExit(context, codeBlock, inlineCallFrame, callee, argumentCount);
             break;
         case ClonedArgumentsThatWereNotCreated:
-            jit.move(AssemblyHelpers::TrustedImmPtr(bitwise_cast<void*>(operationCreateClonedArgumentsDuringExit)), GPRInfo::nonArgGPR0);
+            argumentsObject = createClonedArgumentsDuringExit(context, codeBlock, inlineCallFrame, callee, argumentCount);
             break;
         default:
             RELEASE_ASSERT_NOT_REACHED();
             break;
         }
-        jit.call(GPRInfo::nonArgGPR0);
-        jit.storeCell(GPRInfo::returnValueGPR, AssemblyHelpers::addressFor(operand));
+        frame.setOperand(operand, JSValue(argumentsObject));
 
         alreadyAllocatedArguments.add(id, operand);
     }
 }
 
-void JIT_OPERATION OSRExit::compileOSRExit(ExecState* exec)
+void OSRExit::executeOSRExit(Context& context)
 {
-    VM* vm = &exec->vm();
-    auto scope = DECLARE_THROW_SCOPE(*vm);
+    VM& vm = *context.arg<VM*>();
+    auto scope = DECLARE_THROW_SCOPE(vm);
 
-    if (vm->callFrameForCatch)
-        RELEASE_ASSERT(vm->callFrameForCatch == exec);
+    ExecState* exec = context.fp<ExecState*>();
+    ASSERT(&exec->vm() == &vm);
+
+    if (vm.callFrameForCatch) {
+        exec = vm.callFrameForCatch;
+        context.fp() = exec;
+    }
 
     CodeBlock* codeBlock = exec->codeBlock();
     ASSERT(codeBlock);
@@ -161,81 +337,102 @@ void JIT_OPERATION OSRExit::compileOSRExit(ExecState* exec)
 
     // It's sort of preferable that we don't GC while in here. Anyways, doing so wouldn't
     // really be profitable.
-    DeferGCForAWhile deferGC(vm->heap);
+    DeferGCForAWhile deferGC(vm.heap);
 
-    uint32_t exitIndex = vm->osrExitIndex;
-    OSRExit& exit = codeBlock->jitCode()->dfg()->osrExit[exitIndex];
+    uint32_t exitIndex = vm.osrExitIndex;
+    DFG::JITCode* dfgJITCode = codeBlock->jitCode()->dfg();
+    OSRExit& exit = dfgJITCode->osrExit[exitIndex];
 
-    if (vm->callFrameForCatch)
-        ASSERT(exit.m_kind == GenericUnwind);
-    if (exit.isExceptionHandler())
-        ASSERT_UNUSED(scope, !!scope.exception());
-    
-    prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin);
-
-    // Compute the value recoveries.
-    Operands<ValueRecovery> operands;
-    codeBlock->jitCode()->dfg()->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, codeBlock->jitCode()->dfg()->minifiedDFG, exit.m_streamIndex, operands);
-
-    SpeculationRecovery* recovery = 0;
-    if (exit.m_recoveryIndex != UINT_MAX)
-        recovery = &codeBlock->jitCode()->dfg()->speculationRecovery[exit.m_recoveryIndex];
-
-    {
-        CCallHelpers jit(codeBlock);
-
-        if (exit.m_kind == GenericUnwind) {
-            // We are acting as a defacto op_catch because we arrive here from genericUnwind().
-            // So, we must restore our call frame and stack pointer.
-            jit.restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(*vm);
-            jit.loadPtr(vm->addressOfCallFrameForCatch(), GPRInfo::callFrameRegister);
-        }
-        jit.addPtr(
-            CCallHelpers::TrustedImm32(codeBlock->stackPointerOffset() * sizeof(Register)),
-            GPRInfo::callFrameRegister, CCallHelpers::stackPointerRegister);
+    ASSERT(!vm.callFrameForCatch || exit.m_kind == GenericUnwind);
+    ASSERT_UNUSED(scope, !exit.isExceptionHandler() || !!scope.exception());
+
+    if (UNLIKELY(!exit.exitState)) {
+        // We only need to execute this block once for each OSRExit record. The computed
+        // results will be cached in the OSRExitState record for use by the rest of the
+        // exit ramp code.
+
+        // Ensure we have baseline codeBlocks to OSR exit to.
+        prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin);
+
+        CodeBlock* baselineCodeBlock = codeBlock->baselineAlternative();
+        ASSERT(baselineCodeBlock->jitType() == JITCode::BaselineJIT);
+
+        // Compute the value recoveries.
+        Operands<ValueRecovery> operands;
+        dfgJITCode->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, dfgJITCode->minifiedDFG, exit.m_streamIndex, operands);
+
+        SpeculationRecovery* recovery = nullptr;
+        if (exit.m_recoveryIndex != UINT_MAX)
+            recovery = &dfgJITCode->speculationRecovery[exit.m_recoveryIndex];
+
+        int32_t activeThreshold = baselineCodeBlock->adjustedCounterValue(Options::thresholdForOptimizeAfterLongWarmUp());
+        double adjustedThreshold = applyMemoryUsageHeuristicsAndConvertToInt(activeThreshold, baselineCodeBlock);
+        ASSERT(adjustedThreshold > 0);
+        adjustedThreshold = BaselineExecutionCounter::clippedThreshold(codeBlock->globalObject(), adjustedThreshold);
+
+        CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock);
+        Vector<BytecodeAndMachineOffset> decodedCodeMap;
+        codeBlockForExit->jitCodeMap()->decode(decodedCodeMap);
+
+        BytecodeAndMachineOffset* mapping = binarySearch<BytecodeAndMachineOffset, unsigned>(decodedCodeMap, decodedCodeMap.size(), exit.m_codeOrigin.bytecodeIndex, BytecodeAndMachineOffset::getBytecodeIndex);
+
+        ASSERT(mapping);
+        ASSERT(mapping->m_bytecodeIndex == exit.m_codeOrigin.bytecodeIndex);
+
+        ptrdiff_t finalStackPointerOffset = codeBlockForExit->stackPointerOffset() * sizeof(Register);
 
-        jit.jitAssertHasValidCallFrame();
+        void* jumpTarget = codeBlockForExit->jitCode()->executableAddressAtOffset(mapping->m_machineCodeOffset);
 
-        if (UNLIKELY(vm->m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation)) {
-            Profiler::Database& database = *vm->m_perBytecodeProfiler;
+        exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock, operands, recovery, finalStackPointerOffset, activeThreshold, adjustedThreshold, jumpTarget));
+
+        if (UNLIKELY(vm.m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation)) {
+            Profiler::Database& database = *vm.m_perBytecodeProfiler;
             Profiler::Compilation* compilation = codeBlock->jitCode()->dfgCommon()->compilation.get();
 
             Profiler::OSRExit* profilerExit = compilation->addOSRExit(
                 exitIndex, Profiler::OriginStack(database, codeBlock, exit.m_codeOrigin),
                 exit.m_kind, exit.m_kind == UncountableInvalidation);
-            jit.add64(CCallHelpers::TrustedImm32(1), CCallHelpers::AbsoluteAddress(profilerExit->counterAddress()));
+            exit.exitState->profilerExit = profilerExit;
         }
 
-        compileExit(jit, *vm, exit, operands, recovery);
-
-        LinkBuffer patchBuffer(jit, codeBlock);
-        exit.m_code = FINALIZE_CODE_IF(
-            shouldDumpDisassembly() || Options::verboseOSR() || Options::verboseDFGOSRExit(),
-            patchBuffer,
-            ("DFG OSR exit #%u (%s, %s) from %s, with operands = %s",
+        if (UNLIKELY(Options::verboseOSR() || Options::verboseDFGOSRExit())) {
+            dataLogF("DFG OSR exit #%u (%s, %s) from %s, with operands = %s\n",
                 exitIndex, toCString(exit.m_codeOrigin).data(),
                 exitKindToString(exit.m_kind), toCString(*codeBlock).data(),
-                toCString(ignoringContext<DumpContext>(operands)).data()));
+                toCString(ignoringContext<DumpContext>(operands)).data());
+        }
     }
 
-    MacroAssembler::repatchJump(exit.codeLocationForRepatch(codeBlock), CodeLocationLabel(exit.m_code.code()));
+    OSRExitState& exitState = *exit.exitState.get();
+    CodeBlock* baselineCodeBlock = exitState.baselineCodeBlock;
+    ASSERT(baselineCodeBlock->jitType() == JITCode::BaselineJIT);
 
-    vm->osrExitJumpDestination = exit.m_code.code().executableAddress();
-}
+    Operands<ValueRecovery>& operands = exitState.operands;
+    SpeculationRecovery* recovery = exitState.recovery;
 
-void OSRExit::compileExit(CCallHelpers& jit, VM& vm, const OSRExit& exit, const Operands<ValueRecovery>& operands, SpeculationRecovery* recovery)
-{
-    jit.jitAssertTagsInPlace();
+    if (exit.m_kind == GenericUnwind) {
+        // We are acting as a de facto op_catch because we arrive here from genericUnwind().
+        // So, we must restore our call frame and stack pointer.
+        restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(context);
+        ASSERT(context.fp() == vm.callFrameForCatch);
+    }
+    context.sp() = context.fp<uint8_t*>() + (codeBlock->stackPointerOffset() * sizeof(Register));
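+    // (stackPointerOffset() is a negative offset; the stack grows down from fp.)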
 
-    // Pro-forma stuff.
-    if (Options::printEachOSRExit()) {
-        SpeculationFailureDebugInfo* debugInfo = new SpeculationFailureDebugInfo;
-        debugInfo->codeBlock = jit.codeBlock();
-        debugInfo->kind = exit.m_kind;
-        debugInfo->bytecodeOffset = exit.m_codeOrigin.bytecodeIndex;
+    ASSERT(!(context.fp<uintptr_t>() & 0x7));
 
-        jit.debugCall(vm, debugOperationPrintSpeculationFailure, debugInfo);
-    }
+    if (exitState.profilerExit)
+        exitState.profilerExit->incCount();
+
+    auto& cpu = context.cpu;
+    Frame frame(cpu.fp(), context.stack());
+
+#if USE(JSVALUE64)
+    ASSERT(cpu.gpr(GPRInfo::tagTypeNumberRegister) == TagTypeNumber);
+    ASSERT(cpu.gpr(GPRInfo::tagMaskRegister) == TagMask);
+#endif
+
+    if (UNLIKELY(Options::printEachOSRExit()))
+        printOSRExit(context, vm.osrExitIndex, exit);
 
     // Perform speculation recovery. This only comes into play when an operation
     // starts mutating state before verifying the speculation it has already made.
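Concretely, for the SpeculativeAdd case below, a worked example with made-up values (on JSVALUE64 a boxed int32 is TagTypeNumber | value):

    // A speculated int32 add already executed dest += src before its overflow
    // check failed; the exit ramp undoes the add and re-boxes the operand.
    uint32_t src = 5;
    uint32_t dest = 12;                    // dest currently holds 7 + 5
    dest = dest - src;                     // restore the original 7
    uint64_t boxed = TagTypeNumber | dest; // 0xffff000000000007, an int32 JSValue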
@@ -243,22 +440,24 @@ void OSRExit::compileExit(CCallHelpers& jit, VM& vm, const OSRExit& exit, const
     if (recovery) {
         switch (recovery->type()) {
         case SpeculativeAdd:
-            jit.sub32(recovery->src(), recovery->dest());
+            cpu.gpr(recovery->dest()) = cpu.gpr<uint32_t>(recovery->dest()) - cpu.gpr<uint32_t>(recovery->src());
 #if USE(JSVALUE64)
-            jit.or64(GPRInfo::tagTypeNumberRegister, recovery->dest());
+            ASSERT(!(cpu.gpr(recovery->dest()) >> 32));
+            cpu.gpr(recovery->dest()) |= TagTypeNumber;
 #endif
             break;
 
         case SpeculativeAddImmediate:
-            jit.sub32(AssemblyHelpers::Imm32(recovery->immediate()), recovery->dest());
+            cpu.gpr(recovery->dest()) = (cpu.gpr<uint32_t>(recovery->dest()) - recovery->immediate());
 #if USE(JSVALUE64)
-            jit.or64(GPRInfo::tagTypeNumberRegister, recovery->dest());
+            ASSERT(!(cpu.gpr(recovery->dest()) >> 32));
+            cpu.gpr(recovery->dest()) |= TagTypeNumber;
 #endif
             break;
 
         case BooleanSpeculationCheck:
 #if USE(JSVALUE64)
-            jit.xor64(AssemblyHelpers::TrustedImm32(static_cast<int32_t>(ValueFalse)), recovery->dest());
+            cpu.gpr(recovery->dest()) = cpu.gpr(recovery->dest()) ^ ValueFalse;
 #endif
             break;
 
@@ -281,395 +480,113 @@ void OSRExit::compileExit(CCallHelpers& jit, VM& vm, const OSRExit& exit, const
             // property access, or due to an array profile).
 
             CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
-            if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex)) {
-#if USE(JSVALUE64)
-                GPRReg usedRegister;
-                if (exit.m_jsValueSource.isAddress())
-                    usedRegister = exit.m_jsValueSource.base();
-                else
-                    usedRegister = exit.m_jsValueSource.gpr();
-#else
-                GPRReg usedRegister1;
-                GPRReg usedRegister2;
-                if (exit.m_jsValueSource.isAddress()) {
-                    usedRegister1 = exit.m_jsValueSource.base();
-                    usedRegister2 = InvalidGPRReg;
-                } else {
-                    usedRegister1 = exit.m_jsValueSource.payloadGPR();
-                    if (exit.m_jsValueSource.hasKnownTag())
-                        usedRegister2 = InvalidGPRReg;
-                    else
-                        usedRegister2 = exit.m_jsValueSource.tagGPR();
-                }
-#endif
-
-                GPRReg scratch1;
-                GPRReg scratch2;
-#if USE(JSVALUE64)
-                scratch1 = AssemblyHelpers::selectScratchGPR(usedRegister);
-                scratch2 = AssemblyHelpers::selectScratchGPR(usedRegister, scratch1);
-#else
-                scratch1 = AssemblyHelpers::selectScratchGPR(usedRegister1, usedRegister2);
-                scratch2 = AssemblyHelpers::selectScratchGPR(usedRegister1, usedRegister2, scratch1);
-#endif
-
-                if (isARM64()) {
-                    jit.pushToSave(scratch1);
-                    jit.pushToSave(scratch2);
-                } else {
-                    jit.push(scratch1);
-                    jit.push(scratch2);
-                }
-
-                GPRReg value;
-                if (exit.m_jsValueSource.isAddress()) {
-                    value = scratch1;
-                    jit.loadPtr(AssemblyHelpers::Address(exit.m_jsValueSource.asAddress()), value);
-                } else
-                    value = exit.m_jsValueSource.payloadGPR();
-
-                jit.load32(AssemblyHelpers::Address(value, JSCell::structureIDOffset()), scratch1);
-                jit.store32(scratch1, arrayProfile->addressOfLastSeenStructureID());
-#if USE(JSVALUE64)
-                jit.load8(AssemblyHelpers::Address(value, JSCell::indexingTypeAndMiscOffset()), scratch1);
-#else
-                jit.load8(AssemblyHelpers::Address(scratch1, Structure::indexingTypeIncludingHistoryOffset()), scratch1);
-#endif
-                jit.move(AssemblyHelpers::TrustedImm32(1), scratch2);
-                jit.lshift32(scratch1, scratch2);
-                jit.or32(scratch2, AssemblyHelpers::AbsoluteAddress(arrayProfile->addressOfArrayModes()));
-
-                if (isARM64()) {
-                    jit.popToRestore(scratch2);
-                    jit.popToRestore(scratch1);
-                } else {
-                    jit.pop(scratch2);
-                    jit.pop(scratch1);
-                }
+            CodeBlock* profiledCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(codeOrigin, baselineCodeBlock);
+            if (ArrayProfile* arrayProfile = profiledCodeBlock->getArrayProfile(codeOrigin.bytecodeIndex)) {
+                Structure* structure = jsValueFor(cpu, exit.m_jsValueSource).asCell()->structure(vm);
+                arrayProfile->observeStructure(structure);
+                // FIXME: We should be able to use arrayModeFromStructure() to determine the observed ArrayMode here.
+                // However, currently, doing so would result in a pdfjs performance regression.
+                // https://bugs.webkit.org/show_bug.cgi?id=176473
+                arrayProfile->observeArrayMode(asArrayModes(structure->indexingType()));
             }
         }
 
-        if (MethodOfGettingAValueProfile profile = exit.m_valueProfile) {
-#if USE(JSVALUE64)
-            if (exit.m_jsValueSource.isAddress()) {
-                // We can't be sure that we have a spare register. So use the tagTypeNumberRegister,
-                // since we know how to restore it.
-                jit.load64(AssemblyHelpers::Address(exit.m_jsValueSource.asAddress()), GPRInfo::tagTypeNumberRegister);
-                profile.emitReportValue(jit, JSValueRegs(GPRInfo::tagTypeNumberRegister));
-                jit.move(AssemblyHelpers::TrustedImm64(TagTypeNumber), GPRInfo::tagTypeNumberRegister);
-            } else
-                profile.emitReportValue(jit, JSValueRegs(exit.m_jsValueSource.gpr()));
-#else // not USE(JSVALUE64)
-            if (exit.m_jsValueSource.isAddress()) {
-                // Save a register so we can use it.
-                GPRReg scratchPayload = AssemblyHelpers::selectScratchGPR(exit.m_jsValueSource.base());
-                GPRReg scratchTag = AssemblyHelpers::selectScratchGPR(exit.m_jsValueSource.base(), scratchPayload);
-                jit.pushToSave(scratchPayload);
-                jit.pushToSave(scratchTag);
-
-                JSValueRegs scratch(scratchTag, scratchPayload);
-                
-                jit.loadValue(exit.m_jsValueSource.asAddress(), scratch);
-                profile.emitReportValue(jit, scratch);
-                
-                jit.popToRestore(scratchTag);
-                jit.popToRestore(scratchPayload);
-            } else if (exit.m_jsValueSource.hasKnownTag()) {
-                GPRReg scratchTag = AssemblyHelpers::selectScratchGPR(exit.m_jsValueSource.payloadGPR());
-                jit.pushToSave(scratchTag);
-                jit.move(AssemblyHelpers::TrustedImm32(exit.m_jsValueSource.tag()), scratchTag);
-                JSValueRegs value(scratchTag, exit.m_jsValueSource.payloadGPR());
-                profile.emitReportValue(jit, value);
-                jit.popToRestore(scratchTag);
-            } else
-                profile.emitReportValue(jit, exit.m_jsValueSource.regs());
-#endif // USE(JSVALUE64)
-        }
-    }
-
-    // What follows is an intentionally simple OSR exit implementation that generates
-    // fairly poor code but is very easy to hack. In particular, it dumps all state that
-    // needs conversion into a scratch buffer so that in step 6, where we actually do the
-    // conversions, we know that all temp registers are free to use and the variable is
-    // definitely in a well-known spot in the scratch buffer regardless of whether it had
-    // originally been in a register or spilled. This allows us to decouple "where was
-    // the variable" from "how was it represented". Consider that the
-    // Int32DisplacedInJSStack recovery: it tells us that the value is in a
-    // particular place and that that place holds an unboxed int32. We have two different
-    // places that a value could be (displaced, register) and a bunch of different
-    // ways of representing a value. The number of recoveries is two * a bunch. The code
-    // below means that we have to have two + a bunch cases rather than two * a bunch.
-    // Once we have loaded the value from wherever it was, the reboxing is the same
-    // regardless of its location. Likewise, before we do the reboxing, the way we get to
-    // the value (i.e. where we load it from) is the same regardless of its type. Because
-    // the code below always dumps everything into a scratch buffer first, the two
-    // questions become orthogonal, which simplifies adding new types and adding new
-    // locations.
-    //
-    // This raises the question: does using such a suboptimal implementation of OSR exit,
-    // where we always emit code to dump all state into a scratch buffer only to then
-    // dump it right back into the stack, hurt us in any way? The asnwer is that OSR exits
-    // are rare. Our tiering strategy ensures this. This is because if an OSR exit is
-    // taken more than ~100 times, we jettison the DFG code block along with all of its
-    // exits. It is impossible for an OSR exit - i.e. the code we compile below - to
-    // execute frequently enough for the codegen to matter that much. It probably matters
-    // enough that we don't want to turn this into some super-slow function call, but so
-    // long as we're generating straight-line code, that code can be pretty bad. Also
-    // because we tend to exit only along one OSR exit from any DFG code block - that's an
-    // empirical result that we're extremely confident about - the code size of this
-    // doesn't matter much. Hence any attempt to optimize the codegen here is just purely
-    // harmful to the system: it probably won't reduce either net memory usage or net
-    // execution time. It will only prevent us from cleanly decoupling "where was the
-    // variable" from "how was it represented", which will make it more difficult to add
-    // features in the future and it will make it harder to reason about bugs.
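Schematically, the decoupling described above (hypothetical helpers, not JSC API):

    // Phase 1: erase "where" -- two cases, independent of representation.
    for (size_t i = 0; i < operands.size(); ++i)
        scratch[i] = isInRegister(operands[i]) ? readRegister(operands[i]) : readStackSlot(operands[i]);

    // Phase 2: erase "how" -- one reboxing case per format, independent of location.
    for (size_t i = 0; i < operands.size(); ++i)
        writeStackSlot(operandFor(i), rebox(operands[i].technique(), scratch[i]));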
-
-    // Save all state from GPRs into the scratch buffer.
-
-    ScratchBuffer* scratchBuffer = vm.scratchBufferForSize(sizeof(EncodedJSValue) * operands.size());
-    EncodedJSValue* scratch = scratchBuffer ? static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer()) : 0;
-
-    for (size_t index = 0; index < operands.size(); ++index) {
-        const ValueRecovery& recovery = operands[index];
-
-        switch (recovery.technique()) {
-        case UnboxedInt32InGPR:
-        case UnboxedCellInGPR:
-#if USE(JSVALUE64)
-        case InGPR:
-        case UnboxedInt52InGPR:
-        case UnboxedStrictInt52InGPR:
-            jit.store64(recovery.gpr(), scratch + index);
-            break;
-#else
-        case UnboxedBooleanInGPR:
-            jit.store32(
-                recovery.gpr(),
-                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload);
-            break;
-            
-        case InPair:
-            jit.store32(
-                recovery.tagGPR(),
-                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.tag);
-            jit.store32(
-                recovery.payloadGPR(),
-                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload);
-            break;
-#endif
-
-        default:
-            break;
-        }
-    }
-
-    // And voila, all GPRs are free to reuse.
-
-    // Save all state from FPRs into the scratch buffer.
-
-    for (size_t index = 0; index < operands.size(); ++index) {
-        const ValueRecovery& recovery = operands[index];
-
-        switch (recovery.technique()) {
-        case UnboxedDoubleInFPR:
-        case InFPR:
-            jit.move(AssemblyHelpers::TrustedImmPtr(scratch + index), GPRInfo::regT0);
-            jit.storeDouble(recovery.fpr(), MacroAssembler::Address(GPRInfo::regT0));
-            break;
-
-        default:
-            break;
-        }
+        if (MethodOfGettingAValueProfile profile = exit.m_valueProfile)
+            profile.reportValue(jsValueFor(cpu, exit.m_jsValueSource));
     }
 
-    // Now, all FPRs are also free.
-
-    // Save all state from the stack into the scratch buffer. For simplicity we
-    // do this even for state that's already in the right place on the stack.
-    // It makes things simpler later.
-
-    for (size_t index = 0; index < operands.size(); ++index) {
-        const ValueRecovery& recovery = operands[index];
-
-        switch (recovery.technique()) {
-        case DisplacedInJSStack:
-        case CellDisplacedInJSStack:
-        case BooleanDisplacedInJSStack:
-        case Int32DisplacedInJSStack:
-        case DoubleDisplacedInJSStack:
-#if USE(JSVALUE64)
-        case Int52DisplacedInJSStack:
-        case StrictInt52DisplacedInJSStack:
-            jit.load64(AssemblyHelpers::addressFor(recovery.virtualRegister()), GPRInfo::regT0);
-            jit.store64(GPRInfo::regT0, scratch + index);
-            break;
-#else
-            jit.load32(
-                AssemblyHelpers::tagFor(recovery.virtualRegister()),
-                GPRInfo::regT0);
-            jit.load32(
-                AssemblyHelpers::payloadFor(recovery.virtualRegister()),
-                GPRInfo::regT1);
-            jit.store32(
-                GPRInfo::regT0,
-                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.tag);
-            jit.store32(
-                GPRInfo::regT1,
-                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload);
-            break;
-#endif
-
-        default:
-            break;
-        }
-    }
-
-    // Need to ensure that the stack pointer accounts for the worst-case stack usage at exit. This
-    // could toast some stack that the DFG used. We need to do it before storing to stack offsets
-    // used by baseline.
-    jit.addPtr(
-        CCallHelpers::TrustedImm32(
-            -jit.codeBlock()->jitCode()->dfgCommon()->requiredRegisterCountForExit * sizeof(Register)),
-        CCallHelpers::framePointerRegister, CCallHelpers::stackPointerRegister);
-
-    // Restore the DFG callee saves and then save the ones the baseline JIT uses.
-    jit.emitRestoreCalleeSaves();
-    jit.emitSaveCalleeSavesFor(jit.baselineCodeBlock());
-
-    // The tag registers are needed to materialize recoveries below.
-    jit.emitMaterializeTagCheckRegisters();
-
-    if (exit.isExceptionHandler())
-        jit.copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(vm);
-
     // Do all data format conversions and store the results into the stack.
+    // Note: we need to recover values before restoring callee save registers below
+    // because the recovery may rely on values in some of the callee save registers.
 
-    for (size_t index = 0; index < operands.size(); ++index) {
+    int calleeSaveSpaceAsVirtualRegisters = static_cast<int>(baselineCodeBlock->calleeSaveSpaceAsVirtualRegisters());
+    size_t numberOfOperands = operands.size();
+    for (size_t index = 0; index < numberOfOperands; ++index) {
         const ValueRecovery& recovery = operands[index];
         VirtualRegister reg = operands.virtualRegisterForIndex(index);
 
-        if (reg.isLocal() && reg.toLocal() < static_cast<int>(jit.baselineCodeBlock()->calleeSaveSpaceAsVirtualRegisters()))
+        if (reg.isLocal() && reg.toLocal() < calleeSaveSpaceAsVirtualRegisters)
             continue;
 
         int operand = reg.offset();
 
         switch (recovery.technique()) {
         case DisplacedInJSStack:
+            frame.setOperand(operand, exec->r(recovery.virtualRegister()).jsValue());
+            break;
+
         case InFPR:
+            frame.setOperand(operand, cpu.fpr<JSValue>(recovery.fpr()));
+            break;
+
 #if USE(JSVALUE64)
         case InGPR:
-        case UnboxedCellInGPR:
-        case CellDisplacedInJSStack:
-        case BooleanDisplacedInJSStack:
-            jit.load64(scratch + index, GPRInfo::regT0);
-            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
+            frame.setOperand(operand, cpu.gpr<JSValue>(recovery.gpr()));
             break;
-#else // not USE(JSVALUE64)
+#else
         case InPair:
-            jit.load32(
-                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.tag,
-                GPRInfo::regT0);
-            jit.load32(
-                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
-                GPRInfo::regT1);
-            jit.store32(
-                GPRInfo::regT0,
-                AssemblyHelpers::tagFor(operand));
-            jit.store32(
-                GPRInfo::regT1,
-                AssemblyHelpers::payloadFor(operand));
+            frame.setOperand(operand, JSValue(cpu.gpr<int32_t>(recovery.tagGPR()), cpu.gpr<int32_t>(recovery.payloadGPR())));
             break;
+#endif
 
         case UnboxedCellInGPR:
+            frame.setOperand(operand, JSValue(cpu.gpr<JSCell*>(recovery.gpr())));
+            break;
+
         case CellDisplacedInJSStack:
-            jit.load32(
-                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
-                GPRInfo::regT0);
-            jit.store32(
-                AssemblyHelpers::TrustedImm32(JSValue::CellTag),
-                AssemblyHelpers::tagFor(operand));
-            jit.store32(
-                GPRInfo::regT0,
-                AssemblyHelpers::payloadFor(operand));
+            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedCell()));
             break;
 
+#if USE(JSVALUE32_64)
         case UnboxedBooleanInGPR:
-        case BooleanDisplacedInJSStack:
-            jit.load32(
-                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
-                GPRInfo::regT0);
-            jit.store32(
-                AssemblyHelpers::TrustedImm32(JSValue::BooleanTag),
-                AssemblyHelpers::tagFor(operand));
-            jit.store32(
-                GPRInfo::regT0,
-                AssemblyHelpers::payloadFor(operand));
+            frame.setOperand(operand, jsBoolean(cpu.gpr<bool>(recovery.gpr())));
             break;
-#endif // USE(JSVALUE64)
+#endif
 
-        case UnboxedInt32InGPR:
-        case Int32DisplacedInJSStack:
+        case BooleanDisplacedInJSStack:
 #if USE(JSVALUE64)
-            jit.load64(scratch + index, GPRInfo::regT0);
-            jit.zeroExtend32ToPtr(GPRInfo::regT0, GPRInfo::regT0);
-            jit.or64(GPRInfo::tagTypeNumberRegister, GPRInfo::regT0);
-            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
+            frame.setOperand(operand, exec->r(recovery.virtualRegister()).jsValue());
 #else
-            jit.load32(
-                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
-                GPRInfo::regT0);
-            jit.store32(
-                AssemblyHelpers::TrustedImm32(JSValue::Int32Tag),
-                AssemblyHelpers::tagFor(operand));
-            jit.store32(
-                GPRInfo::regT0,
-                AssemblyHelpers::payloadFor(operand));
+            frame.setOperand(operand, jsBoolean(exec->r(recovery.virtualRegister()).jsValue().payload()));
 #endif
             break;
 
+        case UnboxedInt32InGPR:
+            frame.setOperand(operand, JSValue(cpu.gpr<int32_t>(recovery.gpr())));
+            break;
+
+        case Int32DisplacedInJSStack:
+            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedInt32()));
+            break;
+
 #if USE(JSVALUE64)
         case UnboxedInt52InGPR:
+            frame.setOperand(operand, JSValue(cpu.gpr<int64_t>(recovery.gpr()) >> JSValue::int52ShiftAmount));
+            break;
+
         case Int52DisplacedInJSStack:
-            jit.load64(scratch + index, GPRInfo::regT0);
-            jit.rshift64(
-                AssemblyHelpers::TrustedImm32(JSValue::int52ShiftAmount), GPRInfo::regT0);
-            jit.boxInt52(GPRInfo::regT0, GPRInfo::regT0, GPRInfo::regT1, FPRInfo::fpRegT0);
-            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
+            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedInt52()));
             break;
 
         case UnboxedStrictInt52InGPR:
+            frame.setOperand(operand, JSValue(cpu.gpr<int64_t>(recovery.gpr())));
+            break;
+
         case StrictInt52DisplacedInJSStack:
-            jit.load64(scratch + index, GPRInfo::regT0);
-            jit.boxInt52(GPRInfo::regT0, GPRInfo::regT0, GPRInfo::regT1, FPRInfo::fpRegT0);
-            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
+            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedStrictInt52()));
             break;
 #endif
 
         case UnboxedDoubleInFPR:
+            frame.setOperand(operand, JSValue(JSValue::EncodeAsDouble, purifyNaN(cpu.fpr(recovery.fpr()))));
+            break;
+
         case DoubleDisplacedInJSStack:
-            jit.move(AssemblyHelpers::TrustedImmPtr(scratch + index), GPRInfo::regT0);
-            jit.loadDouble(MacroAssembler::Address(GPRInfo::regT0), FPRInfo::fpRegT0);
-            jit.purifyNaN(FPRInfo::fpRegT0);
-#if USE(JSVALUE64)
-            jit.boxDouble(FPRInfo::fpRegT0, GPRInfo::regT0);
-            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
-#else
-            jit.storeDouble(FPRInfo::fpRegT0, AssemblyHelpers::addressFor(operand));
-#endif
+            frame.setOperand(operand, JSValue(JSValue::EncodeAsDouble, purifyNaN(exec->r(recovery.virtualRegister()).unboxedDouble())));
             break;
 
         case Constant:
-#if USE(JSVALUE64)
-            jit.store64(
-                AssemblyHelpers::TrustedImm64(JSValue::encode(recovery.constant())),
-                AssemblyHelpers::addressFor(operand));
-#else
-            jit.store32(
-                AssemblyHelpers::TrustedImm32(recovery.constant().tag()),
-                AssemblyHelpers::tagFor(operand));
-            jit.store32(
-                AssemblyHelpers::TrustedImm32(recovery.constant().payload()),
-                AssemblyHelpers::payloadFor(operand));
-#endif
+            frame.setOperand(operand, recovery.constant());
             break;
 
         case DirectArgumentsThatWereNotCreated:
@@ -683,13 +600,31 @@ void OSRExit::compileExit(CCallHelpers& jit, VM& vm, const OSRExit& exit, const
         }
     }
 
+    // Need to ensure that the stack pointer accounts for the worst-case stack usage at exit. This
+    // could toast some stack that the DFG used. We need to do it before storing to stack offsets
+    // used by baseline.
+    cpu.sp() = cpu.fp<uint8_t*>() - (codeBlock->jitCode()->dfgCommon()->requiredRegisterCountForExit * sizeof(Register));
+
+    // Restore the DFG callee saves and then save the ones the baseline JIT uses.
+    restoreCalleeSavesFor(context, codeBlock);
+    saveCalleeSavesFor(context, baselineCodeBlock);
+
+    // The tag registers are needed to materialize recoveries below.
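+    // (On JSVALUE64, TagTypeNumber is 0xffff000000000000 and TagBitTypeOther
+    // is 0x2, so tagMaskRegister receives 0xffff000000000002; a boxed int32
+    // is TagTypeNumber | value.)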
+#if USE(JSVALUE64)
+    cpu.gpr(GPRInfo::tagTypeNumberRegister) = TagTypeNumber;
+    cpu.gpr(GPRInfo::tagMaskRegister) = TagTypeNumber | TagBitTypeOther;
+#endif
+
+    if (exit.isExceptionHandler())
+        copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(context);
+
     // Now that things on the stack are recovered, do the arguments recovery. We assume that arguments
     // recoveries don't recursively refer to each other. But, we don't try to assume that they only
     // refer to certain ranges of locals. Hence why we need to do this here, once the stack is sensible.
     // Note that we also roughly assume that the arguments might still be materialized outside of its
     // inline call frame scope - but for now the DFG wouldn't do that.
 
-    emitRestoreArguments(jit, operands);
+    emitRestoreArguments(context, codeBlock, dfgJITCode, operands);
 
     // Adjust the old JIT's execute counter. Since we are exiting OSR, we know
     // that all new calls into this code will go to the new JIT, so the execute
@@ -727,26 +662,161 @@ void OSRExit::compileExit(CCallHelpers& jit, VM& vm, const OSRExit& exit, const
     // counter to 0; otherwise we set the counter to
     // counterValueForOptimizeAfterWarmUp().
 
-    handleExitCounts(jit, exit);
+    if (UNLIKELY(codeBlock->updateOSRExitCounterAndCheckIfNeedToReoptimize(exitState) == CodeBlock::OptimizeAction::ReoptimizeNow))
+        triggerReoptimizationNow(baselineCodeBlock, &exit);
+
+    reifyInlinedCallFrames(context, baselineCodeBlock, exit);
+    adjustAndJumpToTarget(context, vm, codeBlock, baselineCodeBlock, exit);
+}
+
+static void reifyInlinedCallFrames(Context& context, CodeBlock* outermostBaselineCodeBlock, const OSRExitBase& exit)
+{
+    auto& cpu = context.cpu;
+    Frame frame(cpu.fp(), context.stack());
+
+    // FIXME: We shouldn't leave holes on the stack when performing an OSR exit
+    // in presence of inlined tail calls.
+    // https://bugs.webkit.org/show_bug.cgi?id=147511
+    ASSERT(outermostBaselineCodeBlock->jitType() == JITCode::BaselineJIT);
+    frame.setOperand<CodeBlock*>(CallFrameSlot::codeBlock, outermostBaselineCodeBlock);
+
+    const CodeOrigin* codeOrigin;
+    for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame; codeOrigin = codeOrigin->inlineCallFrame->getCallerSkippingTailCalls()) {
+        InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
+        CodeBlock* baselineCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(*codeOrigin, outermostBaselineCodeBlock);
+        InlineCallFrame::Kind trueCallerCallKind;
+        CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingTailCalls(&trueCallerCallKind);
+        void* callerFrame = cpu.fp();
+
+        if (!trueCaller) {
+            ASSERT(inlineCallFrame->isTail());
+            void* returnPC = frame.get<void*>(CallFrame::returnPCOffset());
+            frame.set<void*>(inlineCallFrame->returnPCOffset(), returnPC);
+            callerFrame = frame.get<void*>(CallFrame::callerFrameOffset());
+        } else {
+            CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock);
+            unsigned callBytecodeIndex = trueCaller->bytecodeIndex;
+            void* jumpTarget = nullptr;
+
+            switch (trueCallerCallKind) {
+            case InlineCallFrame::Call:
+            case InlineCallFrame::Construct:
+            case InlineCallFrame::CallVarargs:
+            case InlineCallFrame::ConstructVarargs:
+            case InlineCallFrame::TailCall:
+            case InlineCallFrame::TailCallVarargs: {
+                CallLinkInfo* callLinkInfo =
+                    baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
+                RELEASE_ASSERT(callLinkInfo);
+
+                jumpTarget = callLinkInfo->callReturnLocation().executableAddress();
+                break;
+            }
+
+            case InlineCallFrame::GetterCall:
+            case InlineCallFrame::SetterCall: {
+                StructureStubInfo* stubInfo =
+                    baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
+                RELEASE_ASSERT(stubInfo);
+
+                jumpTarget = stubInfo->doneLocation().executableAddress();
+                break;
+            }
+
+            default:
+                RELEASE_ASSERT_NOT_REACHED();
+            }
+
+            if (trueCaller->inlineCallFrame)
+                callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
+
+            frame.set<void*>(inlineCallFrame->returnPCOffset(), jumpTarget);
+        }
 
-    // Reify inlined call frames.
+        frame.setOperand<void*>(inlineCallFrame->stackOffset + CallFrameSlot::codeBlock, baselineCodeBlock);
 
-    reifyInlinedCallFrames(jit, exit);
+        // Restore the inline call frame's callee save registers.
+        // If this inlined frame is a tail call that will return back to the original caller, we need to
+        // copy the prior contents of the tag registers already saved for the outer frame to this frame.
+        saveOrCopyCalleeSavesFor(context, baselineCodeBlock, VirtualRegister(inlineCallFrame->stackOffset), !trueCaller);
 
-    // And finish.
-    adjustAndJumpToTarget(vm, jit, exit);
+        if (!inlineCallFrame->isVarargs())
+            frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, PayloadOffset, inlineCallFrame->argumentCountIncludingThis);
+        ASSERT(callerFrame);
+        frame.set<void*>(inlineCallFrame->callerFrameOffset(), callerFrame);
+#if USE(JSVALUE64)
+        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
+        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, TagOffset, locationBits);
+        if (!inlineCallFrame->isClosureCall)
+            frame.setOperand(inlineCallFrame->stackOffset + CallFrameSlot::callee, JSValue(inlineCallFrame->calleeConstant()));
+#else // USE(JSVALUE64) // so this is the 32-bit part
+        Instruction* instruction = baselineCodeBlock->instructions().begin() + codeOrigin->bytecodeIndex;
+        uint32_t locationBits = CallSiteIndex(instruction).bits();
+        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, TagOffset, locationBits);
+        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::callee, TagOffset, static_cast<uint32_t>(JSValue::CellTag));
+        if (!inlineCallFrame->isClosureCall)
+            frame.setOperand(inlineCallFrame->stackOffset + CallFrameSlot::callee, PayloadOffset, inlineCallFrame->calleeConstant());
+#endif // USE(JSVALUE64) // ending the #else part, so directly above is the 32-bit part
+    }
+
+    // Don't need to set the toplevel code origin if we only did inline tail calls
+    if (codeOrigin) {
+#if USE(JSVALUE64)
+        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
+#else
+        Instruction* instruction = outermostBaselineCodeBlock->instructions().begin() + codeOrigin->bytecodeIndex;
+        uint32_t locationBits = CallSiteIndex(instruction).bits();
+#endif
+        frame.setOperand<uint32_t>(CallFrameSlot::argumentCount, TagOffset, locationBits);
+    }
 }
 
-void JIT_OPERATION OSRExit::debugOperationPrintSpeculationFailure(ExecState* exec, void* debugInfoRaw, void* scratch)
+static void adjustAndJumpToTarget(Context& context, VM& vm, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, OSRExit& exit)
 {
-    VM* vm = &exec->vm();
-    NativeCallFrameTracer tracer(vm, exec);
+    OSRExitState* exitState = exit.exitState.get();
+
+    WTF::storeLoadFence(); // The optimizing compiler expects that the OSR exit mechanism will execute this fence.
+    vm.heap.writeBarrier(baselineCodeBlock);
+
+    // We barrier all inlined frames -- and not just the current inline stack --
+    // because we don't know which inlined function owns the value profile that
+    // we'll update when we exit. In the case of "f() { a(); b(); }", if both
+    // a and b are inlined, we might exit inside b due to a bad value loaded
+    // from a.
+    // FIXME: MethodOfGettingAValueProfile should remember which CodeBlock owns
+    // the value profile.
+    InlineCallFrameSet* inlineCallFrames = codeBlock->jitCode()->dfgCommon()->inlineCallFrames.get();
+    if (inlineCallFrames) {
+        for (InlineCallFrame* inlineCallFrame : *inlineCallFrames)
+            vm.heap.writeBarrier(inlineCallFrame->baselineCodeBlock.get());
+    }
+
+    if (exit.m_codeOrigin.inlineCallFrame)
+        context.fp() = context.fp<uint8_t*>() + exit.m_codeOrigin.inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
 
-    SpeculationFailureDebugInfo* debugInfo = static_cast<SpeculationFailureDebugInfo*>(debugInfoRaw);
-    CodeBlock* codeBlock = debugInfo->codeBlock;
+    void* jumpTarget = exitState->jumpTarget;
+    ASSERT(jumpTarget);
+
+    context.sp() = context.fp<uint8_t*>() + exitState->stackPointerOffset;
+    if (exit.isExceptionHandler()) {
+        // Since we're jumping to op_catch, we need to set callFrameForCatch.
+        vm.callFrameForCatch = context.fp<ExecState*>();
+    }
+
+    vm.topCallFrame = context.fp<ExecState*>();
+    context.pc() = jumpTarget;
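+    // When the probe handler returns, the trampoline restores this context and
+    // resumes at context.pc(), i.e. in the baseline code for the exit origin.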
+}
+
+static void printOSRExit(Context& context, uint32_t osrExitIndex, const OSRExit& exit)
+{
+    ExecState* exec = context.fp<ExecState*>();
+    CodeBlock* codeBlock = exec->codeBlock();
     CodeBlock* alternative = codeBlock->alternative();
+    ExitKind kind = exit.m_kind;
+    unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex;
+
     dataLog("Speculation failure in ", *codeBlock);
-    dataLog(" @ exit #", vm->osrExitIndex, " (bc#", debugInfo->bytecodeOffset, ", ", exitKindToString(debugInfo->kind), ") with ");
+    dataLog(" @ exit #", osrExitIndex, " (bc#", bytecodeOffset, ", ", exitKindToString(kind), ") with ");
     if (alternative) {
         dataLog(
             "executeCounter = ", alternative->jitExecuteCounter(),
@@ -756,21 +826,18 @@ void JIT_OPERATION OSRExit::debugOperationPrintSpeculationFailure(ExecState* exe
         dataLog("no alternative code block (i.e. we've been jettisoned)");
     dataLog(", osrExitCounter = ", codeBlock->osrExitCounter(), "\n");
     dataLog("    GPRs at time of exit:");
-    char* scratchPointer = static_cast<char*>(scratch);
     for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
         GPRReg gpr = GPRInfo::toRegister(i);
-        dataLog(" ", GPRInfo::debugName(gpr), ":", RawPointer(*reinterpret_cast_ptr<void**>(scratchPointer)));
-        scratchPointer += sizeof(EncodedJSValue);
+        dataLog(" ", context.gprName(gpr), ":", RawPointer(context.gpr<void*>(gpr)));
     }
     dataLog("\n");
     dataLog("    FPRs at time of exit:");
     for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
         FPRReg fpr = FPRInfo::toRegister(i);
-        dataLog(" ", FPRInfo::debugName(fpr), ":");
-        uint64_t bits = *reinterpret_cast_ptr<uint64_t*>(scratchPointer);
-        double value = *reinterpret_cast_ptr<double*>(scratchPointer);
+        dataLog(" ", context.fprName(fpr), ":");
+        uint64_t bits = context.fpr<uint64_t>(fpr);
+        double value = context.fpr(fpr);
         dataLogF("%llx:%lf", static_cast<long long>(bits), value);
-        scratchPointer += sizeof(EncodedJSValue);
     }
     dataLog("\n");
 }
index 9945d0c..b6c9a85 100644
 #include "MethodOfGettingAValueProfile.h"
 #include "Operands.h"
 #include "ValueRecovery.h"
+#include <wtf/RefPtr.h>
 
 namespace JSC {
 
-class CCallHelpers;
+namespace Probe {
+class Context;
+} // namespace Probe
+
+namespace Profiler {
+class OSRExit;
+} // namespace Profiler
 
 namespace DFG {
 
@@ -91,6 +98,32 @@ private:
     SpeculationRecoveryType m_type;
 };
 
+struct OSRExitState : RefCounted<OSRExitState> {
+    OSRExitState(OSRExitBase& exit, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, Operands<ValueRecovery>& operands, SpeculationRecovery* recovery, ptrdiff_t stackPointerOffset, int32_t activeThreshold, double memoryUsageAdjustedThreshold, void* jumpTarget)
+        : exit(exit)
+        , codeBlock(codeBlock)
+        , baselineCodeBlock(baselineCodeBlock)
+        , operands(operands)
+        , recovery(recovery)
+        , stackPointerOffset(stackPointerOffset)
+        , activeThreshold(activeThreshold)
+        , memoryUsageAdjustedThreshold(memoryUsageAdjustedThreshold)
+        , jumpTarget(jumpTarget)
+    { }
+
+    OSRExitBase& exit;
+    CodeBlock* codeBlock;
+    CodeBlock* baselineCodeBlock;
+    Operands<ValueRecovery> operands;
+    SpeculationRecovery* recovery;
+    ptrdiff_t stackPointerOffset;
+    uint32_t activeThreshold;
+    double memoryUsageAdjustedThreshold;
+    void* jumpTarget;
+
+    Profiler::OSRExit* profilerExit { nullptr };
+};
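The intended usage (a sketch mirroring OSRExit::executeOSRExit): the first exit through a given OSRExit record pays the analysis cost; subsequent exits reuse the cached state.

    if (UNLIKELY(!exit.exitState)) {
        // Computed once: recoveries, thresholds, and the baseline jump target.
        exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock,
            operands, recovery, stackPointerOffset, activeThreshold, adjustedThreshold, jumpTarget));
    }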
+
 // === OSRExit ===
 //
 // This structure describes how to exit the speculative path by
@@ -98,32 +131,20 @@ private:
 struct OSRExit : public OSRExitBase {
     OSRExit(ExitKind, JSValueSource, MethodOfGettingAValueProfile, SpeculativeJIT*, unsigned streamIndex, unsigned recoveryIndex = UINT_MAX);
 
-    static void JIT_OPERATION compileOSRExit(ExecState*) WTF_INTERNAL;
+    static void executeOSRExit(Probe::Context&);
 
-    unsigned m_patchableCodeOffset { 0 };
-    
-    MacroAssemblerCodeRef m_code;
+    RefPtr<OSRExitState> exitState;
     
     JSValueSource m_jsValueSource;
     MethodOfGettingAValueProfile m_valueProfile;
     
     unsigned m_recoveryIndex;
 
-    void setPatchableCodeOffset(MacroAssembler::PatchableJump);
-    MacroAssembler::Jump getPatchableCodeOffsetAsJump() const;
-    CodeLocationJump codeLocationForRepatch(CodeBlock*) const;
-    void correctJump(LinkBuffer&);
-
     unsigned m_streamIndex;
     void considerAddingAsFrequentExitSite(CodeBlock* profiledCodeBlock)
     {
         OSRExitBase::considerAddingAsFrequentExitSite(profiledCodeBlock, ExitFromDFG);
     }
-
-private:
-    static void compileExit(CCallHelpers&, VM&, const OSRExit&, const Operands<ValueRecovery>&, SpeculationRecovery*);
-    static void emitRestoreArguments(CCallHelpers&, const Operands<ValueRecovery>&);
-    static void JIT_OPERATION debugOperationPrintSpeculationFailure(ExecState*, void*, void*) WTF_INTERNAL;
 };
 
 struct SpeculationFailureDebugInfo {
index 2151172..657ecff 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -37,6 +37,7 @@
 
 namespace JSC { namespace DFG {
 
+// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 void handleExitCounts(CCallHelpers& jit, const OSRExitBase& exit)
 {
     if (!exitKindMayJettison(exit.m_kind)) {
@@ -143,6 +144,7 @@ void handleExitCounts(CCallHelpers& jit, const OSRExitBase& exit)
     doneAdjusting.link(&jit);
 }
 
+// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
 {
     // FIXME: We shouldn't leave holes on the stack when performing an OSR exit
@@ -252,6 +254,7 @@ void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
     }
 }
 
+// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 static void osrWriteBarrier(CCallHelpers& jit, GPRReg owner, GPRReg scratch)
 {
     AssemblyHelpers::Jump ownerIsRememberedOrInEden = jit.barrierBranchWithoutFence(owner);
@@ -272,6 +275,7 @@ static void osrWriteBarrier(CCallHelpers& jit, GPRReg owner, GPRReg scratch)
     ownerIsRememberedOrInEden.link(&jit);
 }
 
+// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 void adjustAndJumpToTarget(VM& vm, CCallHelpers& jit, const OSRExitBase& exit)
 {
     jit.memoryFence();
index 108a0f5..0563034 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -40,6 +40,7 @@ void handleExitCounts(CCallHelpers&, const OSRExitBase&);
 void reifyInlinedCallFrames(CCallHelpers&, const OSRExitBase&);
 void adjustAndJumpToTarget(VM&, CCallHelpers&, const OSRExitBase&);
 
+// FIXME: This won't be needed once we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 template <typename JITCodeType>
 void adjustFrameAndStackInOSRExitCompilerThunk(MacroAssembler& jit, VM* vm, JITCode::JITType jitType)
 {
index 3ac3bd1..4e6919e 100644
@@ -1474,62 +1474,6 @@ JSCell* JIT_OPERATION operationCreateClonedArguments(ExecState* exec, Structure*
         exec, structure, argumentStart, length, callee);
 }
 
-JSCell* JIT_OPERATION operationCreateDirectArgumentsDuringExit(ExecState* exec, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
-{
-    VM& vm = exec->vm();
-    NativeCallFrameTracer target(&vm, exec);
-    
-    DeferGCForAWhile deferGC(vm.heap);
-    
-    CodeBlock* codeBlock;
-    if (inlineCallFrame)
-        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
-    else
-        codeBlock = exec->codeBlock();
-    
-    unsigned length = argumentCount - 1;
-    unsigned capacity = std::max(length, static_cast<unsigned>(codeBlock->numParameters() - 1));
-    DirectArguments* result = DirectArguments::create(
-        vm, codeBlock->globalObject()->directArgumentsStructure(), length, capacity);
-    
-    result->callee().set(vm, result, callee);
-    
-    Register* arguments =
-        exec->registers() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0) +
-        CallFrame::argumentOffset(0);
-    for (unsigned i = length; i--;)
-        result->setIndexQuickly(vm, i, arguments[i].jsValue());
-    
-    return result;
-}
-
-JSCell* JIT_OPERATION operationCreateClonedArgumentsDuringExit(ExecState* exec, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
-{
-    VM& vm = exec->vm();
-    NativeCallFrameTracer target(&vm, exec);
-    
-    DeferGCForAWhile deferGC(vm.heap);
-    
-    CodeBlock* codeBlock;
-    if (inlineCallFrame)
-        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
-    else
-        codeBlock = exec->codeBlock();
-    
-    unsigned length = argumentCount - 1;
-    ClonedArguments* result = ClonedArguments::createEmpty(
-        vm, codeBlock->globalObject()->clonedArgumentsStructure(), callee, length);
-    
-    Register* arguments =
-        exec->registers() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0) +
-        CallFrame::argumentOffset(0);
-    for (unsigned i = length; i--;)
-        result->putDirectIndex(exec, i, arguments[i].jsValue());
-
-    
-    return result;
-}
-
 JSCell* JIT_OPERATION operationCreateRest(ExecState* exec, Register* argumentStart, unsigned numberOfParamsToSkip, unsigned arraySize)
 {
     VM* vm = &exec->vm();
index 7963bd6..42ca784 100644
@@ -149,9 +149,7 @@ size_t JIT_OPERATION operationCompareStrictEqCell(ExecState*, EncodedJSValue enc
 size_t JIT_OPERATION operationCompareStrictEq(ExecState*, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2) WTF_INTERNAL;
 JSCell* JIT_OPERATION operationCreateActivationDirect(ExecState*, Structure*, JSScope*, SymbolTable*, EncodedJSValue);
 JSCell* JIT_OPERATION operationCreateDirectArguments(ExecState*, Structure*, int32_t length, int32_t minCapacity);
-JSCell* JIT_OPERATION operationCreateDirectArgumentsDuringExit(ExecState*, InlineCallFrame*, JSFunction*, int32_t argumentCount);
 JSCell* JIT_OPERATION operationCreateScopedArguments(ExecState*, Structure*, Register* argumentStart, int32_t length, JSFunction* callee, JSLexicalEnvironment*);
-JSCell* JIT_OPERATION operationCreateClonedArgumentsDuringExit(ExecState*, InlineCallFrame*, JSFunction*, int32_t argumentCount);
 JSCell* JIT_OPERATION operationCreateClonedArguments(ExecState*, Structure*, Register* argumentStart, int32_t length, JSFunction* callee);
 JSCell* JIT_OPERATION operationCreateRest(ExecState*, Register* argumentStart, unsigned numberOfArgumentsToSkip, unsigned arraySize);
 double JIT_OPERATION operationFModOnInts(int32_t, int32_t) WTF_INTERNAL;
index b7327f3..dba7388 100644
 
 namespace JSC { namespace DFG {
 
-MacroAssemblerCodeRef osrExitGenerationThunkGenerator(VM* vm)
+MacroAssemblerCodeRef osrExitThunkGenerator(VM* vm)
 {
     MacroAssembler jit;
-
-    // This needs to happen before we use the scratch buffer because this function also uses the scratch buffer.
-    adjustFrameAndStackInOSRExitCompilerThunk<DFG::JITCode>(jit, vm, JITCode::DFGJIT);
-    
-    size_t scratchSize = sizeof(EncodedJSValue) * (GPRInfo::numberOfRegisters + FPRInfo::numberOfRegisters);
-    ScratchBuffer* scratchBuffer = vm->scratchBufferForSize(scratchSize);
-    EncodedJSValue* buffer = static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer());
-    
-    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
-#if USE(JSVALUE64)
-        jit.store64(GPRInfo::toRegister(i), buffer + i);
-#else
-        jit.store32(GPRInfo::toRegister(i), buffer + i);
-#endif
-    }
-    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
-        jit.move(MacroAssembler::TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
-        jit.storeDouble(FPRInfo::toRegister(i), MacroAssembler::Address(GPRInfo::regT0));
-    }
-    
-    // Tell GC mark phase how much of the scratch buffer is active during call.
-    jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
-    jit.storePtr(MacroAssembler::TrustedImmPtr(scratchSize), MacroAssembler::Address(GPRInfo::regT0));
-
-    // Set up one argument.
-#if CPU(X86)
-    jit.poke(GPRInfo::callFrameRegister, 0);
-#else
-    jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
-#endif
-
-    MacroAssembler::Call functionCall = jit.call();
-
-    jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
-    jit.storePtr(MacroAssembler::TrustedImmPtr(0), MacroAssembler::Address(GPRInfo::regT0));
-
-    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
-        jit.move(MacroAssembler::TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
-        jit.loadDouble(MacroAssembler::Address(GPRInfo::regT0), FPRInfo::toRegister(i));
-    }
-    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
-#if USE(JSVALUE64)
-        jit.load64(buffer + i, GPRInfo::toRegister(i));
-#else
-        jit.load32(buffer + i, GPRInfo::toRegister(i));
-#endif
-    }
-    
-    jit.jump(MacroAssembler::AbsoluteAddress(&vm->osrExitJumpDestination));
-    
+    jit.probe(OSRExit::executeOSRExit, vm);
     LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID);
-    
-    patchBuffer.link(functionCall, OSRExit::compileOSRExit);
-    
-    return FINALIZE_CODE(patchBuffer, ("DFG OSR exit generation thunk"));
+    return FINALIZE_CODE(patchBuffer, ("DFG OSR exit thunk"));
 }
 
 MacroAssemblerCodeRef osrEntryThunkGenerator(VM* vm)
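The rewritten thunk above is the heart of this patch: jit.probe(function, arg) emits a trampoline that spills the full CPU state into a Probe::Context, calls the C++ function with that context, and resumes from whatever (possibly modified) register state the function leaves behind. That is why the fifty-odd lines of hand-rolled spill/fill and the indirect jump through vm->osrExitJumpDestination can go: OSRExit::executeOSRExit does the exit work in C++ and redirects pc itself. A standalone toy model of that contract (the names only loosely mirror JSC's Probe::Context; this is an illustrative sketch, not JSC code):

    #include <cstdint>
    #include <cstdio>

    struct CPUState {
        uintptr_t gpr[16];             // general-purpose registers
        double fpr[16];                // floating-point registers
        uintptr_t pc, sp, fp;
    };

    struct Context {
        CPUState cpu;
        void* arg;                     // the second argument given to the probe
    };

    using ProbeFunction = void (*)(Context&);

    // Stand-in for the JIT-emitted trampoline: snapshot state, run the C++
    // handler, then resume with whatever state the handler installed.
    void probeTrampoline(ProbeFunction function, void* arg, CPUState& state)
    {
        Context context { state, arg };
        function(context);             // handler may rewrite any register, pc, or sp
        state = context.cpu;           // "restore" the possibly-modified state
    }

    // Toy OSR-exit-style handler: redirect pc instead of falling through.
    void osrExitHandler(Context& context)
    {
        context.cpu.pc = 0x1000;       // pretend 0x1000 is the baseline exit target
    }

    int main()
    {
        CPUState state {};
        probeTrampoline(osrExitHandler, nullptr, state);
        std::printf("resuming at pc = %#lx\n", static_cast<unsigned long>(state.pc));
        return 0;
    }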
index 58a33da..cffac9f 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -35,7 +35,7 @@ class VM;
 
 namespace DFG {
 
-MacroAssemblerCodeRef osrExitGenerationThunkGenerator(VM*);
+MacroAssemblerCodeRef osrExitThunkGenerator(VM*);
 MacroAssemblerCodeRef osrEntryThunkGenerator(VM*);
 
 } } // namespace JSC::DFG
index 8d31f7a..8b9d6a3 100644
@@ -50,6 +50,7 @@ ExecutableBase* AssemblyHelpers::executableFor(const CodeOrigin& codeOrigin)
     return codeOrigin.inlineCallFrame->baselineCodeBlock->ownerExecutable();
 }
 
+// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 Vector<BytecodeAndMachineOffset>& AssemblyHelpers::decodedCodeMapFor(CodeBlock* codeBlock)
 {
     ASSERT(codeBlock == codeBlock->baselineVersion());
@@ -820,61 +821,6 @@ bool AssemblyHelpers::storeWasmContextNeedsMacroScratchRegister()
 
 #endif // ENABLE(WEBASSEMBLY)
 
-void AssemblyHelpers::debugCall(VM& vm, V_DebugOperation_EPP function, void* argument)
-{
-    size_t scratchSize = sizeof(EncodedJSValue) * (GPRInfo::numberOfRegisters + FPRInfo::numberOfRegisters);
-    ScratchBuffer* scratchBuffer = vm.scratchBufferForSize(scratchSize);
-    EncodedJSValue* buffer = static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer());
-
-    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
-#if USE(JSVALUE64)
-        store64(GPRInfo::toRegister(i), buffer + i);
-#else
-        store32(GPRInfo::toRegister(i), buffer + i);
-#endif
-    }
-
-    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
-        move(TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
-        storeDouble(FPRInfo::toRegister(i), GPRInfo::regT0);
-    }
-
-    // Tell GC mark phase how much of the scratch buffer is active during call.
-    move(TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
-    storePtr(TrustedImmPtr(scratchSize), GPRInfo::regT0);
-
-#if CPU(X86_64) || CPU(ARM) || CPU(ARM64) || CPU(MIPS)
-    move(TrustedImmPtr(buffer), GPRInfo::argumentGPR2);
-    move(TrustedImmPtr(argument), GPRInfo::argumentGPR1);
-    move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
-    GPRReg scratch = selectScratchGPR(GPRInfo::argumentGPR0, GPRInfo::argumentGPR1, GPRInfo::argumentGPR2);
-#elif CPU(X86)
-    poke(GPRInfo::callFrameRegister, 0);
-    poke(TrustedImmPtr(argument), 1);
-    poke(TrustedImmPtr(buffer), 2);
-    GPRReg scratch = GPRInfo::regT0;
-#else
-#error "JIT not supported on this platform."
-#endif
-    move(TrustedImmPtr(reinterpret_cast<void*>(function)), scratch);
-    call(scratch);
-
-    move(TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
-    storePtr(TrustedImmPtr(0), GPRInfo::regT0);
-
-    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
-        move(TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
-        loadDouble(GPRInfo::regT0, FPRInfo::toRegister(i));
-    }
-    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
-#if USE(JSVALUE64)
-        load64(buffer + i, GPRInfo::toRegister(i));
-#else
-        load32(buffer + i, GPRInfo::toRegister(i));
-#endif
-    }
-}
-
 void AssemblyHelpers::copyCalleeSavesToVMEntryFrameCalleeSavesBufferImpl(GPRReg calleeSavesBuffer)
 {
 #if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
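debugCall() is deleted above because its manual spill/restore dance around a C call is exactly what the probe mechanism already provides. A hedged sketch of the replacement pattern at a JIT emission site, assuming the probe(function, arg) overload used by the OSR exit thunk (dumpSomeState is a hypothetical helper, not a WebKit function):

    // Hypothetical probe callee: the Probe::Context hands over the full CPU
    // state, so nothing has to be spilled to a scratch buffer by hand.
    static void dumpSomeState(Probe::Context& context)
    {
        dataLog("fp = ", RawPointer(context.fp()), ", sp = ", RawPointer(context.sp()), "\n");
    }

    // At the emission site, in place of the old debugCall():
    jit.probe(dumpSomeState, nullptr);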
index a68c369..6bf2f3b 100644
@@ -992,9 +992,6 @@ public:
         return GPRInfo::regT5;
     }
 
-    // Add a debug call. This call has no effect on JIT code execution state.
-    void debugCall(VM&, V_DebugOperation_EPP function, void* argument);
-
     // These methods JIT generate dynamic, debug-only checks - akin to ASSERTs.
 #if !ASSERT_DISABLED
     void jitAssertIsInt32(GPRReg);
@@ -1465,6 +1462,7 @@ public:
     
     void emitDumbVirtualCall(VM&, CallLinkInfo*);
     
+    // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
     Vector<BytecodeAndMachineOffset>& decodedCodeMapFor(CodeBlock*);
 
     void makeSpaceOnStackForCCall();
@@ -1656,6 +1654,7 @@ protected:
     CodeBlock* m_codeBlock;
     CodeBlock* m_baselineCodeBlock;
 
+    // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
     HashMap<CodeBlock*, Vector<BytecodeAndMachineOffset>> m_decodedCodeMaps;
 };
 
index 81d0700..a043a5d 100644
@@ -2307,6 +2307,7 @@ char* JIT_OPERATION operationReallocateButterflyToGrowPropertyStorage(ExecState*
     return reinterpret_cast<char*>(result);
 }
 
+// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 void JIT_OPERATION operationOSRWriteBarrier(ExecState* exec, JSCell* cell)
 {
     VM* vm = &exec->vm();
index b4d8530..c23c40b 100644
@@ -447,6 +447,7 @@ char* JIT_OPERATION operationReallocateButterflyToHavePropertyStorageWithInitial
 char* JIT_OPERATION operationReallocateButterflyToGrowPropertyStorage(ExecState*, JSObject*, size_t newSize) WTF_INTERNAL;
 
 void JIT_OPERATION operationWriteBarrierSlowPath(ExecState*, JSCell*);
+// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 void JIT_OPERATION operationOSRWriteBarrier(ExecState*, JSCell*);
 
 void JIT_OPERATION operationExceptionFuzz(ExecState*);
index 733df63..61e6dd4 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -43,7 +43,8 @@ public:
     
     uint64_t* counterAddress() { return &m_counter; }
     uint64_t count() const { return m_counter; }
-    
+    void incCount() { m_counter++; }
+
     JSValue toJS(ExecState*) const;
 
 private:
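incCount() exists because the per-exit counter is now bumped from the C++ exit handler instead of by JIT-emitted arithmetic through counterAddress(). A sketch of the intended call-site shape (recordProfiledExit is a hypothetical name, not code from this patch):

    // Hypothetical call site inside the C++ OSR exit path.
    static void recordProfiledExit(Profiler::OSRExit* profilerExit)
    {
        if (profilerExit)
            profilerExit->incCount(); // was a JITed increment via counterAddress()
    }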
index 978b03b..c24ea5b 100644
@@ -1,7 +1,7 @@
 /*
  *  Copyright (C) 1999-2001 Harri Porten (porten@kde.org)
  *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
- *  Copyright (C) 2003, 2004, 2005, 2007, 2008, 2009, 2012, 2015 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
  *
  *  This library is free software; you can redistribute it and/or
  *  modify it under the terms of the GNU Library General Public
@@ -344,12 +344,9 @@ public:
     uint32_t tag() const;
     int32_t payload() const;
 
-#if !ENABLE(JIT)
-    // This should only be used by the LLInt C Loop interpreter who needs
-    // synthesize JSValue from its "register"s holding tag and payload
-    // values.
+    // This should only be used by the LLInt C Loop interpreter and the OSR exit code,
+    // which need to synthesize a JSValue from the "register"s holding its tag and payload values.
     explicit JSValue(int32_t tag, int32_t payload);
-#endif
 
 #elif USE(JSVALUE64)
     /*
index 046bd48..ff46040 100644
@@ -340,7 +340,7 @@ inline JSValue::JSValue(int i)
     u.asBits.payload = i;
 }
 
-#if !ENABLE(JIT)
+#if USE(JSVALUE32_64)
 inline JSValue::JSValue(int32_t tag, int32_t payload)
 {
     u.asBits.tag = tag;
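Widening the guard from !ENABLE(JIT) to USE(JSVALUE32_64) lets the C++ exit handler, which now runs in JIT builds too, rebuild boxed values on 32-bit value representations. An illustrative use (assumes a JSVALUE32_64 build; the recovered words are hypothetical inputs):

    // Sketch: reassemble a JSValue from tag and payload words that the exit
    // handler recovered from the frame (JSVALUE32_64 configurations only).
    static JSValue reconstructed(int32_t recoveredTag, int32_t recoveredPayload)
    {
        return JSValue(recoveredTag, recoveredPayload);
    }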
index 28a3336..6bba992 100644
@@ -571,7 +571,6 @@ public:
     void* targetMachinePCForThrow;
     Instruction* targetInterpreterPCForThrow;
     uint32_t osrExitIndex;
-    void* osrExitJumpDestination;
     bool isExecutingInRegExpJIT { false };
 
     // The threading protocol here is as follows: