Rolling out r221832: Regresses Speedometer by ~4% and Dromaeo CSS YUI by ~20%.
author	mark.lam@apple.com <mark.lam@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Thu, 14 Sep 2017 04:21:05 +0000 (04:21 +0000)
committer	mark.lam@apple.com <mark.lam@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Thu, 14 Sep 2017 04:21:05 +0000 (04:21 +0000)
https://bugs.webkit.org/show_bug.cgi?id=176888
<rdar://problem/34381832>

Not reviewed.

JSTests:

* stress/op_mod-ConstVar.js:
* stress/op_mod-VarConst.js:
* stress/op_mod-VarVar.js:

Source/JavaScriptCore:

* JavaScriptCore.xcodeproj/project.pbxproj:
* assembler/MacroAssembler.cpp:
(JSC::stdFunctionCallback):
* assembler/MacroAssemblerPrinter.cpp:
(JSC::Printer::printCallback):
* assembler/ProbeContext.h:
(JSC::Probe:: const):
(JSC::Probe::Context::Context):
(JSC::Probe::Context::gpr):
(JSC::Probe::Context::spr):
(JSC::Probe::Context::fpr):
(JSC::Probe::Context::gprName):
(JSC::Probe::Context::sprName):
(JSC::Probe::Context::fprName):
(JSC::Probe::Context::pc):
(JSC::Probe::Context::fp):
(JSC::Probe::Context::sp):
(JSC::Probe::CPUState::gpr const): Deleted.
(JSC::Probe::CPUState::spr const): Deleted.
(JSC::Probe::Context::arg): Deleted.
(JSC::Probe::Context::gpr const): Deleted.
(JSC::Probe::Context::spr const): Deleted.
(JSC::Probe::Context::fpr const): Deleted.
* assembler/ProbeFrame.h: Removed.
* assembler/ProbeStack.cpp:
(JSC::Probe::Page::Page):
* assembler/ProbeStack.h:
(JSC::Probe::Page::get):
(JSC::Probe::Page::set):
(JSC::Probe::Page::physicalAddressFor):
(JSC::Probe::Stack::lowWatermark):
(JSC::Probe::Stack::get):
(JSC::Probe::Stack::set):
* bytecode/ArithProfile.cpp:
* bytecode/ArithProfile.h:
* bytecode/ArrayProfile.h:
(JSC::ArrayProfile::observeArrayMode): Deleted.
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize): Deleted.
* bytecode/CodeBlock.h:
(JSC::CodeBlock::addressOfOSRExitCounter):
* bytecode/ExecutionCounter.h:
(JSC::ExecutionCounter::hasCrossedThreshold const): Deleted.
(JSC::ExecutionCounter::setNewThresholdForOSRExit): Deleted.
* bytecode/MethodOfGettingAValueProfile.cpp:
(JSC::MethodOfGettingAValueProfile::reportValue): Deleted.
* bytecode/MethodOfGettingAValueProfile.h:
* dfg/DFGDriver.cpp:
(JSC::DFG::compileImpl):
* dfg/DFGJITCode.cpp:
(JSC::DFG::JITCode::findPC):
* dfg/DFGJITCode.h:
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::linkOSRExits):
(JSC::DFG::JITCompiler::link):
* dfg/DFGOSRExit.cpp:
(JSC::DFG::OSRExit::setPatchableCodeOffset):
(JSC::DFG::OSRExit::getPatchableCodeOffsetAsJump const):
(JSC::DFG::OSRExit::codeLocationForRepatch const):
(JSC::DFG::OSRExit::correctJump):
(JSC::DFG::OSRExit::emitRestoreArguments):
(JSC::DFG::OSRExit::compileOSRExit):
(JSC::DFG::OSRExit::compileExit):
(JSC::DFG::OSRExit::debugOperationPrintSpeculationFailure):
(JSC::DFG::jsValueFor): Deleted.
(JSC::DFG::restoreCalleeSavesFor): Deleted.
(JSC::DFG::saveCalleeSavesFor): Deleted.
(JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer): Deleted.
(JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer): Deleted.
(JSC::DFG::saveOrCopyCalleeSavesFor): Deleted.
(JSC::DFG::createDirectArgumentsDuringExit): Deleted.
(JSC::DFG::createClonedArgumentsDuringExit): Deleted.
(JSC::DFG::emitRestoreArguments): Deleted.
(JSC::DFG::OSRExit::executeOSRExit): Deleted.
(JSC::DFG::reifyInlinedCallFrames): Deleted.
(JSC::DFG::adjustAndJumpToTarget): Deleted.
(JSC::DFG::printOSRExit): Deleted.
* dfg/DFGOSRExit.h:
(JSC::DFG::OSRExitState::OSRExitState): Deleted.
* dfg/DFGOSRExitCompilerCommon.cpp:
* dfg/DFGOSRExitCompilerCommon.h:
* dfg/DFGOperations.cpp:
* dfg/DFGOperations.h:
* dfg/DFGThunks.cpp:
(JSC::DFG::osrExitGenerationThunkGenerator):
(JSC::DFG::osrExitThunkGenerator): Deleted.
* dfg/DFGThunks.h:
* jit/AssemblyHelpers.cpp:
(JSC::AssemblyHelpers::debugCall):
* jit/AssemblyHelpers.h:
* jit/JITOperations.cpp:
* jit/JITOperations.h:
* profiler/ProfilerOSRExit.h:
(JSC::Profiler::OSRExit::incCount): Deleted.
* runtime/JSCJSValue.h:
* runtime/JSCJSValueInlines.h:
* runtime/VM.h:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@222009 268f45cc-cd09-0410-ab3c-d52691b4dbfc

40 files changed:
JSTests/ChangeLog
JSTests/stress/op_mod-ConstVar.js
JSTests/stress/op_mod-VarConst.js
JSTests/stress/op_mod-VarVar.js
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/assembler/MacroAssembler.cpp
Source/JavaScriptCore/assembler/MacroAssemblerPrinter.cpp
Source/JavaScriptCore/assembler/ProbeContext.h
Source/JavaScriptCore/assembler/ProbeFrame.h [deleted file]
Source/JavaScriptCore/assembler/ProbeStack.cpp
Source/JavaScriptCore/assembler/ProbeStack.h
Source/JavaScriptCore/bytecode/ArithProfile.cpp
Source/JavaScriptCore/bytecode/ArithProfile.h
Source/JavaScriptCore/bytecode/ArrayProfile.h
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/bytecode/CodeBlock.h
Source/JavaScriptCore/bytecode/ExecutionCounter.h
Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.cpp
Source/JavaScriptCore/bytecode/MethodOfGettingAValueProfile.h
Source/JavaScriptCore/dfg/DFGDriver.cpp
Source/JavaScriptCore/dfg/DFGJITCode.cpp
Source/JavaScriptCore/dfg/DFGJITCode.h
Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
Source/JavaScriptCore/dfg/DFGOSRExit.cpp
Source/JavaScriptCore/dfg/DFGOSRExit.h
Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.h
Source/JavaScriptCore/dfg/DFGOperations.cpp
Source/JavaScriptCore/dfg/DFGOperations.h
Source/JavaScriptCore/dfg/DFGThunks.cpp
Source/JavaScriptCore/dfg/DFGThunks.h
Source/JavaScriptCore/jit/AssemblyHelpers.cpp
Source/JavaScriptCore/jit/AssemblyHelpers.h
Source/JavaScriptCore/jit/JITOperations.cpp
Source/JavaScriptCore/jit/JITOperations.h
Source/JavaScriptCore/profiler/ProfilerOSRExit.h
Source/JavaScriptCore/runtime/JSCJSValue.h
Source/JavaScriptCore/runtime/JSCJSValueInlines.h
Source/JavaScriptCore/runtime/VM.h

index 94bc66b..0928ac3 100644
@@ -1,3 +1,15 @@
+2017-09-13  Mark Lam  <mark.lam@apple.com>
+
+        Rolling out r221832: Regresses Speedometer by ~4% and Dromaeo CSS YUI by ~20%.
+        https://bugs.webkit.org/show_bug.cgi?id=176888
+        <rdar://problem/34381832>
+
+        Not reviewed.
+
+        * stress/op_mod-ConstVar.js:
+        * stress/op_mod-VarConst.js:
+        * stress/op_mod-VarVar.js:
+
 2017-09-13  Ryan Haddad  <ryanhaddad@apple.com>
 
         Skip 3 op_mod tests on Debug JSC bots.
index 794ef05..489188c 100644
@@ -1,4 +1,4 @@
-//@ if $buildType == "release" then runFTLNoCJIT("--timeoutMultiplier=1.5") else skip end
+//@ runFTLNoCJIT("--timeoutMultiplier=1.5")
 
 // If all goes well, this test module will terminate silently. If not, it will print
 // errors. See binary-op-test.js for debugging options if needed.
index 406e0e5..f03a4d4 100644
@@ -1,4 +1,4 @@
-//@ if $buildType == "release" then runFTLNoCJIT("--timeoutMultiplier=1.5") else skip end
+//@ runFTLNoCJIT("--timeoutMultiplier=1.5")
 
 // If all goes well, this test module will terminate silently. If not, it will print
 // errors. See binary-op-test.js for debugging options if needed.
index 3110733..13436a9 100644
@@ -1,4 +1,4 @@
-//@ if $buildType == "release" then runFTLNoCJIT("--timeoutMultiplier=1.5") else skip end
+//@ runFTLNoCJIT("--timeoutMultiplier=1.5")
 
 // If all goes well, this test module will terminate silently. If not, it will print
 // errors. See binary-op-test.js for debugging options if needed.
index cb15891..0d41c6b 100644
@@ -1,3 +1,109 @@
+2017-09-13  Mark Lam  <mark.lam@apple.com>
+
+        Rolling out r221832: Regresses Speedometer by ~4% and Dromaeo CSS YUI by ~20%.
+        https://bugs.webkit.org/show_bug.cgi?id=176888
+        <rdar://problem/34381832>
+
+        Not reviewed.
+
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * assembler/MacroAssembler.cpp:
+        (JSC::stdFunctionCallback):
+        * assembler/MacroAssemblerPrinter.cpp:
+        (JSC::Printer::printCallback):
+        * assembler/ProbeContext.h:
+        (JSC::Probe:: const):
+        (JSC::Probe::Context::Context):
+        (JSC::Probe::Context::gpr):
+        (JSC::Probe::Context::spr):
+        (JSC::Probe::Context::fpr):
+        (JSC::Probe::Context::gprName):
+        (JSC::Probe::Context::sprName):
+        (JSC::Probe::Context::fprName):
+        (JSC::Probe::Context::pc):
+        (JSC::Probe::Context::fp):
+        (JSC::Probe::Context::sp):
+        (JSC::Probe::CPUState::gpr const): Deleted.
+        (JSC::Probe::CPUState::spr const): Deleted.
+        (JSC::Probe::Context::arg): Deleted.
+        (JSC::Probe::Context::gpr const): Deleted.
+        (JSC::Probe::Context::spr const): Deleted.
+        (JSC::Probe::Context::fpr const): Deleted.
+        * assembler/ProbeFrame.h: Removed.
+        * assembler/ProbeStack.cpp:
+        (JSC::Probe::Page::Page):
+        * assembler/ProbeStack.h:
+        (JSC::Probe::Page::get):
+        (JSC::Probe::Page::set):
+        (JSC::Probe::Page::physicalAddressFor):
+        (JSC::Probe::Stack::lowWatermark):
+        (JSC::Probe::Stack::get):
+        (JSC::Probe::Stack::set):
+        * bytecode/ArithProfile.cpp:
+        * bytecode/ArithProfile.h:
+        * bytecode/ArrayProfile.h:
+        (JSC::ArrayProfile::observeArrayMode): Deleted.
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize): Deleted.
+        * bytecode/CodeBlock.h:
+        (JSC::CodeBlock::addressOfOSRExitCounter):
+        * bytecode/ExecutionCounter.h:
+        (JSC::ExecutionCounter::hasCrossedThreshold const): Deleted.
+        (JSC::ExecutionCounter::setNewThresholdForOSRExit): Deleted.
+        * bytecode/MethodOfGettingAValueProfile.cpp:
+        (JSC::MethodOfGettingAValueProfile::reportValue): Deleted.
+        * bytecode/MethodOfGettingAValueProfile.h:
+        * dfg/DFGDriver.cpp:
+        (JSC::DFG::compileImpl):
+        * dfg/DFGJITCode.cpp:
+        (JSC::DFG::JITCode::findPC):
+        * dfg/DFGJITCode.h:
+        * dfg/DFGJITCompiler.cpp:
+        (JSC::DFG::JITCompiler::linkOSRExits):
+        (JSC::DFG::JITCompiler::link):
+        * dfg/DFGOSRExit.cpp:
+        (JSC::DFG::OSRExit::setPatchableCodeOffset):
+        (JSC::DFG::OSRExit::getPatchableCodeOffsetAsJump const):
+        (JSC::DFG::OSRExit::codeLocationForRepatch const):
+        (JSC::DFG::OSRExit::correctJump):
+        (JSC::DFG::OSRExit::emitRestoreArguments):
+        (JSC::DFG::OSRExit::compileOSRExit):
+        (JSC::DFG::OSRExit::compileExit):
+        (JSC::DFG::OSRExit::debugOperationPrintSpeculationFailure):
+        (JSC::DFG::jsValueFor): Deleted.
+        (JSC::DFG::restoreCalleeSavesFor): Deleted.
+        (JSC::DFG::saveCalleeSavesFor): Deleted.
+        (JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer): Deleted.
+        (JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer): Deleted.
+        (JSC::DFG::saveOrCopyCalleeSavesFor): Deleted.
+        (JSC::DFG::createDirectArgumentsDuringExit): Deleted.
+        (JSC::DFG::createClonedArgumentsDuringExit): Deleted.
+        (JSC::DFG::emitRestoreArguments): Deleted.
+        (JSC::DFG::OSRExit::executeOSRExit): Deleted.
+        (JSC::DFG::reifyInlinedCallFrames): Deleted.
+        (JSC::DFG::adjustAndJumpToTarget): Deleted.
+        (JSC::DFG::printOSRExit): Deleted.
+        * dfg/DFGOSRExit.h:
+        (JSC::DFG::OSRExitState::OSRExitState): Deleted.
+        * dfg/DFGOSRExitCompilerCommon.cpp:
+        * dfg/DFGOSRExitCompilerCommon.h:
+        * dfg/DFGOperations.cpp:
+        * dfg/DFGOperations.h:
+        * dfg/DFGThunks.cpp:
+        (JSC::DFG::osrExitGenerationThunkGenerator):
+        (JSC::DFG::osrExitThunkGenerator): Deleted.
+        * dfg/DFGThunks.h:
+        * jit/AssemblyHelpers.cpp:
+        (JSC::AssemblyHelpers::debugCall):
+        * jit/AssemblyHelpers.h:
+        * jit/JITOperations.cpp:
+        * jit/JITOperations.h:
+        * profiler/ProfilerOSRExit.h:
+        (JSC::Profiler::OSRExit::incCount): Deleted.
+        * runtime/JSCJSValue.h:
+        * runtime/JSCJSValueInlines.h:
+        * runtime/VM.h:
+
 2017-09-13  Yusuke Suzuki  <utatane.tea@gmail.com>
 
         [JSC] Move class/struct used in other class' member out of anonymous namespace
index 8e350c3..3df622c 100644
                FE10AAEC1F44D545009DEDC5 /* ProbeStack.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE10AAE91F44D510009DEDC5 /* ProbeStack.cpp */; };
                FE10AAEE1F44D954009DEDC5 /* ProbeContext.h in Headers */ = {isa = PBXBuildFile; fileRef = FE10AAED1F44D946009DEDC5 /* ProbeContext.h */; settings = {ATTRIBUTES = (Private, ); }; };
                FE10AAF41F468396009DEDC5 /* ProbeContext.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */; };
-               FE10AAFF1F4E38E5009DEDC5 /* ProbeFrame.h in Headers */ = {isa = PBXBuildFile; fileRef = FE10AAFE1F4E38DA009DEDC5 /* ProbeFrame.h */; };
                FE1220271BE7F58C0039E6F2 /* JITAddGenerator.h in Headers */ = {isa = PBXBuildFile; fileRef = FE1220261BE7F5640039E6F2 /* JITAddGenerator.h */; };
                FE1220281BE7F5910039E6F2 /* JITAddGenerator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE1220251BE7F5640039E6F2 /* JITAddGenerator.cpp */; };
                FE187A011BFBE55E0038BBCA /* JITMulGenerator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE1879FF1BFBC73C0038BBCA /* JITMulGenerator.cpp */; };
                FE10AAEA1F44D512009DEDC5 /* ProbeStack.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProbeStack.h; sourceTree = "<group>"; };
                FE10AAED1F44D946009DEDC5 /* ProbeContext.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProbeContext.h; sourceTree = "<group>"; };
                FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ProbeContext.cpp; sourceTree = "<group>"; };
-               FE10AAFE1F4E38DA009DEDC5 /* ProbeFrame.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProbeFrame.h; sourceTree = "<group>"; };
                FE1220251BE7F5640039E6F2 /* JITAddGenerator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITAddGenerator.cpp; sourceTree = "<group>"; };
                FE1220261BE7F5640039E6F2 /* JITAddGenerator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITAddGenerator.h; sourceTree = "<group>"; };
                FE1879FF1BFBC73C0038BBCA /* JITMulGenerator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JITMulGenerator.cpp; sourceTree = "<group>"; };
                                FE63DD531EA9B60E00103A69 /* Printer.h */,
                                FE10AAF31F46826D009DEDC5 /* ProbeContext.cpp */,
                                FE10AAED1F44D946009DEDC5 /* ProbeContext.h */,
-                               FE10AAFE1F4E38DA009DEDC5 /* ProbeFrame.h */,
                                FE10AAE91F44D510009DEDC5 /* ProbeStack.cpp */,
                                FE10AAEA1F44D512009DEDC5 /* ProbeStack.h */,
                                FE533CA01F217C310016A1FE /* testmasm.cpp */,
                                AD2FCC1D1DB59CB200B3E736 /* WebAssemblyModulePrototype.lut.h in Headers */,
                                AD4937C81DDD0AAE0077C807 /* WebAssemblyModuleRecord.h in Headers */,
                                AD2FCC2D1DB838FD00B3E736 /* WebAssemblyPrototype.h in Headers */,
-                               FE10AAFF1F4E38E5009DEDC5 /* ProbeFrame.h in Headers */,
                                AD2FCBF91DB58DAD00B3E736 /* WebAssemblyRuntimeErrorConstructor.h in Headers */,
                                AD2FCC1E1DB59CB200B3E736 /* WebAssemblyRuntimeErrorConstructor.lut.h in Headers */,
                                AD2FCBFB1DB58DAD00B3E736 /* WebAssemblyRuntimeErrorPrototype.h in Headers */,
index 82b25c8..d19b6c5 100644
@@ -38,7 +38,7 @@ const double MacroAssembler::twoToThe32 = (double)0x100000000ull;
 #if ENABLE(MASM_PROBE)
 static void stdFunctionCallback(Probe::Context& context)
 {
-    auto func = context.arg<const std::function<void(Probe::Context&)>*>();
+    auto func = static_cast<const std::function<void(Probe::Context&)>*>(context.arg);
     (*func)(context);
 }
     
index 57ed63e..443f77f 100644
@@ -175,7 +175,7 @@ void printMemory(PrintStream& out, Context& context)
 void printCallback(Probe::Context& probeContext)
 {
     auto& out = WTF::dataFile();
-    PrintRecordList& list = *probeContext.arg<PrintRecordList*>();
+    PrintRecordList& list = *reinterpret_cast<PrintRecordList*>(probeContext.arg);
     for (size_t i = 0; i < list.size(); i++) {
         auto& record = list[i];
         Context context(probeContext, record.data);
index 0e52034..caa52ba 100644
@@ -45,8 +45,14 @@ struct CPUState {
     inline uintptr_t& spr(SPRegisterID);
     inline double& fpr(FPRegisterID);
 
-    template<typename T> T gpr(RegisterID) const;
-    template<typename T> T spr(SPRegisterID) const;
+    template<typename T, typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
+    T gpr(RegisterID) const;
+    template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type* = nullptr>
+    T gpr(RegisterID) const;
+    template<typename T, typename std::enable_if<std::is_integral<T>::value>::type* = nullptr>
+    T spr(SPRegisterID) const;
+    template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type* = nullptr>
+    T spr(SPRegisterID) const;
     template<typename T> T fpr(FPRegisterID) const;
 
     void*& pc();
@@ -79,24 +85,32 @@ inline double& CPUState::fpr(FPRegisterID id)
     return fprs[id];
 }
 
-template<typename T>
+template<typename T, typename std::enable_if<std::is_integral<T>::value>::type*>
 T CPUState::gpr(RegisterID id) const
 {
     CPUState* cpu = const_cast<CPUState*>(this);
-    auto& from = cpu->gpr(id);
-    typename std::remove_const<T>::type to { };
-    std::memcpy(&to, &from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
-    return to;
+    return static_cast<T>(cpu->gpr(id));
 }
 
-template<typename T>
+template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type*>
+T CPUState::gpr(RegisterID id) const
+{
+    CPUState* cpu = const_cast<CPUState*>(this);
+    return reinterpret_cast<T>(cpu->gpr(id));
+}
+
+template<typename T, typename std::enable_if<std::is_integral<T>::value>::type*>
 T CPUState::spr(SPRegisterID id) const
 {
     CPUState* cpu = const_cast<CPUState*>(this);
-    auto& from = cpu->spr(id);
-    typename std::remove_const<T>::type to { };
-    std::memcpy(&to, &from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
-    return to;
+    return static_cast<T>(cpu->spr(id));
+}
+
+template<typename T, typename std::enable_if<std::is_pointer<T>::value>::type*>
+T CPUState::spr(SPRegisterID id) const
+{
+    CPUState* cpu = const_cast<CPUState*>(this);
+    return reinterpret_cast<T>(cpu->spr(id));
 }
 
 template<typename T>
@@ -191,31 +205,25 @@ public:
     using FPRegisterID = MacroAssembler::FPRegisterID;
 
     Context(State* state)
-        : cpu(state->cpu)
-        , m_state(state)
+        : m_state(state)
+        , arg(state->arg)
+        , cpu(state->cpu)
     { }
 
-    template<typename T>
-    T arg() { return reinterpret_cast<T>(m_state->arg); }
-
-    uintptr_t& gpr(RegisterID id) { return cpu.gpr(id); }
-    uintptr_t& spr(SPRegisterID id) { return cpu.spr(id); }
-    double& fpr(FPRegisterID id) { return cpu.fpr(id); }
-    const char* gprName(RegisterID id) { return cpu.gprName(id); }
-    const char* sprName(SPRegisterID id) { return cpu.sprName(id); }
-    const char* fprName(FPRegisterID id) { return cpu.fprName(id); }
+    uintptr_t& gpr(RegisterID id) { return m_state->cpu.gpr(id); }
+    uintptr_t& spr(SPRegisterID id) { return m_state->cpu.spr(id); }
+    double& fpr(FPRegisterID id) { return m_state->cpu.fpr(id); }
+    const char* gprName(RegisterID id) { return m_state->cpu.gprName(id); }
+    const char* sprName(SPRegisterID id) { return m_state->cpu.sprName(id); }
+    const char* fprName(FPRegisterID id) { return m_state->cpu.fprName(id); }
 
-    template<typename T> T gpr(RegisterID id) const { return cpu.gpr<T>(id); }
-    template<typename T> T spr(SPRegisterID id) const { return cpu.spr<T>(id); }
-    template<typename T> T fpr(FPRegisterID id) const { return cpu.fpr<T>(id); }
+    void*& pc() { return m_state->cpu.pc(); }
+    void*& fp() { return m_state->cpu.fp(); }
+    void*& sp() { return m_state->cpu.sp(); }
 
-    void*& pc() { return cpu.pc(); }
-    void*& fp() { return cpu.fp(); }
-    void*& sp() { return cpu.sp(); }
-
-    template<typename T> T pc() { return cpu.pc<T>(); }
-    template<typename T> T fp() { return cpu.fp<T>(); }
-    template<typename T> T sp() { return cpu.sp<T>(); }
+    template<typename T> T pc() { return m_state->cpu.pc<T>(); }
+    template<typename T> T fp() { return m_state->cpu.fp<T>(); }
+    template<typename T> T sp() { return m_state->cpu.sp<T>(); }
 
     Stack& stack()
     {
@@ -226,10 +234,13 @@ public:
     bool hasWritesToFlush() { return m_stack.hasWritesToFlush(); }
     Stack* releaseStack() { return new Stack(WTFMove(m_stack)); }
 
+private:
+    State* m_state;
+public:
+    void* arg;
     CPUState& cpu;
 
 private:
-    State* m_state;
     Stack m_stack;
 
     friend JS_EXPORT_PRIVATE void* probeStateForContext(Context&); // Not for general use. This should only be for writing tests.
diff --git a/Source/JavaScriptCore/assembler/ProbeFrame.h b/Source/JavaScriptCore/assembler/ProbeFrame.h
deleted file mode 100644
index cab368d..0000000
+++ /dev/null
@@ -1,94 +0,0 @@
-/*
- * Copyright (C) 2017 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#pragma once
-
-#if ENABLE(MASM_PROBE)
-
-#include "CallFrame.h"
-#include "ProbeStack.h"
-
-namespace JSC {
-namespace Probe {
-
-class Frame {
-public:
-    Frame(void* frameBase, Stack& stack)
-        : m_frameBase { reinterpret_cast<uint8_t*>(frameBase) }
-        , m_stack { stack }
-    { }
-
-    template<typename T = JSValue>
-    T argument(int argument)
-    {
-        return get<T>(CallFrame::argumentOffset(argument) * sizeof(Register));
-    }
-    template<typename T = JSValue>
-    T operand(int operand)
-    {
-        return get<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register));
-    }
-    template<typename T = JSValue>
-    T operand(int operand, ptrdiff_t offset)
-    {
-        return get<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register) + offset);
-    }
-
-    template<typename T>
-    void setArgument(int argument, T value)
-    {
-        return set<T>(CallFrame::argumentOffset(argument) * sizeof(Register), value);
-    }
-    template<typename T>
-    void setOperand(int operand, T value)
-    {
-        set<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register), value);
-    }
-    template<typename T>
-    void setOperand(int operand, ptrdiff_t offset, T value)
-    {
-        set<T>(static_cast<VirtualRegister>(operand).offset() * sizeof(Register) + offset, value);
-    }
-
-    template<typename T = JSValue>
-    T get(ptrdiff_t offset)
-    {
-        return m_stack.get<T>(m_frameBase + offset);
-    }
-    template<typename T>
-    void set(ptrdiff_t offset, T value)
-    {
-        m_stack.set<T>(m_frameBase + offset, value);
-    }
-
-private:
-    uint8_t* m_frameBase;
-    Stack& m_stack;
-};
-
-} // namespace Probe
-} // namespace JSC
-
-#endif // ENABLE(MASM_PROBE)
index da7b239..37484b3 100644
@@ -35,7 +35,6 @@ namespace Probe {
 
 Page::Page(void* baseAddress)
     : m_baseLogicalAddress(baseAddress)
-    , m_physicalAddressOffset(reinterpret_cast<uint8_t*>(&m_buffer) - reinterpret_cast<uint8_t*>(baseAddress))
 {
     memcpy(&m_buffer, baseAddress, s_pageSize);
 }
index 8ff8277..593da33 100644
@@ -56,28 +56,14 @@ public:
     template<typename T>
     T get(void* logicalAddress)
     {
-        void* from = physicalAddressFor(logicalAddress);
-        typename std::remove_const<T>::type to { };
-        std::memcpy(&to, from, sizeof(to)); // Use std::memcpy to avoid strict aliasing issues.
-        return to;
-    }
-    template<typename T>
-    T get(void* logicalBaseAddress, ptrdiff_t offset)
-    {
-        return get<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset);
+        return *physicalAddressFor<T*>(logicalAddress);
     }
 
     template<typename T>
     void set(void* logicalAddress, T value)
     {
         m_dirtyBits |= dirtyBitFor(logicalAddress);
-        void* to = physicalAddressFor(logicalAddress);
-        std::memcpy(to, &value, sizeof(T)); // Use std::memcpy to avoid strict aliasing issues.
-    }
-    template<typename T>
-    void set(void* logicalBaseAddress, ptrdiff_t offset, T value)
-    {
-        set<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset, value);
+        *physicalAddressFor<T*>(logicalAddress) = value;
     }
 
     bool hasWritesToFlush() const { return !!m_dirtyBits; }
@@ -94,16 +80,18 @@ private:
         return static_cast<uintptr_t>(1) << (offset >> s_chunkSizeShift);
     }
 
-    void* physicalAddressFor(void* logicalAddress)
+    template<typename T, typename = typename std::enable_if<std::is_pointer<T>::value>::type>
+    T physicalAddressFor(void* logicalAddress)
     {
-        return reinterpret_cast<uint8_t*>(logicalAddress) + m_physicalAddressOffset;
+        uintptr_t offset = reinterpret_cast<uintptr_t>(logicalAddress) & s_pageMask;
+        void* physicalAddress = reinterpret_cast<uint8_t*>(&m_buffer) + offset;
+        return reinterpret_cast<T>(physicalAddress);
     }
 
     void flushWrites();
 
     void* m_baseLogicalAddress { nullptr };
     uintptr_t m_dirtyBits { 0 };
-    ptrdiff_t m_physicalAddressOffset;
 
     static constexpr size_t s_pageSize = 1024;
     static constexpr uintptr_t s_pageMask = s_pageSize - 1;
@@ -132,39 +120,40 @@ public:
     { }
     Stack(Stack&& other);
 
-    void* lowWatermark()
-    {
-        // We use the chunkAddress for the low watermark because we'll be doing write backs
-        // to the stack in increments of chunks. Hence, we'll treat the lowest address of
-        // the chunk as the low watermark of any given set address.
-        return Page::chunkAddressFor(m_lowWatermark);
-    }
+    void* lowWatermark() { return m_lowWatermark; }
 
     template<typename T>
-    T get(void* address)
+    typename std::enable_if<!std::is_same<double, typename std::remove_cv<T>::type>::value, T>::type get(void* address)
     {
         Page* page = pageFor(address);
         return page->get<T>(address);
     }
-    template<typename T>
-    T get(void* logicalBaseAddress, ptrdiff_t offset)
-    {
-        return get<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset);
-    }
 
-    template<typename T>
+    template<typename T, typename = typename std::enable_if<!std::is_same<double, typename std::remove_cv<T>::type>::value>::type>
     void set(void* address, T value)
     {
         Page* page = pageFor(address);
         page->set<T>(address, value);
 
-        if (address < m_lowWatermark)
-            m_lowWatermark = address;
+        // We use the chunkAddress for the low watermark because we'll be doing write backs
+        // to the stack in increments of chunks. Hence, we'll treat the lowest address of
+        // the chunk as the low watermark of any given set address.
+        void* chunkAddress = Page::chunkAddressFor(address);
+        if (chunkAddress < m_lowWatermark)
+            m_lowWatermark = chunkAddress;
     }
+
     template<typename T>
-    void set(void* logicalBaseAddress, ptrdiff_t offset, T value)
+    typename std::enable_if<std::is_same<double, typename std::remove_cv<T>::type>::value, T>::type get(void* address)
+    {
+        Page* page = pageFor(address);
+        return bitwise_cast<double>(page->get<uint64_t>(address));
+    }
+
+    template<typename T, typename = typename std::enable_if<std::is_same<double, typename std::remove_cv<T>::type>::value>::type>
+    void set(void* address, double value)
     {
-        set<T>(reinterpret_cast<uint8_t*>(logicalBaseAddress) + offset, value);
+        set<uint64_t>(address, bitwise_cast<uint64_t>(value));
     }
 
     JS_EXPORT_PRIVATE Page* ensurePageFor(void* address);
index f36505a..1fa7c79 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -32,8 +32,6 @@
 namespace JSC {
 
 #if ENABLE(JIT)
-// FIXME: This is being supplanted by observeResult(). Remove this one
-// https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
 void ArithProfile::emitObserveResult(CCallHelpers& jit, JSValueRegs regs, TagRegistersMode mode)
 {
     if (!shouldEmitSetDouble() && !shouldEmitSetNonNumber())
index 6213e79..40fad1b 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -211,8 +211,6 @@ public:
 #if ENABLE(JIT)    
     // Sets (Int32Overflow | Int52Overflow | NonNegZeroDouble | NegZeroDouble) if it sees a
     // double. Sets NonNumber if it sees a non-number.
-    // FIXME: This is being supplanted by observeResult(). Remove this one
-    // https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
     void emitObserveResult(CCallHelpers&, JSValueRegs, TagRegistersMode = HaveTagRegisters);
     
     // Sets (Int32Overflow | Int52Overflow | NonNegZeroDouble | NegZeroDouble).
index c10c5e2..68c11a5 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2013 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -218,7 +218,6 @@ public:
     void computeUpdatedPrediction(const ConcurrentJSLocker&, CodeBlock*);
     void computeUpdatedPrediction(const ConcurrentJSLocker&, CodeBlock*, Structure* lastSeenStructure);
     
-    void observeArrayMode(ArrayModes mode) { m_observedArrayModes |= mode; }
     ArrayModes observedArrayModes(const ConcurrentJSLocker&) const { return m_observedArrayModes; }
     bool mayInterceptIndexedAccesses(const ConcurrentJSLocker&) const { return m_mayInterceptIndexedAccesses; }
     
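The `observeArrayMode()` accessor removed in this hunk was a one-line bitwise-OR accumulator, the usual cheap-profiling pattern: each observed mode sets a bit, and repeated observations are idempotent. A simplified sketch (the real class guards reads with a `ConcurrentJSLocker`):

```cpp
#include <cstdint>

using ArrayModes = uint32_t;

// Simplified stand-in for JSC's ArrayProfile; locking omitted.
struct ArrayProfileSketch {
    ArrayModes m_observedArrayModes { 0 };

    // Accumulate every mode ever seen; OR-ing the same bit twice is a no-op.
    void observeArrayMode(ArrayModes mode) { m_observedArrayModes |= mode; }
    ArrayModes observedArrayModes() const { return m_observedArrayModes; }
};
```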
index bd64bf0..68defde 100644
@@ -2320,53 +2320,6 @@ bool CodeBlock::checkIfOptimizationThresholdReached()
     return m_jitExecuteCounter.checkIfThresholdCrossedAndSet(this);
 }
 
-auto CodeBlock::updateOSRExitCounterAndCheckIfNeedToReoptimize(DFG::OSRExitState& exitState) -> OptimizeAction
-{
-    DFG::OSRExitBase& exit = exitState.exit;
-    if (!exitKindMayJettison(exit.m_kind)) {
-        // FIXME: We may want to notice that we're frequently exiting
-        // at an op_catch that we didn't compile an entrypoint for, and
-        // then trigger a reoptimization of this CodeBlock:
-        // https://bugs.webkit.org/show_bug.cgi?id=175842
-        return OptimizeAction::None;
-    }
-
-    exit.m_count++;
-    m_osrExitCounter++;
-
-    CodeBlock* baselineCodeBlock = exitState.baselineCodeBlock;
-    ASSERT(baselineCodeBlock == baselineAlternative());
-    if (UNLIKELY(baselineCodeBlock->jitExecuteCounter().hasCrossedThreshold()))
-        return OptimizeAction::ReoptimizeNow;
-
-    // We want to figure out if there's a possibility that we're in a loop. For the outermost
-    // code block in the inline stack, we handle this appropriately by having the loop OSR trigger
-    // check the exit count of the replacement of the CodeBlock from which we are OSRing. The
-    // problem is the inlined functions, which might also have loops, but whose baseline versions
-    // don't know where to look for the exit count. Figure out if those loops are severe enough
-    // that we had tried to OSR enter. If so, then we should use the loop reoptimization trigger.
-    // Otherwise, we should use the normal reoptimization trigger.
-
-    bool didTryToEnterInLoop = false;
-    for (InlineCallFrame* inlineCallFrame = exit.m_codeOrigin.inlineCallFrame; inlineCallFrame; inlineCallFrame = inlineCallFrame->directCaller.inlineCallFrame) {
-        if (inlineCallFrame->baselineCodeBlock->ownerScriptExecutable()->didTryToEnterInLoop()) {
-            didTryToEnterInLoop = true;
-            break;
-        }
-    }
-
-    uint32_t exitCountThreshold = didTryToEnterInLoop
-        ? exitCountThresholdForReoptimizationFromLoop()
-        : exitCountThresholdForReoptimization();
-
-    if (m_osrExitCounter > exitCountThreshold)
-        return OptimizeAction::ReoptimizeNow;
-
-    // Too few fails. Adjust the execution counter such that the target is to only optimize after a while.
-    baselineCodeBlock->m_jitExecuteCounter.setNewThresholdForOSRExit(exitState.activeThreshold, exitState.memoryUsageAdjustedThreshold);
-    return OptimizeAction::None;
-}
-
 void CodeBlock::optimizeNextInvocation()
 {
     if (Options::verboseOSR())
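The removed `updateOSRExitCounterAndCheckIfNeedToReoptimize()` chose between two exit-count thresholds, depending on whether any inlined frame's loop had tried to OSR enter, and requested reoptimization once the accumulated exit count crossed the chosen one. A hedged sketch of just that decision (threshold values are illustrative; the real ones come from `exitCountThresholdForReoptimization*()` and memory-usage heuristics):

```cpp
#include <cstdint>

enum class OptimizeAction { None, ReoptimizeNow };

// Sketch of the threshold selection removed above. The constants are
// placeholders, not JSC's tuned values.
inline OptimizeAction checkReoptimize(uint32_t osrExitCount, bool didTryToEnterInLoop)
{
    const uint32_t loopThreshold = 600;   // illustrative
    const uint32_t normalThreshold = 100; // illustrative
    uint32_t threshold = didTryToEnterInLoop ? loopThreshold : normalThreshold;
    // Loopy code tolerates more exits before jettisoning the DFG code.
    return osrExitCount > threshold ? OptimizeAction::ReoptimizeNow : OptimizeAction::None;
}
```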
index 65a9613..eb04699 100644
 
 namespace JSC {
 
-namespace DFG {
-struct OSRExitState;
-} // namespace DFG
-
 class BytecodeLivenessAnalysis;
 class CodeBlockSet;
 class ExecState;
@@ -766,10 +762,8 @@ public:
 
     void countOSRExit() { m_osrExitCounter++; }
 
-    enum class OptimizeAction { None, ReoptimizeNow };
-    OptimizeAction updateOSRExitCounterAndCheckIfNeedToReoptimize(DFG::OSRExitState&);
+    uint32_t* addressOfOSRExitCounter() { return &m_osrExitCounter; }
 
-    // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
     static ptrdiff_t offsetOfOSRExitCounter() { return OBJECT_OFFSETOF(CodeBlock, m_osrExitCounter); }
 
     uint32_t adjustedExitCountThreshold(uint32_t desiredThreshold);
index c971f0a..f78a912 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2014 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -41,7 +41,6 @@ enum CountingVariant {
 double applyMemoryUsageHeuristics(int32_t value, CodeBlock*);
 int32_t applyMemoryUsageHeuristicsAndConvertToInt(int32_t value, CodeBlock*);
 
-// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 inline int32_t formattedTotalExecutionCount(float value)
 {
     union {
@@ -58,19 +57,11 @@ public:
     ExecutionCounter();
     void forceSlowPathConcurrently(); // If you use this, checkIfThresholdCrossedAndSet() may still return false.
     bool checkIfThresholdCrossedAndSet(CodeBlock*);
-    bool hasCrossedThreshold() const { return m_counter >= 0; }
     void setNewThreshold(int32_t threshold, CodeBlock*);
     void deferIndefinitely();
     double count() const { return static_cast<double>(m_totalCount) + m_counter; }
     void dump(PrintStream&) const;
     
-    void setNewThresholdForOSRExit(uint32_t activeThreshold, double memoryUsageAdjustedThreshold)
-    {
-        m_activeThreshold = activeThreshold;
-        m_counter = static_cast<int32_t>(-memoryUsageAdjustedThreshold);
-        m_totalCount = memoryUsageAdjustedThreshold;
-    }
-
     static int32_t maximumExecutionCountsBetweenCheckpoints()
     {
         switch (countingVariant) {
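The removed `hasCrossedThreshold()` and `setNewThresholdForOSRExit()` rely on ExecutionCounter's negative-counter convention: the counter is seeded to `-threshold` and incremented per execution, so "crossed" is simply `m_counter >= 0`. A minimal sketch of that convention (the `activeThreshold` bookkeeping and CodeBlock coupling are omitted):

```cpp
#include <cstdint>

// Simplified stand-in for JSC's ExecutionCounter.
struct CounterSketch {
    int32_t m_counter { 0 };
    double m_totalCount { 0 };

    // Seed the counter below zero; it climbs toward zero as code runs.
    void setNewThresholdForOSRExit(double memoryUsageAdjustedThreshold)
    {
        m_counter = static_cast<int32_t>(-memoryUsageAdjustedThreshold);
        m_totalCount = memoryUsageAdjustedThreshold;
    }

    void countExecution() { m_counter++; }

    // Crossing the threshold is just a sign check.
    bool hasCrossedThreshold() const { return m_counter >= 0; }
};
```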
index acd3078..f479e5f 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2013, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -46,8 +46,6 @@ MethodOfGettingAValueProfile MethodOfGettingAValueProfile::fromLazyOperand(
     return result;
 }
 
-// FIXME: This is being supplanted by reportValue(). Remove this one
-// https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
 void MethodOfGettingAValueProfile::emitReportValue(CCallHelpers& jit, JSValueRegs regs) const
 {
     switch (m_kind) {
@@ -76,34 +74,6 @@ void MethodOfGettingAValueProfile::emitReportValue(CCallHelpers& jit, JSValueReg
     RELEASE_ASSERT_NOT_REACHED();
 }
 
-void MethodOfGettingAValueProfile::reportValue(JSValue value)
-{
-    switch (m_kind) {
-    case None:
-        return;
-
-    case Ready:
-        *u.profile->specFailBucket(0) = JSValue::encode(value);
-        return;
-
-    case LazyOperand: {
-        LazyOperandValueProfileKey key(u.lazyOperand.bytecodeOffset, VirtualRegister(u.lazyOperand.operand));
-
-        ConcurrentJSLocker locker(u.lazyOperand.codeBlock->m_lock);
-        LazyOperandValueProfile* profile =
-            u.lazyOperand.codeBlock->lazyOperandValueProfiles().add(locker, key);
-        *profile->specFailBucket(0) = JSValue::encode(value);
-        return;
-    }
-
-    case ArithProfileReady: {
-        u.arithProfile->observeResult(value);
-        return;
-    } }
-
-    RELEASE_ASSERT_NOT_REACHED();
-}
-
 } // namespace JSC
 
 #endif // ENABLE(DFG_JIT)
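The removed `reportValue()` wrote the failing value into a profile's "spec fail bucket": value profiles keep a small array of observed values, and slot 0 records the value that defeated a speculation. A toy sketch of that bucket pattern (a plain integer stands in for `EncodedJSValue`, and the bucket count is reduced to one):

```cpp
#include <cstdint>

using EncodedValue = uint64_t; // stands in for EncodedJSValue

// Simplified stand-in for a JSC value profile's failure buckets.
struct ValueProfileSketch {
    EncodedValue m_buckets[1] { 0 };

    // Slot 0 holds the most recent speculation-defeating value.
    EncodedValue* specFailBucket(unsigned i) { return &m_buckets[i]; }
};
```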
index f475dad..6ed743e 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -70,12 +70,9 @@ public:
         CodeBlock*, const LazyOperandValueProfileKey&);
     
     explicit operator bool() const { return m_kind != None; }
-
-    // FIXME: emitReportValue is being supplanted by reportValue(). Remove this one
-    // https://bugs.webkit.org/show_bug.cgi?id=175145 has been fixed.
+    
     void emitReportValue(CCallHelpers&, JSValueRegs) const;
-    void reportValue(JSValue);
-
+    
 private:
     enum Kind {
         None,
index 7b6d4d6..2149e6c 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2014, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -89,7 +89,7 @@ static CompilationResult compileImpl(
     
     // Make sure that any stubs that the DFG is going to use are initialized. We want to
     // make sure that all JIT code generation does finalization on the main thread.
-    vm.getCTIStub(osrExitThunkGenerator);
+    vm.getCTIStub(osrExitGenerationThunkGenerator);
     vm.getCTIStub(throwExceptionFromCallSlowPathGenerator);
     vm.getCTIStub(linkCallThunkGenerator);
     vm.getCTIStub(linkPolymorphicCallThunkGenerator);
index c02cd0d..67c33f0 100644
@@ -225,6 +225,18 @@ void JITCode::validateReferences(const TrackedReferences& trackedReferences)
     minifiedDFG.validateReferences(trackedReferences);
 }
 
+std::optional<CodeOrigin> JITCode::findPC(CodeBlock*, void* pc)
+{
+    for (OSRExit& exit : osrExit) {
+        if (ExecutableMemoryHandle* handle = exit.m_code.executableMemory()) {
+            if (handle->start() <= pc && pc < handle->end())
+                return std::optional<CodeOrigin>(exit.m_codeOriginForExitProfile);
+        }
+    }
+
+    return std::nullopt;
+}
+
 void JITCode::finalizeOSREntrypoints()
 {
     auto comparator = [] (const auto& a, const auto& b) {
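The restored `JITCode::findPC()` above is a linear range scan: each compiled OSR exit owns a `[start, end)` span of executable memory, and a pc is attributed to the first span containing it. The same logic, with simplified stand-in types (an `int` replaces `CodeOrigin`, a struct replaces `OSRExit`/`ExecutableMemoryHandle`):

```cpp
#include <cstdint>
#include <optional>
#include <vector>

struct SpanSketch {
    const char* start;
    const char* end;
    int origin; // stands in for CodeOrigin
};

// Attribute pc to the first half-open span that contains it.
inline std::optional<int> findPCSketch(const std::vector<SpanSketch>& exits, const void* pc)
{
    auto p = reinterpret_cast<uintptr_t>(pc);
    for (const SpanSketch& exit : exits) {
        if (reinterpret_cast<uintptr_t>(exit.start) <= p
            && p < reinterpret_cast<uintptr_t>(exit.end))
            return exit.origin;
    }
    return std::nullopt;
}
```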
index 4143461..5507a8a 100644
@@ -126,6 +126,8 @@ public:
 
     static ptrdiff_t commonDataOffset() { return OBJECT_OFFSETOF(JITCode, common); }
 
+    std::optional<CodeOrigin> findPC(CodeBlock*, void* pc) override;
+    
 private:
     friend class JITCompiler; // Allow JITCompiler to call setCodeRef().
 
index 7a31925..d19ddb7 100644
@@ -85,9 +85,8 @@ void JITCompiler::linkOSRExits()
         }
     }
     
-    MacroAssemblerCodeRef osrExitThunk = vm()->getCTIStub(osrExitThunkGenerator);
-    CodeLocationLabel osrExitThunkLabel = CodeLocationLabel(osrExitThunk.code());
     for (unsigned i = 0; i < m_jitCode->osrExit.size(); ++i) {
+        OSRExit& exit = m_jitCode->osrExit[i];
         OSRExitCompilationInfo& info = m_exitCompilationInfo[i];
         JumpList& failureJumps = info.m_failureJumps;
         if (!failureJumps.empty())
@@ -97,10 +96,7 @@ void JITCompiler::linkOSRExits()
 
         jitAssertHasValidCallFrame();
         store32(TrustedImm32(i), &vm()->osrExitIndex);
-        Jump target = jump();
-        addLinkTask([target, osrExitThunkLabel] (LinkBuffer& linkBuffer) {
-            linkBuffer.link(target, osrExitThunkLabel);
-        });
+        exit.setPatchableCodeOffset(patchableJump());
     }
 }
 
@@ -307,8 +303,13 @@ void JITCompiler::link(LinkBuffer& linkBuffer)
             linkBuffer.locationOfNearCall(record.call));
     }
     
+    MacroAssemblerCodeRef osrExitThunk = vm()->getCTIStub(osrExitGenerationThunkGenerator);
+    CodeLocationLabel target = CodeLocationLabel(osrExitThunk.code());
     for (unsigned i = 0; i < m_jitCode->osrExit.size(); ++i) {
+        OSRExit& exit = m_jitCode->osrExit[i];
         OSRExitCompilationInfo& info = m_exitCompilationInfo[i];
+        linkBuffer.link(exit.getPatchableCodeOffsetAsJump(), target);
+        exit.correctJump(linkBuffer);
         if (info.m_replacementSource.isSet()) {
             m_jitCode->common.jumpReplacements.append(JumpReplacement(
                 linkBuffer.locationOf(info.m_replacementSource),
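These JITCompiler hunks restore the older patchable-jump scheme: `linkOSRExits()` records each exit's jump as a raw assembler offset, and `link()` later resolves that offset against the `LinkBuffer` (via `correctJump()`) so the jump can be repatched to the lazily compiled exit stub. A toy sketch of the two-phase offset bookkeeping (every name here is a stand-in, not JSC API; the map models the LinkBuffer's relocation of code during layout):

```cpp
#include <cstddef>
#include <map>

// Stand-in for LinkBuffer: maps assembler-buffer offsets to final offsets.
struct LinkBufferSketch {
    std::map<std::size_t, std::size_t> relocation;
    std::size_t offsetOf(std::size_t assemblerOffset) const
    {
        return relocation.at(assemblerOffset);
    }
};

// Stand-in for OSRExit's patchable-code-offset bookkeeping.
struct ExitSketch {
    std::size_t m_patchableCodeOffset { 0 };

    // Phase 1 (linkOSRExits): remember where the patchable jump was emitted.
    void setPatchableCodeOffset(std::size_t assemblerOffset)
    {
        m_patchableCodeOffset = assemblerOffset;
    }

    // Phase 2 (link): translate to the final, post-layout offset.
    void correctJump(const LinkBufferSketch& linkBuffer)
    {
        m_patchableCodeOffset = linkBuffer.offsetOf(m_patchableCodeOffset);
    }
};
```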
index a5cd9a1..343308c 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2013 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 #if ENABLE(DFG_JIT)
 
 #include "AssemblyHelpers.h"
-#include "ClonedArguments.h"
 #include "DFGGraph.h"
 #include "DFGMayExit.h"
+#include "DFGOSRExitCompilerCommon.h"
 #include "DFGOSRExitPreparation.h"
 #include "DFGOperations.h"
 #include "DFGSpeculativeJIT.h"
-#include "DirectArguments.h"
-#include "InlineCallFrame.h"
+#include "FrameTracers.h"
 #include "JSCInlines.h"
-#include "JSCJSValue.h"
 #include "OperandsInlines.h"
-#include "ProbeContext.h"
-#include "ProbeFrame.h"
 
 namespace JSC { namespace DFG {
 
-using CPUState = Probe::CPUState;
-using Context = Probe::Context;
-using Frame = Probe::Frame;
-
-static void reifyInlinedCallFrames(Probe::Context&, CodeBlock* baselineCodeBlock, const OSRExitBase&);
-static void adjustAndJumpToTarget(Probe::Context&, VM&, CodeBlock*, CodeBlock* baselineCodeBlock, OSRExit&);
-static void printOSRExit(Context&, uint32_t osrExitIndex, const OSRExit&);
-
-static JSValue jsValueFor(CPUState& cpu, JSValueSource source)
-{
-    if (source.isAddress()) {
-        JSValue result;
-        std::memcpy(&result, cpu.gpr<uint8_t*>(source.base()) + source.offset(), sizeof(JSValue));
-        return result;
-    }
-#if USE(JSVALUE64)
-    return JSValue::decode(cpu.gpr<EncodedJSValue>(source.gpr()));
-#else
-    if (source.hasKnownTag())
-        return JSValue(source.tag(), cpu.gpr<int32_t>(source.payloadGPR()));
-    return JSValue(cpu.gpr<int32_t>(source.tagGPR()), cpu.gpr<int32_t>(source.payloadGPR()));
-#endif
-}
-
-#if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
-
-static_assert(is64Bit(), "we only support callee save registers on 64-bit");
-
-// Based on AssemblyHelpers::emitRestoreCalleeSavesFor().
-static void restoreCalleeSavesFor(Context& context, CodeBlock* codeBlock)
-{
-    ASSERT(codeBlock);
-
-    RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
-    RegisterSet dontRestoreRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
-    unsigned registerCount = calleeSaves->size();
-
-    uintptr_t* physicalStackFrame = context.fp<uintptr_t*>();
-    for (unsigned i = 0; i < registerCount; i++) {
-        RegisterAtOffset entry = calleeSaves->at(i);
-        if (dontRestoreRegisters.get(entry.reg()))
-            continue;
-        // The callee saved values come from the original stack, not the recovered stack.
-        // Hence, we read the values directly from the physical stack memory instead of
-        // going through context.stack().
-        ASSERT(!(entry.offset() % sizeof(uintptr_t)));
-        context.gpr(entry.reg().gpr()) = physicalStackFrame[entry.offset() / sizeof(uintptr_t)];
-    }
-}
-
-// Based on AssemblyHelpers::emitSaveCalleeSavesFor().
-static void saveCalleeSavesFor(Context& context, CodeBlock* codeBlock)
-{
-    auto& stack = context.stack();
-    ASSERT(codeBlock);
-
-    RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
-    RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
-    unsigned registerCount = calleeSaves->size();
-
-    for (unsigned i = 0; i < registerCount; i++) {
-        RegisterAtOffset entry = calleeSaves->at(i);
-        if (dontSaveRegisters.get(entry.reg()))
-            continue;
-        stack.set(context.fp(), entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
-    }
-}
-
-// Based on AssemblyHelpers::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer().
-static void restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(Context& context)
-{
-    VM& vm = *context.arg<VM*>();
-
-    RegisterAtOffsetList* allCalleeSaves = VM::getAllCalleeSaveRegisterOffsets();
-    RegisterSet dontRestoreRegisters = RegisterSet::stackRegisters();
-    unsigned registerCount = allCalleeSaves->size();
-
-    VMEntryRecord* entryRecord = vmEntryRecord(vm.topVMEntryFrame);
-    uintptr_t* calleeSaveBuffer = reinterpret_cast<uintptr_t*>(entryRecord->calleeSaveRegistersBuffer);
-
-    // Restore all callee saves.
-    for (unsigned i = 0; i < registerCount; i++) {
-        RegisterAtOffset entry = allCalleeSaves->at(i);
-        if (dontRestoreRegisters.get(entry.reg()))
-            continue;
-        size_t uintptrOffset = entry.offset() / sizeof(uintptr_t);
-        if (entry.reg().isGPR())
-            context.gpr(entry.reg().gpr()) = calleeSaveBuffer[uintptrOffset];
-        else
-            context.fpr(entry.reg().fpr()) = bitwise_cast<double>(calleeSaveBuffer[uintptrOffset]);
-    }
-}
-
-// Based on AssemblyHelpers::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer().
-static void copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(Context& context)
-{
-    VM& vm = *context.arg<VM*>();
-    auto& stack = context.stack();
-
-    VMEntryRecord* entryRecord = vmEntryRecord(vm.topVMEntryFrame);
-    void* calleeSaveBuffer = entryRecord->calleeSaveRegistersBuffer;
-
-    RegisterAtOffsetList* allCalleeSaves = VM::getAllCalleeSaveRegisterOffsets();
-    RegisterSet dontCopyRegisters = RegisterSet::stackRegisters();
-    unsigned registerCount = allCalleeSaves->size();
-
-    for (unsigned i = 0; i < registerCount; i++) {
-        RegisterAtOffset entry = allCalleeSaves->at(i);
-        if (dontCopyRegisters.get(entry.reg()))
-            continue;
-        if (entry.reg().isGPR())
-            stack.set(calleeSaveBuffer, entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
-        else
-            stack.set(calleeSaveBuffer, entry.offset(), context.fpr<uintptr_t>(entry.reg().fpr()));
-    }
-}
-
-// Based on AssemblyHelpers::emitSaveOrCopyCalleeSavesFor().
-static void saveOrCopyCalleeSavesFor(Context& context, CodeBlock* codeBlock, VirtualRegister offsetVirtualRegister, bool wasCalledViaTailCall)
-{
-    Frame frame(context.fp(), context.stack());
-    ASSERT(codeBlock);
-
-    RegisterAtOffsetList* calleeSaves = codeBlock->calleeSaveRegisters();
-    RegisterSet dontSaveRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
-    unsigned registerCount = calleeSaves->size();
-
-    RegisterSet baselineCalleeSaves = RegisterSet::llintBaselineCalleeSaveRegisters();
-
-    for (unsigned i = 0; i < registerCount; i++) {
-        RegisterAtOffset entry = calleeSaves->at(i);
-        if (dontSaveRegisters.get(entry.reg()))
-            continue;
-
-        uintptr_t savedRegisterValue;
-
-        if (wasCalledViaTailCall && baselineCalleeSaves.get(entry.reg()))
-            savedRegisterValue = frame.get<uintptr_t>(entry.offset());
-        else
-            savedRegisterValue = context.gpr(entry.reg().gpr());
-
-        frame.set(offsetVirtualRegister.offsetInBytes() + entry.offset(), savedRegisterValue);
-    }
-}
-#else // not NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
-
-static void restoreCalleeSavesFor(Context&, CodeBlock*) { }
-static void saveCalleeSavesFor(Context&, CodeBlock*) { }
-static void restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(Context&) { }
-static void copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(Context&) { }
-static void saveOrCopyCalleeSavesFor(Context&, CodeBlock*, VirtualRegister, bool) { }
-
-#endif // NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
-
-static JSCell* createDirectArgumentsDuringExit(Context& context, CodeBlock* codeBlock, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
-{
-    VM& vm = *context.arg<VM*>();
-
-    ASSERT(vm.heap.isDeferred());
-
-    if (inlineCallFrame)
-        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
-
-    unsigned length = argumentCount - 1;
-    unsigned capacity = std::max(length, static_cast<unsigned>(codeBlock->numParameters() - 1));
-    DirectArguments* result = DirectArguments::create(
-        vm, codeBlock->globalObject()->directArgumentsStructure(), length, capacity);
-
-    result->callee().set(vm, result, callee);
-
-    void* frameBase = context.fp<Register*>() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0);
-    Frame frame(frameBase, context.stack());
-    for (unsigned i = length; i--;)
-        result->setIndexQuickly(vm, i, frame.argument(i));
-
-    return result;
-}
-
-static JSCell* createClonedArgumentsDuringExit(Context& context, CodeBlock* codeBlock, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
-{
-    VM& vm = *context.arg<VM*>();
-    ExecState* exec = context.fp<ExecState*>();
-
-    ASSERT(vm.heap.isDeferred());
-
-    if (inlineCallFrame)
-        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
-
-    unsigned length = argumentCount - 1;
-    ClonedArguments* result = ClonedArguments::createEmpty(
-        vm, codeBlock->globalObject()->clonedArgumentsStructure(), callee, length);
-
-    void* frameBase = context.fp<Register*>() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0);
-    Frame frame(frameBase, context.stack());
-    for (unsigned i = length; i--;)
-        result->putDirectIndex(exec, i, frame.argument(i));
-    return result;
-}
-
 OSRExit::OSRExit(ExitKind kind, JSValueSource jsValueSource, MethodOfGettingAValueProfile valueProfile, SpeculativeJIT* jit, unsigned streamIndex, unsigned recoveryIndex)
     : OSRExitBase(kind, jit->m_origin.forExit, jit->m_origin.semantic, jit->m_origin.wasHoisted)
     , m_jsValueSource(jsValueSource)
@@ -259,10 +56,30 @@ OSRExit::OSRExit(ExitKind kind, JSValueSource jsValueSource, MethodOfGettingAVal
     DFG_ASSERT(jit->m_jit.graph(), jit->m_currentNode, canExit);
 }
 
-static void emitRestoreArguments(Context& context, CodeBlock* codeBlock, DFG::JITCode* dfgJITCode, const Operands<ValueRecovery>& operands)
+void OSRExit::setPatchableCodeOffset(MacroAssembler::PatchableJump check)
+{
+    m_patchableCodeOffset = check.m_jump.m_label.m_offset;
+}
+
+MacroAssembler::Jump OSRExit::getPatchableCodeOffsetAsJump() const
 {
-    Frame frame(context.fp(), context.stack());
+    return MacroAssembler::Jump(AssemblerLabel(m_patchableCodeOffset));
+}
 
+CodeLocationJump OSRExit::codeLocationForRepatch(CodeBlock* dfgCodeBlock) const
+{
+    return CodeLocationJump(dfgCodeBlock->jitCode()->dataAddressAtOffset(m_patchableCodeOffset));
+}
+
+void OSRExit::correctJump(LinkBuffer& linkBuffer)
+{
+    MacroAssembler::Label label;
+    label.m_label.m_offset = m_patchableCodeOffset;
+    m_patchableCodeOffset = linkBuffer.offsetOf(label);
+}
+
+void OSRExit::emitRestoreArguments(CCallHelpers& jit, const Operands<ValueRecovery>& operands)
+{
     HashMap<MinifiedID, int> alreadyAllocatedArguments; // Maps phantom arguments node ID to operand.
     for (size_t index = 0; index < operands.size(); ++index) {
         const ValueRecovery& recovery = operands[index];
@@ -275,12 +92,14 @@ static void emitRestoreArguments(Context& context, CodeBlock* codeBlock, DFG::JI
         MinifiedID id = recovery.nodeID();
         auto iter = alreadyAllocatedArguments.find(id);
         if (iter != alreadyAllocatedArguments.end()) {
-            frame.setOperand(operand, frame.operand(iter->value));
+            JSValueRegs regs = JSValueRegs::withTwoAvailableRegs(GPRInfo::regT0, GPRInfo::regT1);
+            jit.loadValue(CCallHelpers::addressFor(iter->value), regs);
+            jit.storeValue(regs, CCallHelpers::addressFor(operand));
             continue;
         }
 
         InlineCallFrame* inlineCallFrame =
-            dfgJITCode->minifiedDFG.at(id)->inlineCallFrame();
+            jit.codeBlock()->jitCode()->dfg()->minifiedDFG.at(id)->inlineCallFrame();
 
         int stackOffset;
         if (inlineCallFrame)
@@ -288,48 +107,53 @@ static void emitRestoreArguments(Context& context, CodeBlock* codeBlock, DFG::JI
         else
             stackOffset = 0;
 
-        JSFunction* callee;
-        if (!inlineCallFrame || inlineCallFrame->isClosureCall)
-            callee = jsCast<JSFunction*>(frame.operand(stackOffset + CallFrameSlot::callee).asCell());
-        else
-            callee = jsCast<JSFunction*>(inlineCallFrame->calleeRecovery.constant().asCell());
+        if (!inlineCallFrame || inlineCallFrame->isClosureCall) {
+            jit.loadPtr(
+                AssemblyHelpers::addressFor(stackOffset + CallFrameSlot::callee),
+                GPRInfo::regT0);
+        } else {
+            jit.move(
+                AssemblyHelpers::TrustedImmPtr(inlineCallFrame->calleeRecovery.constant().asCell()),
+                GPRInfo::regT0);
+        }
 
-        int32_t argumentCount;
-        if (!inlineCallFrame || inlineCallFrame->isVarargs())
-            argumentCount = frame.operand<int32_t>(stackOffset + CallFrameSlot::argumentCount, PayloadOffset);
-        else
-            argumentCount = inlineCallFrame->argumentCountIncludingThis;
+        if (!inlineCallFrame || inlineCallFrame->isVarargs()) {
+            jit.load32(
+                AssemblyHelpers::payloadFor(stackOffset + CallFrameSlot::argumentCount),
+                GPRInfo::regT1);
+        } else {
+            jit.move(
+                AssemblyHelpers::TrustedImm32(inlineCallFrame->argumentCountIncludingThis),
+                GPRInfo::regT1);
+        }
 
-        JSCell* argumentsObject;
+        jit.setupArgumentsWithExecState(
+            AssemblyHelpers::TrustedImmPtr(inlineCallFrame), GPRInfo::regT0, GPRInfo::regT1);
         switch (recovery.technique()) {
         case DirectArgumentsThatWereNotCreated:
-            argumentsObject = createDirectArgumentsDuringExit(context, codeBlock, inlineCallFrame, callee, argumentCount);
+            jit.move(AssemblyHelpers::TrustedImmPtr(bitwise_cast<void*>(operationCreateDirectArgumentsDuringExit)), GPRInfo::nonArgGPR0);
             break;
         case ClonedArgumentsThatWereNotCreated:
-            argumentsObject = createClonedArgumentsDuringExit(context, codeBlock, inlineCallFrame, callee, argumentCount);
+            jit.move(AssemblyHelpers::TrustedImmPtr(bitwise_cast<void*>(operationCreateClonedArgumentsDuringExit)), GPRInfo::nonArgGPR0);
             break;
         default:
             RELEASE_ASSERT_NOT_REACHED();
             break;
         }
-        frame.setOperand(operand, JSValue(argumentsObject));
+        jit.call(GPRInfo::nonArgGPR0);
+        jit.storeCell(GPRInfo::returnValueGPR, AssemblyHelpers::addressFor(operand));
 
         alreadyAllocatedArguments.add(id, operand);
     }
 }
 
-void OSRExit::executeOSRExit(Context& context)
+void JIT_OPERATION OSRExit::compileOSRExit(ExecState* exec)
 {
-    VM& vm = *context.arg<VM*>();
-    auto scope = DECLARE_THROW_SCOPE(vm);
+    VM* vm = &exec->vm();
+    auto scope = DECLARE_THROW_SCOPE(*vm);
 
-    ExecState* exec = context.fp<ExecState*>();
-    ASSERT(&exec->vm() == &vm);
-
-    if (vm.callFrameForCatch) {
-        exec = vm.callFrameForCatch;
-        context.fp() = exec;
-    }
+    if (vm->callFrameForCatch)
+        RELEASE_ASSERT(vm->callFrameForCatch == exec);
 
     CodeBlock* codeBlock = exec->codeBlock();
     ASSERT(codeBlock);
@@ -337,102 +161,79 @@ void OSRExit::executeOSRExit(Context& context)
 
     // It's sort of preferable that we don't GC while in here. Anyways, doing so wouldn't
     // really be profitable.
-    DeferGCForAWhile deferGC(vm.heap);
+    DeferGCForAWhile deferGC(vm->heap);
 
-    uint32_t exitIndex = vm.osrExitIndex;
-    DFG::JITCode* dfgJITCode = codeBlock->jitCode()->dfg();
-    OSRExit& exit = dfgJITCode->osrExit[exitIndex];
+    uint32_t exitIndex = vm->osrExitIndex;
+    OSRExit& exit = codeBlock->jitCode()->dfg()->osrExit[exitIndex];
 
-    ASSERT(!vm.callFrameForCatch || exit.m_kind == GenericUnwind);
+    ASSERT(!vm->callFrameForCatch || exit.m_kind == GenericUnwind);
     EXCEPTION_ASSERT_UNUSED(scope, !!scope.exception() || !exit.isExceptionHandler());
+    
+    prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin);
 
-    if (UNLIKELY(!exit.exitState)) {
-        // We only need to execute this block once for each OSRExit record. The computed
-        // results will be cached in the OSRExitState record for use of the rest of the
-        // exit ramp code.
-
-        // Ensure we have baseline codeBlocks to OSR exit to.
-        prepareCodeOriginForOSRExit(exec, exit.m_codeOrigin);
-
-        CodeBlock* baselineCodeBlock = codeBlock->baselineAlternative();
-        ASSERT(baselineCodeBlock->jitType() == JITCode::BaselineJIT);
+    // Compute the value recoveries.
+    Operands<ValueRecovery> operands;
+    codeBlock->jitCode()->dfg()->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, codeBlock->jitCode()->dfg()->minifiedDFG, exit.m_streamIndex, operands);
 
-        // Compute the value recoveries.
-        Operands<ValueRecovery> operands;
-        dfgJITCode->variableEventStream.reconstruct(codeBlock, exit.m_codeOrigin, dfgJITCode->minifiedDFG, exit.m_streamIndex, operands);
+    SpeculationRecovery* recovery = 0;
+    if (exit.m_recoveryIndex != UINT_MAX)
+        recovery = &codeBlock->jitCode()->dfg()->speculationRecovery[exit.m_recoveryIndex];
 
-        SpeculationRecovery* recovery = nullptr;
-        if (exit.m_recoveryIndex != UINT_MAX)
-            recovery = &dfgJITCode->speculationRecovery[exit.m_recoveryIndex];
+    {
+        CCallHelpers jit(codeBlock);
 
-        int32_t activeThreshold = baselineCodeBlock->adjustedCounterValue(Options::thresholdForOptimizeAfterLongWarmUp());
-        double adjustedThreshold = applyMemoryUsageHeuristicsAndConvertToInt(activeThreshold, baselineCodeBlock);
-        ASSERT(adjustedThreshold > 0);
-        adjustedThreshold = BaselineExecutionCounter::clippedThreshold(codeBlock->globalObject(), adjustedThreshold);
-
-        CodeBlock* codeBlockForExit = baselineCodeBlockForOriginAndBaselineCodeBlock(exit.m_codeOrigin, baselineCodeBlock);
-        Vector<BytecodeAndMachineOffset> decodedCodeMap;
-        codeBlockForExit->jitCodeMap()->decode(decodedCodeMap);
-
-        BytecodeAndMachineOffset* mapping = binarySearch<BytecodeAndMachineOffset, unsigned>(decodedCodeMap, decodedCodeMap.size(), exit.m_codeOrigin.bytecodeIndex, BytecodeAndMachineOffset::getBytecodeIndex);
-
-        ASSERT(mapping);
-        ASSERT(mapping->m_bytecodeIndex == exit.m_codeOrigin.bytecodeIndex);
-
-        ptrdiff_t finalStackPointerOffset = codeBlockForExit->stackPointerOffset() * sizeof(Register);
-
-        void* jumpTarget = codeBlockForExit->jitCode()->executableAddressAtOffset(mapping->m_machineCodeOffset);
+        if (exit.m_kind == GenericUnwind) {
+            // We are acting as a defacto op_catch because we arrive here from genericUnwind().
+            // So, we must restore our call frame and stack pointer.
+            jit.restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(*vm);
+            jit.loadPtr(vm->addressOfCallFrameForCatch(), GPRInfo::callFrameRegister);
+        }
+        jit.addPtr(
+            CCallHelpers::TrustedImm32(codeBlock->stackPointerOffset() * sizeof(Register)),
+            GPRInfo::callFrameRegister, CCallHelpers::stackPointerRegister);
 
-        exit.exitState = adoptRef(new OSRExitState(exit, codeBlock, baselineCodeBlock, operands, recovery, finalStackPointerOffset, activeThreshold, adjustedThreshold, jumpTarget));
+        jit.jitAssertHasValidCallFrame();
 
-        if (UNLIKELY(vm.m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation)) {
-            Profiler::Database& database = *vm.m_perBytecodeProfiler;
+        if (UNLIKELY(vm->m_perBytecodeProfiler && codeBlock->jitCode()->dfgCommon()->compilation)) {
+            Profiler::Database& database = *vm->m_perBytecodeProfiler;
             Profiler::Compilation* compilation = codeBlock->jitCode()->dfgCommon()->compilation.get();
 
             Profiler::OSRExit* profilerExit = compilation->addOSRExit(
                 exitIndex, Profiler::OriginStack(database, codeBlock, exit.m_codeOrigin),
                 exit.m_kind, exit.m_kind == UncountableInvalidation);
-            exit.exitState->profilerExit = profilerExit;
+            jit.add64(CCallHelpers::TrustedImm32(1), CCallHelpers::AbsoluteAddress(profilerExit->counterAddress()));
         }
 
-        if (UNLIKELY(Options::verboseOSR() || Options::verboseDFGOSRExit())) {
-            dataLogF("DFG OSR exit #%u (%s, %s) from %s, with operands = %s\n",
+        compileExit(jit, *vm, exit, operands, recovery);
+
+        LinkBuffer patchBuffer(jit, codeBlock);
+        exit.m_code = FINALIZE_CODE_IF(
+            shouldDumpDisassembly() || Options::verboseOSR() || Options::verboseDFGOSRExit(),
+            patchBuffer,
+            ("DFG OSR exit #%u (%s, %s) from %s, with operands = %s",
                 exitIndex, toCString(exit.m_codeOrigin).data(),
                 exitKindToString(exit.m_kind), toCString(*codeBlock).data(),
-                toCString(ignoringContext<DumpContext>(operands)).data());
-        }
+                toCString(ignoringContext<DumpContext>(operands)).data()));
     }
 
-    OSRExitState& exitState = *exit.exitState.get();
-    CodeBlock* baselineCodeBlock = exitState.baselineCodeBlock;
-    ASSERT(baselineCodeBlock->jitType() == JITCode::BaselineJIT);
+    MacroAssembler::repatchJump(exit.codeLocationForRepatch(codeBlock), CodeLocationLabel(exit.m_code.code()));
 
-    Operands<ValueRecovery>& operands = exitState.operands;
-    SpeculationRecovery* recovery = exitState.recovery;
-
-    if (exit.m_kind == GenericUnwind) {
-        // We are acting as a defacto op_catch because we arrive here from genericUnwind().
-        // So, we must restore our call frame and stack pointer.
-        restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(context);
-        ASSERT(context.fp() == vm.callFrameForCatch);
-    }
-    context.sp() = context.fp<uint8_t*>() + (codeBlock->stackPointerOffset() * sizeof(Register));
-
-    ASSERT(!(context.fp<uintptr_t>() & 0x7));
-
-    if (exitState.profilerExit)
-        exitState.profilerExit->incCount();
+    vm->osrExitJumpDestination = exit.m_code.code().executableAddress();
+}
 
-    auto& cpu = context.cpu;
-    Frame frame(cpu.fp(), context.stack());
+void OSRExit::compileExit(CCallHelpers& jit, VM& vm, const OSRExit& exit, const Operands<ValueRecovery>& operands, SpeculationRecovery* recovery)
+{
+    jit.jitAssertTagsInPlace();
 
-#if USE(JSVALUE64)
-    ASSERT(cpu.gpr(GPRInfo::tagTypeNumberRegister) == TagTypeNumber);
-    ASSERT(cpu.gpr(GPRInfo::tagMaskRegister) == TagMask);
-#endif
+    // Pro-forma stuff.
+    if (Options::printEachOSRExit()) {
+        SpeculationFailureDebugInfo* debugInfo = new SpeculationFailureDebugInfo;
+        debugInfo->codeBlock = jit.codeBlock();
+        debugInfo->kind = exit.m_kind;
+        debugInfo->bytecodeOffset = exit.m_codeOrigin.bytecodeIndex;
 
-    if (UNLIKELY(Options::printEachOSRExit()))
-        printOSRExit(context, vm.osrExitIndex, exit);
+        jit.debugCall(vm, debugOperationPrintSpeculationFailure, debugInfo);
+    }
 
     // Perform speculation recovery. This only comes into play when an operation
     // starts mutating state before verifying the speculation it has already made.
@@ -440,24 +241,22 @@ void OSRExit::executeOSRExit(Context& context)
     if (recovery) {
         switch (recovery->type()) {
         case SpeculativeAdd:
-            cpu.gpr(recovery->dest()) = cpu.gpr<uint32_t>(recovery->dest()) - cpu.gpr<uint32_t>(recovery->src());
+            jit.sub32(recovery->src(), recovery->dest());
 #if USE(JSVALUE64)
-            ASSERT(!(cpu.gpr(recovery->dest()) >> 32));
-            cpu.gpr(recovery->dest()) |= TagTypeNumber;
+            jit.or64(GPRInfo::tagTypeNumberRegister, recovery->dest());
 #endif
             break;
 
         case SpeculativeAddImmediate:
-            cpu.gpr(recovery->dest()) = (cpu.gpr<uint32_t>(recovery->dest()) - recovery->immediate());
+            jit.sub32(AssemblyHelpers::Imm32(recovery->immediate()), recovery->dest());
 #if USE(JSVALUE64)
-            ASSERT(!(cpu.gpr(recovery->dest()) >> 32));
-            cpu.gpr(recovery->dest()) |= TagTypeNumber;
+            jit.or64(GPRInfo::tagTypeNumberRegister, recovery->dest());
 #endif
             break;
 
         case BooleanSpeculationCheck:
 #if USE(JSVALUE64)
-            cpu.gpr(recovery->dest()) = cpu.gpr(recovery->dest()) ^ ValueFalse;
+            jit.xor64(AssemblyHelpers::TrustedImm32(static_cast<int32_t>(ValueFalse)), recovery->dest());
 #endif
             break;
 
@@ -480,113 +279,395 @@ void OSRExit::executeOSRExit(Context& context)
             // property access, or due to an array profile).
 
             CodeOrigin codeOrigin = exit.m_codeOriginForExitProfile;
-            CodeBlock* profiledCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(codeOrigin, baselineCodeBlock);
-            if (ArrayProfile* arrayProfile = profiledCodeBlock->getArrayProfile(codeOrigin.bytecodeIndex)) {
-                Structure* structure = jsValueFor(cpu, exit.m_jsValueSource).asCell()->structure(vm);
-                arrayProfile->observeStructure(structure);
-                // FIXME: We should be able to use arrayModeFromStructure() to determine the observed ArrayMode here.
-                // However, currently, doing so would result in a pdfjs preformance regression.
-                // https://bugs.webkit.org/show_bug.cgi?id=176473
-                arrayProfile->observeArrayMode(asArrayModes(structure->indexingType()));
+            if (ArrayProfile* arrayProfile = jit.baselineCodeBlockFor(codeOrigin)->getArrayProfile(codeOrigin.bytecodeIndex)) {
+#if USE(JSVALUE64)
+                GPRReg usedRegister;
+                if (exit.m_jsValueSource.isAddress())
+                    usedRegister = exit.m_jsValueSource.base();
+                else
+                    usedRegister = exit.m_jsValueSource.gpr();
+#else
+                GPRReg usedRegister1;
+                GPRReg usedRegister2;
+                if (exit.m_jsValueSource.isAddress()) {
+                    usedRegister1 = exit.m_jsValueSource.base();
+                    usedRegister2 = InvalidGPRReg;
+                } else {
+                    usedRegister1 = exit.m_jsValueSource.payloadGPR();
+                    if (exit.m_jsValueSource.hasKnownTag())
+                        usedRegister2 = InvalidGPRReg;
+                    else
+                        usedRegister2 = exit.m_jsValueSource.tagGPR();
+                }
+#endif
+
+                GPRReg scratch1;
+                GPRReg scratch2;
+#if USE(JSVALUE64)
+                scratch1 = AssemblyHelpers::selectScratchGPR(usedRegister);
+                scratch2 = AssemblyHelpers::selectScratchGPR(usedRegister, scratch1);
+#else
+                scratch1 = AssemblyHelpers::selectScratchGPR(usedRegister1, usedRegister2);
+                scratch2 = AssemblyHelpers::selectScratchGPR(usedRegister1, usedRegister2, scratch1);
+#endif
+
+                if (isARM64()) {
+                    jit.pushToSave(scratch1);
+                    jit.pushToSave(scratch2);
+                } else {
+                    jit.push(scratch1);
+                    jit.push(scratch2);
+                }
+
+                GPRReg value;
+                if (exit.m_jsValueSource.isAddress()) {
+                    value = scratch1;
+                    jit.loadPtr(AssemblyHelpers::Address(exit.m_jsValueSource.asAddress()), value);
+                } else
+                    value = exit.m_jsValueSource.payloadGPR();
+
+                jit.load32(AssemblyHelpers::Address(value, JSCell::structureIDOffset()), scratch1);
+                jit.store32(scratch1, arrayProfile->addressOfLastSeenStructureID());
+#if USE(JSVALUE64)
+                jit.load8(AssemblyHelpers::Address(value, JSCell::indexingTypeAndMiscOffset()), scratch1);
+#else
+                jit.load8(AssemblyHelpers::Address(scratch1, Structure::indexingTypeIncludingHistoryOffset()), scratch1);
+#endif
+                jit.move(AssemblyHelpers::TrustedImm32(1), scratch2);
+                jit.lshift32(scratch1, scratch2);
+                jit.or32(scratch2, AssemblyHelpers::AbsoluteAddress(arrayProfile->addressOfArrayModes()));
+
+                if (isARM64()) {
+                    jit.popToRestore(scratch2);
+                    jit.popToRestore(scratch1);
+                } else {
+                    jit.pop(scratch2);
+                    jit.pop(scratch1);
+                }
             }
         }
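The emitted instructions above amount to two profile updates: record the cell's structure ID, then OR a one-hot bit for its indexing type into a bitset of observed array modes. A minimal scalar sketch with hypothetical names (this is not JSC's real `ArrayProfile`, just the arithmetic the JIT emits):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical stand-in for the profile the JIT code mutates. observe() mirrors
// the store32 of the structure ID and the move/lshift32/or32 sequence above.
struct MiniArrayProfile {
    uint32_t lastSeenStructureID = 0;
    uint32_t arrayModes = 0; // one bit per indexing type

    void observe(uint32_t structureID, uint8_t indexingType) {
        lastSeenStructureID = structureID;  // store32(scratch1, addressOfLastSeenStructureID())
        arrayModes |= 1u << indexingType;   // TrustedImm32(1), lshift32, or32
    }
};
```

Because the update is an OR into a bitset, observing the same indexing type repeatedly is idempotent, which is why the exit path can emit it unconditionally.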
 
-        if (MethodOfGettingAValueProfile profile = exit.m_valueProfile)
-            profile.reportValue(jsValueFor(cpu, exit.m_jsValueSource));
+        if (MethodOfGettingAValueProfile profile = exit.m_valueProfile) {
+#if USE(JSVALUE64)
+            if (exit.m_jsValueSource.isAddress()) {
+                // We can't be sure that we have a spare register. So use the tagTypeNumberRegister,
+                // since we know how to restore it.
+                jit.load64(AssemblyHelpers::Address(exit.m_jsValueSource.asAddress()), GPRInfo::tagTypeNumberRegister);
+                profile.emitReportValue(jit, JSValueRegs(GPRInfo::tagTypeNumberRegister));
+                jit.move(AssemblyHelpers::TrustedImm64(TagTypeNumber), GPRInfo::tagTypeNumberRegister);
+            } else
+                profile.emitReportValue(jit, JSValueRegs(exit.m_jsValueSource.gpr()));
+#else // not USE(JSVALUE64)
+            if (exit.m_jsValueSource.isAddress()) {
+                // Save a register so we can use it.
+                GPRReg scratchPayload = AssemblyHelpers::selectScratchGPR(exit.m_jsValueSource.base());
+                GPRReg scratchTag = AssemblyHelpers::selectScratchGPR(exit.m_jsValueSource.base(), scratchPayload);
+                jit.pushToSave(scratchPayload);
+                jit.pushToSave(scratchTag);
+
+                JSValueRegs scratch(scratchTag, scratchPayload);
+                
+                jit.loadValue(exit.m_jsValueSource.asAddress(), scratch);
+                profile.emitReportValue(jit, scratch);
+                
+                jit.popToRestore(scratchTag);
+                jit.popToRestore(scratchPayload);
+            } else if (exit.m_jsValueSource.hasKnownTag()) {
+                GPRReg scratchTag = AssemblyHelpers::selectScratchGPR(exit.m_jsValueSource.payloadGPR());
+                jit.pushToSave(scratchTag);
+                jit.move(AssemblyHelpers::TrustedImm32(exit.m_jsValueSource.tag()), scratchTag);
+                JSValueRegs value(scratchTag, exit.m_jsValueSource.payloadGPR());
+                profile.emitReportValue(jit, value);
+                jit.popToRestore(scratchTag);
+            } else
+                profile.emitReportValue(jit, exit.m_jsValueSource.regs());
+#endif // USE(JSVALUE64)
+        }
     }
 
-    // Do all data format conversions and store the results into the stack.
-    // Note: we need to recover values before restoring callee save registers below
-    // because the recovery may rely on values in some of callee save registers.
+    // What follows is an intentionally simple OSR exit implementation that generates
+    // fairly poor code but is very easy to hack. In particular, it dumps all state that
+    // needs conversion into a scratch buffer so that in step 6, where we actually do the
+    // conversions, we know that all temp registers are free to use and the variable is
+    // definitely in a well-known spot in the scratch buffer regardless of whether it had
+    // originally been in a register or spilled. This allows us to decouple "where was
+    // the variable" from "how was it represented". Consider the
+    // Int32DisplacedInJSStack recovery: it tells us that the value is in a
+    // particular place and that that place holds an unboxed int32. We have two different
+    // places that a value could be (displaced, register) and a bunch of different
+    // ways of representing a value. The number of recoveries is two * a bunch. The code
+    // below means that we have to have two + a bunch cases rather than two * a bunch.
+    // Once we have loaded the value from wherever it was, the reboxing is the same
+    // regardless of its location. Likewise, before we do the reboxing, the way we get to
+    // the value (i.e. where we load it from) is the same regardless of its type. Because
+    // the code below always dumps everything into a scratch buffer first, the two
+    // questions become orthogonal, which simplifies adding new types and adding new
+    // locations.
+    //
+    // This raises the question: does using such a suboptimal implementation of OSR exit,
+    // where we always emit code to dump all state into a scratch buffer only to then
+    // dump it right back into the stack, hurt us in any way? The answer is that OSR exits
+    // are rare. Our tiering strategy ensures this. This is because if an OSR exit is
+    // taken more than ~100 times, we jettison the DFG code block along with all of its
+    // exits. It is impossible for an OSR exit - i.e. the code we compile below - to
+    // execute frequently enough for the codegen to matter that much. It probably matters
+    // enough that we don't want to turn this into some super-slow function call, but so
+    // long as we're generating straight-line code, that code can be pretty bad. Also
+    // because we tend to exit only along one OSR exit from any DFG code block - that's an
+    // empirical result that we're extremely confident about - the code size of this
+    // doesn't matter much. Hence any attempt to optimize the codegen here is just purely
+    // harmful to the system: it probably won't reduce either net memory usage or net
+    // execution time. It will only prevent us from cleanly decoupling "where was the
+    // variable" from "how was it represented", which will make it more difficult to add
+    // features in the future and it will make it harder to reason about bugs.
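The "two + a bunch" decoupling described above can be sketched in plain C++. Names and tag constants below are illustrative, not JSC's: phase one copies values into a scratch buffer keyed only by where they lived, and phase two reboxes keyed only by how they were represented, so the two switches never meet.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative model of the two-phase exit: the location switch and the
// representation switch never appear in the same place.
enum class Location { Register, Displaced };
enum class Repr { Int32, Boolean };

struct Recovery { Location loc; Repr repr; uint64_t raw; };

// Phase 2: reboxing depends only on the representation. Tag values are
// illustrative (NaN-boxed int32s, ValueTrue/ValueFalse-style booleans).
uint64_t rebox(Repr repr, uint64_t raw) {
    switch (repr) {
    case Repr::Int32:   return raw | 0xFFFF000000000000ull;
    case Repr::Boolean: return raw ? 0x7ull : 0x6ull;
    }
    return 0;
}

std::vector<uint64_t> exitConvert(const std::vector<Recovery>& recoveries) {
    // Phase 1: dump everything into the scratch buffer. Only the location
    // matters here, so there are "two" cases, not "two * a bunch".
    std::vector<uint64_t> scratch;
    for (const Recovery& r : recoveries)
        scratch.push_back(r.raw); // a real exit loads from a register or stack slot per r.loc

    // Phase 2: rebox from the scratch buffer. Only the representation matters.
    std::vector<uint64_t> stack;
    for (size_t i = 0; i < recoveries.size(); ++i)
        stack.push_back(rebox(recoveries[i].repr, scratch[i]));
    return stack;
}
```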
+
+    // Save all state from GPRs into the scratch buffer.
+
+    ScratchBuffer* scratchBuffer = vm.scratchBufferForSize(sizeof(EncodedJSValue) * operands.size());
+    EncodedJSValue* scratch = scratchBuffer ? static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer()) : 0;
 
-    int calleeSaveSpaceAsVirtualRegisters = static_cast<int>(baselineCodeBlock->calleeSaveSpaceAsVirtualRegisters());
-    size_t numberOfOperands = operands.size();
-    for (size_t index = 0; index < numberOfOperands; ++index) {
+    for (size_t index = 0; index < operands.size(); ++index) {
         const ValueRecovery& recovery = operands[index];
-        VirtualRegister reg = operands.virtualRegisterForIndex(index);
 
-        if (reg.isLocal() && reg.toLocal() < calleeSaveSpaceAsVirtualRegisters)
-            continue;
+        switch (recovery.technique()) {
+        case UnboxedInt32InGPR:
+        case UnboxedCellInGPR:
+#if USE(JSVALUE64)
+        case InGPR:
+        case UnboxedInt52InGPR:
+        case UnboxedStrictInt52InGPR:
+            jit.store64(recovery.gpr(), scratch + index);
+            break;
+#else
+        case UnboxedBooleanInGPR:
+            jit.store32(
+                recovery.gpr(),
+                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload);
+            break;
+            
+        case InPair:
+            jit.store32(
+                recovery.tagGPR(),
+                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.tag);
+            jit.store32(
+                recovery.payloadGPR(),
+                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload);
+            break;
+#endif
 
-        int operand = reg.offset();
+        default:
+            break;
+        }
+    }
+
+    // And voila, all GPRs are free to reuse.
+
+    // Save all state from FPRs into the scratch buffer.
+
+    for (size_t index = 0; index < operands.size(); ++index) {
+        const ValueRecovery& recovery = operands[index];
 
         switch (recovery.technique()) {
-        case DisplacedInJSStack:
-            frame.setOperand(operand, exec->r(recovery.virtualRegister()).jsValue());
+        case UnboxedDoubleInFPR:
+        case InFPR:
+            jit.move(AssemblyHelpers::TrustedImmPtr(scratch + index), GPRInfo::regT0);
+            jit.storeDouble(recovery.fpr(), MacroAssembler::Address(GPRInfo::regT0));
             break;
 
-        case InFPR:
-            frame.setOperand(operand, cpu.fpr<JSValue>(recovery.fpr()));
+        default:
             break;
+        }
+    }
 
+    // Now, all FPRs are also free.
+
+    // Save all state from the stack into the scratch buffer. For simplicity we
+    // do this even for state that's already in the right place on the stack.
+    // It makes things simpler later.
+
+    for (size_t index = 0; index < operands.size(); ++index) {
+        const ValueRecovery& recovery = operands[index];
+
+        switch (recovery.technique()) {
+        case DisplacedInJSStack:
+        case CellDisplacedInJSStack:
+        case BooleanDisplacedInJSStack:
+        case Int32DisplacedInJSStack:
+        case DoubleDisplacedInJSStack:
 #if USE(JSVALUE64)
-        case InGPR:
-            frame.setOperand(operand, cpu.gpr<JSValue>(recovery.gpr()));
+        case Int52DisplacedInJSStack:
+        case StrictInt52DisplacedInJSStack:
+            jit.load64(AssemblyHelpers::addressFor(recovery.virtualRegister()), GPRInfo::regT0);
+            jit.store64(GPRInfo::regT0, scratch + index);
             break;
 #else
-        case InPair:
-            frame.setOperand(operand, JSValue(cpu.gpr<int32_t>(recovery.tagGPR()), cpu.gpr<int32_t>(recovery.payloadGPR())));
+            jit.load32(
+                AssemblyHelpers::tagFor(recovery.virtualRegister()),
+                GPRInfo::regT0);
+            jit.load32(
+                AssemblyHelpers::payloadFor(recovery.virtualRegister()),
+                GPRInfo::regT1);
+            jit.store32(
+                GPRInfo::regT0,
+                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.tag);
+            jit.store32(
+                GPRInfo::regT1,
+                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload);
             break;
 #endif
 
-        case UnboxedCellInGPR:
-            frame.setOperand(operand, JSValue(cpu.gpr<JSCell*>(recovery.gpr())));
+        default:
             break;
+        }
+    }
+
+    // Need to ensure that the stack pointer accounts for the worst-case stack usage at exit. This
+    // could toast some stack that the DFG used. We need to do it before storing to stack offsets
+    // used by baseline.
+    jit.addPtr(
+        CCallHelpers::TrustedImm32(
+            -jit.codeBlock()->jitCode()->dfgCommon()->requiredRegisterCountForExit * sizeof(Register)),
+        CCallHelpers::framePointerRegister, CCallHelpers::stackPointerRegister);
 
+    // Restore the DFG callee saves and then save the ones the baseline JIT uses.
+    jit.emitRestoreCalleeSaves();
+    jit.emitSaveCalleeSavesFor(jit.baselineCodeBlock());
+
+    // The tag registers are needed to materialize recoveries below.
+    jit.emitMaterializeTagCheckRegisters();
+
+    if (exit.isExceptionHandler())
+        jit.copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(vm);
+
+    // Do all data format conversions and store the results into the stack.
+
+    for (size_t index = 0; index < operands.size(); ++index) {
+        const ValueRecovery& recovery = operands[index];
+        VirtualRegister reg = operands.virtualRegisterForIndex(index);
+
+        if (reg.isLocal() && reg.toLocal() < static_cast<int>(jit.baselineCodeBlock()->calleeSaveSpaceAsVirtualRegisters()))
+            continue;
+
+        int operand = reg.offset();
+
+        switch (recovery.technique()) {
+        case DisplacedInJSStack:
+        case InFPR:
+#if USE(JSVALUE64)
+        case InGPR:
+        case UnboxedCellInGPR:
         case CellDisplacedInJSStack:
-            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedCell()));
+        case BooleanDisplacedInJSStack:
+            jit.load64(scratch + index, GPRInfo::regT0);
+            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
+            break;
+#else // not USE(JSVALUE64)
+        case InPair:
+            jit.load32(
+                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.tag,
+                GPRInfo::regT0);
+            jit.load32(
+                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
+                GPRInfo::regT1);
+            jit.store32(
+                GPRInfo::regT0,
+                AssemblyHelpers::tagFor(operand));
+            jit.store32(
+                GPRInfo::regT1,
+                AssemblyHelpers::payloadFor(operand));
             break;
 
-#if USE(JSVALUE32_64)
-        case UnboxedBooleanInGPR:
-            frame.setOperand(operand, jsBoolean(cpu.gpr<bool>(recovery.gpr())));
+        case UnboxedCellInGPR:
+        case CellDisplacedInJSStack:
+            jit.load32(
+                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
+                GPRInfo::regT0);
+            jit.store32(
+                AssemblyHelpers::TrustedImm32(JSValue::CellTag),
+                AssemblyHelpers::tagFor(operand));
+            jit.store32(
+                GPRInfo::regT0,
+                AssemblyHelpers::payloadFor(operand));
             break;
-#endif
 
+        case UnboxedBooleanInGPR:
         case BooleanDisplacedInJSStack:
-#if USE(JSVALUE64)
-            frame.setOperand(operand, exec->r(recovery.virtualRegister()).jsValue());
-#else
-            frame.setOperand(operand, jsBoolean(exec->r(recovery.virtualRegister()).jsValue().payload()));
-#endif
+            jit.load32(
+                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
+                GPRInfo::regT0);
+            jit.store32(
+                AssemblyHelpers::TrustedImm32(JSValue::BooleanTag),
+                AssemblyHelpers::tagFor(operand));
+            jit.store32(
+                GPRInfo::regT0,
+                AssemblyHelpers::payloadFor(operand));
             break;
+#endif // USE(JSVALUE64)
 
         case UnboxedInt32InGPR:
-            frame.setOperand(operand, JSValue(cpu.gpr<int32_t>(recovery.gpr())));
-            break;
-
         case Int32DisplacedInJSStack:
-            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedInt32()));
+#if USE(JSVALUE64)
+            jit.load64(scratch + index, GPRInfo::regT0);
+            jit.zeroExtend32ToPtr(GPRInfo::regT0, GPRInfo::regT0);
+            jit.or64(GPRInfo::tagTypeNumberRegister, GPRInfo::regT0);
+            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
+#else
+            jit.load32(
+                &bitwise_cast<EncodedValueDescriptor*>(scratch + index)->asBits.payload,
+                GPRInfo::regT0);
+            jit.store32(
+                AssemblyHelpers::TrustedImm32(JSValue::Int32Tag),
+                AssemblyHelpers::tagFor(operand));
+            jit.store32(
+                GPRInfo::regT0,
+                AssemblyHelpers::payloadFor(operand));
+#endif
             break;
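On 64-bit, the load64/zeroExtend32ToPtr/or64 sequence above NaN-boxes the unboxed int32 by zero-extending it and OR-ing in TagTypeNumber (0xFFFF000000000000 in this era of JSC). A scalar sketch of the same arithmetic:

```cpp
#include <cassert>
#include <cstdint>

constexpr uint64_t TagTypeNumber = 0xFFFF000000000000ull;

// Mirrors zeroExtend32ToPtr + or64(tagTypeNumberRegister, ...) above.
uint64_t boxInt32(int32_t value) {
    uint64_t raw = static_cast<uint32_t>(value); // zero-extend, dropping any stale high bits
    return raw | TagTypeNumber;
}

int32_t unboxInt32(uint64_t boxed) {
    return static_cast<int32_t>(boxed & 0xFFFFFFFFull);
}
```

The zero-extension matters: a negative int32 sign-extended to 64 bits would already have its high bits set, and OR-ing the tag over those bits would produce a malformed JSValue.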
 
 #if USE(JSVALUE64)
         case UnboxedInt52InGPR:
-            frame.setOperand(operand, JSValue(cpu.gpr<int64_t>(recovery.gpr()) >> JSValue::int52ShiftAmount));
-            break;
-
         case Int52DisplacedInJSStack:
-            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedInt52()));
+            jit.load64(scratch + index, GPRInfo::regT0);
+            jit.rshift64(
+                AssemblyHelpers::TrustedImm32(JSValue::int52ShiftAmount), GPRInfo::regT0);
+            jit.boxInt52(GPRInfo::regT0, GPRInfo::regT0, GPRInfo::regT1, FPRInfo::fpRegT0);
+            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
             break;
 
         case UnboxedStrictInt52InGPR:
-            frame.setOperand(operand, JSValue(cpu.gpr<int64_t>(recovery.gpr())));
-            break;
-
         case StrictInt52DisplacedInJSStack:
-            frame.setOperand(operand, JSValue(exec->r(recovery.virtualRegister()).unboxedStrictInt52()));
+            jit.load64(scratch + index, GPRInfo::regT0);
+            jit.boxInt52(GPRInfo::regT0, GPRInfo::regT0, GPRInfo::regT1, FPRInfo::fpRegT0);
+            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
             break;
 #endif
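The two Int52 cases differ only in whether the register holds the value pre-shifted: UnboxedInt52InGPR keeps it shifted left by JSValue::int52ShiftAmount (12 at the time, leaving 52 significant bits) so overflow checks can use the top bits, and the exit arithmetic-shifts it back before boxInt52. A hypothetical scalar model of that rshift64:

```cpp
#include <cassert>
#include <cstdint>

constexpr int int52ShiftAmount = 12; // assumed value: 52 significant bits in a 64-bit register

// UnboxedInt52InGPR -> strict int52: the rshift64 emitted above.
int64_t strictInt52FromShifted(int64_t shifted) {
    return shifted >> int52ShiftAmount; // arithmetic shift preserves the sign
}
```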
 
         case UnboxedDoubleInFPR:
-            frame.setOperand(operand, JSValue(JSValue::EncodeAsDouble, purifyNaN(cpu.fpr(recovery.fpr()))));
-            break;
-
         case DoubleDisplacedInJSStack:
-            frame.setOperand(operand, JSValue(JSValue::EncodeAsDouble, purifyNaN(exec->r(recovery.virtualRegister()).unboxedDouble())));
+            jit.move(AssemblyHelpers::TrustedImmPtr(scratch + index), GPRInfo::regT0);
+            jit.loadDouble(MacroAssembler::Address(GPRInfo::regT0), FPRInfo::fpRegT0);
+            jit.purifyNaN(FPRInfo::fpRegT0);
+#if USE(JSVALUE64)
+            jit.boxDouble(FPRInfo::fpRegT0, GPRInfo::regT0);
+            jit.store64(GPRInfo::regT0, AssemblyHelpers::addressFor(operand));
+#else
+            jit.storeDouble(FPRInfo::fpRegT0, AssemblyHelpers::addressFor(operand));
+#endif
             break;
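The purifyNaN above exists because of NaN-boxing: an arbitrary NaN payload could collide with the tagged-value space, so every NaN is collapsed to one canonical quiet pattern before the double is stored. A behavioral sketch (the canonical bit pattern is left to the platform here):

```cpp
#include <cassert>
#include <limits>

// Collapse any NaN to the canonical quiet NaN; pass every other value through.
double purifyNaN(double value) {
    if (value != value) // NaNs are the only values that compare unequal to themselves
        return std::numeric_limits<double>::quiet_NaN();
    return value;
}
```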
 
         case Constant:
-            frame.setOperand(operand, recovery.constant());
+#if USE(JSVALUE64)
+            jit.store64(
+                AssemblyHelpers::TrustedImm64(JSValue::encode(recovery.constant())),
+                AssemblyHelpers::addressFor(operand));
+#else
+            jit.store32(
+                AssemblyHelpers::TrustedImm32(recovery.constant().tag()),
+                AssemblyHelpers::tagFor(operand));
+            jit.store32(
+                AssemblyHelpers::TrustedImm32(recovery.constant().payload()),
+                AssemblyHelpers::payloadFor(operand));
+#endif
             break;
 
         case DirectArgumentsThatWereNotCreated:
@@ -600,31 +681,13 @@ void OSRExit::executeOSRExit(Context& context)
         }
     }
 
-    // Need to ensure that the stack pointer accounts for the worst-case stack usage at exit. This
-    // could toast some stack that the DFG used. We need to do it before storing to stack offsets
-    // used by baseline.
-    cpu.sp() = cpu.fp<uint8_t*>() - (codeBlock->jitCode()->dfgCommon()->requiredRegisterCountForExit * sizeof(Register));
-
-    // Restore the DFG callee saves and then save the ones the baseline JIT uses.
-    restoreCalleeSavesFor(context, codeBlock);
-    saveCalleeSavesFor(context, baselineCodeBlock);
-
-    // The tag registers are needed to materialize recoveries below.
-#if USE(JSVALUE64)
-    cpu.gpr(GPRInfo::tagTypeNumberRegister) = TagTypeNumber;
-    cpu.gpr(GPRInfo::tagMaskRegister) = TagTypeNumber | TagBitTypeOther;
-#endif
-
-    if (exit.isExceptionHandler())
-        copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(context);
-
     // Now that things on the stack are recovered, do the arguments recovery. We assume that arguments
     // recoveries don't recursively refer to each other. But, we don't try to assume that they only
     // refer to certain ranges of locals. Hence why we need to do this here, once the stack is sensible.
     // Note that we also roughly assume that the arguments might still be materialized outside of its
     // inline call frame scope - but for now the DFG wouldn't do that.
 
-    emitRestoreArguments(context, codeBlock, dfgJITCode, operands);
+    emitRestoreArguments(jit, operands);
 
     // Adjust the old JIT's execute counter. Since we are exiting OSR, we know
     // that all new calls into this code will go to the new JIT, so the execute
@@ -662,161 +725,26 @@ void OSRExit::executeOSRExit(Context& context)
     // counter to 0; otherwise we set the counter to
     // counterValueForOptimizeAfterWarmUp().
 
-    if (UNLIKELY(codeBlock->updateOSRExitCounterAndCheckIfNeedToReoptimize(exitState) == CodeBlock::OptimizeAction::ReoptimizeNow))
-        triggerReoptimizationNow(baselineCodeBlock, &exit);
-
-    reifyInlinedCallFrames(context, baselineCodeBlock, exit);
-    adjustAndJumpToTarget(context, vm, codeBlock, baselineCodeBlock, exit);
-}
-
-static void reifyInlinedCallFrames(Context& context, CodeBlock* outermostBaselineCodeBlock, const OSRExitBase& exit)
-{
-    auto& cpu = context.cpu;
-    Frame frame(cpu.fp(), context.stack());
-
-    // FIXME: We shouldn't leave holes on the stack when performing an OSR exit
-    // in presence of inlined tail calls.
-    // https://bugs.webkit.org/show_bug.cgi?id=147511
-    ASSERT(outermostBaselineCodeBlock->jitType() == JITCode::BaselineJIT);
-    frame.setOperand<CodeBlock*>(CallFrameSlot::codeBlock, outermostBaselineCodeBlock);
-
-    const CodeOrigin* codeOrigin;
-    for (codeOrigin = &exit.m_codeOrigin; codeOrigin && codeOrigin->inlineCallFrame; codeOrigin = codeOrigin->inlineCallFrame->getCallerSkippingTailCalls()) {
-        InlineCallFrame* inlineCallFrame = codeOrigin->inlineCallFrame;
-        CodeBlock* baselineCodeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(*codeOrigin, outermostBaselineCodeBlock);
-        InlineCallFrame::Kind trueCallerCallKind;
-        CodeOrigin* trueCaller = inlineCallFrame->getCallerSkippingTailCalls(&trueCallerCallKind);
-        void* callerFrame = cpu.fp();
-
-        if (!trueCaller) {
-            ASSERT(inlineCallFrame->isTail());
-            void* returnPC = frame.get<void*>(CallFrame::returnPCOffset());
-            frame.set<void*>(inlineCallFrame->returnPCOffset(), returnPC);
-            callerFrame = frame.get<void*>(CallFrame::callerFrameOffset());
-        } else {
-            CodeBlock* baselineCodeBlockForCaller = baselineCodeBlockForOriginAndBaselineCodeBlock(*trueCaller, outermostBaselineCodeBlock);
-            unsigned callBytecodeIndex = trueCaller->bytecodeIndex;
-            void* jumpTarget = nullptr;
-
-            switch (trueCallerCallKind) {
-            case InlineCallFrame::Call:
-            case InlineCallFrame::Construct:
-            case InlineCallFrame::CallVarargs:
-            case InlineCallFrame::ConstructVarargs:
-            case InlineCallFrame::TailCall:
-            case InlineCallFrame::TailCallVarargs: {
-                CallLinkInfo* callLinkInfo =
-                    baselineCodeBlockForCaller->getCallLinkInfoForBytecodeIndex(callBytecodeIndex);
-                RELEASE_ASSERT(callLinkInfo);
-
-                jumpTarget = callLinkInfo->callReturnLocation().executableAddress();
-                break;
-            }
-
-            case InlineCallFrame::GetterCall:
-            case InlineCallFrame::SetterCall: {
-                StructureStubInfo* stubInfo =
-                    baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
-                RELEASE_ASSERT(stubInfo);
-
-                jumpTarget = stubInfo->doneLocation().executableAddress();
-                break;
-            }
-
-            default:
-                RELEASE_ASSERT_NOT_REACHED();
-            }
-
-            if (trueCaller->inlineCallFrame)
-                callerFrame = cpu.fp<uint8_t*>() + trueCaller->inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
-
-            frame.set<void*>(inlineCallFrame->returnPCOffset(), jumpTarget);
-        }
-
-        frame.setOperand<void*>(inlineCallFrame->stackOffset + CallFrameSlot::codeBlock, baselineCodeBlock);
+    handleExitCounts(jit, exit);
 
-        // Restore the inline call frame's callee save registers.
-        // If this inlined frame is a tail call that will return back to the original caller, we need to
-        // copy the prior contents of the tag registers already saved for the outer frame to this frame.
-        saveOrCopyCalleeSavesFor(context, baselineCodeBlock, VirtualRegister(inlineCallFrame->stackOffset), !trueCaller);
+    // Reify inlined call frames.
 
-        if (!inlineCallFrame->isVarargs())
-            frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, PayloadOffset, inlineCallFrame->argumentCountIncludingThis);
-        ASSERT(callerFrame);
-        frame.set<void*>(inlineCallFrame->callerFrameOffset(), callerFrame);
-#if USE(JSVALUE64)
-        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
-        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, TagOffset, locationBits);
-        if (!inlineCallFrame->isClosureCall)
-            frame.setOperand(inlineCallFrame->stackOffset + CallFrameSlot::callee, JSValue(inlineCallFrame->calleeConstant()));
-#else // USE(JSVALUE64) // so this is the 32-bit part
-        Instruction* instruction = baselineCodeBlock->instructions().begin() + codeOrigin->bytecodeIndex;
-        uint32_t locationBits = CallSiteIndex(instruction).bits();
-        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount, TagOffset, locationBits);
-        frame.setOperand<uint32_t>(inlineCallFrame->stackOffset + CallFrameSlot::callee, TagOffset, static_cast<uint32_t>(JSValue::CellTag));
-        if (!inlineCallFrame->isClosureCall)
-            frame.setOperand(inlineCallFrame->stackOffset + CallFrameSlot::callee, PayloadOffset, inlineCallFrame->calleeConstant());
-#endif // USE(JSVALUE64) // ending the #else part, so directly above is the 32-bit part
-    }
+    reifyInlinedCallFrames(jit, exit);
 
-    // Don't need to set the toplevel code origin if we only did inline tail calls
-    if (codeOrigin) {
-#if USE(JSVALUE64)
-        uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
-#else
-        Instruction* instruction = outermostBaselineCodeBlock->instructions().begin() + codeOrigin->bytecodeIndex;
-        uint32_t locationBits = CallSiteIndex(instruction).bits();
-#endif
-        frame.setOperand<uint32_t>(CallFrameSlot::argumentCount, TagOffset, locationBits);
-    }
+    // And finish.
+    adjustAndJumpToTarget(vm, jit, exit);
 }
 
-static void adjustAndJumpToTarget(Context& context, VM& vm, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, OSRExit& exit)
+void JIT_OPERATION OSRExit::debugOperationPrintSpeculationFailure(ExecState* exec, void* debugInfoRaw, void* scratch)
 {
-    OSRExitState* exitState = exit.exitState.get();
-
-    WTF::storeLoadFence(); // The optimizing compiler expects that the OSR exit mechanism will execute this fence.
-    vm.heap.writeBarrier(baselineCodeBlock);
-
-    // We barrier all inlined frames -- and not just the current inline stack --
-    // because we don't know which inlined function owns the value profile that
-    // we'll update when we exit. In the case of "f() { a(); b(); }", if both
-    // a and b are inlined, we might exit inside b due to a bad value loaded
-    // from a.
-    // FIXME: MethodOfGettingAValueProfile should remember which CodeBlock owns
-    // the value profile.
-    InlineCallFrameSet* inlineCallFrames = codeBlock->jitCode()->dfgCommon()->inlineCallFrames.get();
-    if (inlineCallFrames) {
-        for (InlineCallFrame* inlineCallFrame : *inlineCallFrames)
-            vm.heap.writeBarrier(inlineCallFrame->baselineCodeBlock.get());
-    }
-
-    if (exit.m_codeOrigin.inlineCallFrame)
-        context.fp() = context.fp<uint8_t*>() + exit.m_codeOrigin.inlineCallFrame->stackOffset * sizeof(EncodedJSValue);
+    VM* vm = &exec->vm();
+    NativeCallFrameTracer tracer(vm, exec);
 
-    void* jumpTarget = exitState->jumpTarget;
-    ASSERT(jumpTarget);
-
-    context.sp() = context.fp<uint8_t*>() + exitState->stackPointerOffset;
-    if (exit.isExceptionHandler()) {
-        // Since we're jumping to op_catch, we need to set callFrameForCatch.
-        vm.callFrameForCatch = context.fp<ExecState*>();
-    }
-
-    vm.topCallFrame = context.fp<ExecState*>();
-    context.pc() = jumpTarget;
-}
-
-static void printOSRExit(Context& context, uint32_t osrExitIndex, const OSRExit& exit)
-{
-    ExecState* exec = context.fp<ExecState*>();
-    CodeBlock* codeBlock = exec->codeBlock();
+    SpeculationFailureDebugInfo* debugInfo = static_cast<SpeculationFailureDebugInfo*>(debugInfoRaw);
+    CodeBlock* codeBlock = debugInfo->codeBlock;
     CodeBlock* alternative = codeBlock->alternative();
-    ExitKind kind = exit.m_kind;
-    unsigned bytecodeOffset = exit.m_codeOrigin.bytecodeIndex;
-
     dataLog("Speculation failure in ", *codeBlock);
-    dataLog(" @ exit #", osrExitIndex, " (bc#", bytecodeOffset, ", ", exitKindToString(kind), ") with ");
+    dataLog(" @ exit #", vm->osrExitIndex, " (bc#", debugInfo->bytecodeOffset, ", ", exitKindToString(debugInfo->kind), ") with ");
     if (alternative) {
         dataLog(
             "executeCounter = ", alternative->jitExecuteCounter(),
@@ -826,18 +754,21 @@ static void printOSRExit(Context& context, uint32_t osrExitIndex, const OSRExit&
         dataLog("no alternative code block (i.e. we've been jettisoned)");
     dataLog(", osrExitCounter = ", codeBlock->osrExitCounter(), "\n");
     dataLog("    GPRs at time of exit:");
+    char* scratchPointer = static_cast<char*>(scratch);
     for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
         GPRReg gpr = GPRInfo::toRegister(i);
-        dataLog(" ", context.gprName(gpr), ":", RawPointer(context.gpr<void*>(gpr)));
+        dataLog(" ", GPRInfo::debugName(gpr), ":", RawPointer(*reinterpret_cast_ptr<void**>(scratchPointer)));
+        scratchPointer += sizeof(EncodedJSValue);
     }
     dataLog("\n");
     dataLog("    FPRs at time of exit:");
     for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
         FPRReg fpr = FPRInfo::toRegister(i);
-        dataLog(" ", context.fprName(fpr), ":");
-        uint64_t bits = context.fpr<uint64_t>(fpr);
-        double value = context.fpr(fpr);
+        dataLog(" ", FPRInfo::debugName(fpr), ":");
+        uint64_t bits = *reinterpret_cast_ptr<uint64_t*>(scratchPointer);
+        double value = *reinterpret_cast_ptr<double*>(scratchPointer);
         dataLogF("%llx:%lf", static_cast<long long>(bits), value);
+        scratchPointer += sizeof(EncodedJSValue);
     }
     dataLog("\n");
 }
index b6c9a85..9945d0c 100644 (file)
 #include "MethodOfGettingAValueProfile.h"
 #include "Operands.h"
 #include "ValueRecovery.h"
-#include <wtf/RefPtr.h>
 
 namespace JSC {
 
-namespace Probe {
-class Context;
-} // namespace Probe
-
-namespace Profiler {
-class OSRExit;
-} // namespace Profiler
+class CCallHelpers;
 
 namespace DFG {
 
@@ -98,32 +91,6 @@ private:
     SpeculationRecoveryType m_type;
 };
 
-struct OSRExitState : RefCounted<OSRExitState> {
-    OSRExitState(OSRExitBase& exit, CodeBlock* codeBlock, CodeBlock* baselineCodeBlock, Operands<ValueRecovery>& operands, SpeculationRecovery* recovery, ptrdiff_t stackPointerOffset, int32_t activeThreshold, double memoryUsageAdjustedThreshold, void* jumpTarget)
-        : exit(exit)
-        , codeBlock(codeBlock)
-        , baselineCodeBlock(baselineCodeBlock)
-        , operands(operands)
-        , recovery(recovery)
-        , stackPointerOffset(stackPointerOffset)
-        , activeThreshold(activeThreshold)
-        , memoryUsageAdjustedThreshold(memoryUsageAdjustedThreshold)
-        , jumpTarget(jumpTarget)
-    { }
-
-    OSRExitBase& exit;
-    CodeBlock* codeBlock;
-    CodeBlock* baselineCodeBlock;
-    Operands<ValueRecovery> operands;
-    SpeculationRecovery* recovery;
-    ptrdiff_t stackPointerOffset;
-    uint32_t activeThreshold;
-    double memoryUsageAdjustedThreshold;
-    void* jumpTarget;
-
-    Profiler::OSRExit* profilerExit { nullptr };
-};
-
 // === OSRExit ===
 //
 // This structure describes how to exit the speculative path by
@@ -131,20 +98,32 @@ struct OSRExitState : RefCounted<OSRExitState> {
 struct OSRExit : public OSRExitBase {
     OSRExit(ExitKind, JSValueSource, MethodOfGettingAValueProfile, SpeculativeJIT*, unsigned streamIndex, unsigned recoveryIndex = UINT_MAX);
 
-    static void executeOSRExit(Probe::Context&);
+    static void JIT_OPERATION compileOSRExit(ExecState*) WTF_INTERNAL;
 
-    RefPtr<OSRExitState> exitState;
+    unsigned m_patchableCodeOffset { 0 };
+    
+    MacroAssemblerCodeRef m_code;
     
     JSValueSource m_jsValueSource;
     MethodOfGettingAValueProfile m_valueProfile;
     
     unsigned m_recoveryIndex;
 
+    void setPatchableCodeOffset(MacroAssembler::PatchableJump);
+    MacroAssembler::Jump getPatchableCodeOffsetAsJump() const;
+    CodeLocationJump codeLocationForRepatch(CodeBlock*) const;
+    void correctJump(LinkBuffer&);
+
     unsigned m_streamIndex;
     void considerAddingAsFrequentExitSite(CodeBlock* profiledCodeBlock)
     {
         OSRExitBase::considerAddingAsFrequentExitSite(profiledCodeBlock, ExitFromDFG);
     }
+
+private:
+    static void compileExit(CCallHelpers&, VM&, const OSRExit&, const Operands<ValueRecovery>&, SpeculationRecovery*);
+    static void emitRestoreArguments(CCallHelpers&, const Operands<ValueRecovery>&);
+    static void JIT_OPERATION debugOperationPrintSpeculationFailure(ExecState*, void*, void*) WTF_INTERNAL;
 };
 
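The `m_patchableCodeOffset` / `compileOSRExit` / `codeLocationForRepatch` members restored above support lazily-compiled exits: each exit site initially jumps to a shared generation thunk, and the first time it fires, `compileOSRExit` builds the real exit code and repatches the jump so later exits skip compilation. A toy model of that control flow — the type and its members are ours for illustration, not JSC API:

```cpp
// Models a patchable OSR exit site: compilation happens at most once,
// on the first exit taken through this site.
struct LazyExitSite {
    bool compiled = false;
    int timesCompiled = 0;

    void takeExit()
    {
        if (!compiled) {       // first exit lands in the generation thunk
            ++timesCompiled;   // expensive one-time exit compilation runs here
            compiled = true;   // "repatch": later exits bypass the thunk
        }
        // ... execute the (now-compiled) exit code ...
    }
};
```

However many times a given site exits, the compile step runs once; this is the work the rolled-out probe-based mechanism had eliminated.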
 struct SpeculationFailureDebugInfo {
index 657ecff..2151172 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -37,7 +37,6 @@
 
 namespace JSC { namespace DFG {
 
-// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 void handleExitCounts(CCallHelpers& jit, const OSRExitBase& exit)
 {
     if (!exitKindMayJettison(exit.m_kind)) {
@@ -144,7 +143,6 @@ void handleExitCounts(CCallHelpers& jit, const OSRExitBase& exit)
     doneAdjusting.link(&jit);
 }
 
-// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
 {
     // FIXME: We shouldn't leave holes on the stack when performing an OSR exit
@@ -254,7 +252,6 @@ void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
     }
 }
 
-// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 static void osrWriteBarrier(CCallHelpers& jit, GPRReg owner, GPRReg scratch)
 {
     AssemblyHelpers::Jump ownerIsRememberedOrInEden = jit.barrierBranchWithoutFence(owner);
@@ -275,7 +272,6 @@ static void osrWriteBarrier(CCallHelpers& jit, GPRReg owner, GPRReg scratch)
     ownerIsRememberedOrInEden.link(&jit);
 }
 
-// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 void adjustAndJumpToTarget(VM& vm, CCallHelpers& jit, const OSRExitBase& exit)
 {
     jit.memoryFence();
index 0563034..108a0f5 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -40,7 +40,6 @@ void handleExitCounts(CCallHelpers&, const OSRExitBase&);
 void reifyInlinedCallFrames(CCallHelpers&, const OSRExitBase&);
 void adjustAndJumpToTarget(VM&, CCallHelpers&, const OSRExitBase&);
 
-// FIXME: This won't be needed once we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 template <typename JITCodeType>
 void adjustFrameAndStackInOSRExitCompilerThunk(MacroAssembler& jit, VM* vm, JITCode::JITType jitType)
 {
index d19a7c7..c74bb35 100644 (file)
@@ -1487,6 +1487,62 @@ JSCell* JIT_OPERATION operationCreateClonedArguments(ExecState* exec, Structure*
         exec, structure, argumentStart, length, callee);
 }
 
+JSCell* JIT_OPERATION operationCreateDirectArgumentsDuringExit(ExecState* exec, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
+{
+    VM& vm = exec->vm();
+    NativeCallFrameTracer target(&vm, exec);
+    
+    DeferGCForAWhile deferGC(vm.heap);
+    
+    CodeBlock* codeBlock;
+    if (inlineCallFrame)
+        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
+    else
+        codeBlock = exec->codeBlock();
+    
+    unsigned length = argumentCount - 1;
+    unsigned capacity = std::max(length, static_cast<unsigned>(codeBlock->numParameters() - 1));
+    DirectArguments* result = DirectArguments::create(
+        vm, codeBlock->globalObject()->directArgumentsStructure(), length, capacity);
+    
+    result->callee().set(vm, result, callee);
+    
+    Register* arguments =
+        exec->registers() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0) +
+        CallFrame::argumentOffset(0);
+    for (unsigned i = length; i--;)
+        result->setIndexQuickly(vm, i, arguments[i].jsValue());
+    
+    return result;
+}
+
+JSCell* JIT_OPERATION operationCreateClonedArgumentsDuringExit(ExecState* exec, InlineCallFrame* inlineCallFrame, JSFunction* callee, int32_t argumentCount)
+{
+    VM& vm = exec->vm();
+    NativeCallFrameTracer target(&vm, exec);
+    
+    DeferGCForAWhile deferGC(vm.heap);
+    
+    CodeBlock* codeBlock;
+    if (inlineCallFrame)
+        codeBlock = baselineCodeBlockForInlineCallFrame(inlineCallFrame);
+    else
+        codeBlock = exec->codeBlock();
+    
+    unsigned length = argumentCount - 1;
+    ClonedArguments* result = ClonedArguments::createEmpty(
+        vm, codeBlock->globalObject()->clonedArgumentsStructure(), callee, length);
+    
+    Register* arguments =
+        exec->registers() + (inlineCallFrame ? inlineCallFrame->stackOffset : 0) +
+        CallFrame::argumentOffset(0);
+    for (unsigned i = length; i--;)
+        result->putDirectIndex(exec, i, arguments[i].jsValue());
+
+    return result;
+}
+
 JSCell* JIT_OPERATION operationCreateRest(ExecState* exec, Register* argumentStart, unsigned numberOfParamsToSkip, unsigned arraySize)
 {
     VM* vm = &exec->vm();
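The length/capacity computation in `operationCreateDirectArgumentsDuringExit` above is easy to misread: `argumentCount` includes `this`, so the user-visible length is one less, and `DirectArguments` pads its capacity up to the callee's declared parameter count (also minus `this`). A standalone sketch of just that arithmetic — the function names are ours, not JSC's:

```cpp
#include <algorithm>
#include <cstdint>

// argumentCountIncludingThis counts the `this` slot, so visible length is one less.
unsigned argumentsLength(int32_t argumentCountIncludingThis)
{
    return static_cast<unsigned>(argumentCountIncludingThis) - 1;
}

// Capacity is padded up to the callee's declared parameter count (minus `this`),
// matching the std::max expression in the diff.
unsigned directArgumentsCapacity(int32_t argumentCountIncludingThis, int numParameters)
{
    unsigned length = argumentsLength(argumentCountIncludingThis);
    return std::max(length, static_cast<unsigned>(numParameters - 1));
}
```

So a call passing 3 arguments (`argumentCount == 4`) to a function declared with 5 named parameters (`numParameters == 6`) gets capacity 5, while the same call to a 1-parameter function gets capacity 3.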
index 5476bb1..155322c 100644 (file)
@@ -150,7 +150,9 @@ size_t JIT_OPERATION operationCompareStrictEqCell(ExecState*, EncodedJSValue enc
 size_t JIT_OPERATION operationCompareStrictEq(ExecState*, EncodedJSValue encodedOp1, EncodedJSValue encodedOp2) WTF_INTERNAL;
 JSCell* JIT_OPERATION operationCreateActivationDirect(ExecState*, Structure*, JSScope*, SymbolTable*, EncodedJSValue);
 JSCell* JIT_OPERATION operationCreateDirectArguments(ExecState*, Structure*, int32_t length, int32_t minCapacity);
+JSCell* JIT_OPERATION operationCreateDirectArgumentsDuringExit(ExecState*, InlineCallFrame*, JSFunction*, int32_t argumentCount);
 JSCell* JIT_OPERATION operationCreateScopedArguments(ExecState*, Structure*, Register* argumentStart, int32_t length, JSFunction* callee, JSLexicalEnvironment*);
+JSCell* JIT_OPERATION operationCreateClonedArgumentsDuringExit(ExecState*, InlineCallFrame*, JSFunction*, int32_t argumentCount);
 JSCell* JIT_OPERATION operationCreateClonedArguments(ExecState*, Structure*, Register* argumentStart, int32_t length, JSFunction* callee);
 JSCell* JIT_OPERATION operationCreateRest(ExecState*, Register* argumentStart, unsigned numberOfArgumentsToSkip, unsigned arraySize);
 double JIT_OPERATION operationFModOnInts(int32_t, int32_t) WTF_INTERNAL;
index dba7388..b7327f3 100644 (file)
 
 namespace JSC { namespace DFG {
 
-MacroAssemblerCodeRef osrExitThunkGenerator(VM* vm)
+MacroAssemblerCodeRef osrExitGenerationThunkGenerator(VM* vm)
 {
     MacroAssembler jit;
-    jit.probe(OSRExit::executeOSRExit, vm);
+
+    // This needs to happen before we use the scratch buffer because this function also uses the scratch buffer.
+    adjustFrameAndStackInOSRExitCompilerThunk<DFG::JITCode>(jit, vm, JITCode::DFGJIT);
+    
+    size_t scratchSize = sizeof(EncodedJSValue) * (GPRInfo::numberOfRegisters + FPRInfo::numberOfRegisters);
+    ScratchBuffer* scratchBuffer = vm->scratchBufferForSize(scratchSize);
+    EncodedJSValue* buffer = static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer());
+    
+    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
+#if USE(JSVALUE64)
+        jit.store64(GPRInfo::toRegister(i), buffer + i);
+#else
+        jit.store32(GPRInfo::toRegister(i), buffer + i);
+#endif
+    }
+    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
+        jit.move(MacroAssembler::TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
+        jit.storeDouble(FPRInfo::toRegister(i), MacroAssembler::Address(GPRInfo::regT0));
+    }
+    
+    // Tell GC mark phase how much of the scratch buffer is active during call.
+    jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
+    jit.storePtr(MacroAssembler::TrustedImmPtr(scratchSize), MacroAssembler::Address(GPRInfo::regT0));
+
+    // Set up one argument.
+#if CPU(X86)
+    jit.poke(GPRInfo::callFrameRegister, 0);
+#else
+    jit.move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
+#endif
+
+    MacroAssembler::Call functionCall = jit.call();
+
+    jit.move(MacroAssembler::TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
+    jit.storePtr(MacroAssembler::TrustedImmPtr(0), MacroAssembler::Address(GPRInfo::regT0));
+
+    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
+        jit.move(MacroAssembler::TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
+        jit.loadDouble(MacroAssembler::Address(GPRInfo::regT0), FPRInfo::toRegister(i));
+    }
+    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
+#if USE(JSVALUE64)
+        jit.load64(buffer + i, GPRInfo::toRegister(i));
+#else
+        jit.load32(buffer + i, GPRInfo::toRegister(i));
+#endif
+    }
+    
+    jit.jump(MacroAssembler::AbsoluteAddress(&vm->osrExitJumpDestination));
+    
     LinkBuffer patchBuffer(jit, GLOBAL_THUNK_ID);
-    return FINALIZE_CODE(patchBuffer, ("DFG OSR exit thunk"));
+    
+    patchBuffer.link(functionCall, OSRExit::compileOSRExit);
+    
+    return FINALIZE_CODE(patchBuffer, ("DFG OSR exit generation thunk"));
 }
 
 MacroAssemblerCodeRef osrEntryThunkGenerator(VM* vm)
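The generation thunk above (and `AssemblyHelpers::debugCall` later in this changeset) follows one discipline: spill every register to a scratch buffer, publish the buffer's active length so the GC mark phase can scan it during the call, make the call, retract the active length, and reload every register. A minimal C++ model of that pattern — `ScratchModel` and the function name are illustrative, not JSC API:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Models the VM scratch buffer: storage plus the "active length" word
// the GC consults while the call is in flight.
struct ScratchModel {
    std::array<uint64_t, 32> buffer {};
    size_t activeLength = 0;
};

// Spill / mark live / call / mark dead / reload, in that order.
template<typename Callee>
void callWithAllRegistersPreserved(ScratchModel& scratch,
                                   std::array<uint64_t, 32>& registers,
                                   Callee&& callee)
{
    scratch.buffer = registers;                     // spill every register
    scratch.activeLength = sizeof(scratch.buffer);  // GC may now scan the buffer
    callee();                                       // the call may clobber `registers`
    scratch.activeLength = 0;                       // buffer is dead again
    registers = scratch.buffer;                     // reload every register
}
```

The ordering matters: the active length must be published before the call (so a GC triggered inside it sees the spilled values) and cleared before the reload.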
index cffac9f..58a33da 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2014 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -35,7 +35,7 @@ class VM;
 
 namespace DFG {
 
-MacroAssemblerCodeRef osrExitThunkGenerator(VM*);
+MacroAssemblerCodeRef osrExitGenerationThunkGenerator(VM*);
 MacroAssemblerCodeRef osrEntryThunkGenerator(VM*);
 
 } } // namespace JSC::DFG
index 8b9d6a3..8d31f7a 100644 (file)
@@ -50,7 +50,6 @@ ExecutableBase* AssemblyHelpers::executableFor(const CodeOrigin& codeOrigin)
     return codeOrigin.inlineCallFrame->baselineCodeBlock->ownerExecutable();
 }
 
-// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 Vector<BytecodeAndMachineOffset>& AssemblyHelpers::decodedCodeMapFor(CodeBlock* codeBlock)
 {
     ASSERT(codeBlock == codeBlock->baselineVersion());
@@ -821,6 +820,61 @@ bool AssemblyHelpers::storeWasmContextNeedsMacroScratchRegister()
 
 #endif // ENABLE(WEBASSEMBLY)
 
+void AssemblyHelpers::debugCall(VM& vm, V_DebugOperation_EPP function, void* argument)
+{
+    size_t scratchSize = sizeof(EncodedJSValue) * (GPRInfo::numberOfRegisters + FPRInfo::numberOfRegisters);
+    ScratchBuffer* scratchBuffer = vm.scratchBufferForSize(scratchSize);
+    EncodedJSValue* buffer = static_cast<EncodedJSValue*>(scratchBuffer->dataBuffer());
+
+    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
+#if USE(JSVALUE64)
+        store64(GPRInfo::toRegister(i), buffer + i);
+#else
+        store32(GPRInfo::toRegister(i), buffer + i);
+#endif
+    }
+
+    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
+        move(TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
+        storeDouble(FPRInfo::toRegister(i), GPRInfo::regT0);
+    }
+
+    // Tell GC mark phase how much of the scratch buffer is active during call.
+    move(TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
+    storePtr(TrustedImmPtr(scratchSize), GPRInfo::regT0);
+
+#if CPU(X86_64) || CPU(ARM) || CPU(ARM64) || CPU(MIPS)
+    move(TrustedImmPtr(buffer), GPRInfo::argumentGPR2);
+    move(TrustedImmPtr(argument), GPRInfo::argumentGPR1);
+    move(GPRInfo::callFrameRegister, GPRInfo::argumentGPR0);
+    GPRReg scratch = selectScratchGPR(GPRInfo::argumentGPR0, GPRInfo::argumentGPR1, GPRInfo::argumentGPR2);
+#elif CPU(X86)
+    poke(GPRInfo::callFrameRegister, 0);
+    poke(TrustedImmPtr(argument), 1);
+    poke(TrustedImmPtr(buffer), 2);
+    GPRReg scratch = GPRInfo::regT0;
+#else
+#error "JIT not supported on this platform."
+#endif
+    move(TrustedImmPtr(reinterpret_cast<void*>(function)), scratch);
+    call(scratch);
+
+    move(TrustedImmPtr(scratchBuffer->addressOfActiveLength()), GPRInfo::regT0);
+    storePtr(TrustedImmPtr(0), GPRInfo::regT0);
+
+    for (unsigned i = 0; i < FPRInfo::numberOfRegisters; ++i) {
+        move(TrustedImmPtr(buffer + GPRInfo::numberOfRegisters + i), GPRInfo::regT0);
+        loadDouble(GPRInfo::regT0, FPRInfo::toRegister(i));
+    }
+    for (unsigned i = 0; i < GPRInfo::numberOfRegisters; ++i) {
+#if USE(JSVALUE64)
+        load64(buffer + i, GPRInfo::toRegister(i));
+#else
+        load32(buffer + i, GPRInfo::toRegister(i));
+#endif
+    }
+}
+
 void AssemblyHelpers::copyCalleeSavesToVMEntryFrameCalleeSavesBufferImpl(GPRReg calleeSavesBuffer)
 {
 #if NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
index 6bf2f3b..a68c369 100644 (file)
@@ -992,6 +992,9 @@ public:
         return GPRInfo::regT5;
     }
 
+    // Add a debug call. This call has no effect on JIT code execution state.
+    void debugCall(VM&, V_DebugOperation_EPP function, void* argument);
+
     // These methods JIT generate dynamic, debug-only checks - akin to ASSERTs.
 #if !ASSERT_DISABLED
     void jitAssertIsInt32(GPRReg);
@@ -1462,7 +1465,6 @@ public:
     
     void emitDumbVirtualCall(VM&, CallLinkInfo*);
     
-    // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
     Vector<BytecodeAndMachineOffset>& decodedCodeMapFor(CodeBlock*);
 
     void makeSpaceOnStackForCCall();
@@ -1654,7 +1656,6 @@ protected:
     CodeBlock* m_codeBlock;
     CodeBlock* m_baselineCodeBlock;
 
-    // FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
     HashMap<CodeBlock*, Vector<BytecodeAndMachineOffset>> m_decodedCodeMaps;
 };
 
index 99841de..c04cdc4 100644 (file)
@@ -2329,7 +2329,6 @@ char* JIT_OPERATION operationReallocateButterflyToGrowPropertyStorage(ExecState*
     return reinterpret_cast<char*>(result);
 }
 
-// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 void JIT_OPERATION operationOSRWriteBarrier(ExecState* exec, JSCell* cell)
 {
     VM* vm = &exec->vm();
index 3dfbef4..669f304 100644 (file)
@@ -448,7 +448,6 @@ char* JIT_OPERATION operationReallocateButterflyToHavePropertyStorageWithInitial
 char* JIT_OPERATION operationReallocateButterflyToGrowPropertyStorage(ExecState*, JSObject*, size_t newSize) WTF_INTERNAL;
 
 void JIT_OPERATION operationWriteBarrierSlowPath(ExecState*, JSCell*);
-// FIXME: remove this when we fix https://bugs.webkit.org/show_bug.cgi?id=175145.
 void JIT_OPERATION operationOSRWriteBarrier(ExecState*, JSCell*);
 
 void JIT_OPERATION operationExceptionFuzz(ExecState*);
index 61e6dd4..733df63 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -43,8 +43,7 @@ public:
     
     uint64_t* counterAddress() { return &m_counter; }
     uint64_t count() const { return m_counter; }
-    void incCount() { m_counter++; }
-
+    
     JSValue toJS(ExecState*) const;
 
 private:
index c24ea5b..978b03b 100644 (file)
@@ -1,7 +1,7 @@
 /*
  *  Copyright (C) 1999-2001 Harri Porten (porten@kde.org)
  *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
- *  Copyright (C) 2003-2017 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003, 2004, 2005, 2007, 2008, 2009, 2012, 2015 Apple Inc. All rights reserved.
  *
  *  This library is free software; you can redistribute it and/or
  *  modify it under the terms of the GNU Library General Public
@@ -344,9 +344,12 @@ public:
     uint32_t tag() const;
     int32_t payload() const;
 
-    // This should only be used by the LLInt C Loop interpreter and OSRExit code, which need to
-    // synthesize a JSValue from their "register"s holding tag and payload values.
+#if !ENABLE(JIT)
+    // This should only be used by the LLInt C Loop interpreter, which needs
+    // to synthesize a JSValue from its "register"s holding tag and payload
+    // values.
     explicit JSValue(int32_t tag, int32_t payload);
+#endif
 
 #elif USE(JSVALUE64)
     /*
index a27f6a1..72de4c7 100644 (file)
@@ -341,7 +341,7 @@ inline JSValue::JSValue(int i)
     u.asBits.payload = i;
 }
 
-#if USE(JSVALUE32_64)
+#if !ENABLE(JIT)
 inline JSValue::JSValue(int32_t tag, int32_t payload)
 {
     u.asBits.tag = tag;
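The constructor being re-gated above targets the JSVALUE32_64 encoding: a 64-bit value split into a 32-bit tag (the type) and a 32-bit payload. A self-contained sketch of that layout — the struct is ours, and the layout comment assumes a little-endian build as JSC's 32-bit ports do:

```cpp
#include <cstdint>

// Mirrors the tag/payload split of a JSVALUE32_64 value. On little-endian
// targets the payload occupies the low 32 bits and the tag the high 32 bits.
struct Value32_64 {
    union {
        struct {
            int32_t payload; // low word (little-endian)
            int32_t tag;     // high word: encodes the value's type
        } asBits;
        uint64_t asInt64;
    } u;

    Value32_64(int32_t tag, int32_t payload)
    {
        u.asBits.tag = tag;
        u.asBits.payload = payload;
    }
};
```

This is why the C Loop interpreter needs the two-argument constructor: its "registers" hold tag and payload words separately and must be recombined into a JSValue.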
index 2fddf18..9558594 100644 (file)
@@ -571,6 +571,7 @@ public:
     void* targetMachinePCForThrow;
     Instruction* targetInterpreterPCForThrow;
     uint32_t osrExitIndex;
+    void* osrExitJumpDestination;
     bool isExecutingInRegExpJIT { false };
 
     // The threading protocol here is as follows: