[JSC] Merge op_check_traps into op_enter and op_loop_hint
author ysuzuki@apple.com <ysuzuki@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Mon, 2 Sep 2019 03:44:32 +0000 (03:44 +0000)
committer ysuzuki@apple.com <ysuzuki@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Mon, 2 Sep 2019 03:44:32 +0000 (03:44 +0000)
https://bugs.webkit.org/show_bug.cgi?id=201373

Reviewed by Mark Lam.

This patch removes op_check_traps. Previously, we emitted op_check_traps conditionally, based on Options and platform configuration.
Now we always emit it, so a separate op_check_traps bytecode is no longer necessary: we can perform the check directly in
op_enter and op_loop_hint.
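
As a minimal illustration, both emit_op_enter and emit_op_loop_hint in the jit/JITOpcodes.cpp hunk below now begin with the
same fast-path trap test:

    // Take the slow path (which calls operationHandleTraps) only when the
    // VM has a pending trap; otherwise fall through at full speed.
    addSlowCase(branchTest8(NonZero, AbsoluteAddress(m_vm->needTrapHandlingAddress())));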

While this patch moves the check-traps implementation into op_enter and op_loop_hint, we keep the separate DFG nodes (CheckTraps or
InvalidationPoint), since which node is inserted depends on configurations and options, and emitting multiple DFG nodes from one bytecode is easy.
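
In the DFG, the node selection formerly done for op_check_traps now happens at op_enter and op_loop_hint (see the
dfg/DFGByteCodeParser.cpp hunk below):

    // Polling traps need an explicit CheckTraps node; otherwise an
    // InvalidationPoint suffices, since the trap fires by invalidating the code.
    addToGraph(Options::usePollingTraps() ? CheckTraps : InvalidationPoint);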

We also inline op_enter's slow-path write barrier in LLInt.
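
For reference, this is the C++ slow path that LLInt used to call on every op_enter (removed from runtime/CommonSlowPaths.cpp
below); the same barrier is now emitted inline via the new writeBarrierOnCellWithReload macro:

    SLOW_PATH_DECL(slow_path_enter)
    {
        BEGIN();
        CodeBlock* codeBlock = exec->codeBlock();
        Heap::heap(codeBlock)->writeBarrier(codeBlock);
        END();
    }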

* bytecode/BytecodeList.rb:
* bytecode/BytecodeUseDef.h:
(JSC::computeUsesForBytecodeOffset):
(JSC::computeDefsForBytecodeOffset):
* bytecompiler/BytecodeGenerator.cpp:
(JSC::BytecodeGenerator::BytecodeGenerator):
(JSC::BytecodeGenerator::emitLoopHint):
(JSC::BytecodeGenerator::emitCheckTraps): Deleted.
* bytecompiler/BytecodeGenerator.h:
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::handleRecursiveTailCall):
(JSC::DFG::ByteCodeParser::parseBlock):
* dfg/DFGCapabilities.cpp:
(JSC::DFG::capabilityLevel):
* jit/JIT.cpp:
(JSC::JIT::privateCompileMainPass):
(JSC::JIT::privateCompileSlowCases):
(JSC::JIT::emitEnterOptimizationCheck): Deleted.
* jit/JIT.h:
* jit/JITOpcodes.cpp:
(JSC::JIT::emit_op_loop_hint):
(JSC::JIT::emitSlow_op_loop_hint):
(JSC::JIT::emit_op_enter):
(JSC::JIT::emitSlow_op_enter):
(JSC::JIT::emit_op_check_traps): Deleted.
(JSC::JIT::emitSlow_op_check_traps): Deleted.
* jit/JITOpcodes32_64.cpp:
(JSC::JIT::emit_op_enter): Deleted.
* llint/LowLevelInterpreter.asm:
* llint/LowLevelInterpreter32_64.asm:
* llint/LowLevelInterpreter64.asm:
* runtime/CommonSlowPaths.cpp:
* runtime/CommonSlowPaths.h:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@249372 268f45cc-cd09-0410-ab3c-d52691b4dbfc

16 files changed:
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/bytecode/BytecodeList.rb
Source/JavaScriptCore/bytecode/BytecodeUseDef.h
Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h
Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
Source/JavaScriptCore/dfg/DFGCapabilities.cpp
Source/JavaScriptCore/jit/JIT.cpp
Source/JavaScriptCore/jit/JIT.h
Source/JavaScriptCore/jit/JITOpcodes.cpp
Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
Source/JavaScriptCore/llint/LowLevelInterpreter.asm
Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
Source/JavaScriptCore/runtime/CommonSlowPaths.h

diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index 2eb98d1..47f5325 100644
@@ -1,5 +1,55 @@
 2019-09-01  Yusuke Suzuki  <ysuzuki@apple.com>
 
+        [JSC] Merge op_check_traps into op_enter and op_loop_hint
+        https://bugs.webkit.org/show_bug.cgi?id=201373
+
+        Reviewed by Mark Lam.
+
+        This patch removes op_check_traps. Previously, we emitted op_check_traps conditionally, based on Options and platform configuration.
+        Now we always emit it, so a separate op_check_traps bytecode is no longer necessary: we can perform the check directly in
+        op_enter and op_loop_hint.
+
+        While this patch moves the check-traps implementation into op_enter and op_loop_hint, we keep the separate DFG nodes (CheckTraps or
+        InvalidationPoint), since which node is inserted depends on configurations and options, and emitting multiple DFG nodes from one bytecode is easy.
+
+        We also inline op_enter's slow-path write barrier in LLInt.
+
+        * bytecode/BytecodeList.rb:
+        * bytecode/BytecodeUseDef.h:
+        (JSC::computeUsesForBytecodeOffset):
+        (JSC::computeDefsForBytecodeOffset):
+        * bytecompiler/BytecodeGenerator.cpp:
+        (JSC::BytecodeGenerator::BytecodeGenerator):
+        (JSC::BytecodeGenerator::emitLoopHint):
+        (JSC::BytecodeGenerator::emitCheckTraps): Deleted.
+        * bytecompiler/BytecodeGenerator.h:
+        * dfg/DFGByteCodeParser.cpp:
+        (JSC::DFG::ByteCodeParser::handleRecursiveTailCall):
+        (JSC::DFG::ByteCodeParser::parseBlock):
+        * dfg/DFGCapabilities.cpp:
+        (JSC::DFG::capabilityLevel):
+        * jit/JIT.cpp:
+        (JSC::JIT::privateCompileMainPass):
+        (JSC::JIT::privateCompileSlowCases):
+        (JSC::JIT::emitEnterOptimizationCheck): Deleted.
+        * jit/JIT.h:
+        * jit/JITOpcodes.cpp:
+        (JSC::JIT::emit_op_loop_hint):
+        (JSC::JIT::emitSlow_op_loop_hint):
+        (JSC::JIT::emit_op_enter):
+        (JSC::JIT::emitSlow_op_enter):
+        (JSC::JIT::emit_op_check_traps): Deleted.
+        (JSC::JIT::emitSlow_op_check_traps): Deleted.
+        * jit/JITOpcodes32_64.cpp:
+        (JSC::JIT::emit_op_enter): Deleted.
+        * llint/LowLevelInterpreter.asm:
+        * llint/LowLevelInterpreter32_64.asm:
+        * llint/LowLevelInterpreter64.asm:
+        * runtime/CommonSlowPaths.cpp:
+        * runtime/CommonSlowPaths.h:
+
+2019-09-01  Yusuke Suzuki  <ysuzuki@apple.com>
+
         [JSC] Fix testb3 debug failures
         https://bugs.webkit.org/show_bug.cgi?id=201382
 
diff --git a/Source/JavaScriptCore/bytecode/BytecodeList.rb b/Source/JavaScriptCore/bytecode/BytecodeList.rb
index 67e8a20..69cd2f6 100644
@@ -1093,8 +1093,6 @@ op :yield,
         argument: VirtualRegister,
     }
 
-op :check_traps
-
 op :log_shadow_chicken_prologue,
     args: {
         scope: VirtualRegister,
diff --git a/Source/JavaScriptCore/bytecode/BytecodeUseDef.h b/Source/JavaScriptCore/bytecode/BytecodeUseDef.h
index 8de75a1..812c65a 100644
@@ -86,7 +86,6 @@ void computeUsesForBytecodeOffset(Block* codeBlock, OpcodeID opcodeID, const Ins
     case op_create_direct_arguments:
     case op_create_cloned_arguments:
     case op_get_rest_length:
-    case op_check_traps:
     case op_get_argument:
     case op_nop:
     case op_unreachable:
@@ -346,7 +345,6 @@ void computeDefsForBytecodeOffset(Block* codeBlock, OpcodeID opcodeID, const Ins
     case op_profile_control_flow:
     case op_put_to_arguments:
     case op_set_function_name:
-    case op_check_traps:
     case op_log_shadow_chicken_prologue:
     case op_log_shadow_chicken_tail:
     case op_yield:
diff --git a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
index 1c22ccd..9658c69 100644
@@ -352,8 +352,6 @@ BytecodeGenerator::BytecodeGenerator(VM& vm, ProgramNode* programNode, UnlinkedP
 
     allocateAndEmitScope();
 
-    emitCheckTraps();
-
     const FunctionStack& functionStack = programNode->functionStack();
 
     for (auto* function : functionStack)
@@ -473,8 +471,6 @@ BytecodeGenerator::BytecodeGenerator(VM& vm, FunctionNode* functionNode, Unlinke
 
     allocateAndEmitScope();
 
-    emitCheckTraps();
-    
     if (functionNameIsInScope(functionNode->ident(), functionNode->functionMode())) {
         ASSERT(parseMode != SourceParseMode::GeneratorBodyMode);
         ASSERT(!isAsyncFunctionBodyParseMode(parseMode));
@@ -882,8 +878,6 @@ BytecodeGenerator::BytecodeGenerator(VM& vm, EvalNode* evalNode, UnlinkedEvalCod
 
     allocateAndEmitScope();
 
-    emitCheckTraps();
-    
     for (FunctionMetadataNode* function : evalNode->functionStack()) {
         m_codeBlock->addFunctionDecl(makeFunction(function));
         m_functionsToInitialize.append(std::make_pair(function, TopLevelFunctionVariable));
@@ -968,8 +962,6 @@ BytecodeGenerator::BytecodeGenerator(VM& vm, ModuleProgramNode* moduleProgramNod
 
     allocateAndEmitScope();
 
-    emitCheckTraps();
-    
     m_calleeRegister.setIndex(CallFrameSlot::callee);
 
     m_codeBlock->setNumParameters(1); // Allocate space for "this"
@@ -1397,7 +1389,6 @@ void BytecodeGenerator::emitEnter()
 void BytecodeGenerator::emitLoopHint()
 {
     OpLoopHint::emit(this);
-    emitCheckTraps();
 }
 
 void BytecodeGenerator::emitJump(Label& target)
@@ -1405,11 +1396,6 @@ void BytecodeGenerator::emitJump(Label& target)
     OpJmp::emit(this, target.bind(this));
 }
 
-void BytecodeGenerator::emitCheckTraps()
-{
-    OpCheckTraps::emit(this);
-}
-
 void ALWAYS_INLINE BytecodeGenerator::rewind()
 {
     ASSERT(m_lastInstruction.isValid());
diff --git a/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h b/Source/JavaScriptCore/bytecompiler/BytecodeGenerator.h
index 1c6a185..1c2cae6 100644
@@ -847,7 +847,6 @@ namespace JSC {
         bool fuseTestAndJmp(RegisterID* cond, Label& target);
 
         void emitEnter();
-        void emitCheckTraps();
 
         RegisterID* emitHasIndexedProperty(RegisterID* dst, RegisterID* base, RegisterID* propertyName);
         RegisterID* emitHasStructureProperty(RegisterID* dst, RegisterID* base, RegisterID* propertyName, RegisterID* enumerator);
diff --git a/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp b/Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
index e57c575..2edd2f5 100644
@@ -1441,10 +1441,16 @@ bool ByteCodeParser::handleRecursiveTailCall(Node* callTargetNode, CallVariant c
         for (int i = 0; i < stackEntry->m_codeBlock->numVars(); ++i)
             setDirect(stackEntry->remapOperand(virtualRegisterForLocal(i)), undefined, NormalSet);
 
-        // We want to emit the SetLocals with an exit origin that points to the place we are jumping to.
         unsigned oldIndex = m_currentIndex;
         auto oldStackTop = m_inlineStackTop;
+
+        // First, we emit a check-traps operation whose exit origin points to bc#0.
         m_inlineStackTop = stackEntry;
+        m_currentIndex = 0;
+        m_exitOK = true;
+        addToGraph(Options::usePollingTraps() ? CheckTraps : InvalidationPoint);
+
+        // Then, we want to emit the SetLocals with an exit origin that points to the place we are jumping to.
         m_currentIndex = opcodeLengths[op_enter];
         m_exitOK = true;
         processSetLocalQueue();
@@ -4779,11 +4785,11 @@ void ByteCodeParser::parseBlock(unsigned limit)
         // === Function entry opcodes ===
 
         case op_enter: {
+            addToGraph(Options::usePollingTraps() ? CheckTraps : InvalidationPoint);
             Node* undefined = addToGraph(JSConstant, OpInfo(m_constantUndefined));
             // Initialize all locals to undefined.
             for (int i = 0; i < m_inlineStackTop->m_codeBlock->numVars(); ++i)
                 set(virtualRegisterForLocal(i), undefined, ImmediateNakedSet);
-
             NEXT_OPCODE(op_enter);
         }
             
@@ -6640,14 +6646,10 @@ void ByteCodeParser::parseBlock(unsigned limit)
                 m_currentBlock->isOSRTarget = true;
 
             addToGraph(LoopHint);
+            addToGraph(Options::usePollingTraps() ? CheckTraps : InvalidationPoint);
             NEXT_OPCODE(op_loop_hint);
         }
         
-        case op_check_traps: {
-            addToGraph(Options::usePollingTraps() ? CheckTraps : InvalidationPoint);
-            NEXT_OPCODE(op_check_traps);
-        }
-
         case op_nop: {
             addToGraph(Check); // We add a nop here so that basic block linking doesn't break.
             NEXT_OPCODE(op_nop);
diff --git a/Source/JavaScriptCore/dfg/DFGCapabilities.cpp b/Source/JavaScriptCore/dfg/DFGCapabilities.cpp
index 6d3f237..fd64b40 100644
@@ -205,7 +205,6 @@ CapabilityLevel capabilityLevel(OpcodeID opcodeID, CodeBlock* codeBlock, const I
     case op_jbelow:
     case op_jbeloweq:
     case op_loop_hint:
-    case op_check_traps:
     case op_nop:
     case op_ret:
     case op_end:
diff --git a/Source/JavaScriptCore/jit/JIT.cpp b/Source/JavaScriptCore/jit/JIT.cpp
index 307f157..e88e5c1 100644
@@ -91,26 +91,6 @@ JIT::~JIT()
 {
 }
 
-#if ENABLE(DFG_JIT)
-void JIT::emitEnterOptimizationCheck()
-{
-    if (!canBeOptimized())
-        return;
-
-    JumpList skipOptimize;
-    
-    skipOptimize.append(branchAdd32(Signed, TrustedImm32(Options::executionCounterIncrementForEntry()), AbsoluteAddress(m_codeBlock->addressOfJITExecuteCounter())));
-    ASSERT(!m_bytecodeOffset);
-
-    copyCalleeSavesFromFrameOrRegisterToEntryFrameCalleeSavesBuffer(vm().topEntryFrame);
-
-    callOperation(operationOptimize, m_bytecodeOffset);
-    skipOptimize.append(branchTestPtr(Zero, returnValueGPR));
-    farJump(returnValueGPR, GPRInfo::callFrameRegister);
-    skipOptimize.link(this);
-}
-#endif
-
 void JIT::emitNotifyWrite(WatchpointSet* set)
 {
     if (!set || set->state() == IsInvalidated) {
@@ -383,7 +363,6 @@ void JIT::privateCompileMainPass()
         DEFINE_OP(op_jbeloweq)
         DEFINE_OP(op_jtrue)
         DEFINE_OP(op_loop_hint)
-        DEFINE_OP(op_check_traps)
         DEFINE_OP(op_nop)
         DEFINE_OP(op_super_sampler_begin)
         DEFINE_OP(op_super_sampler_end)
@@ -548,7 +527,7 @@ void JIT::privateCompileSlowCases()
         DEFINE_SLOWCASE_OP(op_jstricteq)
         DEFINE_SLOWCASE_OP(op_jnstricteq)
         DEFINE_SLOWCASE_OP(op_loop_hint)
-        DEFINE_SLOWCASE_OP(op_check_traps)
+        DEFINE_SLOWCASE_OP(op_enter)
         DEFINE_SLOWCASE_OP(op_mod)
         DEFINE_SLOWCASE_OP(op_mul)
         DEFINE_SLOWCASE_OP(op_negate)
diff --git a/Source/JavaScriptCore/jit/JIT.h b/Source/JavaScriptCore/jit/JIT.h
index 0baf243..2082245 100644
@@ -575,7 +575,6 @@ namespace JSC {
         void emit_op_jbeloweq(const Instruction*);
         void emit_op_jtrue(const Instruction*);
         void emit_op_loop_hint(const Instruction*);
-        void emit_op_check_traps(const Instruction*);
         void emit_op_nop(const Instruction*);
         void emit_op_super_sampler_begin(const Instruction*);
         void emit_op_super_sampler_end(const Instruction*);
@@ -673,7 +672,7 @@ namespace JSC {
         void emitSlow_op_jnstricteq(const Instruction*, Vector<SlowCaseEntry>::iterator&);
         void emitSlow_op_jtrue(const Instruction*, Vector<SlowCaseEntry>::iterator&);
         void emitSlow_op_loop_hint(const Instruction*, Vector<SlowCaseEntry>::iterator&);
-        void emitSlow_op_check_traps(const Instruction*, Vector<SlowCaseEntry>::iterator&);
+        void emitSlow_op_enter(const Instruction*, Vector<SlowCaseEntry>::iterator&);
         void emitSlow_op_mod(const Instruction*, Vector<SlowCaseEntry>::iterator&);
         void emitSlow_op_mul(const Instruction*, Vector<SlowCaseEntry>::iterator&);
         void emitSlow_op_negate(const Instruction*, Vector<SlowCaseEntry>::iterator&);
@@ -868,12 +867,6 @@ namespace JSC {
 
         int jumpTarget(const Instruction*, int target);
         
-#if ENABLE(DFG_JIT)
-        void emitEnterOptimizationCheck();
-#else
-        void emitEnterOptimizationCheck() { }
-#endif
-
 #ifndef NDEBUG
         void printBytecodeOperandTypes(int src1, int src2);
 #endif
diff --git a/Source/JavaScriptCore/jit/JITOpcodes.cpp b/Source/JavaScriptCore/jit/JITOpcodes.cpp
index 7260668..31d597a 100644
@@ -875,20 +875,6 @@ void JIT::emit_op_neq_null(const Instruction* currentInstruction)
     emitPutVirtualRegister(dst);
 }
 
-void JIT::emit_op_enter(const Instruction*)
-{
-    // Even though CTI doesn't use them, we initialize our constant
-    // registers to zap stale pointers, to avoid unnecessarily prolonging
-    // object lifetime and increasing GC pressure.
-    size_t count = m_codeBlock->numVars();
-    for (size_t j = CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters(); j < count; ++j)
-        emitInitRegister(virtualRegisterForLocal(j).offset());
-
-    emitWriteBarrier(m_codeBlock);
-
-    emitEnterOptimizationCheck();
-}
-
 void JIT::emit_op_get_scope(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpGetScope>();
@@ -1020,45 +1006,43 @@ void JIT::emitSlow_op_instanceof_custom(const Instruction* currentInstruction, V
 
 void JIT::emit_op_loop_hint(const Instruction*)
 {
-    // Emit the JIT optimization check: 
+    // Check traps.
+    addSlowCase(branchTest8(NonZero, AbsoluteAddress(m_vm->needTrapHandlingAddress())));
+#if ENABLE(DFG_JIT)
+    // Emit the JIT optimization check:
     if (canBeOptimized()) {
         addSlowCase(branchAdd32(PositiveOrZero, TrustedImm32(Options::executionCounterIncrementForLoop()),
             AbsoluteAddress(m_codeBlock->addressOfJITExecuteCounter())));
     }
+#endif
 }
 
 void JIT::emitSlow_op_loop_hint(const Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
 {
+    linkSlowCase(iter);
+    callOperation(operationHandleTraps);
 #if ENABLE(DFG_JIT)
     // Emit the slow path for the JIT optimization check:
     if (canBeOptimized()) {
-        linkAllSlowCases(iter);
+        emitJumpSlowToHot(branchAdd32(Signed, TrustedImm32(Options::executionCounterIncrementForLoop()), AbsoluteAddress(m_codeBlock->addressOfJITExecuteCounter())), currentInstruction->size());
+        linkSlowCase(iter);
 
         copyCalleeSavesFromFrameOrRegisterToEntryFrameCalleeSavesBuffer(vm().topEntryFrame);
 
         callOperation(operationOptimize, m_bytecodeOffset);
-        Jump noOptimizedEntry = branchTestPtr(Zero, returnValueGPR);
+        emitJumpSlowToHot(branchTestPtr(Zero, returnValueGPR), currentInstruction->size());
         if (!ASSERT_DISABLED) {
             Jump ok = branchPtr(MacroAssembler::Above, returnValueGPR, TrustedImmPtr(bitwise_cast<void*>(static_cast<intptr_t>(1000))));
             abortWithReason(JITUnreasonableLoopHintJumpTarget);
             ok.link(this);
         }
         farJump(returnValueGPR, GPRInfo::callFrameRegister);
-        noOptimizedEntry.link(this);
-
-        emitJumpSlowToHot(jump(), currentInstruction->size());
     }
 #else
     UNUSED_PARAM(currentInstruction);
-    UNUSED_PARAM(iter);
 #endif
 }
 
-void JIT::emit_op_check_traps(const Instruction*)
-{
-    addSlowCase(branchTest8(NonZero, AbsoluteAddress(m_vm->needTrapHandlingAddress())));
-}
-
 void JIT::emit_op_nop(const Instruction*)
 {
 }
@@ -1073,11 +1057,46 @@ void JIT::emit_op_super_sampler_end(const Instruction*)
     sub32(TrustedImm32(1), AbsoluteAddress(bitwise_cast<void*>(&g_superSamplerCount)));
 }
 
-void JIT::emitSlow_op_check_traps(const Instruction*, Vector<SlowCaseEntry>::iterator& iter)
+void JIT::emit_op_enter(const Instruction*)
 {
-    linkAllSlowCases(iter);
+    // Even though JIT doesn't use them, we initialize our constant
+    // registers to zap stale pointers, to avoid unnecessarily prolonging
+    // object lifetime and increasing GC pressure.
+    size_t count = m_codeBlock->numVars();
+    for (size_t i = CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters(); i < count; ++i)
+        emitInitRegister(virtualRegisterForLocal(i).offset());
+
+    emitWriteBarrier(m_codeBlock);
 
+    // Check traps.
+    addSlowCase(branchTest8(NonZero, AbsoluteAddress(m_vm->needTrapHandlingAddress())));
+
+#if ENABLE(DFG_JIT)
+    if (canBeOptimized())
+        addSlowCase(branchAdd32(PositiveOrZero, TrustedImm32(Options::executionCounterIncrementForEntry()), AbsoluteAddress(m_codeBlock->addressOfJITExecuteCounter())));
+#endif
+}
+
+void JIT::emitSlow_op_enter(const Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
+{
+    linkSlowCase(iter);
     callOperation(operationHandleTraps);
+#if ENABLE(DFG_JIT)
+    if (canBeOptimized()) {
+        emitJumpSlowToHot(branchAdd32(Signed, TrustedImm32(Options::executionCounterIncrementForEntry()), AbsoluteAddress(m_codeBlock->addressOfJITExecuteCounter())), currentInstruction->size());
+        linkSlowCase(iter);
+
+        ASSERT(!m_bytecodeOffset);
+
+        copyCalleeSavesFromFrameOrRegisterToEntryFrameCalleeSavesBuffer(vm().topEntryFrame);
+
+        callOperation(operationOptimize, m_bytecodeOffset);
+        emitJumpSlowToHot(branchTestPtr(Zero, returnValueGPR), currentInstruction->size());
+        farJump(returnValueGPR, GPRInfo::callFrameRegister);
+    }
+#else
+    UNUSED_PARAM(currentInstruction);
+#endif
 }
 
 void JIT::emit_op_new_regexp(const Instruction* currentInstruction)
diff --git a/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp b/Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
index c7ad3b0..6a86d6c 100644
@@ -1002,21 +1002,6 @@ void JIT::emit_op_debug(const Instruction* currentInstruction)
     noDebuggerRequests.link(this);
 }
 
-
-void JIT::emit_op_enter(const Instruction* currentInstruction)
-{
-    emitEnterOptimizationCheck();
-    
-    // Even though JIT code doesn't use them, we initialize our constant
-    // registers to zap stale pointers, to avoid unnecessarily prolonging
-    // object lifetime and increasing GC pressure.
-    for (int i = CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters(); i < m_codeBlock->numVars(); ++i)
-        emitStore(virtualRegisterForLocal(i).offset(), jsUndefined());
-
-    JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_enter);
-    slowPathCall.call();
-}
-
 void JIT::emit_op_get_scope(const Instruction* currentInstruction)
 {
     auto bytecode = currentInstruction->as<OpGetScope>();
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter.asm
index c7e993f..8220f83 100644
@@ -1671,23 +1671,16 @@ preOp(dec, OpDec,
 
 
 llintOp(op_loop_hint, OpLoopHint, macro (unused, unused, dispatch)
-    checkSwitchToJITForLoop()
-    dispatch()
-end)
-
-
-llintOp(op_check_traps, OpCheckTraps, macro (unused, unused, dispatch)
+    # CheckTraps.
     loadp CodeBlock[cfr], t1
     loadp CodeBlock::m_vm[t1], t1
-    loadb VM::m_traps+VMTraps::m_needTrapHandling[t1], t0
-    btpnz t0, .handleTraps
+    btbnz VM::m_traps + VMTraps::m_needTrapHandling[t1], .handleTraps
 .afterHandlingTraps:
+    checkSwitchToJITForLoop()
     dispatch()
 .handleTraps:
-    callTrapHandler(.throwHandler)
+    callTrapHandler(_llint_throw_from_slow_path_trampoline)
     jmp .afterHandlingTraps
-.throwHandler:
-    jmp _llint_throw_from_slow_path_trampoline
 end)
 
 
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
index 55006d4..0203591 100644
@@ -546,21 +546,26 @@ macro loadConstantOrVariablePayloadUnchecked(size, index, payload)
         payload)
 end
 
-macro writeBarrierOnOperand(size, get, cellFieldName)
-    get(cellFieldName, t1)
-    loadConstantOrVariablePayload(size, t1, CellTag, t2, .writeBarrierDone)
+macro writeBarrierOnCellWithReload(cell, reloadAfterSlowPath)
     skipIfIsRememberedOrInEden(
-        t2, 
+        cell,
         macro()
             push cfr, PC
             # We make two extra slots because cCall2 will poke.
             subp 8, sp
-            move t2, a1 # t2 can be a0 on x86
+            move cell, a1 # cell can be a0
             move cfr, a0
             cCall2Void(_llint_write_barrier_slow)
             addp 8, sp
             pop PC, cfr
+            reloadAfterSlowPath()
         end)
+end
+
+macro writeBarrierOnOperand(size, get, cellFieldName)
+    get(cellFieldName, t1)
+    loadConstantOrVariablePayload(size, t1, CellTag, t2, .writeBarrierDone)
+    writeBarrierOnCellWithReload(t2, macro() end)
 .writeBarrierDone:
 end
 
@@ -580,18 +585,7 @@ macro writeBarrierOnGlobal(size, get, valueFieldName, loadMacro)
 
     loadMacro(t3)
 
-    skipIfIsRememberedOrInEden(
-        t3,
-        macro()
-            push cfr, PC
-            # We make two extra slots because cCall2 will poke.
-            subp 8, sp
-            move cfr, a0
-            move t3, a1
-            cCall2Void(_llint_write_barrier_slow)
-            addp 8, sp
-            pop PC, cfr
-        end)
+    writeBarrierOnCellWithReload(t3, macro() end)
 .writeBarrierDone:
 end
 
@@ -707,24 +701,31 @@ end
 _llint_op_enter:
     traceExecution()
     checkStackPointerAlignment(t2, 0xdead00e1)
-    loadp CodeBlock[cfr], t2                // t2<CodeBlock> = cfr.CodeBlock
-    loadi CodeBlock::m_numVars[t2], t2      // t2<size_t> = t2<CodeBlock>.m_numVars
+    loadp CodeBlock[cfr], t1                // t1<CodeBlock> = cfr.CodeBlock
+    loadi CodeBlock::m_numVars[t1], t2      // t2<size_t> = t1<CodeBlock>.m_numVars
     subi CalleeSaveSpaceAsVirtualRegisters, t2
     move cfr, t3
     subp CalleeSaveSpaceAsVirtualRegisters * SlotSize, t3
     btiz t2, .opEnterDone
     move UndefinedTag, t0
-    move 0, t1
     negi t2
 .opEnterLoop:
     storei t0, TagOffset[t3, t2, 8]
-    storei t1, PayloadOffset[t3, t2, 8]
+    storei 0, PayloadOffset[t3, t2, 8]
     addi 1, t2
     btinz t2, .opEnterLoop
 .opEnterDone:
-    callSlowPath(_slow_path_enter)
+    writeBarrierOnCellWithReload(t1, macro ()
+        loadp CodeBlock[cfr], t1 # Reload CodeBlock
+    end)
+    # Checking traps.
+    loadp CodeBlock::m_vm[t1], t1
+    btpnz VM::m_traps + VMTraps::m_needTrapHandling[t1], .handleTraps
+.afterHandlingTraps:
     dispatchOp(narrow, op_enter)
-
+.handleTraps:
+    callTrapHandler(_llint_throw_from_slow_path_trampoline)
+    jmp .afterHandlingTraps
 
 llintOpWithProfile(op_get_argument, OpGetArgument, macro (size, get, dispatch, return)
     get(m_index, t2)
diff --git a/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm b/Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
index 0a4559a..9417e87 100644
@@ -510,19 +510,23 @@ macro loadConstantOrVariableCell(size, index, value, slow)
     btqnz value, tagMask, slow
 end
 
-macro writeBarrierOnOperandWithReload(size, get, cellFieldName, reloadAfterSlowPath)
-    get(cellFieldName, t1)
-    loadConstantOrVariableCell(size, t1, t2, .writeBarrierDone)
+macro writeBarrierOnCellWithReload(cell, reloadAfterSlowPath)
     skipIfIsRememberedOrInEden(
-        t2,
+        cell,
         macro()
             push PB, PC
-            move t2, a1 # t2 can be a0 (not on 64 bits, but better safe than sorry)
+            move cell, a1 # cell can be a0
             move cfr, a0
             cCall2Void(_llint_write_barrier_slow)
             pop PC, PB
             reloadAfterSlowPath()
         end)
+end
+
+macro writeBarrierOnOperandWithReload(size, get, cellFieldName, reloadAfterSlowPath)
+    get(cellFieldName, t1)
+    loadConstantOrVariableCell(size, t1, t2, .writeBarrierDone)
+    writeBarrierOnCellWithReload(t2, reloadAfterSlowPath)
 .writeBarrierDone:
 end
 
@@ -545,15 +549,7 @@ macro writeBarrierOnGlobal(size, get, valueFieldName, loadMacro)
     btpz t0, .writeBarrierDone
 
     loadMacro(t3)
-    skipIfIsRememberedOrInEden(
-        t3,
-        macro()
-            push PB, PC
-            move cfr, a0
-            move t3, a1
-            cCall2Void(_llint_write_barrier_slow)
-            pop PC, PB
-        end)
+    writeBarrierOnCellWithReload(t3, macro() end)
 .writeBarrierDone:
 end
 
@@ -686,8 +682,8 @@ end
 _llint_op_enter:
     traceExecution()
     checkStackPointerAlignment(t2, 0xdead00e1)
-    loadp CodeBlock[cfr], t2                // t2<CodeBlock> = cfr.CodeBlock
-    loadi CodeBlock::m_numVars[t2], t2      // t2<size_t> = t2<CodeBlock>.m_numVars
+    loadp CodeBlock[cfr], t3                // t3<CodeBlock> = cfr.CodeBlock
+    loadi CodeBlock::m_numVars[t3], t2      // t2<size_t> = t3<CodeBlock>.m_numVars
     subq CalleeSaveSpaceAsVirtualRegisters, t2
     move cfr, t1
     subq CalleeSaveSpaceAsVirtualRegisters * 8, t1
@@ -700,9 +696,16 @@ _llint_op_enter:
     addq 1, t2
     btqnz t2, .opEnterLoop
 .opEnterDone:
-    callSlowPath(_slow_path_enter)
+    writeBarrierOnCellWithReload(t3, macro ()
+        loadp CodeBlock[cfr], t3 # Reload CodeBlock
+    end)
+    loadp CodeBlock::m_vm[t3], t1
+    btbnz VM::m_traps + VMTraps::m_needTrapHandling[t1], .handleTraps
+.afterHandlingTraps:
     dispatchOp(narrow, op_enter)
-
+.handleTraps:
+    callTrapHandler(_llint_throw_from_slow_path_trampoline)
+    jmp .afterHandlingTraps
 
 llintOpWithProfile(op_get_argument, OpGetArgument, macro (size, get, dispatch, return)
     get(m_index, t2)
diff --git a/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp b/Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
index da5a7c5..156caa1 100644
@@ -888,14 +888,6 @@ SLOW_PATH_DECL(slow_path_to_primitive)
     RETURN(GET_C(bytecode.m_src).jsValue().toPrimitive(exec));
 }
 
-SLOW_PATH_DECL(slow_path_enter)
-{
-    BEGIN();
-    CodeBlock* codeBlock = exec->codeBlock();
-    Heap::heap(codeBlock)->writeBarrier(codeBlock);
-    END();
-}
-
 SLOW_PATH_DECL(slow_path_get_enumerable_length)
 {
     BEGIN();
diff --git a/Source/JavaScriptCore/runtime/CommonSlowPaths.h b/Source/JavaScriptCore/runtime/CommonSlowPaths.h
index 7eeaf70..cad957f 100644
@@ -323,7 +323,6 @@ SLOW_PATH_HIDDEN_DECL(slow_path_create_direct_arguments);
 SLOW_PATH_HIDDEN_DECL(slow_path_create_scoped_arguments);
 SLOW_PATH_HIDDEN_DECL(slow_path_create_cloned_arguments);
 SLOW_PATH_HIDDEN_DECL(slow_path_create_this);
-SLOW_PATH_HIDDEN_DECL(slow_path_enter);
 SLOW_PATH_HIDDEN_DECL(slow_path_get_callee);
 SLOW_PATH_HIDDEN_DECL(slow_path_to_this);
 SLOW_PATH_HIDDEN_DECL(slow_path_throw_tdz_error);