Make the VM Traps mechanism non-polling for the DFG and FTL.
author     mark.lam@apple.com <mark.lam@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Thu, 9 Mar 2017 19:08:46 +0000 (19:08 +0000)
committer  mark.lam@apple.com <mark.lam@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Thu, 9 Mar 2017 19:08:46 +0000 (19:08 +0000)
https://bugs.webkit.org/show_bug.cgi?id=168920
<rdar://problem/30738588>

Reviewed by Filip Pizlo.

Source/JavaScriptCore:

1. Added an ENABLE(SIGNAL_BASED_VM_TRAPS) configuration in Platform.h.
   This is currently only enabled for OS(DARWIN) and ENABLE(JIT).
2. Added assembler functions for overwriting an instruction with a breakpoint.
3. Added a new JettisonDueToVMTraps jettison reason.
4. Added CodeBlock and DFG::CommonData utility functions for over-writing
   invalidation points with breakpoint instructions.
5. The BytecodeGenerator now emits the op_check_traps bytecode unconditionally.
6. Remove the JSC_alwaysCheckTraps option because of (5) above.
   For ports that don't ENABLE(SIGNAL_BASED_VM_TRAPS), we'll force
   Options::usePollingTraps() to always be true.  This makes the VMTraps
   implementation fall back to using polling-based traps only.

7. Make VMTraps support signal based traps.

Some design and implementation details of signal based VM traps:

- The implementation makes use of 2 signal handlers for SIGUSR1 and SIGTRAP.

- VMTraps::fireTrap() will set the flag for the requested trap and instantiate
  a SignalSender.  The SignalSender will send SIGUSR1 to the mutator thread that
  we want to trap, and check for the occurrence of one of the following events:

  a. VMTraps::handleTraps() has been called for the requested trap, or

  b. the VM is inactive and is no longer executing any JS code.  We determine
     this to be the case if the thread no longer owns the JSLock and the VM's
     entryScope is null.

     Note: the thread can relinquish the JSLock while the VM's entryScope is not
     null.  This happens when the thread calls JSLock::dropAllLocks() before
     calling a host function that may block on IO (or whatever).  For our purposes,
     this counts as the VM still running JS code, and VMTraps::fireTrap() will still
     be waiting.

  If the SignalSender does not see either of these events, it will sleep for a
  while and then re-send SIGUSR1 and check for the events again.  When it sees
  one of these events, it will consider the mutator to have received the trap
  request.

- The SIGUSR1 handler will try to insert breakpoints at the invalidation points
  in the DFG/FTL codeBlock at the top of the stack.  This allows the mutator
  thread to break (with a SIGTRAP) exactly at an invalidation point, where it's
  safe to jettison the codeBlock.

  Note: we cannot have the requester thread (that called VMTraps::fireTrap())
  insert the breakpoint instructions itself.  This is because we need the
  register state of the mutator thread (that we want to trap in) in order to
  find the codeBlocks that we wish to insert the breakpoints in.  Currently,
  we don't have a generic way for the requester thread to get the register state
  of another thread.

- The SIGTRAP handler will check to see if it is trapping on a breakpoint at an
  invalidation point.  If so, it will jettison the codeBlock and adjust the PC
  to re-execute the invalidation OSR exit off-ramp.  After the OSR exit, the
  baseline JIT code will eventually reach an op_check_traps and call
  VMTraps::handleTraps().

  If the handler is not trapping at an invalidation point, then it must be
  observing an assertion failure (which also uses the breakpoint instruction).
  In this case, the handler will defer to the default SIGTRAP handler and crash.

- The reason we need the SignalSender is that SignalSender::send() is called
  from another thread in a loop, so that VMTraps::fireTrap() can return sooner.
  send() needs to make use of the VM pointer, and it is not guaranteed that the
  VM will outlive the thread.  SignalSender provides the mechanism by which we
  can nullify the VM pointer when the VM dies so that the thread does not
  continue to use it.

* assembler/ARM64Assembler.h:
(JSC::ARM64Assembler::replaceWithBrk):
* assembler/ARMAssembler.h:
(JSC::ARMAssembler::replaceWithBrk):
* assembler/ARMv7Assembler.h:
(JSC::ARMv7Assembler::replaceWithBkpt):
* assembler/MIPSAssembler.h:
(JSC::MIPSAssembler::replaceWithBkpt):
* assembler/MacroAssemblerARM.h:
(JSC::MacroAssemblerARM::replaceWithJump):
* assembler/MacroAssemblerARM64.h:
(JSC::MacroAssemblerARM64::replaceWithBreakpoint):
* assembler/MacroAssemblerARMv7.h:
(JSC::MacroAssemblerARMv7::replaceWithBreakpoint):
* assembler/MacroAssemblerMIPS.h:
(JSC::MacroAssemblerMIPS::replaceWithJump):
* assembler/MacroAssemblerX86Common.h:
(JSC::MacroAssemblerX86Common::replaceWithBreakpoint):
* assembler/X86Assembler.h:
(JSC::X86Assembler::replaceWithInt3):
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::jettison):
(JSC::CodeBlock::hasInstalledVMTrapBreakpoints):
(JSC::CodeBlock::installVMTrapBreakpoints):
* bytecode/CodeBlock.h:
* bytecompiler/BytecodeGenerator.cpp:
(JSC::BytecodeGenerator::emitCheckTraps):
* dfg/DFGCommonData.cpp:
(JSC::DFG::CommonData::installVMTrapBreakpoints):
(JSC::DFG::CommonData::isVMTrapBreakpoint):
* dfg/DFGCommonData.h:
(JSC::DFG::CommonData::hasInstalledVMTrapsBreakpoints):
* dfg/DFGJumpReplacement.cpp:
(JSC::DFG::JumpReplacement::installVMTrapBreakpoint):
* dfg/DFGJumpReplacement.h:
(JSC::DFG::JumpReplacement::dataLocation):
* dfg/DFGNodeType.h:
* heap/CodeBlockSet.cpp:
(JSC::CodeBlockSet::contains):
* heap/CodeBlockSet.h:
* heap/CodeBlockSetInlines.h:
(JSC::CodeBlockSet::iterate):
* heap/Heap.cpp:
(JSC::Heap::forEachCodeBlockIgnoringJITPlansImpl):
* heap/Heap.h:
* heap/HeapInlines.h:
(JSC::Heap::forEachCodeBlockIgnoringJITPlans):
* heap/MachineStackMarker.h:
(JSC::MachineThreads::threadsListHead):
* jit/ExecutableAllocator.cpp:
(JSC::ExecutableAllocator::isValidExecutableMemory):
* jit/ExecutableAllocator.h:
* profiler/ProfilerJettisonReason.cpp:
(WTF::printInternal):
* profiler/ProfilerJettisonReason.h:
* runtime/JSLock.cpp:
(JSC::JSLock::didAcquireLock):
* runtime/Options.cpp:
(JSC::overrideDefaults):
* runtime/Options.h:
* runtime/PlatformThread.h:
(JSC::platformThreadSignal):
* runtime/VM.cpp:
(JSC::VM::~VM):
(JSC::VM::ensureWatchdog):
(JSC::VM::handleTraps): Deleted.
(JSC::VM::setNeedAsynchronousTerminationSupport): Deleted.
* runtime/VM.h:
(JSC::VM::ownerThread):
(JSC::VM::traps):
(JSC::VM::handleTraps):
(JSC::VM::needTrapHandling):
(JSC::VM::needAsynchronousTerminationSupport): Deleted.
* runtime/VMTraps.cpp:
(JSC::VMTraps::vm):
(JSC::SignalContext::SignalContext):
(JSC::SignalContext::adjustPCToPointToTrappingInstruction):
(JSC::vmIsInactive):
(JSC::findActiveVMAndStackBounds):
(JSC::handleSigusr1):
(JSC::handleSigtrap):
(JSC::installSignalHandlers):
(JSC::sanitizedTopCallFrame):
(JSC::isSaneFrame):
(JSC::VMTraps::tryInstallTrapBreakpoints):
(JSC::VMTraps::invalidateCodeBlocksOnStack):
(JSC::VMTraps::VMTraps):
(JSC::VMTraps::willDestroyVM):
(JSC::VMTraps::addSignalSender):
(JSC::VMTraps::removeSignalSender):
(JSC::VMTraps::SignalSender::willDestroyVM):
(JSC::VMTraps::SignalSender::send):
(JSC::VMTraps::fireTrap):
(JSC::VMTraps::handleTraps):
* runtime/VMTraps.h:
(JSC::VMTraps::~VMTraps):
(JSC::VMTraps::needTrapHandling):
(JSC::VMTraps::notifyGrabAllLocks):
(JSC::VMTraps::SignalSender::SignalSender):
(JSC::VMTraps::invalidateCodeBlocksOnStack):
* tools/VMInspector.cpp:
* tools/VMInspector.h:
(JSC::VMInspector::getLock):
(JSC::VMInspector::iterate):

Source/WebCore:

No new tests needed.  This is covered by existing tests.

* bindings/js/WorkerScriptController.cpp:
(WebCore::WorkerScriptController::WorkerScriptController):
(WebCore::WorkerScriptController::scheduleExecutionTermination):

Source/WTF:

Make StackBounds more useful for checking if a pointer is within stack bounds.

* wtf/MetaAllocator.cpp:
(WTF::MetaAllocator::isInAllocatedMemory):
* wtf/MetaAllocator.h:
* wtf/Platform.h:
* wtf/StackBounds.h:
(WTF::StackBounds::emptyBounds):
(WTF::StackBounds::StackBounds):
(WTF::StackBounds::isEmpty):
(WTF::StackBounds::contains):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@213652 268f45cc-cd09-0410-ab3c-d52691b4dbfc

47 files changed:
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/assembler/ARM64Assembler.h
Source/JavaScriptCore/assembler/ARMAssembler.h
Source/JavaScriptCore/assembler/ARMv7Assembler.h
Source/JavaScriptCore/assembler/MIPSAssembler.h
Source/JavaScriptCore/assembler/MacroAssemblerARM.h
Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h
Source/JavaScriptCore/assembler/MacroAssemblerMIPS.h
Source/JavaScriptCore/assembler/MacroAssemblerX86Common.h
Source/JavaScriptCore/assembler/X86Assembler.h
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/bytecode/CodeBlock.h
Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
Source/JavaScriptCore/dfg/DFGCommonData.cpp
Source/JavaScriptCore/dfg/DFGCommonData.h
Source/JavaScriptCore/dfg/DFGJumpReplacement.cpp
Source/JavaScriptCore/dfg/DFGJumpReplacement.h
Source/JavaScriptCore/dfg/DFGNodeType.h
Source/JavaScriptCore/heap/CodeBlockSet.cpp
Source/JavaScriptCore/heap/CodeBlockSet.h
Source/JavaScriptCore/heap/CodeBlockSetInlines.h
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/heap/Heap.h
Source/JavaScriptCore/heap/HeapInlines.h
Source/JavaScriptCore/heap/MachineStackMarker.h
Source/JavaScriptCore/jit/ExecutableAllocator.cpp
Source/JavaScriptCore/jit/ExecutableAllocator.h
Source/JavaScriptCore/profiler/ProfilerJettisonReason.cpp
Source/JavaScriptCore/profiler/ProfilerJettisonReason.h
Source/JavaScriptCore/runtime/JSLock.cpp
Source/JavaScriptCore/runtime/Options.cpp
Source/JavaScriptCore/runtime/Options.h
Source/JavaScriptCore/runtime/PlatformThread.h
Source/JavaScriptCore/runtime/VM.cpp
Source/JavaScriptCore/runtime/VM.h
Source/JavaScriptCore/runtime/VMTraps.cpp
Source/JavaScriptCore/runtime/VMTraps.h
Source/JavaScriptCore/tools/VMInspector.cpp
Source/JavaScriptCore/tools/VMInspector.h
Source/WTF/ChangeLog
Source/WTF/wtf/MetaAllocator.cpp
Source/WTF/wtf/MetaAllocator.h
Source/WTF/wtf/Platform.h
Source/WTF/wtf/StackBounds.h
Source/WebCore/ChangeLog
Source/WebCore/bindings/js/WorkerScriptController.cpp

index 865ba14..d9531e9 100644
@@ -1,3 +1,184 @@
+2017-03-09  Mark Lam  <mark.lam@apple.com>
+
+        Make the VM Traps mechanism non-polling for the DFG and FTL.
+        https://bugs.webkit.org/show_bug.cgi?id=168920
+        <rdar://problem/30738588>
+
+        Reviewed by Filip Pizlo.
+
+        1. Added an ENABLE(SIGNAL_BASED_VM_TRAPS) configuration in Platform.h.
+           This is currently only enabled for OS(DARWIN) and ENABLE(JIT). 
+        2. Added assembler functions for overwriting an instruction with a breakpoint.
+        3. Added a new JettisonDueToVMTraps jettison reason.
+        4. Added CodeBlock and DFG::CommonData utility functions for over-writing
+           invalidation points with breakpoint instructions.
+        5. The BytecodeGenerator now emits the op_check_traps bytecode unconditionally.
+        6. Remove the JSC_alwaysCheckTraps option because of (5) above.
+           For ports that don't ENABLE(SIGNAL_BASED_VM_TRAPS), we'll force
+           Options::usePollingTraps() to always be true.  This makes the VMTraps
+           implementation fall back to using polling-based traps only.
+
+        7. Make VMTraps support signal based traps.
+
+        Some design and implementation details of signal based VM traps:
+
+        - The implementation makes use of 2 signal handlers for SIGUSR1 and SIGTRAP.
+
+        - VMTraps::fireTrap() will set the flag for the requested trap and instantiate
+          a SignalSender.  The SignalSender will send SIGUSR1 to the mutator thread that
+          we want to trap, and check for the occurrence of one of the following events:
+
+          a. VMTraps::handleTraps() has been called for the requested trap, or
+
+          b. the VM is inactive and is no longer executing any JS code.  We determine
+             this to be the case if the thread no longer owns the JSLock and the VM's
+             entryScope is null.
+
+             Note: the thread can relinquish the JSLock while the VM's entryScope is not
+             null.  This happens when the thread calls JSLock::dropAllLocks() before
+             calling a host function that may block on IO (or whatever).  For our purposes,
+             this counts as the VM still running JS code, and VMTraps::fireTrap() will still
+             be waiting.
+
+          If the SignalSender does not see either of these events, it will sleep for a
+          while and then re-send SIGUSR1 and check for the events again.  When it sees
+          one of these events, it will consider the mutator to have received the trap
+          request.
+
+        - The SIGUSR1 handler will try to insert breakpoints at the invalidation points
+          in the DFG/FTL codeBlock at the top of the stack.  This allows the mutator
+          thread to break (with a SIGTRAP) exactly at an invalidation point, where it's
+          safe to jettison the codeBlock.
+
+          Note: we cannot have the requester thread (that called VMTraps::fireTrap())
+          insert the breakpoint instructions itself.  This is because we need the
+          register state of the mutator thread (that we want to trap in) in order to
+          find the codeBlocks that we wish to insert the breakpoints in.  Currently,
+          we don't have a generic way for the requester thread to get the register state
+          of another thread.
+
+        - The SIGTRAP handler will check to see if it is trapping on a breakpoint at an
+          invalidation point.  If so, it will jettison the codeBlock and adjust the PC
+          to re-execute the invalidation OSR exit off-ramp.  After the OSR exit, the
+          baseline JIT code will eventually reach an op_check_traps and call
+          VMTraps::handleTraps().
+
+          If the handler is not trapping at an invalidation point, then it must be
+          observing an assertion failure (which also uses the breakpoint instruction).
+          In this case, the handler will defer to the default SIGTRAP handler and crash.
+
+        - The reason we need the SignalSender is that SignalSender::send() is called
+          from another thread in a loop, so that VMTraps::fireTrap() can return sooner.
+          send() needs to make use of the VM pointer, and it is not guaranteed that the
+          VM will outlive the thread.  SignalSender provides the mechanism by which we
+          can nullify the VM pointer when the VM dies so that the thread does not
+          continue to use it.
+
+        * assembler/ARM64Assembler.h:
+        (JSC::ARM64Assembler::replaceWithBrk):
+        * assembler/ARMAssembler.h:
+        (JSC::ARMAssembler::replaceWithBrk):
+        * assembler/ARMv7Assembler.h:
+        (JSC::ARMv7Assembler::replaceWithBkpt):
+        * assembler/MIPSAssembler.h:
+        (JSC::MIPSAssembler::replaceWithBkpt):
+        * assembler/MacroAssemblerARM.h:
+        (JSC::MacroAssemblerARM::replaceWithJump):
+        * assembler/MacroAssemblerARM64.h:
+        (JSC::MacroAssemblerARM64::replaceWithBreakpoint):
+        * assembler/MacroAssemblerARMv7.h:
+        (JSC::MacroAssemblerARMv7::replaceWithBreakpoint):
+        * assembler/MacroAssemblerMIPS.h:
+        (JSC::MacroAssemblerMIPS::replaceWithJump):
+        * assembler/MacroAssemblerX86Common.h:
+        (JSC::MacroAssemblerX86Common::replaceWithBreakpoint):
+        * assembler/X86Assembler.h:
+        (JSC::X86Assembler::replaceWithInt3):
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::jettison):
+        (JSC::CodeBlock::hasInstalledVMTrapBreakpoints):
+        (JSC::CodeBlock::installVMTrapBreakpoints):
+        * bytecode/CodeBlock.h:
+        * bytecompiler/BytecodeGenerator.cpp:
+        (JSC::BytecodeGenerator::emitCheckTraps):
+        * dfg/DFGCommonData.cpp:
+        (JSC::DFG::CommonData::installVMTrapBreakpoints):
+        (JSC::DFG::CommonData::isVMTrapBreakpoint):
+        * dfg/DFGCommonData.h:
+        (JSC::DFG::CommonData::hasInstalledVMTrapsBreakpoints):
+        * dfg/DFGJumpReplacement.cpp:
+        (JSC::DFG::JumpReplacement::installVMTrapBreakpoint):
+        * dfg/DFGJumpReplacement.h:
+        (JSC::DFG::JumpReplacement::dataLocation):
+        * dfg/DFGNodeType.h:
+        * heap/CodeBlockSet.cpp:
+        (JSC::CodeBlockSet::contains):
+        * heap/CodeBlockSet.h:
+        * heap/CodeBlockSetInlines.h:
+        (JSC::CodeBlockSet::iterate):
+        * heap/Heap.cpp:
+        (JSC::Heap::forEachCodeBlockIgnoringJITPlansImpl):
+        * heap/Heap.h:
+        * heap/HeapInlines.h:
+        (JSC::Heap::forEachCodeBlockIgnoringJITPlans):
+        * heap/MachineStackMarker.h:
+        (JSC::MachineThreads::threadsListHead):
+        * jit/ExecutableAllocator.cpp:
+        (JSC::ExecutableAllocator::isValidExecutableMemory):
+        * jit/ExecutableAllocator.h:
+        * profiler/ProfilerJettisonReason.cpp:
+        (WTF::printInternal):
+        * profiler/ProfilerJettisonReason.h:
+        * runtime/JSLock.cpp:
+        (JSC::JSLock::didAcquireLock):
+        * runtime/Options.cpp:
+        (JSC::overrideDefaults):
+        * runtime/Options.h:
+        * runtime/PlatformThread.h:
+        (JSC::platformThreadSignal):
+        * runtime/VM.cpp:
+        (JSC::VM::~VM):
+        (JSC::VM::ensureWatchdog):
+        (JSC::VM::handleTraps): Deleted.
+        (JSC::VM::setNeedAsynchronousTerminationSupport): Deleted.
+        * runtime/VM.h:
+        (JSC::VM::ownerThread):
+        (JSC::VM::traps):
+        (JSC::VM::handleTraps):
+        (JSC::VM::needTrapHandling):
+        (JSC::VM::needAsynchronousTerminationSupport): Deleted.
+        * runtime/VMTraps.cpp:
+        (JSC::VMTraps::vm):
+        (JSC::SignalContext::SignalContext):
+        (JSC::SignalContext::adjustPCToPointToTrappingInstruction):
+        (JSC::vmIsInactive):
+        (JSC::findActiveVMAndStackBounds):
+        (JSC::handleSigusr1):
+        (JSC::handleSigtrap):
+        (JSC::installSignalHandlers):
+        (JSC::sanitizedTopCallFrame):
+        (JSC::isSaneFrame):
+        (JSC::VMTraps::tryInstallTrapBreakpoints):
+        (JSC::VMTraps::invalidateCodeBlocksOnStack):
+        (JSC::VMTraps::VMTraps):
+        (JSC::VMTraps::willDestroyVM):
+        (JSC::VMTraps::addSignalSender):
+        (JSC::VMTraps::removeSignalSender):
+        (JSC::VMTraps::SignalSender::willDestroyVM):
+        (JSC::VMTraps::SignalSender::send):
+        (JSC::VMTraps::fireTrap):
+        (JSC::VMTraps::handleTraps):
+        * runtime/VMTraps.h:
+        (JSC::VMTraps::~VMTraps):
+        (JSC::VMTraps::needTrapHandling):
+        (JSC::VMTraps::notifyGrabAllLocks):
+        (JSC::VMTraps::SignalSender::SignalSender):
+        (JSC::VMTraps::invalidateCodeBlocksOnStack):
+        * tools/VMInspector.cpp:
+        * tools/VMInspector.h:
+        (JSC::VMInspector::getLock):
+        (JSC::VMInspector::iterate):
+
 2017-03-09  Filip Pizlo  <fpizlo@apple.com>
 
         WebKit: JSC: JSObject::ensureLength doesn't check if ensureLengthSlow failed
index b9e3a7e..b0d62e6 100644
@@ -2536,6 +2536,13 @@ public:
         linkPointer(addressOf(code, where), valuePtr);
     }
 
+    static void replaceWithBrk(void* where)
+    {
+        int insn = excepnGeneration(ExcepnOp_BREAKPOINT, 0, 0);
+        performJITMemcpy(where, &insn, sizeof(int));
+        cacheFlush(where, sizeof(int));
+    }
+
     static void replaceWithJump(void* where, void* to)
     {
         intptr_t offset = (reinterpret_cast<intptr_t>(to) - reinterpret_cast<intptr_t>(where)) >> 2;
index 48e1101..8d222f6 100644
@@ -995,6 +995,13 @@ namespace JSC {
             return reinterpret_cast<void*>(readPointer(reinterpret_cast<void*>(getAbsoluteJumpAddress(from))));
         }
 
+        static void replaceWithBrk(void* instructionStart)
+        {
+            ARMWord* instruction = reinterpret_cast<ARMWord*>(instructionStart);
+            instruction[0] = BKPT;
+            cacheFlush(instruction, sizeof(ARMWord));
+        }
+
         static void replaceWithJump(void* instructionStart, void* to)
         {
             ARMWord* instruction = reinterpret_cast<ARMWord*>(instructionStart);
index fbf7998..6cd4025 100644
@@ -2327,7 +2327,17 @@ public:
     {
         return reinterpret_cast<void*>(readInt32(where));
     }
-    
+
+    static void replaceWithBkpt(void* instructionStart)
+    {
+        ASSERT(!(bitwise_cast<uintptr_t>(instructionStart) & 1));
+
+        uint16_t* ptr = reinterpret_cast<uint16_t*>(instructionStart);
+        uint16_t instructions = OP_BKPT;
+        performJITMemcpy(ptr, &instructions, sizeof(uint16_t));
+        cacheFlush(ptr, sizeof(uint16_t));
+    }
+
     static void replaceWithJump(void* instructionStart, void* to)
     {
         ASSERT(!(bitwise_cast<uintptr_t>(instructionStart) & 1));
index 99e2fa7..31bbfcd 100644
@@ -915,6 +915,15 @@ public:
         cacheFlush(insn, codeSize);
     }
 
+    static void replaceWithBkpt(void* instructionStart)
+    {
+        ASSERT(!(bitwise_cast<uintptr_t>(instructionStart) & 3));
+        MIPSWord* insn = reinterpret_cast<MIPSWord*>(instructionStart);
+        int value = 512; /* BRK_BUG */
+        insn[0] = (0x0000000d | ((value & 0x3ff) << OP_SH_CODE));
+        cacheFlush(instructionStart, sizeof(MIPSWord));
+    }
+
     static void replaceWithJump(void* instructionStart, void* to)
     {
         ASSERT(!(bitwise_cast<uintptr_t>(instructionStart) & 3));
index 27a1312..e3c1336 100644
@@ -1482,6 +1482,11 @@ public:
         return FunctionPtr(reinterpret_cast<void(*)()>(ARMAssembler::readCallTarget(call.dataLocation())));
     }
 
+    static void replaceWithJump(CodeLocationLabel instructionStart)
+    {
+        ARMAssembler::replaceWithBkpt(instructionStart.executableAddress());
+    }
+
     static void replaceWithJump(CodeLocationLabel instructionStart, CodeLocationLabel destination)
     {
         ARMAssembler::replaceWithJump(instructionStart.dataLocation(), destination.dataLocation());
index cf94bf9..962ef27 100644
@@ -3406,6 +3406,11 @@ public:
         return FunctionPtr(reinterpret_cast<void(*)()>(ARM64Assembler::readCallTarget(call.dataLocation())));
     }
 
+    static void replaceWithBreakpoint(CodeLocationLabel instructionStart)
+    {
+        ARM64Assembler::replaceWithBrk(instructionStart.executableAddress());
+    }
+
     static void replaceWithJump(CodeLocationLabel instructionStart, CodeLocationLabel destination)
     {
         ARM64Assembler::replaceWithJump(instructionStart.dataLocation(), destination.dataLocation());
index f9c3450..77175b8 100644
@@ -1349,6 +1349,11 @@ public:
         m_assembler.dmbISHST();
     }
     
+    static void replaceWithBreakpoint(CodeLocationLabel instructionStart)
+    {
+        ARMv7Assembler::replaceWithBkpt(instructionStart.dataLocation());
+    }
+
     static void replaceWithJump(CodeLocationLabel instructionStart, CodeLocationLabel destination)
     {
         ARMv7Assembler::replaceWithJump(instructionStart.dataLocation(), destination.dataLocation());
index d6a05a3..5d5c603 100644
@@ -2978,6 +2978,11 @@ public:
         return FunctionPtr(reinterpret_cast<void(*)()>(MIPSAssembler::readCallTarget(call.dataLocation())));
     }
 
+    static void replaceWithJump(CodeLocationLabel instructionStart)
+    {
+        MIPSAssembler::replaceWithBkpt(instructionStart.executableAddress());
+    }
+
     static void replaceWithJump(CodeLocationLabel instructionStart, CodeLocationLabel destination)
     {
         MIPSAssembler::replaceWithJump(instructionStart.dataLocation(), destination.dataLocation());
index cea0f6a..8d05e2e 100644
@@ -2756,6 +2756,11 @@ public:
     {
     }
 
+    static void replaceWithBreakpoint(CodeLocationLabel instructionStart)
+    {
+        X86Assembler::replaceWithInt3(instructionStart.executableAddress());
+    }
+
     static void replaceWithJump(CodeLocationLabel instructionStart, CodeLocationLabel destination)
     {
         X86Assembler::replaceWithJump(instructionStart.executableAddress(), destination.executableAddress());
index bc52aff..6671f2c 100644
@@ -2902,6 +2902,12 @@ public:
         return reinterpret_cast<void**>(where)[-1];
     }
 
+    static void replaceWithInt3(void* instructionStart)
+    {
+        uint8_t* ptr = reinterpret_cast<uint8_t*>(instructionStart);
+        ptr[0] = static_cast<uint8_t>(OP_INT3);
+    }
+
     static void replaceWithJump(void* instructionStart, void* to)
     {
         uint8_t* ptr = reinterpret_cast<uint8_t*>(instructionStart);
index 6161f1b..bb8cc38 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2008-2010, 2012-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2008-2017 Apple Inc. All rights reserved.
  * Copyright (C) 2008 Cameron Zwarich <cwzwarich@uwaterloo.ca>
  *
  * Redistribution and use in source and binary forms, with or without
@@ -1904,7 +1904,7 @@ void CodeBlock::jettison(Profiler::JettisonReason reason, ReoptimizationMode mod
     if (alternative())
         alternative()->optimizeAfterWarmUp();
 
-    if (reason != Profiler::JettisonDueToOldAge)
+    if (reason != Profiler::JettisonDueToOldAge && reason != Profiler::JettisonDueToVMTraps)
         tallyFrequentExitSites();
 #endif // ENABLE(DFG_JIT)
 
@@ -2966,6 +2966,36 @@ void CodeBlock::jitSoon()
     m_llintExecuteCounter.setNewThreshold(thresholdForJIT(Options::thresholdForJITSoon()), this);
 }
 
+bool CodeBlock::hasInstalledVMTrapBreakpoints() const
+{
+#if ENABLE(SIGNAL_BASED_VM_TRAPS)
+    
+    // This function may be called from a signal handler. We need to be
+    // careful to not call anything that is not signal handler safe, e.g.
+    // we should not perturb the refCount of m_jitCode.
+    if (!JITCode::isOptimizingJIT(jitType()))
+        return false;
+    return m_jitCode->dfgCommon()->hasInstalledVMTrapsBreakpoints();
+#else
+    return false;
+#endif
+}
+
+bool CodeBlock::installVMTrapBreakpoints()
+{
+#if ENABLE(SIGNAL_BASED_VM_TRAPS)
+    // This function may be called from a signal handler. We need to be
+    // careful to not call anything that is not signal handler safe, e.g.
+    // we should not perturb the refCount of m_jitCode.
+    if (!JITCode::isOptimizingJIT(jitType()))
+        return false;
+    m_jitCode->dfgCommon()->installVMTrapBreakpoints();
+    return true;
+#else
+    return false;
+#endif
+}
+
 void CodeBlock::dumpMathICStats()
 {
 #if ENABLE(MATH_IC_STATS)
index bdc33d6..75aefbf 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2008-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2008-2017 Apple Inc. All rights reserved.
  * Copyright (C) 2008 Cameron Zwarich <cwzwarich@uwaterloo.ca>
  *
  * Redistribution and use in source and binary forms, with or without
@@ -203,6 +203,9 @@ public:
     bool isStrictMode() const { return m_isStrictMode; }
     ECMAMode ecmaMode() const { return isStrictMode() ? StrictMode : NotStrictMode; }
 
+    bool hasInstalledVMTrapBreakpoints() const;
+    bool installVMTrapBreakpoints();
+
     inline bool isKnownNotImmediate(int index)
     {
         if (index == m_thisRegister.offset() && !m_isStrictMode)
index 8d89f84..ece9a03 100644
@@ -1274,8 +1274,7 @@ void BytecodeGenerator::emitLoopHint()
 
 void BytecodeGenerator::emitCheckTraps()
 {
-    if (Options::alwaysCheckTraps() || vm()->watchdog() || vm()->needAsynchronousTerminationSupport())
-        emitOpcode(op_check_traps);
+    emitOpcode(op_check_traps);
 }
 
 void BytecodeGenerator::retrieveLastBinaryOp(int& dstIndex, int& src1Index, int& src2Index)
index 42dd381..cd1e5ef 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -97,6 +97,26 @@ bool CommonData::invalidate()
     return true;
 }
 
+void CommonData::installVMTrapBreakpoints()
+{
+    if (!isStillValid || hasVMTrapsBreakpointsInstalled)
+        return;
+    hasVMTrapsBreakpointsInstalled = true;
+    for (unsigned i = jumpReplacements.size(); i--;)
+        jumpReplacements[i].installVMTrapBreakpoint();
+}
+
+bool CommonData::isVMTrapBreakpoint(void* address)
+{
+    if (!isStillValid)
+        return false;
+    for (unsigned i = jumpReplacements.size(); i--;) {
+        if (address == jumpReplacements[i].dataLocation())
+            return true;
+    }
+    return false;
+}
+
 void CommonData::validateReferences(const TrackedReferences& trackedReferences)
 {
     if (InlineCallFrameSet* set = inlineCallFrames.get()) {
index e58a2eb..8936a5f 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -86,7 +86,10 @@ public:
     void shrinkToFit();
     
     bool invalidate(); // Returns true if we did invalidate, or false if the code block was already invalidated.
-    
+    bool hasInstalledVMTrapsBreakpoints() const { return isStillValid && hasVMTrapsBreakpointsInstalled; }
+    void installVMTrapBreakpoints();
+    bool isVMTrapBreakpoint(void* address);
+
     unsigned requiredRegisterCountForExecutionAndExit() const
     {
         return std::max(frameRegisterCount, requiredRegisterCountForExit);
@@ -112,6 +115,7 @@ public:
     bool livenessHasBeenProved; // Initialized and used on every GC.
     bool allTransitionsHaveBeenMarked; // Initialized and used on every GC.
     bool isStillValid;
+    bool hasVMTrapsBreakpointsInstalled { false };
     
 #if USE(JSVALUE32_64)
     std::unique_ptr<Bag<double>> doubleConstants;
index 5337529..247bda9 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -41,6 +41,11 @@ void JumpReplacement::fire()
     MacroAssembler::replaceWithJump(m_source, m_destination);
 }
 
+void JumpReplacement::installVMTrapBreakpoint()
+{
+    MacroAssembler::replaceWithBreakpoint(m_source);
+}
+
 } } // namespace JSC::DFG
 
 #endif // ENABLE(DFG_JIT)
index 4f9ce15..77d3938 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -40,6 +40,8 @@ public:
     }
     
     void fire();
+    void installVMTrapBreakpoint();
+    void* dataLocation() const { return m_source.dataLocation(); }
 
 private:
     CodeLocationLabel m_source;
index 1357e69..1ce230b 100644
@@ -391,7 +391,7 @@ namespace JSC { namespace DFG {
     /* flow. */\
     macro(BottomValue, NodeResultJS) \
     \
-    /* Checks for VM traps. If there is a trap, we call operation operationHandleTraps */ \
+    /* Checks for VM traps. If there is a trap, we'll jettison or call operation operationHandleTraps. */ \
     macro(CheckTraps, NodeMustGenerate) \
     /* Write barriers */\
     macro(StoreBarrier, NodeMustGenerate) \
index 6d305ba..5cc7745 100644
@@ -103,7 +103,7 @@ void CodeBlockSet::deleteUnmarkedAndUnreferenced(VM& vm, CollectionScope scope)
     promoteYoungCodeBlocks(locker);
 }
 
-bool CodeBlockSet::contains(const LockHolder&, void* candidateCodeBlock)
+bool CodeBlockSet::contains(const AbstractLocker&, void* candidateCodeBlock)
 {
     RELEASE_ASSERT(m_lock.isLocked());
     CodeBlock* codeBlock = static_cast<CodeBlock*>(candidateCodeBlock);
index 0fca79a..f6c46ec 100644 (file)
@@ -72,13 +72,14 @@ public:
     
     void clearCurrentlyExecuting();
 
-    bool contains(const LockHolder&, void* candidateCodeBlock);
+    bool contains(const AbstractLocker&, void* candidateCodeBlock);
     Lock& getLock() { return m_lock; }
 
     // Visits each CodeBlock in the heap until the visitor function returns true
     // to indicate that it is done iterating, or until every CodeBlock has been
     // visited.
     template<typename Functor> void iterate(const Functor&);
+    template<typename Functor> void iterate(const AbstractLocker&, const Functor&);
     
     template<typename Functor> void iterateCurrentlyExecuting(const Functor&);
     
index 04dbcec..80a9bef 100644 (file)
@@ -63,7 +63,13 @@ inline void CodeBlockSet::mark(const LockHolder&, CodeBlock* codeBlock)
 template<typename Functor>
 void CodeBlockSet::iterate(const Functor& functor)
 {
-    LockHolder locker(m_lock);
+    auto locker = holdLock(m_lock);
+    iterate(locker, functor);
+}
+
+template<typename Functor>
+void CodeBlockSet::iterate(const AbstractLocker&, const Functor& functor)
+{
     for (auto& codeBlock : m_oldCodeBlocks) {
         bool done = functor(codeBlock);
         if (done)
index 88c0475..e0d69e9 100644 (file)
@@ -2330,9 +2330,9 @@ void Heap::forEachCodeBlockImpl(const ScopedLambda<bool(CodeBlock*)>& func)
     return m_codeBlocks->iterate(func);
 }
 
-void Heap::forEachCodeBlockIgnoringJITPlansImpl(const ScopedLambda<bool(CodeBlock*)>& func)
+void Heap::forEachCodeBlockIgnoringJITPlansImpl(const AbstractLocker& locker, const ScopedLambda<bool(CodeBlock*)>& func)
 {
-    return m_codeBlocks->iterate(func);
+    return m_codeBlocks->iterate(locker, func);
 }
 
 void Heap::writeBarrierSlowPath(const JSCell* from)
index 8b23b9e..0632716 100644 (file)
@@ -226,7 +226,7 @@ public:
     
     template<typename Functor> void forEachProtectedCell(const Functor&);
     template<typename Functor> void forEachCodeBlock(const Functor&);
-    template<typename Functor> void forEachCodeBlockIgnoringJITPlans(const Functor&);
+    template<typename Functor> void forEachCodeBlockIgnoringJITPlans(const AbstractLocker& codeBlockSetLocker, const Functor&);
 
     HandleSet* handleSet() { return &m_handleSet; }
     HandleStack* handleStack() { return &m_handleStack; }
@@ -499,7 +499,7 @@ private:
     size_t bytesVisited();
     
     void forEachCodeBlockImpl(const ScopedLambda<bool(CodeBlock*)>&);
-    void forEachCodeBlockIgnoringJITPlansImpl(const ScopedLambda<bool(CodeBlock*)>&);
+    void forEachCodeBlockIgnoringJITPlansImpl(const AbstractLocker& codeBlockSetLocker, const ScopedLambda<bool(CodeBlock*)>&);
     
     void setMutatorShouldBeFenced(bool value);
     
index 8a05e38..00a62d5 100644 (file)
@@ -155,9 +155,9 @@ template<typename Functor> inline void Heap::forEachCodeBlock(const Functor& fun
     forEachCodeBlockImpl(scopedLambdaRef<bool(CodeBlock*)>(func));
 }
 
-template<typename Functor> inline void Heap::forEachCodeBlockIgnoringJITPlans(const Functor& func)
+template<typename Functor> inline void Heap::forEachCodeBlockIgnoringJITPlans(const AbstractLocker& codeBlockSetLocker, const Functor& func)
 {
-    forEachCodeBlockIgnoringJITPlansImpl(scopedLambdaRef<bool(CodeBlock*)>(func));
+    forEachCodeBlockIgnoringJITPlansImpl(codeBlockSetLocker, scopedLambdaRef<bool(CodeBlock*)>(func));
 }
 
 template<typename Functor> inline void Heap::forEachProtectedCell(const Functor& functor)
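The `iterate()` split above (a public overload that takes the lock itself, delegating to an overload that accepts a lock token) lets callers that already hold the `CodeBlockSet` lock, such as `forEachCodeBlockIgnoringJITPlans`, iterate without re-acquiring it and deadlocking. A minimal sketch of the pattern, with `std::lock_guard` standing in for WTF's `AbstractLocker` (class and member names here are illustrative, not JSC API):

```cpp
#include <cassert>
#include <mutex>
#include <vector>

// Sketch of the CodeBlockSet::iterate() refactoring: the lock-token overload
// lets a caller that already holds the lock iterate without taking it again.
class BlockSet {
public:
    void add(int block) { m_blocks.push_back(block); }
    std::mutex& getLock() { return m_lock; }

    template<typename Functor> void iterate(const Functor& functor)
    {
        std::lock_guard<std::mutex> locker(m_lock); // public overload takes the lock
        iterate(locker, functor);
    }

    template<typename Functor>
    void iterate(const std::lock_guard<std::mutex>&, const Functor& functor)
    {
        // The unused lock-token parameter is proof that the caller holds the lock.
        for (int block : m_blocks) {
            if (functor(block)) // functor returns true when done iterating
                break;
        }
    }

private:
    std::mutex m_lock;
    std::vector<int> m_blocks;
};
```

The token parameter costs nothing at runtime but makes the locking contract visible in the signature, which matters once signal-driven code paths start taking these locks with try-lock semantics.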
index df53517..6b04a62 100644 (file)
@@ -135,7 +135,7 @@ public:
     };
 
     Lock& getLock() { return m_registeredThreadsMutex; }
-    Thread* threadsListHead(const LockHolder&) const { ASSERT(m_registeredThreadsMutex.isLocked()); return m_registeredThreads; }
+    Thread* threadsListHead(const AbstractLocker&) const { ASSERT(m_registeredThreadsMutex.isLocked()); return m_registeredThreads; }
     Thread* machineThreadForCurrentThread();
 
 private:
index c7bd2e7..78e113c 100644 (file)
@@ -399,7 +399,7 @@ RefPtr<ExecutableMemoryHandle> ExecutableAllocator::allocate(VM&, size_t sizeInB
     return result;
 }
 
-bool ExecutableAllocator::isValidExecutableMemory(const LockHolder& locker, void* address)
+bool ExecutableAllocator::isValidExecutableMemory(const AbstractLocker& locker, void* address)
 {
     return allocator->isInAllocatedMemory(locker, address);
 }
index 9b2f4f0..300b306 100644 (file)
@@ -136,7 +136,7 @@ public:
 
     RefPtr<ExecutableMemoryHandle> allocate(VM&, size_t sizeInBytes, void* ownerUID, JITCompilationEffort);
 
-    bool isValidExecutableMemory(const LockHolder&, void* address);
+    bool isValidExecutableMemory(const AbstractLocker&, void* address);
 
     static size_t committedByteCount();
 
index 3751fed..3c6fa4a 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -65,6 +65,9 @@ void printInternal(PrintStream& out, JettisonReason reason)
     case JettisonDueToOldAge:
         out.print("JettisonDueToOldAge");
         return;
+    case JettisonDueToVMTraps:
+        out.print("JettisonDueToVMTraps");
+        return;
     }
     RELEASE_ASSERT_NOT_REACHED();
 }
index 745964f..99e56ee 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -37,7 +37,8 @@ enum JettisonReason {
     JettisonDueToOSRExit,
     JettisonDueToProfiledWatchpoint,
     JettisonDueToUnprofiledWatchpoint,
-    JettisonDueToOldAge
+    JettisonDueToOldAge,
+    JettisonDueToVMTraps
 };
 
 } } // namespace JSC::Profiler
index 08517e7..1557a7a 100644 (file)
@@ -144,6 +144,8 @@ void JSLock::didAcquireLock()
 
     m_vm->heap.machineThreads().addCurrentThread();
 
+    m_vm->traps().notifyGrabAllLocks();
+
 #if ENABLE(SAMPLING_PROFILER)
     // Note: this must come after addCurrentThread().
     if (SamplingProfiler* samplingProfiler = m_vm->samplingProfiler())
index 0e158c7..bce57bf 100644 (file)
@@ -332,6 +332,10 @@ static void overrideDefaults()
 #if PLATFORM(IOS)
     Options::useSigillCrashAnalyzer() = true;
 #endif
+
+#if !ENABLE(SIGNAL_BASED_VM_TRAPS)
+    Options::usePollingTraps() = true;
+#endif
 }
 
 static void recomputeDependentOptions()
index 602e15d..62add97 100644 (file)
@@ -406,7 +406,6 @@ typedef const char* optionString;
     v(bool, useSigillCrashAnalyzer, false, Configurable, "logs data about SIGILL crashes") \
     \
     v(unsigned, watchdog, 0, Normal, "watchdog timeout (0 = Disabled, N = a timeout period of N milliseconds)") \
-    v(bool, alwaysCheckTraps, false, Normal, "always emit op_check_traps bytecode") \
     v(bool, usePollingTraps, false, Normal, "use polling (instead of signalling) VM traps") \
     \
     v(bool, useICStats, false, Normal, nullptr) \
index 6f8ca74..bf754fd 100644 (file)
@@ -56,4 +56,13 @@ inline PlatformThread currentPlatformThread()
 #endif
 }
 
+#if OS(DARWIN)
+inline bool platformThreadSignal(PlatformThread platformThread, int signalNumber)
+{
+    pthread_t pthreadID = pthread_from_mach_thread_np(platformThread);
+    int errNo = pthread_kill(pthreadID, signalNumber);
+    return !errNo; // A 0 errNo means success.
+}
+#endif
+
 } // namespace JSC
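The `platformThreadSignal()` helper above wraps `pthread_kill()` on Darwin. Its round trip can be sketched portably: install a handler for SIGUSR1, signal a pthread, and observe the handler run. The names below (`sendSignalToThread`, `demoSignalRoundTrip`) are illustrative stand-ins, not JSC API:

```cpp
#include <atomic>
#include <cassert>
#include <csignal>
#include <pthread.h>

static std::atomic<bool> signalSeen { false };

static void onSigusr1(int) { signalSeen.store(true); }

// Hypothetical stand-in for JSC's platformThreadSignal(): returns true if the
// signal was successfully queued for the target pthread.
inline bool sendSignalToThread(pthread_t thread, int signalNumber)
{
    return pthread_kill(thread, signalNumber) == 0; // a 0 return means success
}

bool demoSignalRoundTrip()
{
    struct sigaction action {};
    action.sa_handler = onSigusr1;
    sigemptyset(&action.sa_mask);
    sigaction(SIGUSR1, &action, nullptr);

    // Signal ourselves; in VMTraps the SignalSender targets the mutator thread.
    // POSIX guarantees that an unblocked signal sent to the calling thread is
    // delivered before pthread_kill() returns.
    if (!sendSignalToThread(pthread_self(), SIGUSR1))
        return false;
    return signalSeen.load();
}
```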
index d4952a3..064db20 100644 (file)
@@ -98,7 +98,6 @@
 #include "StrictEvalActivation.h"
 #include "StrongInlines.h"
 #include "StructureInlines.h"
-#include "ThrowScope.h"
 #include "TypeProfiler.h"
 #include "TypeProfilerLog.h"
 #include "UnlinkedCodeBlock.h"
@@ -360,6 +359,7 @@ VM::~VM()
 {
     if (UNLIKELY(m_watchdog))
         m_watchdog->willDestroyVM(this);
+    m_traps.willDestroyVM();
     VMInspector::instance().remove(this);
 
     // Never GC, ever again.
@@ -462,21 +462,8 @@ VM*& VM::sharedInstanceInternal()
 
 Watchdog& VM::ensureWatchdog()
 {
-    if (!m_watchdog) {
-        Options::usePollingTraps() = true; // Force polling traps on until we have support for signal based traps.
-
+    if (!m_watchdog)
         m_watchdog = adoptRef(new Watchdog(this));
-        
-        // The LLINT peeks into the Watchdog object directly. In order to do that,
-        // the LLINT assumes that the internal shape of a std::unique_ptr is the
-        // same as a plain C++ pointer, and loads the address of Watchdog from it.
-        RELEASE_ASSERT(*reinterpret_cast<Watchdog**>(&m_watchdog) == m_watchdog.get());
-
-        // And if we've previously compiled any functions, we need to revert
-        // them because they don't have the needed polling checks for the watchdog
-        // yet.
-        deleteAllCode(PreventCollectionAndDeleteAllCode);
-    }
     return *m_watchdog;
 }
 
@@ -949,39 +936,4 @@ void VM::verifyExceptionCheckNeedIsSatisfied(unsigned recursionDepth, ExceptionE
 }
 #endif
 
-void VM::handleTraps(ExecState* exec, VMTraps::Mask mask)
-{
-    auto scope = DECLARE_THROW_SCOPE(*this);
-
-    ASSERT(needTrapHandling(mask));
-    while (needTrapHandling(mask)) {
-        auto trapEventType = m_traps.takeTopPriorityTrap(mask);
-        switch (trapEventType) {
-        case VMTraps::NeedDebuggerBreak:
-            if (Options::alwaysCheckTraps())
-                dataLog("VM ", RawPointer(this), " on pid ", getCurrentProcessID(), " received NeedDebuggerBreak trap\n");
-            return;
-
-        case VMTraps::NeedWatchdogCheck:
-            ASSERT(m_watchdog);
-            if (LIKELY(!m_watchdog->shouldTerminate(exec)))
-                continue;
-            FALLTHROUGH;
-
-        case VMTraps::NeedTermination:
-            JSC::throwException(exec, scope, createTerminatedExecutionException(this));
-            return;
-
-        default:
-            RELEASE_ASSERT_NOT_REACHED();
-        }
-    }
-}
-
-void VM::setNeedAsynchronousTerminationSupport()
-{
-    Options::usePollingTraps() = true; // Force polling traps on until we have support for signal based traps.
-    m_needAsynchronousTerminationSupport = true;
-}
-
 } // namespace JSC
index 84394cf..144086d 100644 (file)
@@ -269,7 +269,7 @@ public:
     static Ref<VM> createContextGroup(HeapType = SmallHeap);
     JS_EXPORT_PRIVATE ~VM();
 
-    JS_EXPORT_PRIVATE Watchdog& ensureWatchdog();
+    Watchdog& ensureWatchdog();
     Watchdog* watchdog() { return m_watchdog.get(); }
 
     HeapProfiler* heapProfiler() const { return m_heapProfiler.get(); }
@@ -314,7 +314,7 @@ public:
     // topVMEntryFrame.
     // FIXME: This should be a void*, because it might not point to a CallFrame.
     // https://bugs.webkit.org/show_bug.cgi?id=160441
-    ExecState* topCallFrame;
+    ExecState* topCallFrame { nullptr };
     JSWebAssemblyInstance* topJSWebAssemblyInstance;
     Strong<Structure> structureStructure;
     Strong<Structure> structureRareDataStructure;
@@ -672,18 +672,20 @@ public:
     template<typename Func>
     void logEvent(CodeBlock*, const char* summary, const Func& func);
 
-    void handleTraps(ExecState*, VMTraps::Mask = VMTraps::Mask::allEventTypes());
+    std::optional<PlatformThread> ownerThread() const { return m_apiLock->ownerThread(); }
+
+    VMTraps& traps() { return m_traps; }
+
+    void handleTraps(ExecState* exec, VMTraps::Mask mask = VMTraps::Mask::allEventTypes()) { m_traps.handleTraps(exec, mask); }
 
-    bool needTrapHandling(VMTraps::Mask mask = VMTraps::Mask::allEventTypes()) { return m_traps.needTrapHandling(mask); }
+    bool needTrapHandling() { return m_traps.needTrapHandling(); }
+    bool needTrapHandling(VMTraps::Mask mask) { return m_traps.needTrapHandling(mask); }
     void* needTrapHandlingAddress() { return m_traps.needTrapHandlingAddress(); }
 
     void notifyNeedDebuggerBreak() { m_traps.fireTrap(VMTraps::NeedDebuggerBreak); }
     void notifyNeedTermination() { m_traps.fireTrap(VMTraps::NeedTermination); }
     void notifyNeedWatchdogCheck() { m_traps.fireTrap(VMTraps::NeedWatchdogCheck); }
 
-    bool needAsynchronousTerminationSupport() const { return m_needAsynchronousTerminationSupport; }
-    JS_EXPORT_PRIVATE void setNeedAsynchronousTerminationSupport();
-
 private:
     friend class LLIntOffsetsExtractor;
 
@@ -725,8 +727,6 @@ private:
     bool isSafeToRecurseSoftCLoop() const;
 #endif // !ENABLE(JIT)
 
-    std::optional<PlatformThread> ownerThread() const { return m_apiLock->ownerThread(); }
-
     JS_EXPORT_PRIVATE void throwException(ExecState*, Exception*);
     JS_EXPORT_PRIVATE JSValue throwException(ExecState*, JSValue);
     JS_EXPORT_PRIVATE JSObject* throwException(ExecState*, JSObject*);
@@ -770,7 +770,6 @@ private:
     DeletePropertyMode m_deletePropertyMode { DeletePropertyMode::Default };
     bool m_globalConstRedeclarationShouldThrow { true };
     bool m_shouldBuildPCToCodeOriginMapping { false };
-    bool m_needAsynchronousTerminationSupport { false };
     std::unique_ptr<CodeCache> m_codeCache;
     std::unique_ptr<BuiltinExecutables> m_builtinExecutables;
     HashMap<String, RefPtr<WatchpointSet>> m_impurePropertyWatchpointSets;
@@ -799,6 +798,7 @@ private:
     friend class CatchScope;
     friend class ExceptionScope;
     friend class ThrowScope;
+    friend class VMTraps;
     friend class WTF::DoublyLinkedListNode<VM>;
 };
 
index 60b76dd..ee07e05 100644 (file)
 #include "config.h"
 #include "VMTraps.h"
 
+#include "CallFrame.h"
+#include "CodeBlock.h"
+#include "CodeBlockSet.h"
+#include "DFGCommonData.h"
+#include "ExceptionHelpers.h"
+#include "HeapInlines.h"
+#include "LLIntPCRanges.h"
+#include "MachineStackMarker.h"
+#include "MacroAssembler.h"
+#include "VM.h"
+#include "VMInspector.h"
+#include "Watchdog.h"
+#include <wtf/ProcessID.h>
+
+#if OS(DARWIN)
+#include <signal.h>
+#endif
+
 namespace JSC {
 
-void VMTraps::fireTrap(VMTraps::EventType eventType)
+ALWAYS_INLINE VM& VMTraps::vm() const
+{
+    return *bitwise_cast<VM*>(bitwise_cast<uintptr_t>(this) - OBJECT_OFFSETOF(VM, m_traps));
+}
+
+#if ENABLE(SIGNAL_BASED_VM_TRAPS)
+
+struct sigaction originalSigusr1Action;
+struct sigaction originalSigtrapAction;
+
+#if CPU(X86_64)
+
+struct SignalContext {
+    SignalContext(mcontext_t& mcontext)
+        : mcontext(mcontext)
+        , trapPC(reinterpret_cast<void*>(mcontext->__ss.__rip))
+        , stackPointer(reinterpret_cast<void*>(mcontext->__ss.__rsp))
+        , framePointer(reinterpret_cast<void*>(mcontext->__ss.__rbp))
+    {
+        // On X86_64, SIGTRAP reports the address after the trapping PC. So, decrement by 1.
+        trapPC = reinterpret_cast<uint8_t*>(trapPC) - 1;
+    }
+
+    void adjustPCToPointToTrappingInstruction()
+    {
+        mcontext->__ss.__rip = reinterpret_cast<uintptr_t>(trapPC);
+    }
+
+    mcontext_t& mcontext;
+    void* trapPC;
+    void* stackPointer;
+    void* framePointer;
+};
+    
+#elif CPU(X86)
+
+struct SignalContext {
+    SignalContext(mcontext_t& mcontext)
+        : mcontext(mcontext)
+        , trapPC(reinterpret_cast<void*>(mcontext->__ss.__eip))
+        , stackPointer(reinterpret_cast<void*>(mcontext->__ss.__esp))
+        , framePointer(reinterpret_cast<void*>(mcontext->__ss.__ebp))
+    {
+        // On X86, SIGTRAP reports the address after the trapping PC. So, decrement by 1.
+        trapPC = reinterpret_cast<uint8_t*>(trapPC) - 1;
+    }
+    
+    void adjustPCToPointToTrappingInstruction()
+    {
+        mcontext->__ss.__eip = reinterpret_cast<uintptr_t>(trapPC);
+    }
+    
+    mcontext_t& mcontext;
+    void* trapPC;
+    void* stackPointer;
+    void* framePointer;
+};
+
+#elif CPU(ARM64) || CPU(ARM_THUMB2) || CPU(ARM)
+    
+struct SignalContext {
+    SignalContext(mcontext_t& mcontext)
+        : mcontext(mcontext)
+        , trapPC(reinterpret_cast<void*>(mcontext->__ss.__pc))
+        , stackPointer(reinterpret_cast<void*>(mcontext->__ss.__sp))
+#if CPU(ARM64)
+        , framePointer(reinterpret_cast<void*>(mcontext->__ss.__fp))
+#elif CPU(ARM_THUMB2)
+        , framePointer(reinterpret_cast<void*>(mcontext->__ss.__r[7]))
+#elif CPU(ARM)
+        , framePointer(reinterpret_cast<void*>(mcontext->__ss.__r[11]))
+#endif
+    { }
+        
+    void adjustPCToPointToTrappingInstruction() { }
+
+    mcontext_t& mcontext;
+    void* trapPC;
+    void* stackPointer;
+    void* framePointer;
+};
+    
+#endif
+
+inline static bool vmIsInactive(VM& vm)
+{
+    return !vm.entryScope && !vm.ownerThread();
+}
+
+static Expected<std::pair<VM*, StackBounds>, VMTraps::Error> findActiveVMAndStackBounds(SignalContext& context)
+{
+    VMInspector& inspector = VMInspector::instance();
+    auto locker = tryHoldLock(inspector.getLock());
+    if (UNLIKELY(!locker))
+        return makeUnexpected(VMTraps::Error::LockUnavailable);
+    
+    VM* activeVM = nullptr;
+    StackBounds stackBounds = StackBounds::emptyBounds();
+    void* stackPointer = context.stackPointer;
+    bool unableToAcquireMachineThreadsLock = false;
+    inspector.iterate(locker, [&] (VM& vm) {
+        if (vmIsInactive(vm))
+            return VMInspector::FunctorStatus::Continue;
+
+        auto& machineThreads = vm.heap.machineThreads();
+        auto machineThreadsLocker = tryHoldLock(machineThreads.getLock());
+        if (UNLIKELY(!machineThreadsLocker)) {
+            unableToAcquireMachineThreadsLock = true;
+            return VMInspector::FunctorStatus::Continue; // Try next VM.
+        }
+
+        for (MachineThreads::Thread* thread = machineThreads.threadsListHead(machineThreadsLocker); thread; thread = thread->next) {
+            RELEASE_ASSERT(thread->stackBase);
+            RELEASE_ASSERT(thread->stackEnd);
+            if (stackPointer <= thread->stackBase && stackPointer >= thread->stackEnd) {
+                activeVM = &vm;
+                stackBounds = StackBounds(thread->stackBase, thread->stackEnd);
+                return VMInspector::FunctorStatus::Done;
+            }
+        }
+        return VMInspector::FunctorStatus::Continue;
+    });
+
+    if (!activeVM && unableToAcquireMachineThreadsLock)
+        return makeUnexpected(VMTraps::Error::LockUnavailable);
+    return std::make_pair(activeVM, stackBounds);
+}
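Note that `findActiveVMAndStackBounds()` never blocks on a lock: it runs inside a signal handler, so every acquisition is a try-lock and failure bubbles out as `Error::LockUnavailable`, which the SignalSender interprets as "retry later". The shape of that pattern, sketched with `std::optional` standing in for `WTF::Expected` (illustrative, not JSC API):

```cpp
#include <atomic>
#include <cassert>
#include <mutex>
#include <optional>
#include <thread>

// A signal handler must never block on a lock the interrupted thread may hold.
// So: try-lock, and report failure as "lock unavailable" instead of waiting.
std::optional<int> readUnderTryLock(std::mutex& lock, const int& value)
{
    std::unique_lock<std::mutex> locker(lock, std::try_to_lock);
    if (!locker.owns_lock())
        return std::nullopt; // maps to VMTraps::Error::LockUnavailable
    return value;
}
```

If the lock is contended, the caller simply gives up and lets a later signal (or the polling fallback) try again, which is why the surrounding code is littered with "Let the SignalSender try again later" early returns.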
+
+static void handleSigusr1(int signalNumber, siginfo_t* info, void* uap)
+{
+    SignalContext context(static_cast<ucontext_t*>(uap)->uc_mcontext);
+    auto activeVMAndStackBounds = findActiveVMAndStackBounds(context);
+    if (activeVMAndStackBounds) {
+        VM* vm = activeVMAndStackBounds.value().first;
+        if (vm) {
+            StackBounds stackBounds = activeVMAndStackBounds.value().second;
+            VMTraps& traps = vm->traps();
+            if (traps.needTrapHandling())
+                traps.tryInstallTrapBreakpoints(context, stackBounds);
+        }
+    }
+
+    auto originalAction = originalSigusr1Action.sa_sigaction;
+    if (originalAction)
+        originalAction(signalNumber, info, uap);
+}
+
+static void handleSigtrap(int signalNumber, siginfo_t* info, void* uap)
+{
+    SignalContext context(static_cast<ucontext_t*>(uap)->uc_mcontext);
+    auto activeVMAndStackBounds = findActiveVMAndStackBounds(context);
+    if (!activeVMAndStackBounds)
+        return; // Let the SignalSender try again later.
+
+    VM* vm = activeVMAndStackBounds.value().first;
+    if (vm) {
+        VMTraps& traps = vm->traps();
+        if (!traps.needTrapHandling())
+            return; // The polling code beat us to handling the trap already.
+
+        auto expectedSuccess = traps.tryJettisonCodeBlocksOnStack(context);
+        if (!expectedSuccess)
+            return; // Let the SignalSender try again later.
+        if (expectedSuccess.value())
+            return; // We've successfully jettisoned the codeBlocks.
+    }
+
+    // If we get here, then this SIGTRAP is not due to a VMTrap. Let's do the default action.
+    auto originalAction = originalSigtrapAction.sa_sigaction;
+    if (originalAction) {
+        // It is always safe to just invoke the original handler using the sa_sigaction form
+        // without checking for the SA_SIGINFO flag. If the original handler is of the
+        // sa_handler form, it will just ignore the 2nd and 3rd arguments since sa_handler is a
+        // subset of sa_sigaction. This is what the man page says the OS does anyway.
+        originalAction(signalNumber, info, uap);
+    }
+    
+    // Pre-emptively restore the default handler, but we may roll it back below.
+    struct sigaction currentAction;
+    struct sigaction defaultAction;
+    defaultAction.sa_handler = SIG_DFL;
+    sigfillset(&defaultAction.sa_mask);
+    defaultAction.sa_flags = 0;
+    sigaction(SIGTRAP, &defaultAction, &currentAction);
+    
+    if (currentAction.sa_sigaction != handleSigtrap) {
+        // This means that there's a client handler installed after us. This also means
+        // that the client handler thinks it was able to recover from the SIGTRAP, and
+        // did not uninstall itself. We can't argue with this because the signal isn't
+        // known to be from a VMTraps signal. Hence, restore the client handler
+        // and keep going.
+        sigaction(SIGTRAP, &currentAction, nullptr);
+    }
+}
+
+static void installSignalHandlers()
+{
+    typedef void (* SigactionHandler)(int, siginfo_t *, void *);
+    struct sigaction action;
+
+    action.sa_sigaction = reinterpret_cast<SigactionHandler>(handleSigusr1);
+    sigfillset(&action.sa_mask);
+    action.sa_flags = SA_SIGINFO;
+    sigaction(SIGUSR1, &action, &originalSigusr1Action);
+
+    action.sa_sigaction = reinterpret_cast<SigactionHandler>(handleSigtrap);
+    sigfillset(&action.sa_mask);
+    action.sa_flags = SA_SIGINFO;
+    sigaction(SIGTRAP, &action, &originalSigtrapAction);
+}
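The save-and-chain idiom used by `installSignalHandlers()` above can be demonstrated in isolation: install a new handler with `sigaction()`, save the previous disposition, and forward to it after doing your own work. The demo below uses SIGUSR2 so it does not disturb any real SIGTRAP handling; the function names are illustrative:

```cpp
#include <cassert>
#include <csignal>

static struct sigaction previousAction;
static volatile sig_atomic_t previousHandlerRan = 0;

static void previousHandler(int, siginfo_t*, void*)
{
    previousHandlerRan = previousHandlerRan + 1;
}

static void chainingHandler(int signalNumber, siginfo_t* info, void* uap)
{
    // ... trap-specific work would go here ...
    // Then forward to whatever handler was installed before us, as
    // handleSigusr1()/handleSigtrap() do above.
    if (previousAction.sa_sigaction)
        previousAction.sa_sigaction(signalNumber, info, uap);
}

bool demoHandlerChaining()
{
    struct sigaction action {};
    action.sa_sigaction = previousHandler;
    sigfillset(&action.sa_mask);
    action.sa_flags = SA_SIGINFO;
    sigaction(SIGUSR2, &action, nullptr); // a pre-existing client handler

    action.sa_sigaction = chainingHandler;
    sigfillset(&action.sa_mask);
    action.sa_flags = SA_SIGINFO;
    sigaction(SIGUSR2, &action, &previousAction); // save the old disposition

    raise(SIGUSR2); // delivered synchronously to the calling thread
    return previousHandlerRan == 1; // our handler ran and forwarded exactly once
}
```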
+
+ALWAYS_INLINE static CallFrame* sanitizedTopCallFrame(CallFrame* topCallFrame)
+{
+#if !defined(NDEBUG) && !CPU(ARM) && !CPU(MIPS)
+    // prepareForExternalCall() in DFGSpeculativeJIT.h may set topCallFrame to a bad word
+    // before calling native functions, but tryInstallTrapBreakpoints() below expects
+    // topCallFrame to be null if not set.
+#if USE(JSVALUE64)
+    const uintptr_t badBeefWord = 0xbadbeef0badbeef;
+#else
+    const uintptr_t badBeefWord = 0xbadbeef;
+#endif
+    if (topCallFrame == reinterpret_cast<CallFrame*>(badBeefWord))
+        topCallFrame = nullptr;
+#endif
+    return topCallFrame;
+}
+
+static bool isSaneFrame(CallFrame* frame, CallFrame* calleeFrame, VMEntryFrame* entryFrame, StackBounds stackBounds)
+{
+    if (reinterpret_cast<void*>(frame) >= reinterpret_cast<void*>(entryFrame))
+        return false;
+    if (calleeFrame >= frame)
+        return false;
+    return stackBounds.contains(frame);
+}
+    
+void VMTraps::tryInstallTrapBreakpoints(SignalContext& context, StackBounds stackBounds)
+{
+    // This must be the initial signal to get the mutator thread's attention.
+    // Let's get the thread to break at invalidation points if needed.
+    VM& vm = this->vm();
+    void* trapPC = context.trapPC;
+
+    CallFrame* callFrame = reinterpret_cast<CallFrame*>(context.framePointer);
+
+    auto codeBlockSetLocker = tryHoldLock(vm.heap.codeBlockSet().getLock());
+    if (!codeBlockSetLocker)
+        return; // Let the SignalSender try again later.
+
+    {
+        auto allocator = vm.executableAllocator;
+        auto allocatorLocker = tryHoldLock(allocator.getLock());
+        if (!allocatorLocker)
+            return; // Let the SignalSender try again later.
+
+        if (allocator.isValidExecutableMemory(allocatorLocker, trapPC)) {
+            if (vm.isExecutingInRegExpJIT) {
+                // We need to do this because a regExpJIT frame isn't a JS frame.
+                callFrame = sanitizedTopCallFrame(vm.topCallFrame);
+            }
+        } else if (LLInt::isLLIntPC(trapPC)) {
+            // The framePointer probably has the callFrame. We're good to go.
+        } else {
+            // We resort to topCallFrame to see if we can get anything
+            // useful. We usually get here when we're executing C code.
+            callFrame = sanitizedTopCallFrame(vm.topCallFrame);
+        }
+    }
+
+    CodeBlock* foundCodeBlock = nullptr;
+    VMEntryFrame* vmEntryFrame = vm.topVMEntryFrame;
+
+    // We don't have a callee to start with. So, use the end of the stack to keep the
+    // isSaneFrame() checker below happy for the first iteration. It will still check
+    // to ensure that the address is in the stackBounds.
+    CallFrame* calleeFrame = reinterpret_cast<CallFrame*>(stackBounds.end());
+
+    if (!vmEntryFrame || !callFrame)
+        return; // Not running JS code. Let the SignalSender try again later.
+
+    do {
+        if (!isSaneFrame(callFrame, calleeFrame, vmEntryFrame, stackBounds))
+            return; // Let the SignalSender try again later.
+
+        CodeBlock* candidateCodeBlock = callFrame->codeBlock();
+        if (candidateCodeBlock && vm.heap.codeBlockSet().contains(codeBlockSetLocker, candidateCodeBlock)) {
+            foundCodeBlock = candidateCodeBlock;
+            break;
+        }
+
+        calleeFrame = callFrame;
+        callFrame = callFrame->callerFrame(vmEntryFrame);
+
+    } while (callFrame && vmEntryFrame);
+
+    if (!foundCodeBlock) {
+        // We may have just entered the frame and the codeBlock pointer is not
+        // initialized yet. Just bail and let the SignalSender try again later.
+        return;
+    }
+
+    if (JITCode::isOptimizingJIT(foundCodeBlock->jitType())) {
+        auto locker = tryHoldLock(m_lock);
+        if (!locker)
+            return; // Let the SignalSender try again later.
+
+        if (!foundCodeBlock->hasInstalledVMTrapBreakpoints())
+            foundCodeBlock->installVMTrapBreakpoints();
+        return;
+    }
+}
+
+auto VMTraps::tryJettisonCodeBlocksOnStack(SignalContext& context) -> Expected<bool, Error>
+{
+    VM& vm = this->vm();
+    auto codeBlockSetLocker = tryHoldLock(vm.heap.codeBlockSet().getLock());
+    if (!codeBlockSetLocker)
+        return makeUnexpected(Error::LockUnavailable);
+
+    CallFrame* topCallFrame = reinterpret_cast<CallFrame*>(context.framePointer);
+    void* trapPC = context.trapPC;
+    bool trapPCIsVMTrap = false;
+    
+    vm.heap.forEachCodeBlockIgnoringJITPlans(codeBlockSetLocker, [&] (CodeBlock* codeBlock) {
+        if (!codeBlock->hasInstalledVMTrapBreakpoints())
+            return false; // Not found yet.
+
+        JITCode* jitCode = codeBlock->jitCode().get();
+        ASSERT(JITCode::isOptimizingJIT(jitCode->jitType()));
+        if (jitCode->dfgCommon()->isVMTrapBreakpoint(trapPC)) {
+            trapPCIsVMTrap = true;
+            // At the codeBlock trap point, we're guaranteed that:
+            // 1. the pc is not in the middle of any range of JIT code which invalidation points
+            //    may write over. Hence, it's now safe to patch those invalidation points and
+            //    jettison the codeBlocks.
+            // 2. The top frame must be an optimized JS frame.
+            ASSERT(codeBlock == topCallFrame->codeBlock());
+            codeBlock->jettison(Profiler::JettisonDueToVMTraps);
+            return true;
+        }
+
+        return false; // Not found yet.
+    });
+
+    if (!trapPCIsVMTrap)
+        return false;
+
+    invalidateCodeBlocksOnStack(codeBlockSetLocker, topCallFrame);
+
+    // Re-run the trapping instruction now that we've patched it with the invalidation
+    // OSR exit off-ramp.
+    context.adjustPCToPointToTrappingInstruction();
+    return true;
+}
+
+void VMTraps::invalidateCodeBlocksOnStack()
+{
+    invalidateCodeBlocksOnStack(vm().topCallFrame);
+}
+
+void VMTraps::invalidateCodeBlocksOnStack(ExecState* topCallFrame)
+{
+    auto codeBlockSetLocker = holdLock(vm().heap.codeBlockSet().getLock());
+    invalidateCodeBlocksOnStack(codeBlockSetLocker, topCallFrame);
+}
+    
+void VMTraps::invalidateCodeBlocksOnStack(Locker<Lock>&, ExecState* topCallFrame)
+{
+    if (!m_needToInvalidatedCodeBlocks)
+        return;
+
+    m_needToInvalidatedCodeBlocks = false;
+
+    VMEntryFrame* vmEntryFrame = vm().topVMEntryFrame;
+    CallFrame* callFrame = topCallFrame;
+
+    if (!vmEntryFrame)
+        return; // Not running JS code. Nothing to invalidate.
+
+    while (callFrame) {
+        CodeBlock* codeBlock = callFrame->codeBlock();
+        if (codeBlock && JITCode::isOptimizingJIT(codeBlock->jitType()))
+            codeBlock->jettison(Profiler::JettisonDueToVMTraps);
+        callFrame = callFrame->callerFrame(vmEntryFrame);
+    }
+}
+
+#endif // ENABLE(SIGNAL_BASED_VM_TRAPS)
+
+VMTraps::VMTraps()
+{
+#if ENABLE(SIGNAL_BASED_VM_TRAPS)
+    if (!Options::usePollingTraps()) {
+        static std::once_flag once;
+        std::call_once(once, [] {
+            installSignalHandlers();
+        });
+    }
+#endif
+}
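The constructor above installs the process-wide signal handlers at most once, no matter how many VMs are created, via `std::call_once`. The same shape in isolation (the counter is a stand-in for `installSignalHandlers()`):

```cpp
#include <cassert>
#include <mutex>

static int installCount = 0;

void installHandlersOnce()
{
    // Thread-safe one-time initialization: later callers block until the
    // first call completes, then skip the lambda entirely.
    static std::once_flag once;
    std::call_once(once, [] {
        ++installCount; // stands in for installSignalHandlers()
    });
}
```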
+
+void VMTraps::willDestroyVM()
+{
+#if ENABLE(SIGNAL_BASED_VM_TRAPS)
+    while (!m_signalSenders.isEmpty()) {
+        RefPtr<SignalSender> sender;
+        {
+            // We don't want to be holding the VMTraps lock when calling
+            // SignalSender::willDestroyVM() because SignalSender::willDestroyVM()
+            // will acquire the SignalSender lock, and SignalSender::send() needs
+            // to acquire these locks in the opposite order.
+            auto locker = holdLock(m_lock);
+            sender = m_signalSenders.takeAny();
+        }
+        sender->willDestroyVM();
+    }
+#endif
+}
+
+#if ENABLE(SIGNAL_BASED_VM_TRAPS)
+void VMTraps::addSignalSender(VMTraps::SignalSender* sender)
+{
+    auto locker = holdLock(m_lock);
+    m_signalSenders.add(sender);
+}
+
+void VMTraps::removeSignalSender(VMTraps::SignalSender* sender)
+{
+    auto locker = holdLock(m_lock);
+    m_signalSenders.remove(sender);
+}
+
+void VMTraps::SignalSender::willDestroyVM()
 {
     auto locker = holdLock(m_lock);
-    setTrapForEvent(locker, eventType);
+    m_vm = nullptr;
+}
+
+void VMTraps::SignalSender::send()
+{
+    while (true) {
+        // We need a nested scope so that we'll release the lock before we sleep below.
+        {
+            auto locker = holdLock(m_lock);
+            if (!m_vm)
+                break;
+
+            VM& vm = *m_vm;
+            auto optionalOwnerThread = vm.ownerThread();
+            if (optionalOwnerThread) {
+                platformThreadSignal(optionalOwnerThread.value(), SIGUSR1);
+                break;
+            }
+
+            if (vmIsInactive(vm))
+                break;
+
+            VMTraps::Mask mask(m_eventType);
+            if (!vm.needTrapHandling(mask))
+                break;
+        }
+
+        sleepMS(1);
+    }
+
+    auto locker = holdLock(m_lock);
+    if (m_vm)
+        m_vm->traps().removeSignalSender(this);
+}
+#endif // ENABLE(SIGNAL_BASED_VM_TRAPS)
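The `SignalSender::send()` loop above can be modeled without any signals: keep checking until either the mutator acknowledges the trap (i.e. `handleTraps()` ran) or the VM goes inactive, sleeping briefly between attempts so the sender never busy-spins. A sketch with hypothetical names (`TrapState`, `senderLoop` are not JSC API):

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <thread>

struct TrapState {
    std::atomic<bool> trapPending { true };
    std::atomic<bool> vmInactive { false };
};

void senderLoop(TrapState& state)
{
    while (true) {
        if (state.vmInactive.load())
            break; // the VM is no longer executing any JS code
        if (!state.trapPending.load())
            break; // the mutator has handled the trap
        // In JSC this is where platformThreadSignal(..., SIGUSR1) is retried.
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}
```

Running the loop on a dedicated thread is what keeps `fireTrap()` non-blocking: the caller returns immediately while the sender keeps nudging the mutator in the background.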
+
+void VMTraps::fireTrap(VMTraps::EventType eventType)
+{
+    ASSERT(!vm().currentThreadIsHoldingAPILock());
+    {
+        auto locker = holdLock(m_lock);
+        setTrapForEvent(locker, eventType);
+        m_needToInvalidatedCodeBlocks = true;
+    }
+    
+#if ENABLE(SIGNAL_BASED_VM_TRAPS)
+    if (!Options::usePollingTraps()) {
+        // SignalSender::send() can loop until it has confirmation that the mutator thread
+        // has received the trap request. We'll call it from another thread so that
+        // fireTrap() does not block.
+        RefPtr<SignalSender> sender = adoptRef(new SignalSender(vm(), eventType));
+        addSignalSender(sender.get());
+        createThread("jsc.vmtraps.signalling.thread", [sender] {
+            sender->send();
+        });
+    }
+#endif
+}
+
+void VMTraps::handleTraps(ExecState* exec, VMTraps::Mask mask)
+{
+    VM& vm = this->vm();
+    auto scope = DECLARE_THROW_SCOPE(vm);
+
+    ASSERT(needTrapHandling(mask));
+    while (needTrapHandling(mask)) {
+        auto eventType = takeTopPriorityTrap(mask);
+        switch (eventType) {
+        case NeedDebuggerBreak:
+            dataLog("VM ", RawPointer(&vm), " on pid ", getCurrentProcessID(), " received NeedDebuggerBreak trap\n");
+            invalidateCodeBlocksOnStack(exec);
+            break;
+                
+        case NeedWatchdogCheck:
+            ASSERT(vm.watchdog());
+            if (LIKELY(!vm.watchdog()->shouldTerminate(exec)))
+                continue;
+            FALLTHROUGH;
+
+        case NeedTermination:
+            invalidateCodeBlocksOnStack(exec);
+            throwException(exec, scope, createTerminatedExecutionException(&vm));
+            return;
+
+        default:
+            RELEASE_ASSERT_NOT_REACHED();
+        }
+    }
 }
 
 auto VMTraps::takeTopPriorityTrap(VMTraps::Mask mask) -> EventType
index e77c642..b046a01 100644 (file)
 
 #pragma once
 
+#include <wtf/Expected.h>
+#include <wtf/HashSet.h>
 #include <wtf/Lock.h>
 #include <wtf/Locker.h>
+#include <wtf/RefPtr.h>
+#include <wtf/StackBounds.h>
 
 namespace JSC {
 
+class ExecState;
 class VM;
 
 class VMTraps {
     typedef uint8_t BitField;
 public:
+    enum class Error {
+        None,
+        LockUnavailable
+    };
+
     enum EventType {
         // Sorted in servicing priority order from highest to lowest.
         NeedDebuggerBreak,
@@ -75,12 +85,32 @@ public:
         BitField m_mask;
     };
 
+    VMTraps();
+    ~VMTraps()
+    {
+#if ENABLE(SIGNAL_BASED_VM_TRAPS)
+        ASSERT(m_signalSenders.isEmpty());
+#endif
+    }
+
+    void willDestroyVM();
+
+    bool needTrapHandling() { return m_needTrapHandling; }
     bool needTrapHandling(Mask mask) { return m_needTrapHandling & mask.bits(); }
     void* needTrapHandlingAddress() { return &m_needTrapHandling; }
 
+    void notifyGrabAllLocks()
+    {
+        if (needTrapHandling())
+            invalidateCodeBlocksOnStack();
+    }
+
     JS_EXPORT_PRIVATE void fireTrap(EventType);
 
-    EventType takeTopPriorityTrap(Mask);
+    void handleTraps(ExecState*, VMTraps::Mask);
+
+    void tryInstallTrapBreakpoints(struct SignalContext&, StackBounds);
+    Expected<bool, Error> tryJettisonCodeBlocksOnStack(struct SignalContext&);
 
 private:
     VM& vm() const;
@@ -101,13 +131,49 @@ private:
         m_trapsBitField &= ~(1 << eventType);
     }
 
+    EventType takeTopPriorityTrap(Mask);
+
+#if ENABLE(SIGNAL_BASED_VM_TRAPS)
+    class SignalSender : public ThreadSafeRefCounted<SignalSender> {
+    public:
+        SignalSender(VM& vm, EventType eventType)
+            : m_vm(&vm)
+            , m_eventType(eventType)
+        { }
+
+        void willDestroyVM();
+        void send();
+
+    private:
+        Lock m_lock;
+        VM* m_vm;
+        EventType m_eventType;
+    };
+
+    void invalidateCodeBlocksOnStack();
+    void invalidateCodeBlocksOnStack(ExecState* topCallFrame);
+    void invalidateCodeBlocksOnStack(Locker<Lock>& codeBlockSetLocker, ExecState* topCallFrame);
+
+    void addSignalSender(SignalSender*);
+    void removeSignalSender(SignalSender*);
+#else
+    void invalidateCodeBlocksOnStack() { }
+    void invalidateCodeBlocksOnStack(ExecState*) { }
+#endif
+
     Lock m_lock;
     union {
         BitField m_needTrapHandling { 0 };
         BitField m_trapsBitField;
     };
+    bool m_needToInvalidatedCodeBlocks { false };
+
+#if ENABLE(SIGNAL_BASED_VM_TRAPS)
+    HashSet<RefPtr<SignalSender>> m_signalSenders;
+#endif
 
     friend class LLIntOffsetsExtractor;
+    friend class SignalSender;
 };
 
 } // namespace JSC
index 490d794..8be3c85 100644 (file)
@@ -146,21 +146,22 @@ auto VMInspector::codeBlockForMachinePC(const VMInspector::Locker&, void* machin
         // 1. CodeBlocks are added to the CodeBlockSet from the main thread before
         //    they are handed to the JIT plans. Those codeBlocks will have a null jitCode,
         //    but we check for that in our lambda functor.
-        // 2. CodeBlockSet::iterate() will acquire the CodeBlockSet lock before iterating.
+        // 2. We will acquire the CodeBlockSet lock before iterating.
         //    This ensures that a CodeBlock won't be GCed while we're iterating.
         // 3. We do a tryLock on the CodeBlockSet's lock first to ensure that it is
         //    safe for the current thread to lock it before calling
         //    Heap::forEachCodeBlockIgnoringJITPlans(). Hence, there's no risk of
         //    re-entering the lock and deadlocking on it.
 
-        auto& lock = vm.heap.codeBlockSet().getLock();
-        bool isSafeToLock = ensureIsSafeToLock(lock);
+        auto& codeBlockSetLock = vm.heap.codeBlockSet().getLock();
+        bool isSafeToLock = ensureIsSafeToLock(codeBlockSetLock);
         if (!isSafeToLock) {
             hasTimeout = true;
             return FunctorStatus::Continue; // Skip this VM.
         }
 
-        vm.heap.forEachCodeBlockIgnoringJITPlans([&] (CodeBlock* cb) {
+        auto locker = holdLock(codeBlockSetLock);
+        vm.heap.forEachCodeBlockIgnoringJITPlans(locker, [&] (CodeBlock* cb) {
             JITCode* jitCode = cb->jitCode().get();
             if (!jitCode) {
                 // If the codeBlock is a replacement codeBlock which is in the process of being
index b633b4b..3740d18 100644 (file)
@@ -46,16 +46,22 @@ public:
     void add(VM*);
     void remove(VM*);
 
+    Lock& getLock() { return m_lock; }
+
+    enum class FunctorStatus {
+        Continue,
+        Done
+    };
+
+    template <typename Functor>
+    void iterate(const Locker&, const Functor& functor) { iterate(functor); }
+
     Expected<Locker, Error> lock(Seconds timeout = Seconds::infinity());
 
     Expected<bool, Error> isValidExecutableMemory(const Locker&, void*);
     Expected<CodeBlock*, Error> codeBlockForMachinePC(const Locker&, void*);
 
 private:
-    enum class FunctorStatus {
-        Continue,
-        Done
-    };
     template <typename Functor> void iterate(const Functor& functor)
     {
         for (VM* vm = m_list.head(); vm; vm = vm->next()) {
index 7a4325e..c0d5784 100644 (file)
@@ -1,3 +1,23 @@
+2017-03-09  Mark Lam  <mark.lam@apple.com>
+
+        Make the VM Traps mechanism non-polling for the DFG and FTL.
+        https://bugs.webkit.org/show_bug.cgi?id=168920
+        <rdar://problem/30738588>
+
+        Reviewed by Filip Pizlo.
+
+        Make StackBounds more useful for checking if a pointer is within stack bounds.
+
+        * wtf/MetaAllocator.cpp:
+        (WTF::MetaAllocator::isInAllocatedMemory):
+        * wtf/MetaAllocator.h:
+        * wtf/Platform.h:
+        * wtf/StackBounds.h:
+        (WTF::StackBounds::emptyBounds):
+        (WTF::StackBounds::StackBounds):
+        (WTF::StackBounds::isEmpty):
+        (WTF::StackBounds::contains):
+
 2017-03-07  Filip Pizlo  <fpizlo@apple.com>
 
         WTF should make it super easy to do ARM concurrency tricks
index 6e44b67..aed0826 100644 (file)
@@ -426,7 +426,7 @@ void MetaAllocator::decrementPageOccupancy(void* address, size_t sizeInBytes)
     }
 }
 
-bool MetaAllocator::isInAllocatedMemory(const LockHolder&, void* address)
+bool MetaAllocator::isInAllocatedMemory(const AbstractLocker&, void* address)
 {
     ASSERT(m_lock.isLocked());
     uintptr_t page = reinterpret_cast<uintptr_t>(address) >> m_logPageSize;
index 4bcd93b..c27356a 100644 (file)
@@ -98,7 +98,7 @@ public:
     WTF_EXPORT_PRIVATE size_t debugFreeSpaceSize();
 
     Lock& getLock() { return m_lock; }
-    WTF_EXPORT_PRIVATE bool isInAllocatedMemory(const LockHolder&, void* address);
+    WTF_EXPORT_PRIVATE bool isInAllocatedMemory(const AbstractLocker&, void* address);
     
 #if ENABLE(META_ALLOCATOR_PROFILE)
     void dumpProfile();
index 7519062..0c26027 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2006-2009, 2013-2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2006-2017 Apple Inc. All rights reserved.
  * Copyright (C) 2007-2009 Torch Mobile, Inc.
  * Copyright (C) 2010, 2011 Research In Motion Limited. All rights reserved.
  *
 #endif
 #endif
 
+#if OS(DARWIN) && ENABLE(JIT)
+#define ENABLE_SIGNAL_BASED_VM_TRAPS 1
+#endif
+
 /* CSS Selector JIT Compiler */
 #if !defined(ENABLE_CSS_SELECTOR_JIT)
 #if (CPU(X86_64) || CPU(ARM64) || (CPU(ARM_THUMB2) && PLATFORM(IOS))) && ENABLE(JIT) && (OS(DARWIN) || PLATFORM(GTK))
index ce9ea96..554604f 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2010, 2013 Apple Inc. All Rights Reserved.
+ * Copyright (C) 2010-2017 Apple Inc. All Rights Reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -40,6 +40,8 @@ class StackBounds {
     const static size_t s_defaultAvailabilityDelta = 64 * 1024;
 
 public:
+    static StackBounds emptyBounds() { return StackBounds(); }
+
     static StackBounds currentThreadStackBounds()
     {
         StackBounds bounds;
@@ -48,6 +50,13 @@ public:
         return bounds;
     }
 
+    StackBounds(void* origin, void* end)
+        : m_origin(origin)
+        , m_bound(end)
+    {
+        checkConsistency();
+    }
+
     void* origin() const
     {
         ASSERT(m_origin);
@@ -67,6 +76,17 @@ public:
         return static_cast<char*>(m_bound) - static_cast<char*>(m_origin);
     }
 
+    bool isEmpty() const { return !m_origin; }
+
+    bool contains(void* p) const
+    {
+        if (isEmpty())
+            return false;
+        if (isGrowingDownward())
+            return (m_origin >= p) && (p > m_bound);
+        return (m_bound > p) && (p >= m_origin);
+    }
+
     void* recursionLimit(size_t minAvailableDelta = s_defaultAvailabilityDelta) const
     {
         checkConsistency();
index 6637098..d53584d 100644 (file)
@@ -1,3 +1,17 @@
+2017-03-09  Mark Lam  <mark.lam@apple.com>
+
+        Make the VM Traps mechanism non-polling for the DFG and FTL.
+        https://bugs.webkit.org/show_bug.cgi?id=168920
+        <rdar://problem/30738588>
+
+        Reviewed by Filip Pizlo.
+
+        No new tests needed.  This is covered by existing tests.
+
+        * bindings/js/WorkerScriptController.cpp:
+        (WebCore::WorkerScriptController::WorkerScriptController):
+        (WebCore::WorkerScriptController::scheduleExecutionTermination):
+
 2017-03-08  Dean Jackson  <dino@apple.com>
 
         WebGPU: Backend - Library and Functions
index 8dd8333..d0a277d 100644 (file)
@@ -51,7 +51,6 @@ WorkerScriptController::WorkerScriptController(WorkerGlobalScope* workerGlobalSc
     , m_workerGlobalScopeWrapper(*m_vm)
 {
     m_vm->heap.acquireAccess(); // It's not clear that we have good discipline for heap access, so turn it on permanently.
-    m_vm->setNeedAsynchronousTerminationSupport();
     JSVMClientData::initNormalWorld(m_vm.get());
 }
 
@@ -151,11 +150,13 @@ void WorkerScriptController::setException(JSC::Exception* exception)
 
 void WorkerScriptController::scheduleExecutionTermination()
 {
-    // The mutex provides a memory barrier to ensure that once
-    // termination is scheduled, isTerminatingExecution() will
-    // accurately reflect that state when called from another thread.
-    LockHolder locker(m_scheduledTerminationMutex);
-    m_isTerminatingExecution = true;
+    {
+        // The mutex provides a memory barrier to ensure that once
+        // termination is scheduled, isTerminatingExecution() will
+        // accurately reflect that state when called from another thread.
+        LockHolder locker(m_scheduledTerminationMutex);
+        m_isTerminatingExecution = true;
+    }
     m_vm->notifyNeedTermination();
 }