Heap::isMarked() shouldn't pay the price of concurrent lazy flipping
author fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Fri, 9 Sep 2016 01:22:05 +0000 (01:22 +0000)
committer fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Fri, 9 Sep 2016 01:22:05 +0000 (01:22 +0000)
https://bugs.webkit.org/show_bug.cgi?id=161760

Reviewed by Mark Lam.
Source/JavaScriptCore:

To fix a race condition in marking, I made Heap::isMarked() and Heap::isLive() atomic by
using flipIfNecessaryConcurrently() instead of flipIfNecessary().

This introduces three unnecessary overheads:

- isLive() is not called by marking, so that change was not necessary.

- isMarked() gets called many times outside of marking, so it shouldn't always do the
  concurrent thing. This adds isMarkedConcurrently() for use in marking, and reverts
  isMarked().

- isMarked() and isMarkedConcurrently() don't actually have to do the lazy flip. They can
  return false if the flip is necessary.

I added a bunch of debug assertions to make sure that isLive() and isMarked() are not called
during marking.

If we needed to, we could remove most of the calls to isMarkedConcurrently(). As a kind of
optimization, CodeBlock does an initial fixpoint iteration during marking, and so all of the
code called from CodeBlock's fixpoint iterator needs to use isMarkedConcurrently(). But we
could probably arrange for CodeBlock to only do fixpoint iteration during the weak reference
phase.
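The core trick can be sketched as follows. This is a simplified, hedged model of the versioned mark-bit query: `Block`, `version`, and the fence placement are illustrative stand-ins, not the actual JSC MarkedBlock types.

```cpp
#include <atomic>

// Simplified model of a versioned mark-bit query. The method names loosely
// mirror the patch (needsFlip, isMarked, isMarkedConcurrently); the struct
// itself is a stand-in, not the real MarkedBlock.
struct Block {
    unsigned version { 0 };
    bool markBit { false };

    bool needsFlip(unsigned heapVersion) const { return heapVersion != version; }

    // Fast query for use outside of marking: if the block is behind the
    // heap's current version, its mark bits are logically stale, so report
    // "not marked" instead of eagerly flipping the block.
    bool isMarked(unsigned heapVersion) const
    {
        if (needsFlip(heapVersion))
            return false;
        return markBit;
    }

    // Query for use during marking: same logic, plus a load-load fence so
    // that a racing flip (version bump, then mark-bit clear) is observed in
    // a consistent order by the marking thread.
    bool isMarkedConcurrently(unsigned heapVersion) const
    {
        if (needsFlip(heapVersion))
            return false;
        std::atomic_thread_fence(std::memory_order_acquire);
        return markBit;
    }
};
```

Both queries avoid the lazy flip entirely: a stale version simply means the mark bits would be cleared by the flip anyway, so returning false is equivalent and cheaper.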

* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::visitWeakly):
(JSC::CodeBlock::shouldJettisonDueToOldAge):
(JSC::shouldMarkTransition):
(JSC::CodeBlock::propagateTransitions):
(JSC::CodeBlock::determineLiveness):
* bytecode/PolymorphicAccess.cpp:
(JSC::AccessCase::propagateTransitions):
* heap/Heap.h:
* heap/HeapInlines.h:
(JSC::Heap::isLive):
(JSC::Heap::isMarked):
(JSC::Heap::isMarkedConcurrently):
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::flipIfNecessarySlow):
(JSC::MarkedBlock::flipIfNecessaryConcurrentlySlow):
(JSC::MarkedBlock::needsFlip):
* heap/MarkedBlock.h:
(JSC::MarkedBlock::needsFlip):
(JSC::MarkedBlock::flipIfNecessary):
(JSC::MarkedBlock::flipIfNecessaryConcurrently):
* heap/SlotVisitor.cpp:
(JSC::SlotVisitor::appendToMarkStack):
(JSC::SlotVisitor::markAuxiliary):
(JSC::SlotVisitor::visitChildren):
* runtime/Structure.cpp:
(JSC::Structure::isCheapDuringGC):
(JSC::Structure::markIfCheap):

Source/WTF:

* wtf/MainThread.cpp:
(WTF::isMainThreadOrGCThread):
(WTF::mayBeGCThread):
* wtf/MainThread.h:
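The WTF change factors the GC-thread check out of isMainThreadOrGCThread() into a new mayBeGCThread() helper, so heap code can assert it is not on a GC thread. A minimal sketch of that shape, using a plain `thread_local` flag in place of WTF's actual thread-specific storage (an assumption for brevity):

```cpp
// Sketch of the WTF::mayBeGCThread() split. A plain thread_local flag stands
// in for WTF's real per-thread storage; the main-thread check is stubbed.
namespace sketch {

thread_local bool g_isGCThread = false;

void registerGCThread() { g_isGCThread = true; }

// New helper: usable on its own in assertions like ASSERT(!mayBeGCThread()).
bool mayBeGCThread() { return g_isGCThread; }

bool isMainThread() { return true; } // stand-in for the real check

// The existing predicate now delegates to the new helper.
bool isMainThreadOrGCThread()
{
    if (mayBeGCThread())
        return true;
    return isMainThread();
}

} // namespace sketch
```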

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@205683 268f45cc-cd09-0410-ab3c-d52691b4dbfc

13 files changed:
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp
Source/JavaScriptCore/heap/Heap.h
Source/JavaScriptCore/heap/HeapInlines.h
Source/JavaScriptCore/heap/HeapSnapshotBuilder.cpp
Source/JavaScriptCore/heap/MarkedBlock.cpp
Source/JavaScriptCore/heap/MarkedBlock.h
Source/JavaScriptCore/heap/SlotVisitor.cpp
Source/JavaScriptCore/runtime/Structure.cpp
Source/WTF/ChangeLog
Source/WTF/wtf/MainThread.cpp
Source/WTF/wtf/MainThread.h

index 42b6d60..980f0be 100644 (file)
@@ -1,3 +1,62 @@
+2016-09-08  Filip Pizlo  <fpizlo@apple.com>
+
+        Heap::isMarked() shouldn't pay the price of concurrent lazy flipping
+        https://bugs.webkit.org/show_bug.cgi?id=161760
+
+        Reviewed by Mark Lam.
+        
+        To fix a race condition in marking, I made Heap::isMarked() and Heap::isLive() atomic by
+        using flipIfNecessaryConcurrently() instead of flipIfNecessary().
+        
+        This introduces three unnecessary overheads:
+        
+        - isLive() is not called by marking, so that change was not necessary.
+        
+        - isMarked() gets called many times outside of marking, so it shouldn't always do the
+          concurrent thing. This adds isMarkedConcurrently() for use in marking, and reverts
+          isMarked().
+        
+        - isMarked() and isMarkedConcurrently() don't actually have to do the lazy flip. They can
+          return false if the flip is necessary.
+        
+        I added a bunch of debug assertions to make sure that isLive() and isMarked() are not called
+        during marking.
+        
+        If we needed to, we could remove most of the calls to isMarkedConcurrently(). As a kind of
+        optimization, CodeBlock does an initial fixpoint iteration during marking, and so all of the
+        code called from CodeBlock's fixpoint iterator needs to use isMarkedConcurrently(). But we
+        could probably arrange for CodeBlock to only do fixpoint iteration during the weak reference
+        phase.
+
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::visitWeakly):
+        (JSC::CodeBlock::shouldJettisonDueToOldAge):
+        (JSC::shouldMarkTransition):
+        (JSC::CodeBlock::propagateTransitions):
+        (JSC::CodeBlock::determineLiveness):
+        * bytecode/PolymorphicAccess.cpp:
+        (JSC::AccessCase::propagateTransitions):
+        * heap/Heap.h:
+        * heap/HeapInlines.h:
+        (JSC::Heap::isLive):
+        (JSC::Heap::isMarked):
+        (JSC::Heap::isMarkedConcurrently):
+        * heap/MarkedBlock.cpp:
+        (JSC::MarkedBlock::flipIfNecessarySlow):
+        (JSC::MarkedBlock::flipIfNecessaryConcurrentlySlow):
+        (JSC::MarkedBlock::needsFlip):
+        * heap/MarkedBlock.h:
+        (JSC::MarkedBlock::needsFlip):
+        (JSC::MarkedBlock::flipIfNecessary):
+        (JSC::MarkedBlock::flipIfNecessaryConcurrently):
+        * heap/SlotVisitor.cpp:
+        (JSC::SlotVisitor::appendToMarkStack):
+        (JSC::SlotVisitor::markAuxiliary):
+        (JSC::SlotVisitor::visitChildren):
+        * runtime/Structure.cpp:
+        (JSC::Structure::isCheapDuringGC):
+        (JSC::Structure::markIfCheap):
+
 2016-09-08  Saam Barati  <sbarati@apple.com>
 
         We should inline operationConvertJSValueToBoolean into JIT code
index 043c946..9a8dc0c 100644 (file)
@@ -2485,7 +2485,7 @@ void CodeBlock::visitWeakly(SlotVisitor& visitor)
     if (!setByMe)
         return;
 
-    if (Heap::isMarked(this))
+    if (Heap::isMarkedConcurrently(this))
         return;
 
     if (shouldVisitStrongly()) {
@@ -2628,7 +2628,7 @@ static std::chrono::milliseconds timeToLive(JITCode::JITType jitType)
 
 bool CodeBlock::shouldJettisonDueToOldAge()
 {
-    if (Heap::isMarked(this))
+    if (Heap::isMarkedConcurrently(this))
         return false;
 
     if (UNLIKELY(Options::forceCodeBlockToJettisonDueToOldAge()))
@@ -2643,10 +2643,10 @@ bool CodeBlock::shouldJettisonDueToOldAge()
 #if ENABLE(DFG_JIT)
 static bool shouldMarkTransition(DFG::WeakReferenceTransition& transition)
 {
-    if (transition.m_codeOrigin && !Heap::isMarked(transition.m_codeOrigin.get()))
+    if (transition.m_codeOrigin && !Heap::isMarkedConcurrently(transition.m_codeOrigin.get()))
         return false;
     
-    if (!Heap::isMarked(transition.m_from.get()))
+    if (!Heap::isMarkedConcurrently(transition.m_from.get()))
         return false;
     
     return true;
@@ -2677,7 +2677,7 @@ void CodeBlock::propagateTransitions(SlotVisitor& visitor)
                     m_vm->heap.structureIDTable().get(oldStructureID);
                 Structure* newStructure =
                     m_vm->heap.structureIDTable().get(newStructureID);
-                if (Heap::isMarked(oldStructure))
+                if (Heap::isMarkedConcurrently(oldStructure))
                     visitor.appendUnbarrieredReadOnlyPointer(newStructure);
                 else
                     allAreMarkedSoFar = false;
@@ -2749,14 +2749,14 @@ void CodeBlock::determineLiveness(SlotVisitor& visitor)
     // GC we still have not proved liveness, then this code block is toast.
     bool allAreLiveSoFar = true;
     for (unsigned i = 0; i < dfgCommon->weakReferences.size(); ++i) {
-        if (!Heap::isMarked(dfgCommon->weakReferences[i].get())) {
+        if (!Heap::isMarkedConcurrently(dfgCommon->weakReferences[i].get())) {
             allAreLiveSoFar = false;
             break;
         }
     }
     if (allAreLiveSoFar) {
         for (unsigned i = 0; i < dfgCommon->weakStructureReferences.size(); ++i) {
-            if (!Heap::isMarked(dfgCommon->weakStructureReferences[i].get())) {
+            if (!Heap::isMarkedConcurrently(dfgCommon->weakStructureReferences[i].get())) {
                 allAreLiveSoFar = false;
                 break;
             }
index 5caca06..256ca43 100644 (file)
@@ -555,7 +555,7 @@ bool AccessCase::propagateTransitions(SlotVisitor& visitor) const
     
     switch (m_type) {
     case Transition:
-        if (Heap::isMarked(m_structure->previousID()))
+        if (Heap::isMarkedConcurrently(m_structure->previousID()))
             visitor.appendUnbarrieredReadOnlyPointer(m_structure.get());
         else
             result = false;
index 378ba13..f0e19da 100644 (file)
@@ -99,6 +99,7 @@ public:
 
     static bool isLive(const void*);
     static bool isMarked(const void*);
+    static bool isMarkedConcurrently(const void*);
     static bool testAndSetMarked(HeapVersion, const void*);
     static void setMarked(const void*);
     
index f278887..4f8a4ee 100644 (file)
@@ -34,6 +34,7 @@
 #include "Structure.h"
 #include <type_traits>
 #include <wtf/Assertions.h>
+#include <wtf/MainThread.h>
 #include <wtf/RandomNumber.h>
 
 namespace JSC {
@@ -75,21 +76,36 @@ inline Heap* Heap::heap(const JSValue v)
 
 inline bool Heap::isLive(const void* rawCell)
 {
+    ASSERT(!mayBeGCThread());
     HeapCell* cell = bitwise_cast<HeapCell*>(rawCell);
     if (cell->isLargeAllocation())
         return cell->largeAllocation().isLive();
     MarkedBlock& block = cell->markedBlock();
-    block.flipIfNecessaryConcurrently(block.vm()->heap.objectSpace().version());
+    block.flipIfNecessary(block.vm()->heap.objectSpace().version());
     return block.handle().isLiveCell(cell);
 }
 
 ALWAYS_INLINE bool Heap::isMarked(const void* rawCell)
 {
+    ASSERT(!mayBeGCThread());
     HeapCell* cell = bitwise_cast<HeapCell*>(rawCell);
     if (cell->isLargeAllocation())
         return cell->largeAllocation().isMarked();
     MarkedBlock& block = cell->markedBlock();
-    block.flipIfNecessaryConcurrently(block.vm()->heap.objectSpace().version());
+    if (block.needsFlip(block.vm()->heap.objectSpace().version()))
+        return false;
+    return block.isMarked(cell);
+}
+
+ALWAYS_INLINE bool Heap::isMarkedConcurrently(const void* rawCell)
+{
+    HeapCell* cell = bitwise_cast<HeapCell*>(rawCell);
+    if (cell->isLargeAllocation())
+        return cell->largeAllocation().isMarked();
+    MarkedBlock& block = cell->markedBlock();
+    if (block.needsFlip(block.vm()->heap.objectSpace().version()))
+        return false;
+    WTF::loadLoadFence();
     return block.isMarked(cell);
 }
 
index d5d89ae..81d1bbd 100644 (file)
@@ -66,7 +66,7 @@ void HeapSnapshotBuilder::buildSnapshot()
 void HeapSnapshotBuilder::appendNode(JSCell* cell)
 {
     ASSERT(m_profiler.activeSnapshotBuilder() == this);
-    ASSERT(Heap::isMarked(cell));
+    ASSERT(Heap::isMarkedConcurrently(cell));
 
     if (hasExistingNodeForCell(cell))
         return;
index b7b19bd..ee2efb0 100644 (file)
@@ -357,14 +357,14 @@ void MarkedBlock::Handle::flipIfNecessary()
 
 void MarkedBlock::flipIfNecessarySlow()
 {
-    ASSERT(m_version != vm()->heap.objectSpace().version());
+    ASSERT(needsFlip());
     clearMarks();
 }
 
 void MarkedBlock::flipIfNecessaryConcurrentlySlow()
 {
     LockHolder locker(m_lock);
-    if (m_version != vm()->heap.objectSpace().version())
+    if (needsFlip())
         clearMarks();
 }
 
@@ -388,7 +388,7 @@ void MarkedBlock::assertFlipped()
 
 bool MarkedBlock::needsFlip()
 {
-    return vm()->heap.objectSpace().version() != m_version;
+    return needsFlip(vm()->heap.objectSpace().version());
 }
 
 bool MarkedBlock::Handle::needsFlip()
index b38c7e7..f0f7b4a 100644 (file)
@@ -264,6 +264,7 @@ public:
         
     WeakSet& weakSet();
 
+    bool needsFlip(HeapVersion);
     bool needsFlip();
         
     void flipIfNecessaryConcurrently(HeapVersion);
@@ -462,15 +463,20 @@ inline size_t MarkedBlock::atomNumber(const void* p)
     return (reinterpret_cast<Bits>(p) - reinterpret_cast<Bits>(this)) / atomSize;
 }
 
+inline bool MarkedBlock::needsFlip(HeapVersion heapVersion)
+{
+    return heapVersion != m_version;
+}
+
 inline void MarkedBlock::flipIfNecessary(HeapVersion heapVersion)
 {
-    if (UNLIKELY(heapVersion != m_version))
+    if (UNLIKELY(needsFlip(heapVersion)))
         flipIfNecessarySlow();
 }
 
 inline void MarkedBlock::flipIfNecessaryConcurrently(HeapVersion heapVersion)
 {
-    if (UNLIKELY(heapVersion != m_version))
+    if (UNLIKELY(needsFlip(heapVersion)))
         flipIfNecessaryConcurrentlySlow();
     WTF::loadLoadFence();
 }
index 1ddc290..fc3db66 100644 (file)
@@ -224,7 +224,7 @@ void SlotVisitor::appendToMarkStack(JSCell* cell)
 template<typename ContainerType>
 ALWAYS_INLINE void SlotVisitor::appendToMarkStack(ContainerType& container, JSCell* cell)
 {
-    ASSERT(Heap::isMarked(cell));
+    ASSERT(Heap::isMarkedConcurrently(cell));
     ASSERT(!cell->isZapped());
     
     container.noteMarked();
@@ -247,7 +247,7 @@ void SlotVisitor::markAuxiliary(const void* base)
     ASSERT(cell->heap() == heap());
     
     if (Heap::testAndSetMarked(m_version, cell)) {
-        RELEASE_ASSERT(Heap::isMarked(cell));
+        RELEASE_ASSERT(Heap::isMarkedConcurrently(cell));
         return;
     }
     
@@ -292,7 +292,7 @@ private:
 
 ALWAYS_INLINE void SlotVisitor::visitChildren(const JSCell* cell)
 {
-    ASSERT(Heap::isMarked(cell));
+    ASSERT(Heap::isMarkedConcurrently(cell));
     
     SetCurrentCellScope currentCellScope(*this, cell);
     
index 8005271..7205b69 100644 (file)
@@ -1140,14 +1140,14 @@ bool Structure::isCheapDuringGC()
     // has any large property names.
     // https://bugs.webkit.org/show_bug.cgi?id=157334
     
-    return (!m_globalObject || Heap::isMarked(m_globalObject.get()))
-        && (!storedPrototypeObject() || Heap::isMarked(storedPrototypeObject()));
+    return (!m_globalObject || Heap::isMarkedConcurrently(m_globalObject.get()))
+        && (!storedPrototypeObject() || Heap::isMarkedConcurrently(storedPrototypeObject()));
 }
 
 bool Structure::markIfCheap(SlotVisitor& visitor)
 {
     if (!isCheapDuringGC())
-        return Heap::isMarked(this);
+        return Heap::isMarkedConcurrently(this);
     
     visitor.appendUnbarrieredReadOnlyPointer(this);
     return true;
index 6de2948..8d8508d 100644 (file)
@@ -1,3 +1,15 @@
+2016-09-08  Filip Pizlo  <fpizlo@apple.com>
+
+        Heap::isMarked() shouldn't pay the price of concurrent lazy flipping
+        https://bugs.webkit.org/show_bug.cgi?id=161760
+
+        Reviewed by Mark Lam.
+
+        * wtf/MainThread.cpp:
+        (WTF::isMainThreadOrGCThread):
+        (WTF::mayBeGCThread):
+        * wtf/MainThread.h:
+
 2016-09-08  Myles C. Maxfield  <mmaxfield@apple.com>
 
         Support new emoji group candidates
index a9d46a9..8ca75cd 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2007, 2008, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2007, 2008, 2015-2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -210,10 +210,15 @@ void registerGCThread()
 
 bool isMainThreadOrGCThread()
 {
-    if (isGCThread->isSet() && **isGCThread)
+    if (mayBeGCThread())
         return true;
 
     return isMainThread();
 }
 
+bool mayBeGCThread()
+{
+    return isGCThread->isSet() && **isGCThread;
+}
+
 } // namespace WTF
index 94c70c6..d7adc6c 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2007, 2008, 2010 Apple Inc. All rights reserved.
+ * Copyright (C) 2007, 2008, 2010, 2016 Apple Inc. All rights reserved.
  * Copyright (C) 2007 Justin Haygood (jhaygood@reaktix.com)
  *
  * Redistribution and use in source and binary forms, with or without
@@ -68,6 +68,7 @@ inline bool isUIThread() { return isMainThread(); }
 void initializeGCThreads();
 
 WTF_EXPORT_PRIVATE void registerGCThread();
+WTF_EXPORT_PRIVATE bool mayBeGCThread();
 WTF_EXPORT_PRIVATE bool isMainThreadOrGCThread();
 
 // NOTE: these functions are internal to the callOnMainThread implementation.
@@ -88,15 +89,16 @@ void initializeMainThreadToProcessMainThreadPlatform();
 } // namespace WTF
 
 using WTF::callOnMainThread;
-#if PLATFORM(COCOA)
-using WTF::callOnWebThreadOrDispatchAsyncOnMainThread;
-#endif
-using WTF::setMainThreadCallbacksPaused;
+using WTF::canAccessThreadLocalDataForThread;
 using WTF::isMainThread;
 using WTF::isMainThreadOrGCThread;
-using WTF::canAccessThreadLocalDataForThread;
 using WTF::isUIThread;
 using WTF::isWebThread;
+using WTF::mayBeGCThread;
+using WTF::setMainThreadCallbacksPaused;
+#if PLATFORM(COCOA)
+using WTF::callOnWebThreadOrDispatchAsyncOnMainThread;
+#endif
 #if USE(WEB_THREAD)
 using WTF::initializeWebThread;
 using WTF::initializeApplicationUIThreadIdentifier;