Marking should be generational
author    mhahnenberg@apple.com <mhahnenberg@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Fri, 10 Jan 2014 02:28:27 +0000 (02:28 +0000)
committer mhahnenberg@apple.com <mhahnenberg@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Fri, 10 Jan 2014 02:28:27 +0000 (02:28 +0000)
https://bugs.webkit.org/show_bug.cgi?id=126552

Reviewed by Geoffrey Garen.

Source/JavaScriptCore:

Re-marking the same objects over and over is a waste of effort. This patch implements
the sticky mark bit algorithm (along with our already-present write barriers) to reduce
the garbage-collection overhead caused by rescanning objects.
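
A minimal, self-contained sketch of the sticky-mark-bit idea (toy types and std::bitset
stand in for the patch's per-block mark and remembered-set bitmaps; only names that also
appear in the diff below are real):

    #include <bitset>

    enum HeapOperation { NoOperation, Allocation, FullCollection, EdenCollection };

    struct Block {
        std::bitset<4096> marks;          // one mark bit per cell; left "sticky" across Eden collections
        std::bitset<4096> rememberedSet;  // old cells written to since they were last marked

        void clearMarks(HeapOperation collectionType)
        {
            if (collectionType == FullCollection) {
                marks.reset();            // a full collection re-derives liveness from scratch
                rememberedSet.reset();
            }
            // For EdenCollection the mark bits are kept, so anything marked in an
            // earlier cycle counts as old/live and is not re-traced.
        }
    };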

There are now two collection modes, EdenCollection and FullCollection. EdenCollections
only visit new objects or objects that were added to the remembered set by a write barrier.
FullCollections are normal collections that visit all objects regardless of their
generation.
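
A rough sketch of how the write barrier feeds the remembered set (a toy set-based Heap is
assumed here; the patch's real versions, Heap::writeBarrier and Heap::addToRememberedSet in
the diff below, record the same information in per-block bitmaps):

    #include <unordered_set>

    struct Cell;  // stand-in for JSCell

    struct Heap {
        std::unordered_set<const Cell*> marked;         // cells marked in an earlier cycle ("old")
        std::unordered_set<const Cell*> rememberedSet;  // old cells that must be re-scanned

        bool isMarked(const Cell* cell) const { return marked.count(cell); }
        void addToRememberedSet(const Cell* cell) { rememberedSet.insert(cell); }

        // Barrier run whenever a cell pointer 'to' is stored into object 'from'.
        void writeBarrier(const Cell* from, const Cell* to)
        {
            if (!from || !isMarked(from))
                return;                 // 'from' is new; the next EdenCollection scans it anyway
            if (!to || isMarked(to))
                return;                 // no old -> new edge was created by this store
            addToRememberedSet(from);   // an old object now points at a new one: rescan it
        }
    };
    // An EdenCollection then marks starting only from new objects and from rememberedSet.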

In this patch EdenCollections do not do anything in CopiedSpace. This will be fixed in
https://bugs.webkit.org/show_bug.cgi?id=126555.

* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::visitAggregate):
* bytecode/CodeBlock.h:
(JSC::CodeBlockSet::mark):
* dfg/DFGOperations.cpp:
* heap/CodeBlockSet.cpp:
(JSC::CodeBlockSet::add):
(JSC::CodeBlockSet::traceMarked):
(JSC::CodeBlockSet::rememberCurrentlyExecutingCodeBlocks):
* heap/CodeBlockSet.h:
* heap/CopiedBlockInlines.h:
(JSC::CopiedBlock::reportLiveBytes):
* heap/CopiedSpace.cpp:
(JSC::CopiedSpace::didStartFullCollection):
* heap/CopiedSpace.h:
(JSC::CopiedSpace::heap):
* heap/Heap.cpp:
(JSC::Heap::Heap):
(JSC::Heap::didAbandon):
(JSC::Heap::markRoots):
(JSC::Heap::copyBackingStores):
(JSC::Heap::addToRememberedSet):
(JSC::Heap::collectAllGarbage):
(JSC::Heap::collect):
(JSC::Heap::didAllocate):
(JSC::Heap::writeBarrier):
* heap/Heap.h:
(JSC::Heap::isInRememberedSet):
(JSC::Heap::operationInProgress):
(JSC::Heap::shouldCollect):
(JSC::Heap::isCollecting):
(JSC::Heap::isWriteBarrierEnabled):
(JSC::Heap::writeBarrier):
* heap/HeapOperation.h:
* heap/MarkStack.cpp:
(JSC::MarkStackArray::~MarkStackArray):
(JSC::MarkStackArray::clear):
(JSC::MarkStackArray::fillVector):
* heap/MarkStack.h:
* heap/MarkedAllocator.cpp:
(JSC::isListPagedOut):
(JSC::MarkedAllocator::isPagedOut):
(JSC::MarkedAllocator::tryAllocateHelper):
(JSC::MarkedAllocator::addBlock):
(JSC::MarkedAllocator::removeBlock):
(JSC::MarkedAllocator::reset):
* heap/MarkedAllocator.h:
(JSC::MarkedAllocator::MarkedAllocator):
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::clearMarks):
(JSC::MarkedBlock::clearRememberedSet):
(JSC::MarkedBlock::clearMarksWithCollectionType):
(JSC::MarkedBlock::lastChanceToFinalize):
* heap/MarkedBlock.h: Changed atomSize to 16 bytes because we have no objects smaller
than 16 bytes. This also helps pay for the additional Bitmap used for the remembered set
(a back-of-the-envelope size sketch follows this file list).
(JSC::MarkedBlock::didConsumeEmptyFreeList):
(JSC::MarkedBlock::setRemembered):
(JSC::MarkedBlock::clearRemembered):
(JSC::MarkedBlock::atomicClearRemembered):
(JSC::MarkedBlock::isRemembered):
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::~MarkedSpace):
(JSC::MarkedSpace::resetAllocators):
(JSC::MarkedSpace::visitWeakSets):
(JSC::MarkedSpace::reapWeakSets):
(JSC::VerifyMarked::operator()):
(JSC::MarkedSpace::clearMarks):
* heap/MarkedSpace.h:
(JSC::ClearMarks::operator()):
(JSC::ClearRememberedSet::operator()):
(JSC::MarkedSpace::didAllocateInBlock):
(JSC::MarkedSpace::clearRememberedSet):
* heap/SlotVisitor.cpp:
(JSC::SlotVisitor::~SlotVisitor):
(JSC::SlotVisitor::clearMarkStack):
* heap/SlotVisitor.h:
(JSC::SlotVisitor::markStack):
(JSC::SlotVisitor::sharedData):
* heap/SlotVisitorInlines.h:
(JSC::SlotVisitor::internalAppend):
(JSC::SlotVisitor::unconditionallyAppend):
(JSC::SlotVisitor::copyLater):
(JSC::SlotVisitor::reportExtraMemoryUsage):
(JSC::SlotVisitor::heap):
* jit/Repatch.cpp:
* runtime/JSGenericTypedArrayViewInlines.h:
(JSC::JSGenericTypedArrayView<Adaptor>::visitChildren):
* runtime/JSPropertyNameIterator.h:
(JSC::StructureRareData::setEnumerationCache):
* runtime/JSString.cpp:
(JSC::JSString::visitChildren):
* runtime/StructureRareDataInlines.h:
(JSC::StructureRareData::setPreviousID):
(JSC::StructureRareData::setObjectToStringValue):
* runtime/WeakMapData.cpp:
(JSC::WeakMapData::visitChildren):
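
As referenced in the heap/MarkedBlock.h entry above, a back-of-the-envelope sketch of the
atomSize trade-off (the constants match the diff; the snippet itself is only illustrative):

    #include <cstddef>

    constexpr size_t blockSize = 64 * 1024;                 // MarkedBlock::blockSize
    constexpr size_t atomSize = 16;                         // was 8 before this patch
    constexpr size_t atomsPerBlock = blockSize / atomSize;  // 4096 atoms per block

    // Each per-atom bitmap (marks, and now the remembered set) costs one bit per atom.
    constexpr size_t bytesPerBitmap = atomsPerBlock / 8;    // 512 bytes

    // With 8-byte atoms a single bitmap needed 1024 bytes per block; with 16-byte atoms
    // the marks bitmap and the new remembered-set bitmap together cost those same 1024
    // bytes, so the extra bitmap is effectively paid for.
    static_assert(2 * bytesPerBitmap == (blockSize / 8) / 8,
                  "two half-size bitmaps == one full-size bitmap");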

Source/WTF:

* wtf/Bitmap.h:
(WTF::WordType>::count): Added a cast that became necessary when Bitmap
is used with word types smaller than int32_t.
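
One plausible reason such a cast becomes necessary (sketched with assumed stand-in
declarations, not WTF's actual ones) is overload resolution: if a popcount helper is
overloaded for unsigned and uint64_t, a uint8_t word promotes to int and converts equally
well to either, so the call is ambiguous without a cast:

    #include <cstddef>
    #include <cstdint>

    // Hypothetical overload pair standing in for WTF::bitCount
    // (GCC/Clang builtins used only for brevity).
    inline size_t bitCount(unsigned bits) { return __builtin_popcount(bits); }
    inline size_t bitCount(uint64_t bits) { return __builtin_popcountll(bits); }

    size_t countByte(uint8_t word)
    {
        // return bitCount(word);                      // error: ambiguous overload
        return bitCount(static_cast<unsigned>(word));  // the cast picks the unsigned overload
    }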

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@161615 268f45cc-cd09-0410-ab3c-d52691b4dbfc

31 files changed:
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/bytecode/CodeBlock.h
Source/JavaScriptCore/dfg/DFGOperations.cpp
Source/JavaScriptCore/heap/CodeBlockSet.cpp
Source/JavaScriptCore/heap/CodeBlockSet.h
Source/JavaScriptCore/heap/CopiedBlockInlines.h
Source/JavaScriptCore/heap/CopiedSpace.cpp
Source/JavaScriptCore/heap/CopiedSpace.h
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/heap/Heap.h
Source/JavaScriptCore/heap/HeapOperation.h
Source/JavaScriptCore/heap/MarkStack.cpp
Source/JavaScriptCore/heap/MarkStack.h
Source/JavaScriptCore/heap/MarkedAllocator.cpp
Source/JavaScriptCore/heap/MarkedAllocator.h
Source/JavaScriptCore/heap/MarkedBlock.cpp
Source/JavaScriptCore/heap/MarkedBlock.h
Source/JavaScriptCore/heap/MarkedSpace.cpp
Source/JavaScriptCore/heap/MarkedSpace.h
Source/JavaScriptCore/heap/SlotVisitor.cpp
Source/JavaScriptCore/heap/SlotVisitor.h
Source/JavaScriptCore/heap/SlotVisitorInlines.h
Source/JavaScriptCore/jit/Repatch.cpp
Source/JavaScriptCore/runtime/JSGenericTypedArrayViewInlines.h
Source/JavaScriptCore/runtime/JSPropertyNameIterator.h
Source/JavaScriptCore/runtime/JSString.cpp
Source/JavaScriptCore/runtime/StructureRareDataInlines.h
Source/JavaScriptCore/runtime/WeakMapData.cpp
Source/WTF/ChangeLog
Source/WTF/wtf/Bitmap.h

index 8ef8667..3089948 100644 (file)
@@ -1,3 +1,119 @@
+2014-01-07  Mark Hahnenberg  <mhahnenberg@apple.com>
+
+        Marking should be generational
+        https://bugs.webkit.org/show_bug.cgi?id=126552
+
+        Reviewed by Geoffrey Garen.
+
+        Re-marking the same objects over and over is a waste of effort. This patch implements 
+        the sticky mark bit algorithm (along with our already-present write barriers) to reduce 
+        overhead during garbage collection caused by rescanning objects.
+
+        There are now two collection modes, EdenCollection and FullCollection. EdenCollections
+        only visit new objects or objects that were added to the remembered set by a write barrier.
+        FullCollections are normal collections that visit all objects regardless of their 
+        generation.
+
+        In this patch EdenCollections do not do anything in CopiedSpace. This will be fixed in 
+        https://bugs.webkit.org/show_bug.cgi?id=126555.
+
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::visitAggregate):
+        * bytecode/CodeBlock.h:
+        (JSC::CodeBlockSet::mark):
+        * dfg/DFGOperations.cpp:
+        * heap/CodeBlockSet.cpp:
+        (JSC::CodeBlockSet::add):
+        (JSC::CodeBlockSet::traceMarked):
+        (JSC::CodeBlockSet::rememberCurrentlyExecutingCodeBlocks):
+        * heap/CodeBlockSet.h:
+        * heap/CopiedBlockInlines.h:
+        (JSC::CopiedBlock::reportLiveBytes):
+        * heap/CopiedSpace.cpp:
+        (JSC::CopiedSpace::didStartFullCollection):
+        * heap/CopiedSpace.h:
+        (JSC::CopiedSpace::heap):
+        * heap/Heap.cpp:
+        (JSC::Heap::Heap):
+        (JSC::Heap::didAbandon):
+        (JSC::Heap::markRoots):
+        (JSC::Heap::copyBackingStores):
+        (JSC::Heap::addToRememberedSet):
+        (JSC::Heap::collectAllGarbage):
+        (JSC::Heap::collect):
+        (JSC::Heap::didAllocate):
+        (JSC::Heap::writeBarrier):
+        * heap/Heap.h:
+        (JSC::Heap::isInRememberedSet):
+        (JSC::Heap::operationInProgress):
+        (JSC::Heap::shouldCollect):
+        (JSC::Heap::isCollecting):
+        (JSC::Heap::isWriteBarrierEnabled):
+        (JSC::Heap::writeBarrier):
+        * heap/HeapOperation.h:
+        * heap/MarkStack.cpp:
+        (JSC::MarkStackArray::~MarkStackArray):
+        (JSC::MarkStackArray::clear):
+        (JSC::MarkStackArray::fillVector):
+        * heap/MarkStack.h:
+        * heap/MarkedAllocator.cpp:
+        (JSC::isListPagedOut):
+        (JSC::MarkedAllocator::isPagedOut):
+        (JSC::MarkedAllocator::tryAllocateHelper):
+        (JSC::MarkedAllocator::addBlock):
+        (JSC::MarkedAllocator::removeBlock):
+        (JSC::MarkedAllocator::reset):
+        * heap/MarkedAllocator.h:
+        (JSC::MarkedAllocator::MarkedAllocator):
+        * heap/MarkedBlock.cpp:
+        (JSC::MarkedBlock::clearMarks):
+        (JSC::MarkedBlock::clearRememberedSet):
+        (JSC::MarkedBlock::clearMarksWithCollectionType):
+        (JSC::MarkedBlock::lastChanceToFinalize):
+        * heap/MarkedBlock.h: Changed atomSize to 16 bytes because we have no objects smaller
+        than 16 bytes. This is also to pay for the additional Bitmap for the remembered set.
+        (JSC::MarkedBlock::didConsumeEmptyFreeList):
+        (JSC::MarkedBlock::setRemembered):
+        (JSC::MarkedBlock::clearRemembered):
+        (JSC::MarkedBlock::atomicClearRemembered):
+        (JSC::MarkedBlock::isRemembered):
+        * heap/MarkedSpace.cpp:
+        (JSC::MarkedSpace::~MarkedSpace):
+        (JSC::MarkedSpace::resetAllocators):
+        (JSC::MarkedSpace::visitWeakSets):
+        (JSC::MarkedSpace::reapWeakSets):
+        (JSC::VerifyMarked::operator()):
+        (JSC::MarkedSpace::clearMarks):
+        * heap/MarkedSpace.h:
+        (JSC::ClearMarks::operator()):
+        (JSC::ClearRememberedSet::operator()):
+        (JSC::MarkedSpace::didAllocateInBlock):
+        (JSC::MarkedSpace::clearRememberedSet):
+        * heap/SlotVisitor.cpp:
+        (JSC::SlotVisitor::~SlotVisitor):
+        (JSC::SlotVisitor::clearMarkStack):
+        * heap/SlotVisitor.h:
+        (JSC::SlotVisitor::markStack):
+        (JSC::SlotVisitor::sharedData):
+        * heap/SlotVisitorInlines.h:
+        (JSC::SlotVisitor::internalAppend):
+        (JSC::SlotVisitor::unconditionallyAppend):
+        (JSC::SlotVisitor::copyLater):
+        (JSC::SlotVisitor::reportExtraMemoryUsage):
+        (JSC::SlotVisitor::heap):
+        * jit/Repatch.cpp:
+        * runtime/JSGenericTypedArrayViewInlines.h:
+        (JSC::JSGenericTypedArrayView<Adaptor>::visitChildren):
+        * runtime/JSPropertyNameIterator.h:
+        (JSC::StructureRareData::setEnumerationCache):
+        * runtime/JSString.cpp:
+        (JSC::JSString::visitChildren):
+        * runtime/StructureRareDataInlines.h:
+        (JSC::StructureRareData::setPreviousID):
+        (JSC::StructureRareData::setObjectToStringValue):
+        * runtime/WeakMapData.cpp:
+        (JSC::WeakMapData::visitChildren):
+
 2014-01-09  Joseph Pecoraro  <pecoraro@apple.com>
 
         Unreviewed Windows build fix for r161563.
index a2aaa1d..462c062 100644 (file)
@@ -1954,15 +1954,15 @@ void CodeBlock::visitAggregate(SlotVisitor& visitor)
     if (CodeBlock* otherBlock = specialOSREntryBlockOrNull())
         otherBlock->visitAggregate(visitor);
 
-    visitor.reportExtraMemoryUsage(sizeof(CodeBlock));
+    visitor.reportExtraMemoryUsage(ownerExecutable(), sizeof(CodeBlock));
     if (m_jitCode)
-        visitor.reportExtraMemoryUsage(m_jitCode->size());
+        visitor.reportExtraMemoryUsage(ownerExecutable(), m_jitCode->size());
     if (m_instructions.size()) {
         // Divide by refCount() because m_instructions points to something that is shared
         // by multiple CodeBlocks, and we only want to count it towards the heap size once.
         // Having each CodeBlock report only its proportional share of the size is one way
         // of accomplishing this.
-        visitor.reportExtraMemoryUsage(m_instructions.size() * sizeof(Instruction) / m_instructions.refCount());
+        visitor.reportExtraMemoryUsage(ownerExecutable(), m_instructions.size() * sizeof(Instruction) / m_instructions.refCount());
     }
 
     visitor.append(&m_unlinkedCode);
index 2ff11a9..cf62f8e 100644 (file)
@@ -1269,6 +1269,9 @@ inline void CodeBlockSet::mark(void* candidateCodeBlock)
         return;
     
     (*iter)->m_mayBeExecuting = true;
+#if ENABLE(GGC)
+    m_currentlyExecuting.append(static_cast<CodeBlock*>(candidateCodeBlock));
+#endif
 }
 
 } // namespace JSC
index eb63aee..b31b6fc 100644 (file)
@@ -850,6 +850,7 @@ char* JIT_OPERATION operationReallocateButterflyToHavePropertyStorageWithInitial
     NativeCallFrameTracer tracer(&vm, exec);
 
     ASSERT(!object->structure()->outOfLineCapacity());
+    DeferGC deferGC(vm.heap);
     Butterfly* result = object->growOutOfLineStorage(vm, 0, initialOutOfLineCapacity);
     object->setButterflyWithoutChangingStructure(vm, result);
     return reinterpret_cast<char*>(result);
@@ -860,6 +861,7 @@ char* JIT_OPERATION operationReallocateButterflyToGrowPropertyStorage(ExecState*
     VM& vm = exec->vm();
     NativeCallFrameTracer tracer(&vm, exec);
 
+    DeferGC deferGC(vm.heap);
     Butterfly* result = object->growOutOfLineStorage(vm, object->structure()->outOfLineCapacity(), newSize);
     object->setButterflyWithoutChangingStructure(vm, result);
     return reinterpret_cast<char*>(result);
index ae27480..2fc999b 100644 (file)
@@ -45,7 +45,8 @@ CodeBlockSet::~CodeBlockSet()
 
 void CodeBlockSet::add(PassRefPtr<CodeBlock> codeBlock)
 {
-    bool isNewEntry = m_set.add(codeBlock.leakRef()).isNewEntry;
+    CodeBlock* block = codeBlock.leakRef();
+    bool isNewEntry = m_set.add(block).isNewEntry;
     ASSERT_UNUSED(isNewEntry, isNewEntry);
 }
 
@@ -101,9 +102,20 @@ void CodeBlockSet::traceMarked(SlotVisitor& visitor)
         CodeBlock* codeBlock = *iter;
         if (!codeBlock->m_mayBeExecuting)
             continue;
-        codeBlock->visitAggregate(visitor);
+        codeBlock->ownerExecutable()->methodTable()->visitChildren(codeBlock->ownerExecutable(), visitor);
     }
 }
 
+void CodeBlockSet::rememberCurrentlyExecutingCodeBlocks(Heap* heap)
+{
+#if ENABLE(GGC)
+    for (size_t i = 0; i < m_currentlyExecuting.size(); ++i)
+        heap->addToRememberedSet(m_currentlyExecuting[i]->ownerExecutable());
+    m_currentlyExecuting.clear();
+#else
+    UNUSED_PARAM(heap);
+#endif // ENABLE(GGC)
+}
+
 } // namespace JSC
 
index 2e4e606..bb786f0 100644 (file)
 #include <wtf/Noncopyable.h>
 #include <wtf/PassRefPtr.h>
 #include <wtf/RefPtr.h>
+#include <wtf/Vector.h>
 
 namespace JSC {
 
 class CodeBlock;
+class Heap;
 class SlotVisitor;
 
 // CodeBlockSet tracks all CodeBlocks. Every CodeBlock starts out with one
@@ -65,11 +67,16 @@ public:
     // mayBeExecuting.
     void traceMarked(SlotVisitor&);
 
+    // Add all currently executing CodeBlocks to the remembered set to be 
+    // re-scanned during the next collection.
+    void rememberCurrentlyExecutingCodeBlocks(Heap*);
+
 private:
     // This is not a set of RefPtr<CodeBlock> because we need to be able to find
     // arbitrary bogus pointers. I could have written a thingy that had peek types
     // and all, but that seemed like overkill.
     HashSet<CodeBlock* > m_set;
+    Vector<CodeBlock*> m_currentlyExecuting;
 };
 
 } // namespace JSC
index 61996ce..150b4c7 100644 (file)
@@ -42,6 +42,9 @@ inline void CopiedBlock::reportLiveBytes(JSCell* owner, CopyToken token, unsigne
 #endif
     m_liveBytes += bytes;
 
+    if (isPinned())
+        return;
+
     if (!shouldEvacuate()) {
         pin();
         return;
index f0e7722..9601634 100644 (file)
@@ -316,4 +316,17 @@ bool CopiedSpace::isPagedOut(double deadline)
         || isBlockListPagedOut(deadline, &m_oversizeBlocks);
 }
 
+void CopiedSpace::didStartFullCollection()
+{
+    ASSERT(heap()->operationInProgress() == FullCollection);
+
+    ASSERT(m_fromSpace->isEmpty());
+
+    for (CopiedBlock* block = m_toSpace->head(); block; block = block->next())
+        block->didSurviveGC();
+
+    for (CopiedBlock* block = m_oversizeBlocks.head(); block; block = block->next())
+        block->didSurviveGC();
+}
+
 } // namespace JSC
index 65ca04e..5fca45a 100644 (file)
@@ -60,6 +60,8 @@ public:
     
     CopiedAllocator& allocator() { return m_allocator; }
 
+    void didStartFullCollection();
+
     void startedCopying();
     void doneCopying();
     bool isInCopyPhase() { return m_inCopyingPhase; }
@@ -80,6 +82,8 @@ public:
 
     static CopiedBlock* blockFor(void*);
 
+    Heap* heap() const { return m_heap; }
+
 private:
     static bool isOversize(size_t);
 
index 307c30c..7cc3860 100644 (file)
@@ -253,9 +253,11 @@ Heap::Heap(VM* vm, HeapType heapType)
     , m_ramSize(ramSize())
     , m_minBytesPerCycle(minHeapSize(m_heapType, m_ramSize))
     , m_sizeAfterLastCollect(0)
-    , m_bytesAllocatedLimit(m_minBytesPerCycle)
-    , m_bytesAllocated(0)
-    , m_bytesAbandoned(0)
+    , m_bytesAllocatedThisCycle(0)
+    , m_bytesAbandonedThisCycle(0)
+    , m_maxEdenSize(m_minBytesPerCycle)
+    , m_maxHeapSize(m_minBytesPerCycle)
+    , m_shouldDoFullCollection(false)
     , m_totalBytesVisited(0)
     , m_totalBytesCopied(0)
     , m_operationInProgress(NoOperation)
@@ -269,7 +271,7 @@ Heap::Heap(VM* vm, HeapType heapType)
     , m_copyVisitor(m_sharedData)
     , m_handleSet(vm)
     , m_isSafeToCollect(false)
-    , m_writeBarrierBuffer(128)
+    , m_writeBarrierBuffer(256)
     , m_vm(vm)
     , m_lastGCLength(0)
     , m_lastCodeDiscardTime(WTF::monotonicallyIncreasingTime())
@@ -332,8 +334,8 @@ void Heap::reportAbandonedObjectGraph()
 void Heap::didAbandon(size_t bytes)
 {
     if (m_activityCallback)
-        m_activityCallback->didAllocate(m_bytesAllocated + m_bytesAbandoned);
-    m_bytesAbandoned += bytes;
+        m_activityCallback->didAllocate(m_bytesAllocatedThisCycle + m_bytesAbandonedThisCycle);
+    m_bytesAbandonedThisCycle += bytes;
 }
 
 void Heap::protect(JSValue k)
@@ -487,6 +489,9 @@ void Heap::markRoots()
     visitor.setup();
     HeapRootVisitor heapRootVisitor(visitor);
 
+    Vector<const JSCell*> rememberedSet(m_slotVisitor.markStack().size());
+    m_slotVisitor.markStack().fillVector(rememberedSet);
+
     {
         ParallelModeEnabler enabler(visitor);
 
@@ -590,6 +595,14 @@ void Heap::markRoots()
         }
     }
 
+    {
+        GCPHASE(ClearRememberedSet);
+        for (unsigned i = 0; i < rememberedSet.size(); ++i) {
+            const JSCell* cell = rememberedSet[i];
+            MarkedBlock::blockFor(cell)->clearRemembered(cell);
+        }
+    }
+
     GCCOUNTER(VisitedValueCount, visitor.visitCount());
 
     m_sharedData.didFinishMarking();
@@ -601,8 +614,14 @@ void Heap::markRoots()
     MARK_LOG_MESSAGE2("\nNumber of live Objects after full GC %lu, took %.6f secs\n", visitCount, WTF::monotonicallyIncreasingTime() - gcStartTime);
 #endif
 
-    m_totalBytesVisited = visitor.bytesVisited();
-    m_totalBytesCopied = visitor.bytesCopied();
+    if (m_operationInProgress == EdenCollection) {
+        m_totalBytesVisited += visitor.bytesVisited();
+        m_totalBytesCopied += visitor.bytesCopied();
+    } else {
+        ASSERT(m_operationInProgress == FullCollection);
+        m_totalBytesVisited = visitor.bytesVisited();
+        m_totalBytesCopied = visitor.bytesCopied();
+    }
 #if ENABLE(PARALLEL_GC)
     m_totalBytesVisited += m_sharedData.childBytesVisited();
     m_totalBytesCopied += m_sharedData.childBytesCopied();
@@ -615,8 +634,12 @@ void Heap::markRoots()
     m_sharedData.reset();
 }
 
+template <HeapOperation collectionType>
 void Heap::copyBackingStores()
 {
+    if (collectionType == EdenCollection)
+        return;
+
     m_storageSpace.startedCopying();
     if (m_storageSpace.shouldDoCopyPhase()) {
         m_sharedData.didStartCopying();
@@ -627,7 +650,7 @@ void Heap::copyBackingStores()
         // before signaling that the phase is complete.
         m_storageSpace.doneCopying();
         m_sharedData.didFinishCopying();
-    } else 
+    } else
         m_storageSpace.doneCopying();
 }
 
@@ -723,11 +746,22 @@ void Heap::deleteUnmarkedCompiledCode()
     m_jitStubRoutines.deleteUnmarkedJettisonedStubRoutines();
 }
 
+void Heap::addToRememberedSet(const JSCell* cell)
+{
+    ASSERT(cell);
+    ASSERT(!Options::enableConcurrentJIT() || !isCompilationThread());
+    if (isInRememberedSet(cell))
+        return;
+    MarkedBlock::blockFor(cell)->setRemembered(cell);
+    m_slotVisitor.unconditionallyAppend(const_cast<JSCell*>(cell));
+}
+
 void Heap::collectAllGarbage()
 {
     if (!m_isSafeToCollect)
         return;
 
+    m_shouldDoFullCollection = true;
     collect();
 
     SamplingRegion samplingRegion("Garbage Collection: Sweeping");
@@ -764,9 +798,28 @@ void Heap::collect()
         RecursiveAllocationScope scope(*this);
         m_vm->prepareToDiscardCode();
     }
-    
-    m_operationInProgress = Collection;
-    m_extraMemoryUsage = 0;
+
+    bool isFullCollection = m_shouldDoFullCollection;
+    if (isFullCollection) {
+        m_operationInProgress = FullCollection;
+        m_slotVisitor.clearMarkStack();
+        m_shouldDoFullCollection = false;
+        if (Options::logGC())
+            dataLog("FullCollection, ");
+    } else {
+#if ENABLE(GGC)
+        m_operationInProgress = EdenCollection;
+        if (Options::logGC())
+            dataLog("EdenCollection, ");
+#else
+        m_operationInProgress = FullCollection;
+        m_slotVisitor.clearMarkStack();
+        if (Options::logGC())
+            dataLog("FullCollection, ");
+#endif
+    }
+    if (m_operationInProgress == FullCollection)
+        m_extraMemoryUsage = 0;
 
     if (m_activityCallback)
         m_activityCallback->willCollect();
@@ -780,6 +833,16 @@ void Heap::collect()
     {
         GCPHASE(StopAllocation);
         m_objectSpace.stopAllocating();
+        if (m_operationInProgress == FullCollection)
+            m_storageSpace.didStartFullCollection();
+    }
+
+    {
+        GCPHASE(FlushWriteBarrierBuffer);
+        if (m_operationInProgress == EdenCollection)
+            m_writeBarrierBuffer.flush(*this);
+        else
+            m_writeBarrierBuffer.reset();
     }
 
     markRoots();
@@ -796,13 +859,16 @@ void Heap::collect()
         m_arrayBuffers.sweep();
     }
 
-    {
+    if (m_operationInProgress == FullCollection) {
         m_blockSnapshot.resize(m_objectSpace.blocks().set().size());
         MarkedBlockSnapshotFunctor functor(m_blockSnapshot);
         m_objectSpace.forEachBlock(functor);
     }
 
-    copyBackingStores();
+    if (m_operationInProgress == FullCollection)
+        copyBackingStores<FullCollection>();
+    else
+        copyBackingStores<EdenCollection>();
 
     {
         GCPHASE(FinalizeUnconditionalFinalizers);
@@ -819,8 +885,15 @@ void Heap::collect()
         m_vm->clearSourceProviderCaches();
     }
 
-    m_sweeper->startSweeping(m_blockSnapshot);
-    m_bytesAbandoned = 0;
+    if (m_operationInProgress == FullCollection)
+        m_sweeper->startSweeping(m_blockSnapshot);
+
+    {
+        GCPHASE(AddCurrentlyExecutingCodeBlocksToRememberedSet);
+        m_codeBlocks.rememberCurrentlyExecutingCodeBlocks(this);
+    }
+
+    m_bytesAbandonedThisCycle = 0;
 
     {
         GCPHASE(ResetAllocators);
@@ -831,21 +904,32 @@ void Heap::collect()
     if (Options::gcMaxHeapSize() && currentHeapSize > Options::gcMaxHeapSize())
         HeapStatistics::exitWithFailure();
 
-    m_sizeAfterLastCollect = currentHeapSize;
+    if (m_operationInProgress == FullCollection) {
+        // To avoid pathological GC churn in very small and very large heaps, we set
+        // the new allocation limit based on the current size of the heap, with a
+        // fixed minimum.
+        m_maxHeapSize = max(minHeapSize(m_heapType, m_ramSize), proportionalHeapSize(currentHeapSize, m_ramSize));
+        m_maxEdenSize = m_maxHeapSize - currentHeapSize;
+    } else {
+        ASSERT(currentHeapSize >= m_sizeAfterLastCollect);
+        m_maxEdenSize = m_maxHeapSize - currentHeapSize;
+        double edenToOldGenerationRatio = (double)m_maxEdenSize / (double)m_maxHeapSize;
+        double minEdenToOldGenerationRatio = 1.0 / 3.0;
+        if (edenToOldGenerationRatio < minEdenToOldGenerationRatio)
+            m_shouldDoFullCollection = true;
+        m_maxHeapSize += currentHeapSize - m_sizeAfterLastCollect;
+        m_maxEdenSize = m_maxHeapSize - currentHeapSize;
+    }
 
-    // To avoid pathological GC churn in very small and very large heaps, we set
-    // the new allocation limit based on the current size of the heap, with a
-    // fixed minimum.
-    size_t maxHeapSize = max(minHeapSize(m_heapType, m_ramSize), proportionalHeapSize(currentHeapSize, m_ramSize));
-    m_bytesAllocatedLimit = maxHeapSize - currentHeapSize;
+    m_sizeAfterLastCollect = currentHeapSize;
 
-    m_bytesAllocated = 0;
+    m_bytesAllocatedThisCycle = 0;
     double lastGCEndTime = WTF::monotonicallyIncreasingTime();
     m_lastGCLength = lastGCEndTime - lastGCStartTime;
 
     if (Options::recordGCPauseTimes())
         HeapStatistics::recordGCPauseTime(lastGCStartTime, lastGCEndTime);
-    RELEASE_ASSERT(m_operationInProgress == Collection);
+    RELEASE_ASSERT(m_operationInProgress == EdenCollection || m_operationInProgress == FullCollection);
 
     m_operationInProgress = NoOperation;
     JAVASCRIPTCORE_GC_END();
@@ -863,10 +947,6 @@ void Heap::collect()
         double after = currentTimeMS();
         dataLog(after - before, " ms, ", currentHeapSize / 1024, " kb]\n");
     }
-
-#if ENABLE(ALLOCATION_LOGGING)
-    dataLogF("JSC GC finishing collection.\n");
-#endif
 }
 
 bool Heap::collectIfNecessaryOrDefer()
@@ -916,8 +996,8 @@ void Heap::setGarbageCollectionTimerEnabled(bool enable)
 void Heap::didAllocate(size_t bytes)
 {
     if (m_activityCallback)
-        m_activityCallback->didAllocate(m_bytesAllocated + m_bytesAbandoned);
-    m_bytesAllocated += bytes;
+        m_activityCallback->didAllocate(m_bytesAllocatedThisCycle + m_bytesAbandonedThisCycle);
+    m_bytesAllocatedThisCycle += bytes;
 }
 
 bool Heap::isValidAllocation(size_t)
@@ -994,6 +1074,15 @@ void Heap::decrementDeferralDepthAndGCIfNeeded()
     collectIfNecessaryOrDefer();
 }
 
+void Heap::writeBarrier(const JSCell* from)
+{
+    ASSERT_GC_OBJECT_LOOKS_VALID(const_cast<JSCell*>(from));
+    if (!from || !isMarked(from))
+        return;
+    Heap* heap = Heap::heap(from);
+    heap->addToRememberedSet(from);
+}
+
 void Heap::flushWriteBarrierBuffer(JSCell* cell)
 {
 #if ENABLE(GGC)
index ba4e801..ab580aa 100644 (file)
@@ -94,11 +94,17 @@ namespace JSC {
         static bool testAndSetMarked(const void*);
         static void setMarked(const void*);
 
+        JS_EXPORT_PRIVATE void addToRememberedSet(const JSCell*);
+        bool isInRememberedSet(const JSCell* cell) const
+        {
+            ASSERT(cell);
+            ASSERT(!Options::enableConcurrentJIT() || !isCompilationThread());
+            return MarkedBlock::blockFor(cell)->isRemembered(cell);
+        }
         static bool isWriteBarrierEnabled();
-        static void writeBarrier(const JSCell*);
+        JS_EXPORT_PRIVATE static void writeBarrier(const JSCell*);
         static void writeBarrier(const JSCell*, JSValue);
         static void writeBarrier(const JSCell*, JSCell*);
-        static uint8_t* addressOfCardFor(JSCell*);
 
         WriteBarrierBuffer& writeBarrierBuffer() { return m_writeBarrierBuffer; }
         void flushWriteBarrierBuffer(JSCell*);
@@ -120,6 +126,7 @@ namespace JSC {
 
         // true if collection is in progress
         inline bool isCollecting();
+        inline HeapOperation operationInProgress() { return m_operationInProgress; }
         // true if an allocation or collection is in progress
         inline bool isBusy();
         
@@ -236,6 +243,7 @@ namespace JSC {
         void markRoots();
         void markProtectedObjects(HeapRootVisitor&);
         void markTempSortVectors(HeapRootVisitor&);
+        template <HeapOperation collectionType>
         void copyBackingStores();
         void harvestWeakReferences();
         void finalizeUnconditionalFinalizers();
@@ -257,10 +265,11 @@ namespace JSC {
         const size_t m_minBytesPerCycle;
         size_t m_sizeAfterLastCollect;
 
-        size_t m_bytesAllocatedLimit;
-        size_t m_bytesAllocated;
-        size_t m_bytesAbandoned;
-
+        size_t m_bytesAllocatedThisCycle;
+        size_t m_bytesAbandonedThisCycle;
+        size_t m_maxEdenSize;
+        size_t m_maxHeapSize;
+        bool m_shouldDoFullCollection;
         size_t m_totalBytesVisited;
         size_t m_totalBytesCopied;
         
@@ -271,6 +280,8 @@ namespace JSC {
         GCIncomingRefCountedSet<ArrayBuffer> m_arrayBuffers;
         size_t m_extraMemoryUsage;
 
+        HashSet<const JSCell*> m_copyingRememberedSet;
+
         ProtectCountSet m_protectedValues;
         Vector<Vector<ValueStringPair, 0, UnsafeVectorOverflow>* > m_tempSortingVectors;
         OwnPtr<HashSet<MarkedArgumentBuffer*>> m_markListSet;
@@ -322,8 +333,8 @@ namespace JSC {
         if (isDeferred())
             return false;
         if (Options::gcMaxHeapSize())
-            return m_bytesAllocated > Options::gcMaxHeapSize() && m_isSafeToCollect && m_operationInProgress == NoOperation;
-        return m_bytesAllocated > m_bytesAllocatedLimit && m_isSafeToCollect && m_operationInProgress == NoOperation;
+            return m_bytesAllocatedThisCycle > Options::gcMaxHeapSize() && m_isSafeToCollect && m_operationInProgress == NoOperation;
+        return m_bytesAllocatedThisCycle > m_maxEdenSize && m_isSafeToCollect && m_operationInProgress == NoOperation;
     }
 
     bool Heap::isBusy()
@@ -333,7 +344,7 @@ namespace JSC {
 
     bool Heap::isCollecting()
     {
-        return m_operationInProgress == Collection;
+        return m_operationInProgress == FullCollection || m_operationInProgress == EdenCollection;
     }
 
     inline Heap* Heap::heap(const JSCell* cell)
@@ -370,26 +381,33 @@ namespace JSC {
 
     inline bool Heap::isWriteBarrierEnabled()
     {
-#if ENABLE(WRITE_BARRIER_PROFILING)
+#if ENABLE(WRITE_BARRIER_PROFILING) || ENABLE(GGC)
         return true;
 #else
         return false;
 #endif
     }
 
-    inline void Heap::writeBarrier(const JSCell*)
-    {
-        WriteBarrierCounters::countWriteBarrier();
-    }
-
-    inline void Heap::writeBarrier(const JSCell*, JSCell*)
+    inline void Heap::writeBarrier(const JSCell* from, JSCell* to)
     {
+#if ENABLE(WRITE_BARRIER_PROFILING)
         WriteBarrierCounters::countWriteBarrier();
+#endif
+        if (!from || !isMarked(from))
+            return;
+        if (!to || isMarked(to))
+            return;
+        Heap::heap(from)->addToRememberedSet(from);
     }
 
-    inline void Heap::writeBarrier(const JSCell*, JSValue)
+    inline void Heap::writeBarrier(const JSCell* from, JSValue to)
     {
+#if ENABLE(WRITE_BARRIER_PROFILING)
         WriteBarrierCounters::countWriteBarrier();
+#endif
+        if (!to.isCell())
+            return;
+        writeBarrier(from, to.asCell());
     }
 
     inline void Heap::reportExtraMemoryCost(size_t cost)
index 8f0a023..769127e 100644 (file)
@@ -28,7 +28,7 @@
 
 namespace JSC {
 
-enum HeapOperation { NoOperation, Allocation, Collection };
+enum HeapOperation { NoOperation, Allocation, FullCollection, EdenCollection };
 
 } // namespace JSC
 
index 39907c7..688de42 100644 (file)
@@ -57,8 +57,29 @@ MarkStackArray::MarkStackArray(BlockAllocator& blockAllocator)
 
 MarkStackArray::~MarkStackArray()
 {
-    ASSERT(m_numberOfSegments == 1 && m_segments.size() == 1);
+    ASSERT(m_numberOfSegments == 1);
+    ASSERT(m_segments.size() == 1);
     m_blockAllocator.deallocate(MarkStackSegment::destroy(m_segments.removeHead()));
+    m_numberOfSegments--;
+    ASSERT(!m_numberOfSegments);
+    ASSERT(!m_segments.size());
+}
+
+void MarkStackArray::clear()
+{
+    if (!m_segments.head())
+        return;
+    MarkStackSegment* next;
+    for (MarkStackSegment* current = m_segments.head(); current->next(); current = next) {
+        next = current->next();
+        m_segments.remove(current);
+        m_blockAllocator.deallocate(MarkStackSegment::destroy(current));
+    }
+    m_top = 0;
+    m_numberOfSegments = 1;
+#if !ASSERT_DISABLED
+    m_segments.head()->m_top = 0;
+#endif
 }
 
 void MarkStackArray::expand()
@@ -167,4 +188,28 @@ void MarkStackArray::stealSomeCellsFrom(MarkStackArray& other, size_t idleThread
         append(other.removeLast());
 }
 
+void MarkStackArray::fillVector(Vector<const JSCell*>& vector)
+{
+    ASSERT(vector.size() == size());
+
+    MarkStackSegment* currentSegment = m_segments.head();
+    if (!currentSegment)
+        return;
+
+    unsigned count = 0;
+    for (unsigned i = 0; i < m_top; ++i) {
+        ASSERT(currentSegment->data()[i]);
+        vector[count++] = currentSegment->data()[i];
+    }
+
+    currentSegment = currentSegment->next();
+    while (currentSegment) {
+        for (unsigned i = 0; i < s_segmentCapacity; ++i) {
+            ASSERT(currentSegment->data()[i]);
+            vector[count++] = currentSegment->data()[i];
+        }
+        currentSegment = currentSegment->next();
+    }
+}
+
 } // namespace JSC
index c97b6a7..6729bad 100644 (file)
@@ -52,6 +52,7 @@
 
 #include "HeapBlock.h"
 #include <wtf/StdLibExtras.h>
+#include <wtf/Vector.h>
 
 namespace JSC {
 
@@ -100,6 +101,9 @@ public:
     size_t size();
     bool isEmpty();
 
+    void fillVector(Vector<const JSCell*>&);
+    void clear();
+
 private:
     template <size_t size> struct CapacityFromSize {
         static const size_t value = (size - sizeof(MarkStackSegment)) / sizeof(const JSCell*);
index 7440208..c2b0f72 100644 (file)
 
 namespace JSC {
 
-bool MarkedAllocator::isPagedOut(double deadline)
+static bool isListPagedOut(double deadline, DoublyLinkedList<MarkedBlock>& list)
 {
     unsigned itersSinceLastTimeCheck = 0;
-    MarkedBlock* block = m_blockList.head();
+    MarkedBlock* block = list.head();
     while (block) {
         block = block->next();
         ++itersSinceLastTimeCheck;
@@ -24,7 +24,13 @@ bool MarkedAllocator::isPagedOut(double deadline)
             itersSinceLastTimeCheck = 0;
         }
     }
+    return false;
+}
 
+bool MarkedAllocator::isPagedOut(double deadline)
+{
+    if (isListPagedOut(deadline, m_blockList))
+        return true;
     return false;
 }
 
@@ -36,15 +42,23 @@ inline void* MarkedAllocator::tryAllocateHelper(size_t bytes)
     while (!m_freeList.head) {
         DelayedReleaseScope delayedReleaseScope(*m_markedSpace);
         if (m_currentBlock) {
-            ASSERT(m_currentBlock == m_blocksToSweep);
+            ASSERT(m_currentBlock == m_nextBlockToSweep);
             m_currentBlock->didConsumeFreeList();
-            m_blocksToSweep = m_currentBlock->next();
+            m_nextBlockToSweep = m_currentBlock->next();
         }
 
-        for (MarkedBlock*& block = m_blocksToSweep; block; block = block->next()) {
+        MarkedBlock* next;
+        for (MarkedBlock*& block = m_nextBlockToSweep; block; block = next) {
+            next = block->next();
+
             MarkedBlock::FreeList freeList = block->sweep(MarkedBlock::SweepToFreeList);
+            
             if (!freeList.head) {
                 block->didConsumeEmptyFreeList();
+                m_blockList.remove(block);
+                m_blockList.push(block);
+                if (!m_lastFullBlock)
+                    m_lastFullBlock = block;
                 continue;
             }
 
@@ -68,6 +82,7 @@ inline void* MarkedAllocator::tryAllocateHelper(size_t bytes)
     MarkedBlock::FreeCell* head = m_freeList.head;
     m_freeList.head = head->next;
     ASSERT(head);
+    m_markedSpace->didAllocateInBlock(m_currentBlock);
     return head;
 }
     
@@ -136,7 +151,7 @@ void MarkedAllocator::addBlock(MarkedBlock* block)
     ASSERT(!m_freeList.head);
     
     m_blockList.append(block);
-    m_blocksToSweep = m_currentBlock = block;
+    m_nextBlockToSweep = m_currentBlock = block;
     m_freeList = block->sweep(MarkedBlock::SweepToFreeList);
     m_markedSpace->didAddBlock(block);
 }
@@ -147,9 +162,27 @@ void MarkedAllocator::removeBlock(MarkedBlock* block)
         m_currentBlock = m_currentBlock->next();
         m_freeList = MarkedBlock::FreeList();
     }
-    if (m_blocksToSweep == block)
-        m_blocksToSweep = m_blocksToSweep->next();
+    if (m_nextBlockToSweep == block)
+        m_nextBlockToSweep = m_nextBlockToSweep->next();
+
+    if (block == m_lastFullBlock)
+        m_lastFullBlock = m_lastFullBlock->prev();
+    
     m_blockList.remove(block);
 }
 
+void MarkedAllocator::reset()
+{
+    m_lastActiveBlock = 0;
+    m_currentBlock = 0;
+    m_freeList = MarkedBlock::FreeList();
+    if (m_heap->operationInProgress() == FullCollection)
+        m_lastFullBlock = 0;
+
+    if (m_lastFullBlock)
+        m_nextBlockToSweep = m_lastFullBlock->next() ? m_lastFullBlock->next() : m_lastFullBlock;
+    else
+        m_nextBlockToSweep = m_blockList.head();
+}
+
 } // namespace JSC
index 3a629c3..e0d3e89 100644 (file)
@@ -52,7 +52,8 @@ private:
     MarkedBlock::FreeList m_freeList;
     MarkedBlock* m_currentBlock;
     MarkedBlock* m_lastActiveBlock;
-    MarkedBlock* m_blocksToSweep;
+    MarkedBlock* m_nextBlockToSweep;
+    MarkedBlock* m_lastFullBlock;
     DoublyLinkedList<MarkedBlock> m_blockList;
     size_t m_cellSize;
     MarkedBlock::DestructorType m_destructorType;
@@ -68,7 +69,8 @@ inline ptrdiff_t MarkedAllocator::offsetOfFreeListHead()
 inline MarkedAllocator::MarkedAllocator()
     : m_currentBlock(0)
     , m_lastActiveBlock(0)
-    , m_blocksToSweep(0)
+    , m_nextBlockToSweep(0)
+    , m_lastFullBlock(0)
     , m_cellSize(0)
     , m_destructorType(MarkedBlock::None)
     , m_heap(0)
@@ -102,14 +104,6 @@ inline void* MarkedAllocator::allocate(size_t bytes)
     return head;
 }
 
-inline void MarkedAllocator::reset()
-{
-    m_lastActiveBlock = 0;
-    m_currentBlock = 0;
-    m_freeList = MarkedBlock::FreeList();
-    m_blocksToSweep = m_blockList.head();
-}
-
 inline void MarkedAllocator::stopAllocating()
 {
     ASSERT(!m_lastActiveBlock);
index 1085804..34a0931 100644 (file)
@@ -197,6 +197,45 @@ void MarkedBlock::stopAllocating(const FreeList& freeList)
     m_state = Marked;
 }
 
+void MarkedBlock::clearMarks()
+{
+    if (heap()->operationInProgress() == JSC::EdenCollection)
+        this->clearMarksWithCollectionType<EdenCollection>();
+    else
+        this->clearMarksWithCollectionType<FullCollection>();
+}
+
+void MarkedBlock::clearRememberedSet()
+{
+    m_rememberedSet.clearAll();
+}
+
+template <HeapOperation collectionType>
+void MarkedBlock::clearMarksWithCollectionType()
+{
+    ASSERT(collectionType == FullCollection || collectionType == EdenCollection);
+    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
+
+    ASSERT(m_state != New && m_state != FreeListed);
+    if (collectionType == FullCollection) {
+        m_marks.clearAll();
+        m_rememberedSet.clearAll();
+    }
+
+    // This will become true at the end of the mark phase. We set it now to
+    // avoid an extra pass to do so later.
+    m_state = Marked;
+}
+
+void MarkedBlock::lastChanceToFinalize()
+{
+    m_weakSet.lastChanceToFinalize();
+
+    clearNewlyAllocated();
+    clearMarksWithCollectionType<FullCollection>();
+    sweep();
+}
+
 MarkedBlock::FreeList MarkedBlock::resumeAllocating()
 {
     HEAP_LOG_BLOCK_STATE_TRANSITION(this);
index 2f1bfbd..73f56cd 100644 (file)
@@ -25,6 +25,7 @@
 #include "BlockAllocator.h"
 #include "HeapBlock.h"
 
+#include "HeapOperation.h"
 #include "WeakSet.h"
 #include <wtf/Bitmap.h>
 #include <wtf/DataLog.h>
@@ -72,7 +73,7 @@ namespace JSC {
         friend class LLIntOffsetsExtractor;
 
     public:
-        static const size_t atomSize = 8; // bytes
+        static const size_t atomSize = 16; // bytes
         static const size_t atomShiftAmount = 4; // log_2(atomSize) FIXME: Change atomSize to 16.
         static const size_t blockSize = 64 * KB;
         static const size_t blockMask = ~(blockSize - 1); // blockSize must be a power of two.
@@ -140,11 +141,16 @@ namespace JSC {
         void stopAllocating(const FreeList&);
         FreeList resumeAllocating(); // Call this if you canonicalized a block for some non-collection related purpose.
         void didConsumeEmptyFreeList(); // Call this if you sweep a block, but the returned FreeList is empty.
+        void didSweepToNoAvail(); // Call this if you sweep a block and get an empty free list back.
 
         // Returns true if the "newly allocated" bitmap was non-null 
         // and was successfully cleared and false otherwise.
         bool clearNewlyAllocated();
         void clearMarks();
+        void clearRememberedSet();
+        template <HeapOperation collectionType>
+        void clearMarksWithCollectionType();
+
         size_t markCount();
         bool isEmpty();
 
@@ -161,6 +167,11 @@ namespace JSC {
         void setMarked(const void*);
         void clearMarked(const void*);
 
+        void setRemembered(const void*);
+        void clearRemembered(const void*);
+        void atomicClearRemembered(const void*);
+        bool isRemembered(const void*);
+
         bool isNewlyAllocated(const void*);
         void setNewlyAllocated(const void*);
         void clearNewlyAllocated(const void*);
@@ -190,9 +201,11 @@ namespace JSC {
         size_t m_atomsPerCell;
         size_t m_endAtom; // This is a fuzzy end. Always test for < m_endAtom.
 #if ENABLE(PARALLEL_GC)
-        WTF::Bitmap<atomsPerBlock, WTF::BitmapAtomic> m_marks;
+        WTF::Bitmap<atomsPerBlock, WTF::BitmapAtomic, uint8_t> m_marks;
+        WTF::Bitmap<atomsPerBlock, WTF::BitmapAtomic, uint8_t> m_rememberedSet;
 #else
-        WTF::Bitmap<atomsPerBlock, WTF::BitmapNotAtomic> m_marks;
+        WTF::Bitmap<atomsPerBlock, WTF::BitmapNotAtomic, uint8_t> m_marks;
+        WTF::Bitmap<atomsPerBlock, WTF::BitmapNotAtomic, uint8_t> m_rememberedSet;
 #endif
         OwnPtr<WTF::Bitmap<atomsPerBlock>> m_newlyAllocated;
 
@@ -234,15 +247,6 @@ namespace JSC {
         return reinterpret_cast<MarkedBlock*>(reinterpret_cast<Bits>(p) & blockMask);
     }
 
-    inline void MarkedBlock::lastChanceToFinalize()
-    {
-        m_weakSet.lastChanceToFinalize();
-
-        clearNewlyAllocated();
-        clearMarks();
-        sweep();
-    }
-
     inline MarkedAllocator* MarkedBlock::allocator() const
     {
         return m_allocator;
@@ -291,26 +295,10 @@ namespace JSC {
         HEAP_LOG_BLOCK_STATE_TRANSITION(this);
 
         ASSERT(!m_newlyAllocated);
-#ifndef NDEBUG
-        for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell)
-            ASSERT(m_marks.get(i));
-#endif
         ASSERT(m_state == FreeListed);
         m_state = Marked;
     }
 
-    inline void MarkedBlock::clearMarks()
-    {
-        HEAP_LOG_BLOCK_STATE_TRANSITION(this);
-
-        ASSERT(m_state != New && m_state != FreeListed);
-        m_marks.clearAll();
-
-        // This will become true at the end of the mark phase. We set it now to
-        // avoid an extra pass to do so later.
-        m_state = Marked;
-    }
-
     inline size_t MarkedBlock::markCount()
     {
         return m_marks.count();
@@ -346,6 +334,26 @@ namespace JSC {
         return (reinterpret_cast<Bits>(p) - reinterpret_cast<Bits>(this)) / atomSize;
     }
 
+    inline void MarkedBlock::setRemembered(const void* p)
+    {
+        m_rememberedSet.set(atomNumber(p));
+    }
+
+    inline void MarkedBlock::clearRemembered(const void* p)
+    {
+        m_rememberedSet.clear(atomNumber(p));
+    }
+
+    inline void MarkedBlock::atomicClearRemembered(const void* p)
+    {
+        m_rememberedSet.concurrentTestAndClear(atomNumber(p));
+    }
+
+    inline bool MarkedBlock::isRemembered(const void* p)
+    {
+        return m_rememberedSet.get(atomNumber(p));
+    }
+
     inline bool MarkedBlock::isMarked(const void* p)
     {
         return m_marks.get(atomNumber(p));
index 48648d2..4deca13 100644 (file)
@@ -105,6 +105,7 @@ MarkedSpace::~MarkedSpace()
 {
     Free free(Free::FreeAll, this);
     forEachBlock(free);
+    ASSERT(!m_blocks.set().size());
 }
 
 struct LastChanceToFinalize : MarkedBlock::VoidFunctor {
@@ -143,17 +144,27 @@ void MarkedSpace::resetAllocators()
     m_normalSpace.largeAllocator.reset();
     m_normalDestructorSpace.largeAllocator.reset();
     m_immortalStructureDestructorSpace.largeAllocator.reset();
+
+    m_blocksWithNewObjects.clear();
 }
 
 void MarkedSpace::visitWeakSets(HeapRootVisitor& heapRootVisitor)
 {
     VisitWeakSet visitWeakSet(heapRootVisitor);
-    forEachBlock(visitWeakSet);
+    if (m_heap->operationInProgress() == EdenCollection) {
+        for (unsigned i = 0; i < m_blocksWithNewObjects.size(); ++i)
+            visitWeakSet(m_blocksWithNewObjects[i]);
+    } else
+        forEachBlock(visitWeakSet);
 }
 
 void MarkedSpace::reapWeakSets()
 {
-    forEachBlock<ReapWeakSet>();
+    if (m_heap->operationInProgress() == EdenCollection) {
+        for (unsigned i = 0; i < m_blocksWithNewObjects.size(); ++i)
+            m_blocksWithNewObjects[i]->reapWeakSet();
+    } else
+        forEachBlock<ReapWeakSet>();
 }
 
 template <typename Functor>
@@ -305,6 +316,24 @@ void MarkedSpace::clearNewlyAllocated()
 #endif
 }
 
+#ifndef NDEBUG
+struct VerifyMarked : MarkedBlock::VoidFunctor {
+    void operator()(MarkedBlock* block) { ASSERT(block->needsSweeping()); }
+};
+#endif
+
+void MarkedSpace::clearMarks()
+{
+    if (m_heap->operationInProgress() == EdenCollection) {
+        for (unsigned i = 0; i < m_blocksWithNewObjects.size(); ++i)
+            m_blocksWithNewObjects[i]->clearMarks();
+    } else
+        forEachBlock<ClearMarks>();
+#ifndef NDEBUG
+    forEachBlock<VerifyMarked>();
+#endif
+}
+
 void MarkedSpace::willStartIterating()
 {
     ASSERT(!isIterating());
index 9680670..9c97fbd 100644 (file)
@@ -46,7 +46,17 @@ class WeakGCHandle;
 class SlotVisitor;
 
 struct ClearMarks : MarkedBlock::VoidFunctor {
-    void operator()(MarkedBlock* block) { block->clearMarks(); }
+    void operator()(MarkedBlock* block)
+    {
+        block->clearMarks();
+    }
+};
+
+struct ClearRememberedSet : MarkedBlock::VoidFunctor {
+    void operator()(MarkedBlock* block)
+    {
+        block->clearRememberedSet();
+    }
 };
 
 struct Sweep : MarkedBlock::VoidFunctor {
@@ -105,8 +115,10 @@ public:
 
     void didAddBlock(MarkedBlock*);
     void didConsumeFreeList(MarkedBlock*);
+    void didAllocateInBlock(MarkedBlock*);
 
     void clearMarks();
+    void clearRememberedSet();
     void clearNewlyAllocated();
     void sweep();
     size_t objectCount();
@@ -150,6 +162,7 @@ private:
     size_t m_capacity;
     bool m_isIterating;
     MarkedBlockSet m_blocks;
+    Vector<MarkedBlock*> m_blocksWithNewObjects;
 
     DelayedReleaseScope* m_currentDelayedReleaseScope;
 };
@@ -262,9 +275,14 @@ inline void MarkedSpace::didAddBlock(MarkedBlock* block)
     m_blocks.add(block);
 }
 
-inline void MarkedSpace::clearMarks()
+inline void MarkedSpace::didAllocateInBlock(MarkedBlock* block)
+{
+    m_blocksWithNewObjects.append(block);
+}
+
+inline void MarkedSpace::clearRememberedSet()
 {
-    forEachBlock<ClearMarks>();
+    forEachBlock<ClearRememberedSet>();
 }
 
 inline size_t MarkedSpace::objectCount()
index cda2b79..05fb001 100644 (file)
@@ -33,7 +33,7 @@ SlotVisitor::SlotVisitor(GCThreadSharedData& shared)
 
 SlotVisitor::~SlotVisitor()
 {
-    ASSERT(m_stack.isEmpty());
+    clearMarkStack();
 }
 
 void SlotVisitor::setup()
@@ -63,6 +63,11 @@ void SlotVisitor::reset()
     }
 }
 
+void SlotVisitor::clearMarkStack()
+{
+    m_stack.clear();
+}
+
 void SlotVisitor::append(ConservativeRoots& conservativeRoots)
 {
     StackStats::probe();
index a4aacdc..4a8dc3e 100644 (file)
@@ -49,6 +49,10 @@ public:
     SlotVisitor(GCThreadSharedData&);
     ~SlotVisitor();
 
+    MarkStackArray& markStack() { return m_stack; }
+
+    Heap* heap() const;
+
     void append(ConservativeRoots&);
     
     template<typename T> void append(JITWriteBarrier<T>*);
@@ -61,17 +65,19 @@ public:
     void appendUnbarrieredValue(JSValue*);
     template<typename T>
     void appendUnbarrieredWeak(Weak<T>*);
+    void unconditionallyAppend(JSCell*);
     
     void addOpaqueRoot(void*);
     bool containsOpaqueRoot(void*);
     TriState containsOpaqueRootTriState(void*);
     int opaqueRootCount();
 
-    GCThreadSharedData& sharedData() { return m_shared; }
+    GCThreadSharedData& sharedData() const { return m_shared; }
     bool isEmpty() { return m_stack.isEmpty(); }
 
     void setup();
     void reset();
+    void clearMarkStack();
 
     size_t bytesVisited() const { return m_bytesVisited; }
     size_t bytesCopied() const { return m_bytesCopied; }
@@ -89,7 +95,7 @@ public:
 
     void copyLater(JSCell*, CopyToken, void*, size_t);
     
-    void reportExtraMemoryUsage(size_t size);
+    void reportExtraMemoryUsage(JSCell* owner, size_t);
     
     void addWeakReferenceHarvester(WeakReferenceHarvester*);
     void addUnconditionalFinalizer(UnconditionalFinalizer*);
index d503d1c..cd63ab5 100644 (file)
@@ -105,6 +105,14 @@ ALWAYS_INLINE void SlotVisitor::internalAppend(void* from, JSCell* cell)
         
     MARK_LOG_CHILD(*this, cell);
 
+    unconditionallyAppend(cell);
+}
+
+ALWAYS_INLINE void SlotVisitor::unconditionallyAppend(JSCell* cell)
+{
+    ASSERT(Heap::isMarked(cell));
+    m_visitCount++;
+        
     // Should never attempt to mark something that is zapped.
     ASSERT(!cell->isZapped());
         
@@ -218,6 +226,9 @@ inline void SlotVisitor::donateAndDrain()
 inline void SlotVisitor::copyLater(JSCell* owner, CopyToken token, void* ptr, size_t bytes)
 {
     ASSERT(bytes);
+    // We don't do any copying during EdenCollections.
+    ASSERT(heap()->operationInProgress() != EdenCollection);
+
     m_bytesCopied += bytes;
 
     CopiedBlock* block = CopiedSpace::blockFor(ptr);
@@ -226,14 +237,15 @@ inline void SlotVisitor::copyLater(JSCell* owner, CopyToken token, void* ptr, si
         return;
     }
 
-    if (block->isPinned())
-        return;
-
     block->reportLiveBytes(owner, token, bytes);
 }
     
-inline void SlotVisitor::reportExtraMemoryUsage(size_t size)
+inline void SlotVisitor::reportExtraMemoryUsage(JSCell* owner, size_t size)
 {
+    // We don't want to double-count the extra memory that was reported in previous collections.
+    if (heap()->operationInProgress() == EdenCollection && MarkedBlock::blockFor(owner)->isRemembered(owner))
+        return;
+
     size_t* counter = &m_shared.m_vm->heap.m_extraMemoryUsage;
     
 #if ENABLE(COMPARE_AND_SWAP)
@@ -247,6 +259,11 @@ inline void SlotVisitor::reportExtraMemoryUsage(size_t size)
 #endif
 }
 
+inline Heap* SlotVisitor::heap() const
+{
+    return &sharedData().m_vm->heap;
+}
+
 } // namespace JSC
 
 #endif // SlotVisitorInlines_h
index 3e29da5..5c9aa96 100644 (file)
@@ -39,6 +39,7 @@
 #include "PolymorphicPutByIdList.h"
 #include "RepatchBuffer.h"
 #include "ScratchRegisterAllocator.h"
+#include "StackAlignment.h"
 #include "StructureRareDataInlines.h"
 #include "StructureStubClearingWatchpoint.h"
 #include "ThunkGenerators.h"
index e920abe..6db8627 100644 (file)
@@ -447,7 +447,7 @@ void JSGenericTypedArrayView<Adaptor>::visitChildren(JSCell* cell, SlotVisitor&
     }
         
     case OversizeTypedArray: {
-        visitor.reportExtraMemoryUsage(thisObject->byteSize());
+        visitor.reportExtraMemoryUsage(thisObject, thisObject->byteSize());
         break;
     }
         
index 5914030..f4362ff 100644 (file)
@@ -109,9 +109,9 @@ namespace JSC {
         return m_enumerationCache.get();
     }
     
-    inline void StructureRareData::setEnumerationCache(VM& vm, const Structure* owner, JSPropertyNameIterator* value)
+    inline void StructureRareData::setEnumerationCache(VM& vm, const Structure*, JSPropertyNameIterator* value)
     {
-        m_enumerationCache.set(vm, owner, value);
+        m_enumerationCache.set(vm, this, value);
     }
 
 } // namespace JSC
index a5bfe26..099b623 100644 (file)
@@ -72,7 +72,7 @@ void JSString::visitChildren(JSCell* cell, SlotVisitor& visitor)
     else {
         StringImpl* impl = thisObject->m_value.impl();
         ASSERT(impl);
-        visitor.reportExtraMemoryUsage(impl->costDuringGC());
+        visitor.reportExtraMemoryUsage(thisObject, impl->costDuringGC());
     }
 }
 
index 20b7f8b..5b39bad 100644 (file)
@@ -35,9 +35,9 @@ inline Structure* StructureRareData::previousID() const
     return m_previous.get();
 }
 
-inline void StructureRareData::setPreviousID(VM& vm, Structure* transition, Structure* structure)
+inline void StructureRareData::setPreviousID(VM& vm, Structure*, Structure* structure)
 {
-    m_previous.set(vm, transition, structure);
+    m_previous.set(vm, this, structure);
 }
 
 inline void StructureRareData::clearPreviousID()
@@ -50,9 +50,9 @@ inline JSString* StructureRareData::objectToStringValue() const
     return m_objectToStringValue.get();
 }
 
-inline void StructureRareData::setObjectToStringValue(VM& vm, const JSCell* owner, JSString* value)
+inline void StructureRareData::setObjectToStringValue(VM& vm, const JSCell*, JSString* value)
 {
-    m_objectToStringValue.set(vm, owner, value);
+    m_objectToStringValue.set(vm, this, value);
 }
 
 } // namespace JSC
index ce60c8c..224be8a 100644 (file)
@@ -64,7 +64,7 @@ void WeakMapData::visitChildren(JSCell* cell, SlotVisitor& visitor)
     // Rough approximation of the external storage needed for the hashtable.
     // This isn't exact, but it is close enough, and proportional to the actual
     // external memory usage.
-    visitor.reportExtraMemoryUsage(thisObj->m_map.capacity() * (sizeof(JSObject*) + sizeof(WriteBarrier<Unknown>)));
+    visitor.reportExtraMemoryUsage(thisObj, thisObj->m_map.capacity() * (sizeof(JSObject*) + sizeof(WriteBarrier<Unknown>)));
 }
 
 void WeakMapData::set(VM& vm, JSObject* key, JSValue value)
index 44902e2..62733a8 100644 (file)
@@ -1,3 +1,14 @@
+2014-01-07  Mark Hahnenberg  <mhahnenberg@apple.com>
+
+        Marking should be generational
+        https://bugs.webkit.org/show_bug.cgi?id=126552
+
+        Reviewed by Geoffrey Garen.
+
+        * wtf/Bitmap.h:
+        (WTF::WordType>::count): Added a cast that became necessary when Bitmap
+        is used with smaller types than int32_t.
+
 2014-01-09  Simon Fraser  <simon.fraser@apple.com>
 
         Enable async scrolling for iOS
index 936ccc2..7b288f9 100644 (file)
@@ -196,7 +196,7 @@ inline size_t Bitmap<size, atomicMode, WordType>::count(size_t start) const
             ++result;
     }
     for (size_t i = start / wordSize; i < words; ++i)
-        result += WTF::bitCount(bits[i]);
+        result += WTF::bitCount(static_cast<unsigned>(bits[i]));
     return result;
 }