Allocate new objects unmarked
author ggaren@apple.com <ggaren@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Sat, 24 Sep 2011 22:15:40 +0000 (22:15 +0000)
committer ggaren@apple.com <ggaren@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Sat, 24 Sep 2011 22:15:40 +0000 (22:15 +0000)
https://bugs.webkit.org/show_bug.cgi?id=68764

Source/JavaScriptCore:

Reviewed by Oliver Hunt.

This is a prerequisite to using the mark bit to determine object age.

~2% v8 speedup, mostly due to a 12% v8-splay speedup.

* heap/MarkedBlock.h:
(JSC::MarkedBlock::isLive):
(JSC::MarkedBlock::isLiveCell): These two functions are the reason for
this patch. They can now determine object liveness without relying on
newly allocated objects having their mark bits set. Each MarkedBlock
now has a state variable that tells us how to determine whether its
cells are live. (This new state variable supersedes the old one about
destructor state. The rest of this patch is just refactoring to support
the invariants of this new state variable without introducing a
performance regression.)
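
For reference, the new liveness check, condensed from the MarkedBlock.h
hunks below (logging elided, comments added):

    enum BlockState { New, FreeListed, Allocated, Marked, Zapped };

    inline bool MarkedBlock::isLive(const JSCell* cell)
    {
        switch (m_state) {
        case Allocated: // Fully allocated: every cell holds a live object.
            return true;
        case Zapped: // Free cells are zapped; anything unzapped is live.
            return !isZapped(cell);
        case Marked: // Mark bits exactly reflect liveness.
            return m_marks.get(atomNumber(cell));
        case New: // Callers must canonicalize cell liveness data first.
        case FreeListed:
            ASSERT_NOT_REACHED();
            return false;
        }
    }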

(JSC::MarkedBlock::didConsumeFreeList): New function for updating internal
state when a block becomes fully allocated.

(JSC::MarkedBlock::clearMarks): Folded a state change to 'Marked' into
this function because, logically, clearing all mark bits is the first
step in saying "mark bits now exactly reflect object liveness".
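
Condensed from the hunk below (logging elided):

    inline void MarkedBlock::clearMarks()
    {
        ASSERT(m_state != New && m_state != FreeListed);
        m_marks.clearAll();

        // This will become true at the end of the mark phase. We set it now
        // to avoid an extra pass to do so later.
        m_state = Marked;
    }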

(JSC::MarkedBlock::markCountIsZero): Renamed from isEmpty() to clarify
that this function only tells you about the mark bits, so it's only
meaningful if you've put the mark bits into a meaningful state before
calling it.

(JSC::MarkedBlock::forEachCell): Changed to use isLive() helper function
instead of testing mark bits, since mark bits are not always the right
way to find out if an object is live anymore. (New objects are live, but
not marked.)

* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::recycle):
(JSC::MarkedBlock::MarkedBlock): Folded all initialization -- even
initialization when recycling an old block -- into the MarkedBlock
constructor, for simplicity.

(JSC::MarkedBlock::callDestructor): Inlined for speed. Always check for
a zapped cell before running a destructor, and always zap after
running a destructor. This does not seem to be expensive, and the
alternative just creates a too-confusing matrix of possible cell states
((zombie undestructed cell + zombie destructed cell + zapped destructed
cell) * 5! permutations for progressing through block states = "Oh my!").
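
Condensed from the MarkedBlock.cpp hunk below (profiling hook elided,
comments added):

    inline void MarkedBlock::callDestructor(JSCell* cell, void* jsFinalObjectVPtr)
    {
        // A previous eager sweep may already have run this cell's destructor.
        if (cell->isZapped())
            return;

        void* vptr = cell->vptr();
        if (vptr == jsFinalObjectVPtr) // Fast path for the most common type.
            reinterpret_cast<JSFinalObject*>(cell)->JSFinalObject::~JSFinalObject();
        else
            cell->~JSCell();

        cell->zap(); // Record that the destructor has run.
    }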

(JSC::MarkedBlock::specializedSweep):
(JSC::MarkedBlock::sweep): Maintained and expanded a pre-existing
optimization to use template specialization to constant fold lots of
branches and elide certain operations entirely during a sweep. Merged
four or five functions that were logically about sweeping into this one
function pair, so there's only one way to do things now, it's
automatically correct, and it's always fast.
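
The merged function pair's core, condensed from the hunk below. Because
blockState and sweepMode are template arguments, each of the valid
instantiations constant folds these branches away:

    template<MarkedBlock::BlockState blockState, MarkedBlock::SweepMode sweepMode>
    MarkedBlock::FreeCell* MarkedBlock::specializedSweep()
    {
        // The free list built here is ordered in reverse through the block;
        // the allocation code makes no assumptions about free list order.
        FreeCell* head = 0;
        void* jsFinalObjectVPtr = m_heap->globalData()->jsFinalObjectVPtr;
        for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
            if (blockState == Marked && m_marks.get(i))
                continue; // Marked cells are live.

            JSCell* cell = reinterpret_cast<JSCell*>(&atoms()[i]);
            if (blockState == Zapped && !cell->isZapped())
                continue; // Unzapped cells in a Zapped block are live.

            if (blockState != New) // New blocks contain no objects to destroy.
                callDestructor(cell, jsFinalObjectVPtr);

            if (sweepMode == SweepToFreeList) {
                FreeCell* freeCell = reinterpret_cast<FreeCell*>(cell);
                freeCell->next = head;
                head = freeCell;
            }
        }

        m_state = ((sweepMode == SweepToFreeList) ? FreeListed : Zapped);
        return head;
    }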

(JSC::MarkedBlock::zapFreeList): Renamed this function to be more explicit
about exactly what it does, and to honor the new block state system.
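
Condensed from the hunk below:

    void MarkedBlock::zapFreeList(FreeCell* firstFreeCell)
    {
        // Roll back to a coherent state for Heap introspection. Cells newly
        // allocated from our free list are not currently marked, so we zap
        // the cells that are still free to tell live from dead.
        FreeCell* next;
        for (FreeCell* current = firstFreeCell; current; current = next) {
            next = current->next;
            reinterpret_cast<JSCell*>(current)->zap();
        }
        m_state = Zapped;
    }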

* heap/AllocationSpace.cpp:
(JSC::AllocationSpace::allocateBlock): Updated for rename.

(JSC::AllocationSpace::freeBlocks): Updated for changed interface.

(JSC::TakeIfUnmarked::TakeIfUnmarked):
(JSC::TakeIfUnmarked::operator()):
(JSC::TakeIfUnmarked::returnValue): Just like isEmpty() above, renamed
to clarify that this functor only tests the mark bits, so it's only
valid if you've put the mark bits into a meaningful state before
calling it.

(JSC::AllocationSpace::shrink): Updated for rename.

* heap/AllocationSpace.h:
(JSC::AllocationSpace::canonicalizeCellLivenessData): Renamed to be a
little more specific about what we're making canonical.

(JSC::AllocationSpace::forEachCell): Updated for rename.

(JSC::AllocationSpace::forEachBlock): No need to canonicalize cell
liveness data before iterating blocks -- clients that want iterated
blocks to have valid cell liveness data should make this call for
themselves. (And not all clients want it.)

* heap/ConservativeRoots.cpp:
(JSC::ConservativeRoots::genericAddPointer): Updated for rename. Removed
obsolete comment.

* heap/Heap.cpp:
(JSC::CountFunctor::ClearMarks::operator()): Removed call to notify...()
because clearMarks() now does that implicitly.

(JSC::Heap::destroy): Make sure to canonicalize before tear-down, since
tear-down tests cell liveness when running destructors.

(JSC::Heap::markRoots):
(JSC::Heap::collect): Moved weak reference harvesting out of markRoots()
and into collect(), since it strictly depends on root marking, and does
not contribute to root marking.
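
The relevant ordering in collect(), condensed from the Heap.cpp hunk
below:

    canonicalizeCellLivenessData(); // Free lists zapped; liveness coherent.
    markRoots();                    // Mark bits now exactly reflect liveness.

    harvestWeakReferences();        // Depends on root marking...
    m_handleHeap.finalizeWeakHandles(); // ...as does weak handle finalization.
    m_globalData->smallStrings.finalizeSmallStrings();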

(JSC::Heap::canonicalizeCellLivenessData): Renamed to be a little more
specific about what we're making canonical.

* heap/Heap.h:
(JSC::Heap::forEachProtectedCell): No need to canonicalize cell liveness
data before iterating protected cells, since we know they're all live,
and don't need to test for it.

* heap/Local.h:
(JSC::Local::set): Can't make the same ASSERT we used to because we just don't
have the mark bits for it anymore. Perhaps we can bring this ASSERT back
in a weaker form in the future.

* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::addBlock):
(JSC::MarkedSpace::removeBlock): Updated for interface change.
(JSC::MarkedSpace::canonicalizeCellLivenessData): Renamed to be a little more
specific about what we're making canonical.

* heap/MarkedSpace.h:
(JSC::MarkedSpace::allocate):
(JSC::MarkedSpace::SizeClass::SizeClass):
(JSC::MarkedSpace::SizeClass::resetAllocator):
(JSC::MarkedSpace::SizeClass::zapFreeList): Simplified this allocator
functionality a bit. We now track only one block -- "currentBlock" --
and rely on its internal state to know whether it has more cells to
allocate.
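
The simplified allocator, condensed from the MarkedSpace.h hunk below
(comments added):

    inline void* MarkedSpace::allocate(SizeClass& sizeClass)
    {
        MarkedBlock::FreeCell* firstFreeCell = sizeClass.firstFreeCell;
        if (!firstFreeCell) {
            // Sweep blocks to free lists, starting at currentBlock, until
            // one yields a free cell.
            for (MarkedBlock*& block = sizeClass.currentBlock; block; block = block->next()) {
                firstFreeCell = block->sweep(MarkedBlock::SweepToFreeList);
                if (firstFreeCell)
                    break;

                m_waterMark += block->capacity();
                block->didConsumeFreeList(); // Block is now fully allocated.
            }

            if (!firstFreeCell)
                return 0; // Caller must allocate a new block.
        }

        sizeClass.firstFreeCell = firstFreeCell->next;
        return firstFreeCell;
    }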

* heap/Weak.h:
(JSC::Weak::set): Can't make the same ASSERT we used to because we just don't
have the mark bits for it anymore. Perhaps we can bring this ASSERT back
in a weaker form in the future.

* runtime/JSCell.h:
(JSC::JSCell::vptr):
(JSC::JSCell::zap):
(JSC::JSCell::isZapped):
(JSC::isZapped): Made zapping a property of JSCell, for a little abstraction.
In the future, exactly how a JSCell zaps itself will change, as the
internal representation of JSCell changes.
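
Zapping currently nulls the cell's vptr word, condensed from the
JSCell.h hunk below:

    void* vptr() const { ASSERT(!isZapped()); return *reinterpret_cast<void* const*>(this); }
    void zap() { *reinterpret_cast<uintptr_t**>(this) = 0; }
    bool isZapped() const { return !*reinterpret_cast<uintptr_t* const*>(this); }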

LayoutTests:

Reviewed by Oliver Hunt.

Made this flaky test less flaky. (Just enough to make my patch not fail.)

* fast/dom/gc-10.html: Count objects immediately after GC to get an
exact count. Call 'reload' a few times to improve test coverage. Preload
properties in case they're lazily instantiated, which would change
object count numbers. Also, use the 'var' keyword like a good little
JavaScripter.

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@95912 268f45cc-cd09-0410-ab3c-d52691b4dbfc

15 files changed:
LayoutTests/ChangeLog
LayoutTests/fast/dom/gc-10.html
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/heap/AllocationSpace.cpp
Source/JavaScriptCore/heap/AllocationSpace.h
Source/JavaScriptCore/heap/ConservativeRoots.cpp
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/heap/Heap.h
Source/JavaScriptCore/heap/Local.h
Source/JavaScriptCore/heap/MarkedBlock.cpp
Source/JavaScriptCore/heap/MarkedBlock.h
Source/JavaScriptCore/heap/MarkedSpace.cpp
Source/JavaScriptCore/heap/MarkedSpace.h
Source/JavaScriptCore/heap/Weak.h
Source/JavaScriptCore/runtime/JSCell.h

diff --git a/LayoutTests/ChangeLog b/LayoutTests/ChangeLog
index 141a84b..a57428b 100644
@@ -1,3 +1,18 @@
+2011-09-24  Geoffrey Garen  <ggaren@apple.com>
+
+        Allocate new objects unmarked
+        https://bugs.webkit.org/show_bug.cgi?id=68764
+
+        Reviewed by Oliver Hunt.
+        
+        Made this flaky test less flaky. (Just enough to make my patch not fail.)
+
+        * fast/dom/gc-10.html: Count objects immediately after GC to get an
+        exact count. Call 'reload' a few times to improve test coverage. Preload
+        properties in case they're lazily instantiated, which would change
+        object count numbers. Also, use the 'var' keyword like a good little
+        JavaScripter.
+
 2011-09-24  Adam Barth  <abarth@webkit.org>
 
         Remove ENABLE(WCSS) and associated code
diff --git a/LayoutTests/fast/dom/gc-10.html b/LayoutTests/fast/dom/gc-10.html
index a9571a8..e54b3e2 100644
@@ -12,22 +12,24 @@ function print(message, color)
     document.getElementById("console").appendChild(paragraph);
 }
 
-var before,after;
 var threshold = 5;
 
 function test()
 {
     if (window.GCController)
     {
+        global = window.frames.myframe.location.reload; // Eagerly construct these properties so they don't influence test outcome.
+
         GCController.collect();
+        var before = GCController.getJSObjectCount();
+
+        window.frames.myframe.location.reload(true);
         window.frames.myframe.location.reload(true);
-        before = GCController.getJSObjectCount();
-        
         window.frames.myframe.location.reload(true);
 
         GCController.collect();
-        after = GCController.getJSObjectCount();
-        
+        var after = GCController.getJSObjectCount();
+
         // Unfortunately we cannot do a strict check here because there is still very minor (3) JS object increase,
         // likely due to temporary JS objects being created during further execution of this test function.
         // However, the iframe document leaking everything has an addition of ~25 objects every
diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index 6695628..192512f 100644
@@ -1,3 +1,148 @@
+2011-09-24  Geoffrey Garen  <ggaren@apple.com>
+
+        Allocate new objects unmarked
+        https://bugs.webkit.org/show_bug.cgi?id=68764
+
+        Reviewed by Oliver Hunt.
+        
+        This is a prerequisite to using the mark bit to determine object age.
+
+        ~2% v8 speedup, mostly due to a 12% v8-splay speedup.
+
+        * heap/MarkedBlock.h:
+        (JSC::MarkedBlock::isLive):
+        (JSC::MarkedBlock::isLiveCell): These two functions are the reason for
+        this patch. They can now determine object liveness without relying on
+        newly allocated objects having their mark bits set. Each MarkedBlock
+        now has a state variable that tells us how to determine whether its
+        cells are live. (This new state variable supersedes the old one about
+        destructor state. The rest of this patch is just refactoring to support
+        the invariants of this new state variable without introducing a
+        performance regression.)
+
+        (JSC::MarkedBlock::didConsumeFreeList): New function for updating internal
+        state when a block becomes fully allocated.
+
+        (JSC::MarkedBlock::clearMarks): Folded a state change to 'Marked' into
+        this function because, logically, clearing all mark bits is the first
+        step in saying "mark bits now exactly reflect object liveness".
+
+        (JSC::MarkedBlock::markCountIsZero): Renamed from isEmpty() to clarify
+        that this function only tells you about the mark bits, so it's only
+        meaningful if you've put the mark bits into a meaningful state before
+        calling it.
+
+        (JSC::MarkedBlock::forEachCell): Changed to use isLive() helper function
+        instead of testing mark bits, since mark bits are not always the right
+        way to find out if an object is live anymore. (New objects are live, but
+        not marked.)
+
+        * heap/MarkedBlock.cpp:
+        (JSC::MarkedBlock::recycle):
+        (JSC::MarkedBlock::MarkedBlock): Folded all initialization -- even
+        initialization when recycling an old block -- into the MarkedBlock
+        constructor, for simplicity.
+
+        (JSC::MarkedBlock::callDestructor): Inlined for speed. Always check for
+        a zapped cell before running a destructor, and always zap after
+        running a destructor. This does not seem to be expensive, and the
+        alternative just creates a too-confusing matrix of possible cell states
+        ((zombie undestructed cell + zombie destructed cell + zapped destructed
+        cell) * 5! permutations for progressing through block states = "Oh my!").
+
+        (JSC::MarkedBlock::specializedSweep):
+        (JSC::MarkedBlock::sweep): Maintained and expanded a pre-existing
+        optimization to use template specialization to constant fold lots of
+        branches and elide certain operations entirely during a sweep. Merged
+        four or five functions that were logically about sweeping into this one
+        function pair, so there's only one way to do things now, it's
+        automatically correct, and it's always fast.
+
+        (JSC::MarkedBlock::zapFreeList): Renamed this function to be more explicit
+        about exactly what it does, and to honor the new block state system.
+
+        * heap/AllocationSpace.cpp:
+        (JSC::AllocationSpace::allocateBlock): Updated for rename.
+
+        (JSC::AllocationSpace::freeBlocks): Updated for changed interface.
+
+        (JSC::TakeIfUnmarked::TakeIfUnmarked):
+        (JSC::TakeIfUnmarked::operator()):
+        (JSC::TakeIfUnmarked::returnValue): Just like isEmpty() above, renamed
+        to clarify that this functor only tests the mark bits, so it's only
+        valid if you've put the mark bits into a meaningful state before
+        calling it.
+        
+        (JSC::AllocationSpace::shrink): Updated for rename.
+
+        * heap/AllocationSpace.h:
+        (JSC::AllocationSpace::canonicalizeCellLivenessData): Renamed to be a
+        little more specific about what we're making canonical.
+
+        (JSC::AllocationSpace::forEachCell): Updated for rename.
+
+        (JSC::AllocationSpace::forEachBlock): No need to canonicalize cell
+        liveness data before iterating blocks -- clients that want iterated
+        blocks to have valid cell liveness data should make this call for
+        themselves. (And not all clients want it.)
+
+        * heap/ConservativeRoots.cpp:
+        (JSC::ConservativeRoots::genericAddPointer): Updated for rename. Removed
+        obsolete comment.
+
+        * heap/Heap.cpp:
+        (JSC::CountFunctor::ClearMarks::operator()): Removed call to notify...()
+        because clearMarks() now does that implicitly.
+
+        (JSC::Heap::destroy): Make sure to canonicalize before tear-down, since
+        tear-down tests cell liveness when running destructors.
+
+        (JSC::Heap::markRoots):
+        (JSC::Heap::collect): Moved weak reference harvesting out of markRoots()
+        and into collect(), since it strictly depends on root marking, and does
+        not contribute to root marking.
+
+        (JSC::Heap::canonicalizeCellLivenessData): Renamed to be a little more
+        specific about what we're making canonical.
+
+        * heap/Heap.h:
+        (JSC::Heap::forEachProtectedCell): No need to canonicalize cell liveness
+        data before iterating protected cells, since we know they're all live,
+        and don't need to test for it.
+
+        * heap/Local.h:
+        (JSC::Local::set): Can't make the same ASSERT we used to because we just don't
+        have the mark bits for it anymore. Perhaps we can bring this ASSERT back
+        in a weaker form in the future.
+
+        * heap/MarkedSpace.cpp:
+        (JSC::MarkedSpace::addBlock):
+        (JSC::MarkedSpace::removeBlock): Updated for interface change.
+        (JSC::MarkedSpace::canonicalizeCellLivenessData): Renamed to be a little more
+        specific about what we're making canonical.
+
+        * heap/MarkedSpace.h:
+        (JSC::MarkedSpace::allocate):
+        (JSC::MarkedSpace::SizeClass::SizeClass):
+        (JSC::MarkedSpace::SizeClass::resetAllocator):
+        (JSC::MarkedSpace::SizeClass::zapFreeList): Simplified this allocator
+        functionality a bit. We now track only one block -- "currentBlock" --
+        and rely on its internal state to know whether it has more cells to
+        allocate.
+
+        * heap/Weak.h:
+        (JSC::Weak::set): Can't make the same ASSERT we used to because we just don't
+        have the mark bits for it anymore. Perhaps we can bring this ASSERT back
+        in a weaker form in the future.
+
+        * runtime/JSCell.h:
+        (JSC::JSCell::vptr):
+        (JSC::JSCell::zap):
+        (JSC::JSCell::isZapped):
+        (JSC::isZapped): Made zapping a property of JSCell, for a little abstraction.
+        In the future, exactly how a JSCell zaps itself will change, as the
+        internal representation of JSCell changes.
+
 2011-09-24  Filip Pizlo  <fpizlo@apple.com>
 
         DFG JIT should not eagerly initialize integer tags in the register file
diff --git a/Source/JavaScriptCore/heap/AllocationSpace.cpp b/Source/JavaScriptCore/heap/AllocationSpace.cpp
index 41c127b..b548e49 100644
@@ -98,7 +98,7 @@ MarkedBlock* AllocationSpace::allocateBlock(size_t cellSize, AllocationSpace::Al
             block = 0;
     }
     if (block)
-        block->initForCellSize(cellSize);
+        block = MarkedBlock::recycle(block, cellSize);
     else if (allocationEffort == AllocationCanFail)
         return 0;
     else
@@ -116,18 +116,18 @@ void AllocationSpace::freeBlocks(MarkedBlock* head)
         next = block->next();
         
         m_blocks.remove(block);
-        block->reset();
+        block->sweep();
         MutexLocker locker(m_heap->m_freeBlockLock);
         m_heap->m_freeBlocks.append(block);
         m_heap->m_numberOfFreeBlocks++;
     }
 }
 
-class TakeIfEmpty {
+class TakeIfUnmarked {
 public:
     typedef MarkedBlock* ReturnType;
     
-    TakeIfEmpty(MarkedSpace*);
+    TakeIfUnmarked(MarkedSpace*);
     void operator()(MarkedBlock*);
     ReturnType returnValue();
     
@@ -136,21 +136,21 @@ private:
     DoublyLinkedList<MarkedBlock> m_empties;
 };
 
-inline TakeIfEmpty::TakeIfEmpty(MarkedSpace* newSpace)
+inline TakeIfUnmarked::TakeIfUnmarked(MarkedSpace* newSpace)
     : m_markedSpace(newSpace)
 {
 }
 
-inline void TakeIfEmpty::operator()(MarkedBlock* block)
+inline void TakeIfUnmarked::operator()(MarkedBlock* block)
 {
-    if (!block->isEmpty())
+    if (!block->markCountIsZero())
         return;
     
     m_markedSpace->removeBlock(block);
     m_empties.append(block);
 }
 
-inline TakeIfEmpty::ReturnType TakeIfEmpty::returnValue()
+inline TakeIfUnmarked::ReturnType TakeIfUnmarked::returnValue()
 {
     return m_empties.head();
 }
@@ -158,8 +158,8 @@ inline TakeIfEmpty::ReturnType TakeIfEmpty::returnValue()
 void AllocationSpace::shrink()
 {
     // We record a temporary list of empties to avoid modifying m_blocks while iterating it.
-    TakeIfEmpty takeIfEmpty(&m_markedSpace);
-    freeBlocks(forEachBlock(takeIfEmpty));
+    TakeIfUnmarked takeIfUnmarked(&m_markedSpace);
+    freeBlocks(forEachBlock(takeIfUnmarked));
 }
 
 }
diff --git a/Source/JavaScriptCore/heap/AllocationSpace.h b/Source/JavaScriptCore/heap/AllocationSpace.h
index 25b95e3..2270ee0 100644
@@ -56,7 +56,7 @@ public:
     template<typename Functor> typename Functor::ReturnType forEachBlock(Functor&);
     template<typename Functor> typename Functor::ReturnType forEachBlock();
     
-    void canonicalizeBlocks() { m_markedSpace.canonicalizeBlocks(); }
+    void canonicalizeCellLivenessData() { m_markedSpace.canonicalizeCellLivenessData(); }
     void resetAllocator() { m_markedSpace.resetAllocator(); }
     
     void* allocate(size_t);
@@ -78,7 +78,8 @@ private:
 
 template<typename Functor> inline typename Functor::ReturnType AllocationSpace::forEachCell(Functor& functor)
 {
-    canonicalizeBlocks();
+    canonicalizeCellLivenessData();
+
     BlockIterator end = m_blocks.set().end();
     for (BlockIterator it = m_blocks.set().begin(); it != end; ++it)
         (*it)->forEachCell(functor);
@@ -93,7 +94,6 @@ template<typename Functor> inline typename Functor::ReturnType AllocationSpace::
 
 template<typename Functor> inline typename Functor::ReturnType AllocationSpace::forEachBlock(Functor& functor)
 {
-    canonicalizeBlocks();
     BlockIterator end = m_blocks.set().end();
     for (BlockIterator it = m_blocks.set().begin(); it != end; ++it)
         functor(*it);
diff --git a/Source/JavaScriptCore/heap/ConservativeRoots.cpp b/Source/JavaScriptCore/heap/ConservativeRoots.cpp
index e33dd6b..7de2250 100644
@@ -26,7 +26,9 @@
 #include "config.h"
 #include "ConservativeRoots.h"
 
+#include "JSCell.h"
 #include "JettisonedCodeBlocks.h"
+#include "Structure.h"
 
 namespace JSC {
 
@@ -82,10 +84,7 @@ inline void ConservativeRoots::genericAddPointer(void* p, TinyBloomFilter filter
     if (!m_blocks->set().contains(candidate))
         return;
 
-    // The conservative set inverts the typical meaning of mark bits: We only
-    // visit marked pointers, and our visit clears the mark bit. This efficiently
-    // sifts out pointers to dead objects and duplicate pointers.
-    if (!candidate->testAndClearMarked(p))
+    if (!candidate->isLiveCell(p))
         return;
 
     if (m_size == m_capacity)
diff --git a/Source/JavaScriptCore/heap/Heap.cpp b/Source/JavaScriptCore/heap/Heap.cpp
index 8845db1..f0b3c09 100644
@@ -113,7 +113,6 @@ struct ClearMarks : MarkedBlock::VoidFunctor {
 inline void ClearMarks::operator()(MarkedBlock* block)
 {
     block->clearMarks();
-    block->notifyMayHaveFreshFreeCells();
 }
 
 struct Sweep : MarkedBlock::VoidFunctor {
@@ -268,10 +267,11 @@ void Heap::destroy()
     delete m_markListSet;
     m_markListSet = 0;
 
+    canonicalizeCellLivenessData();
     clearMarks();
+
     m_handleHeap.finalizeWeakHandles();
     m_globalData->smallStrings.finalizeSmallStrings();
-
     shrink();
     ASSERT(!size());
     
@@ -514,10 +514,6 @@ void Heap::markRoots()
     // If the set of opaque roots has grown, more weak handles may have become reachable.
     } while (lastOpaqueRootCount != visitor.opaqueRootCount());
 
-    // Need to call this here because weak handle processing could add weak
-    // reference harvesters.
-    harvestWeakReferences();
-
     visitor.reset();
 
     m_operationInProgress = NoOperation;
@@ -589,9 +585,10 @@ void Heap::collect(SweepToggle sweepToggle)
     ASSERT(m_isSafeToCollect);
     JAVASCRIPTCORE_GC_BEGIN();
     
-    canonicalizeBlocks();
-    
+    canonicalizeCellLivenessData();
     markRoots();
+
+    harvestWeakReferences();
     m_handleHeap.finalizeWeakHandles();
     m_globalData->smallStrings.finalizeSmallStrings();
 
@@ -615,9 +612,9 @@ void Heap::collect(SweepToggle sweepToggle)
     (*m_activityCallback)();
 }
 
-void Heap::canonicalizeBlocks()
+void Heap::canonicalizeCellLivenessData()
 {
-    m_objectSpace.canonicalizeBlocks();
+    m_objectSpace.canonicalizeCellLivenessData();
 }
 
 void Heap::resetAllocator()
diff --git a/Source/JavaScriptCore/heap/Heap.h b/Source/JavaScriptCore/heap/Heap.h
index 5c6c06f..925a9d6 100644
@@ -69,7 +69,6 @@ namespace JSC {
 
         static bool isMarked(const void*);
         static bool testAndSetMarked(const void*);
-        static bool testAndClearMarked(const void*);
         static void setMarked(const void*);
 
         static void writeBarrier(const JSCell*, JSValue);
@@ -135,9 +134,13 @@ namespace JSC {
 
         bool isValidAllocation(size_t);
         void reportExtraMemoryCostSlowCase(size_t);
-        void canonicalizeBlocks();
-        void resetAllocator();
 
+        // Call this function before any operation that needs to know which cells
+        // in the heap are live. (For example, call this function before
+        // conservative marking, eager sweeping, or iterating the cells in a MarkedBlock.)
+        void canonicalizeCellLivenessData();
+
+        void resetAllocator();
         void freeBlocks(MarkedBlock*);
 
         void clearMarks();
@@ -223,11 +226,6 @@ namespace JSC {
         return MarkedBlock::blockFor(cell)->testAndSetMarked(cell);
     }
 
-    inline bool Heap::testAndClearMarked(const void* cell)
-    {
-        return MarkedBlock::blockFor(cell)->testAndClearMarked(cell);
-    }
-
     inline void Heap::setMarked(const void* cell)
     {
         MarkedBlock::blockFor(cell)->setMarked(cell);
@@ -274,7 +272,6 @@ namespace JSC {
 
     template<typename Functor> inline typename Functor::ReturnType Heap::forEachProtectedCell(Functor& functor)
     {
-        canonicalizeBlocks();
         ProtectCountSet::iterator end = m_protectedValues.end();
         for (ProtectCountSet::iterator it = m_protectedValues.begin(); it != end; ++it)
             functor(it->first);
diff --git a/Source/JavaScriptCore/heap/Local.h b/Source/JavaScriptCore/heap/Local.h
index ac7d136..4c11a49 100644
@@ -94,7 +94,6 @@ template <typename T> inline Local<T>& Local<T>::operator=(Handle<T> other)
 template <typename T> inline void Local<T>::set(ExternalType externalType)
 {
     ASSERT(slot());
-    ASSERT(!HandleTypes<T>::toJSValue(externalType) || !HandleTypes<T>::toJSValue(externalType).isCell() || Heap::isMarked(HandleTypes<T>::toJSValue(externalType).asCell()));
     *slot() = externalType;
 }
 
diff --git a/Source/JavaScriptCore/heap/MarkedBlock.cpp b/Source/JavaScriptCore/heap/MarkedBlock.cpp
index ad6fd0a..52fbad3 100644
@@ -40,196 +40,117 @@ MarkedBlock* MarkedBlock::create(Heap* heap, size_t cellSize)
     return new (allocation.base()) MarkedBlock(allocation, heap, cellSize);
 }
 
+MarkedBlock* MarkedBlock::recycle(MarkedBlock* block, size_t cellSize)
+{
+    return new (block) MarkedBlock(block->m_allocation, block->m_heap, cellSize);
+}
+
 void MarkedBlock::destroy(MarkedBlock* block)
 {
     block->m_allocation.deallocate();
 }
 
 MarkedBlock::MarkedBlock(const PageAllocationAligned& allocation, Heap* heap, size_t cellSize)
-    : m_inNewSpace(false)
+    : m_atomsPerCell((cellSize + atomSize - 1) / atomSize)
+    , m_endAtom(atomsPerBlock - m_atomsPerCell + 1)
+    , m_state(New) // All cells start out unmarked.
     , m_allocation(allocation)
     , m_heap(heap)
 {
-    initForCellSize(cellSize);
-}
-
-void MarkedBlock::initForCellSize(size_t cellSize)
-{
-    m_atomsPerCell = (cellSize + atomSize - 1) / atomSize;
-    m_endAtom = atomsPerBlock - m_atomsPerCell + 1;
-    setDestructorState(SomeFreeCellsStillHaveObjects);
+    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
 }
 
-template<MarkedBlock::DestructorState specializedDestructorState>
-void MarkedBlock::callDestructor(JSCell* cell, void* jsFinalObjectVPtr)
+inline void MarkedBlock::callDestructor(JSCell* cell, void* jsFinalObjectVPtr)
 {
-    if (specializedDestructorState == FreeCellsDontHaveObjects)
+    // A previous eager sweep may already have run cell's destructor.
+    if (cell->isZapped())
         return;
+
     void* vptr = cell->vptr();
-    if (specializedDestructorState == AllFreeCellsHaveObjects || vptr) {
 #if ENABLE(SIMPLE_HEAP_PROFILING)
-        m_heap->m_destroyedTypeCounts.countVPtr(vptr);
+    m_heap->m_destroyedTypeCounts.countVPtr(vptr);
 #endif
-        if (vptr == jsFinalObjectVPtr) {
-            JSFinalObject* object = reinterpret_cast<JSFinalObject*>(cell);
-            object->JSFinalObject::~JSFinalObject();
-        } else
-            cell->~JSCell();
-    }
-}
+    if (vptr == jsFinalObjectVPtr)
+        reinterpret_cast<JSFinalObject*>(cell)->JSFinalObject::~JSFinalObject();
+    else
+        cell->~JSCell();
 
-template<MarkedBlock::DestructorState specializedDestructorState>
-void MarkedBlock::specializedReset()
-{
-    void* jsFinalObjectVPtr = m_heap->globalData()->jsFinalObjectVPtr;
-
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell)
-        callDestructor<specializedDestructorState>(reinterpret_cast<JSCell*>(&atoms()[i]), jsFinalObjectVPtr);
+    cell->zap();
 }
 
-void MarkedBlock::reset()
+template<MarkedBlock::BlockState blockState, MarkedBlock::SweepMode sweepMode>
+MarkedBlock::FreeCell* MarkedBlock::specializedSweep()
 {
-    switch (destructorState()) {
-    case FreeCellsDontHaveObjects:
-    case SomeFreeCellsStillHaveObjects:
-        specializedReset<SomeFreeCellsStillHaveObjects>();
-        break;
-    default:
-        ASSERT(destructorState() == AllFreeCellsHaveObjects);
-        specializedReset<AllFreeCellsHaveObjects>();
-        break;
-    }
-}
-
-template<MarkedBlock::DestructorState specializedDestructorState>
-void MarkedBlock::specializedSweep()
-{
-    if (specializedDestructorState != FreeCellsDontHaveObjects) {
-        void* jsFinalObjectVPtr = m_heap->globalData()->jsFinalObjectVPtr;
-        
-        for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
-            if (m_marks.get(i))
-                continue;
-            
-            JSCell* cell = reinterpret_cast<JSCell*>(&atoms()[i]);
-            callDestructor<specializedDestructorState>(cell, jsFinalObjectVPtr);
-            cell->setVPtr(0);
-        }
-        
-        setDestructorState(FreeCellsDontHaveObjects);
-    }
-}
+    ASSERT(blockState != Allocated && blockState != FreeListed);
 
-void MarkedBlock::sweep()
-{
-    HEAP_DEBUG_BLOCK(this);
-    
-    switch (destructorState()) {
-    case FreeCellsDontHaveObjects:
-        break;
-    case SomeFreeCellsStillHaveObjects:
-        specializedSweep<SomeFreeCellsStillHaveObjects>();
-        break;
-    default:
-        ASSERT(destructorState() == AllFreeCellsHaveObjects);
-        specializedSweep<AllFreeCellsHaveObjects>();
-        break;
-    }
-}
-
-template<MarkedBlock::DestructorState specializedDestructorState>
-ALWAYS_INLINE MarkedBlock::FreeCell* MarkedBlock::produceFreeList()
-{
-    // This returns a free list that is ordered in reverse through the block.
+    // This produces a free list that is ordered in reverse through the block.
     // This is fine, since the allocation code makes no assumptions about the
     // order of the free list.
-    
+    FreeCell* head = 0;
     void* jsFinalObjectVPtr = m_heap->globalData()->jsFinalObjectVPtr;
-    
-    FreeCell* result = 0;
-    
     for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
-        if (!m_marks.testAndSet(i)) {
-            JSCell* cell = reinterpret_cast<JSCell*>(&atoms()[i]);
-            if (specializedDestructorState != FreeCellsDontHaveObjects)
-                callDestructor<specializedDestructorState>(cell, jsFinalObjectVPtr);
+        if (blockState == Marked && m_marks.get(i))
+            continue;
+
+        JSCell* cell = reinterpret_cast<JSCell*>(&atoms()[i]);
+        if (blockState == Zapped && !cell->isZapped())
+            continue;
+
+        if (blockState != New)
+            callDestructor(cell, jsFinalObjectVPtr);
+
+        if (sweepMode == SweepToFreeList) {
             FreeCell* freeCell = reinterpret_cast<FreeCell*>(cell);
-            freeCell->next = result;
-            result = freeCell;
+            freeCell->next = head;
+            head = freeCell;
         }
     }
-    
-    // This is sneaky: if we're producing a free list then we intend to
-    // fill up the free cells in the block with objects, which means that
-    // if we have a new GC then all of the free stuff in this block will
-    // comprise objects rather than empty cells.
-    setDestructorState(AllFreeCellsHaveObjects);
-
-    return result;
+
+    m_state = ((sweepMode == SweepToFreeList) ? FreeListed : Zapped);
+    return head;
 }
 
-MarkedBlock::FreeCell* MarkedBlock::lazySweep()
+MarkedBlock::FreeCell* MarkedBlock::sweep(SweepMode sweepMode)
 {
-    // This returns a free list that is ordered in reverse through the block.
-    // This is fine, since the allocation code makes no assumptions about the
-    // order of the free list.
-    
-    HEAP_DEBUG_BLOCK(this);
-    
-    switch (destructorState()) {
-    case FreeCellsDontHaveObjects:
-        return produceFreeList<FreeCellsDontHaveObjects>();
-    case SomeFreeCellsStillHaveObjects:
-        return produceFreeList<SomeFreeCellsStillHaveObjects>();
-    default:
-        ASSERT(destructorState() == AllFreeCellsHaveObjects);
-        return produceFreeList<AllFreeCellsHaveObjects>();
+    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
+
+    switch (m_state) {
+    case New:
+        ASSERT(sweepMode == SweepToFreeList);
+        return specializedSweep<New, SweepToFreeList>();
+    case FreeListed:
+        // Happens when a block transitions to fully allocated.
+        ASSERT(sweepMode == SweepToFreeList);
+        return 0;
+    case Allocated:
+        ASSERT_NOT_REACHED();
+        return 0;
+    case Marked:
+        return sweepMode == SweepToFreeList
+            ? specializedSweep<Marked, SweepToFreeList>()
+            : specializedSweep<Marked, SweepOnly>();
+    case Zapped:
+        return sweepMode == SweepToFreeList
+            ? specializedSweep<Zapped, SweepToFreeList>()
+            : specializedSweep<Zapped, SweepOnly>();
     }
 }
 
-MarkedBlock::FreeCell* MarkedBlock::blessNewBlock()
+void MarkedBlock::zapFreeList(FreeCell* firstFreeCell)
 {
-    // This returns a free list that is ordered in reverse through the block,
-    // as in lazySweep() above.
-    
-    HEAP_DEBUG_BLOCK(this);
+    HEAP_LOG_BLOCK_STATE_TRANSITION(this);
 
-    FreeCell* result = 0;
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
-        m_marks.set(i);
-        FreeCell* freeCell = reinterpret_cast<FreeCell*>(&atoms()[i]);
-        freeCell->next = result;
-        result = freeCell;
-    }
-    
-    // See produceFreeList(). If we're here then we intend to fill the
-    // block with objects, so once a GC happens, all free cells will be
-    // occupied by objects.
-    setDestructorState(AllFreeCellsHaveObjects);
+    // Roll back to a coherent state for Heap introspection. Cells newly
+    // allocated from our free list are not currently marked, so we need another
+    // way to tell what's live vs dead. We use zapping for that.
 
-    return result;
-}
-
-void MarkedBlock::canonicalizeBlock(FreeCell* firstFreeCell)
-{
-    HEAP_DEBUG_BLOCK(this);
-    
-    ASSERT(destructorState() == AllFreeCellsHaveObjects);
-    
-    if (firstFreeCell) {
-        for (FreeCell* current = firstFreeCell; current;) {
-            FreeCell* next = current->next;
-            size_t i = atomNumber(current);
-            
-            m_marks.clear(i);
-            
-            current->setNoObject();
-            
-            current = next;
-        }
-        
-        setDestructorState(SomeFreeCellsStillHaveObjects);
+    FreeCell* next;
+    for (FreeCell* current = firstFreeCell; current; current = next) {
+        next = current->next;
+        reinterpret_cast<JSCell*>(current)->zap();
     }
+
+    m_state = Zapped;
 }
 
 } // namespace JSC
diff --git a/Source/JavaScriptCore/heap/MarkedBlock.h b/Source/JavaScriptCore/heap/MarkedBlock.h
index 1ae587b..315835c 100644
 #define HEAP_LOG_BLOCK_STATE_TRANSITIONS 0
 
 #if HEAP_LOG_BLOCK_STATE_TRANSITIONS
-#define HEAP_DEBUG_BLOCK(block) do {                                    \
-        printf("%s:%d %s: block %s = %p\n",                             \
-               __FILE__, __LINE__, __FUNCTION__, #block, (block));      \
+#define HEAP_LOG_BLOCK_STATE_TRANSITION(block) do {                                  \
+        printf("%s:%d %s: block %s = %p, %d\n",                                      \
+               __FILE__, __LINE__, __FUNCTION__, #block, (block), (block)->m_state); \
     } while (false)
 #else
-#define HEAP_DEBUG_BLOCK(block) ((void)0)
+#define HEAP_LOG_BLOCK_STATE_TRANSITION(block) ((void)0)
 #endif
 
 namespace JSC {
@@ -52,6 +52,8 @@ namespace JSC {
     static const size_t KB = 1024;
     static const size_t MB = 1024 * 1024;
     
+    bool isZapped(const JSCell*);
+    
     // A marked block is a page-aligned container for heap-allocated objects.
     // Objects are allocated within cells of the marked block. For a given
     // marked block, all cells have the same size. Objects smaller than the
@@ -75,13 +77,6 @@ namespace JSC {
 
         struct FreeCell {
             FreeCell* next;
-            
-            void setNoObject()
-            {
-                // This relies on FreeCell not having a vtable, and the next field
-                // falling exactly where a vtable would have been.
-                next = 0;
-            }
         };
         
         struct VoidFunctor {
@@ -90,6 +85,7 @@ namespace JSC {
         };
 
         static MarkedBlock* create(Heap*, size_t cellSize);
+        static MarkedBlock* recycle(MarkedBlock*, size_t cellSize);
         static void destroy(MarkedBlock*);
 
         static bool isAtomAligned(const void*);
@@ -98,36 +94,20 @@ namespace JSC {
         
         Heap* heap() const;
         
-        bool inNewSpace();
-        void setInNewSpace(bool);
-
         void* allocate();
-        void sweep();
-        
-        // This invokes destructors on all cells that are not marked, marks
-        // them, and returns a linked list of those cells.
-        FreeCell* lazySweep();
-        
-        // Notify the block that destructors may have to be called again.
-        void notifyMayHaveFreshFreeCells();
-        
-        void initForCellSize(size_t cellSize);
-        
-        // These should be called immediately after a block is created.
-        // Blessing for fast path creates a linked list, while blessing for
-        // slow path creates dummy cells.
-        FreeCell* blessNewBlock();
-        
-        void reset();
-        
-        // This unmarks all cells on the free list, and allocates dummy JSCells
-        // in their place.
-        void canonicalizeBlock(FreeCell* firstFreeCell);
-        
-        bool isEmpty();
+
+        enum SweepMode { SweepOnly, SweepToFreeList };
+        FreeCell* sweep(SweepMode = SweepOnly);
+
+        // While allocating from a free list, MarkedBlock temporarily has bogus
+        // cell liveness data. To restore accurate cell liveness data, call one
+        // of these functions:
+        void didConsumeFreeList(); // Call this once you've allocated all the items in the free list.
+        void zapFreeList(FreeCell* firstFreeCell); // Call this to undo the free list.
 
         void clearMarks();
         size_t markCount();
+        bool markCountIsZero(); // Faster than markCount().
 
         size_t cellSize();
 
@@ -136,7 +116,8 @@ namespace JSC {
 
         bool isMarked(const void*);
         bool testAndSetMarked(const void*);
-        bool testAndClearMarked(const void*);
+        bool isLive(const JSCell*);
+        bool isLiveCell(const void*);
         void setMarked(const void*);
         
 #if ENABLE(GGC)
@@ -160,47 +141,25 @@ namespace JSC {
 
     private:
         static const size_t atomMask = ~(atomSize - 1); // atomSize must be a power of two.
-        
-        enum DestructorState { FreeCellsDontHaveObjects, SomeFreeCellsStillHaveObjects, AllFreeCellsHaveObjects };
+
+        enum BlockState { New, FreeListed, Allocated, Marked, Zapped };
 
         typedef char Atom[atomSize];
 
         MarkedBlock(const PageAllocationAligned&, Heap*, size_t cellSize);
         Atom* atoms();
-
         size_t atomNumber(const void*);
-        
-        template<DestructorState destructorState>
         void callDestructor(JSCell*, void* jsFinalObjectVPtr);
+        template<BlockState, SweepMode> FreeCell* specializedSweep();
         
-        template<DestructorState destructorState>
-        void specializedReset();
-        
-        template<DestructorState destructorState>
-        void specializedSweep();
-        
-        template<DestructorState destructorState>
-        MarkedBlock::FreeCell* produceFreeList();
-        
-        void setDestructorState(DestructorState destructorState)
-        {
-            m_destructorState = static_cast<int8_t>(destructorState);
-        }
-        
-        DestructorState destructorState()
-        {
-            return static_cast<DestructorState>(m_destructorState);
-        }
-
 #if ENABLE(GGC)
         CardSet<bytesPerCard, blockSize> m_cards;
 #endif
 
-        size_t m_endAtom; // This is a fuzzy end. Always test for < m_endAtom.
         size_t m_atomsPerCell;
+        size_t m_endAtom; // This is a fuzzy end. Always test for < m_endAtom.
         WTF::Bitmap<atomsPerBlock> m_marks;
-        bool m_inNewSpace;
-        int8_t m_destructorState; // use getters/setters for this, particularly since we may want to compact this (effectively log(3)/log(2)-bit) field into other fields
+        BlockState m_state;
         PageAllocationAligned m_allocation;
         Heap* m_heap;
         MarkedBlock* m_prev;
@@ -232,52 +191,36 @@ namespace JSC {
         return m_heap;
     }
 
-    inline bool MarkedBlock::inNewSpace()
+    inline void MarkedBlock::didConsumeFreeList()
     {
-        return m_inNewSpace;
-    }
-    
-    inline void MarkedBlock::setInNewSpace(bool inNewSpace)
-    {
-        m_inNewSpace = inNewSpace;
-    }
-    
-    inline void MarkedBlock::notifyMayHaveFreshFreeCells()
-    {
-        HEAP_DEBUG_BLOCK(this);
-        
-        // This is called at the beginning of GC. If this block is
-        // AllFreeCellsHaveObjects, then it means that we filled up
-        // the block in this collection. If it's in any other state,
-        // then this collection will potentially produce new free
-        // cells; new free cells always have objects. Hence if the
-        // state currently claims that there are no objects in free
-        // cells then we need to bump it over. Otherwise leave it be.
-        // This all crucially relies on the collector canonicalizing
-        // blocks before doing anything else, as canonicalizeBlocks
-        // will correctly set SomeFreeCellsStillHaveObjects for
-        // blocks that were only partially filled during this
-        // mutation cycle.
-        
-        if (destructorState() == FreeCellsDontHaveObjects)
-            setDestructorState(SomeFreeCellsStillHaveObjects);
-    }
+        HEAP_LOG_BLOCK_STATE_TRANSITION(this);
 
-    inline bool MarkedBlock::isEmpty()
-    {
-        return m_marks.isEmpty();
+        ASSERT(m_state == FreeListed);
+        m_state = Allocated;
     }
 
     inline void MarkedBlock::clearMarks()
     {
+        HEAP_LOG_BLOCK_STATE_TRANSITION(this);
+
+        ASSERT(m_state != New && m_state != FreeListed);
         m_marks.clearAll();
+
+        // This will become true at the end of the mark phase. We set it now to
+        // avoid an extra pass to do so later.
+        m_state = Marked;
     }
-    
+
     inline size_t MarkedBlock::markCount()
     {
         return m_marks.count();
     }
 
+    inline bool MarkedBlock::markCountIsZero()
+    {
+        return m_marks.isEmpty();
+    }
+
     inline size_t MarkedBlock::cellSize()
     {
         return m_atomsPerCell * atomSize;
@@ -308,22 +251,44 @@ namespace JSC {
         return m_marks.testAndSet(atomNumber(p));
     }
 
-    inline bool MarkedBlock::testAndClearMarked(const void* p)
+    inline void MarkedBlock::setMarked(const void* p)
     {
-        return m_marks.testAndClear(atomNumber(p));
+        m_marks.set(atomNumber(p));
     }
 
-    inline void MarkedBlock::setMarked(const void* p)
+    inline bool MarkedBlock::isLive(const JSCell* cell)
     {
-        m_marks.set(atomNumber(p));
+        switch (m_state) {
+        case Allocated:
+            return true;
+        case Zapped:
+            return !isZapped(cell);
+        case Marked:
+            return m_marks.get(atomNumber(cell));
+
+        case New:
+        case FreeListed:
+            ASSERT_NOT_REACHED();
+            return false;
+        }
+    }
+
+    inline bool MarkedBlock::isLiveCell(const void* p)
+    {
+        if ((atomNumber(p) - firstAtom()) % m_atomsPerCell) // Filters pointers to cell middles.
+            return false;
+
+        return isLive(static_cast<const JSCell*>(p));
     }
 
     template <typename Functor> inline void MarkedBlock::forEachCell(Functor& functor)
     {
         for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
-            if (!m_marks.get(i))
+            JSCell* cell = reinterpret_cast<JSCell*>(&atoms()[i]);
+            if (!isLive(cell))
                 continue;
-            functor(reinterpret_cast<JSCell*>(&atoms()[i]));
+
+            functor(cell);
         }
     }
 
diff --git a/Source/JavaScriptCore/heap/MarkedSpace.cpp b/Source/JavaScriptCore/heap/MarkedSpace.cpp
index 2bfd1eb..34cbe75 100644
@@ -44,21 +44,19 @@ MarkedSpace::MarkedSpace(Heap* heap)
 
 void MarkedSpace::addBlock(SizeClass& sizeClass, MarkedBlock* block)
 {
-    block->setInNewSpace(true);
-    sizeClass.nextBlock = block;
-    sizeClass.blockList.append(block);
     ASSERT(!sizeClass.currentBlock);
     ASSERT(!sizeClass.firstFreeCell);
+
+    sizeClass.blockList.append(block);
     sizeClass.currentBlock = block;
-    sizeClass.firstFreeCell = block->blessNewBlock();
+    sizeClass.firstFreeCell = block->sweep(MarkedBlock::SweepToFreeList);
 }
 
 void MarkedSpace::removeBlock(MarkedBlock* block)
 {
-    block->setInNewSpace(false);
     SizeClass& sizeClass = sizeClassFor(block->cellSize());
-    if (sizeClass.nextBlock == block)
-        sizeClass.nextBlock = block->next();
+    if (sizeClass.currentBlock == block)
+        sizeClass.currentBlock = 0;
     sizeClass.blockList.remove(block);
 }
 
@@ -73,13 +71,13 @@ void MarkedSpace::resetAllocator()
         sizeClassFor(cellSize).resetAllocator();
 }
 
-void MarkedSpace::canonicalizeBlocks()
+void MarkedSpace::canonicalizeCellLivenessData()
 {
     for (size_t cellSize = preciseStep; cellSize < preciseCutoff; cellSize += preciseStep)
-        sizeClassFor(cellSize).canonicalizeBlock();
+        sizeClassFor(cellSize).zapFreeList();
 
     for (size_t cellSize = impreciseStep; cellSize < impreciseCutoff; cellSize += impreciseStep)
-        sizeClassFor(cellSize).canonicalizeBlock();
+        sizeClassFor(cellSize).zapFreeList();
 }
 
 } // namespace JSC
diff --git a/Source/JavaScriptCore/heap/MarkedSpace.h b/Source/JavaScriptCore/heap/MarkedSpace.h
index 94ad593..9ec1ac5 100644
@@ -50,11 +50,10 @@ public:
     struct SizeClass {
         SizeClass();
         void resetAllocator();
-        void canonicalizeBlock();
+        void zapFreeList();
 
         MarkedBlock::FreeCell* firstFreeCell;
         MarkedBlock* currentBlock;
-        MarkedBlock* nextBlock;
         DoublyLinkedList<MarkedBlock> blockList;
         size_t cellSize;
     };
@@ -69,7 +68,7 @@ public:
     void addBlock(SizeClass&, MarkedBlock*);
     void removeBlock(MarkedBlock*);
     
-    void canonicalizeBlocks();
+    void canonicalizeCellLivenessData();
 
     size_t waterMark();
     size_t highWaterMark();
@@ -124,39 +123,21 @@ inline void* MarkedSpace::allocate(SizeClass& sizeClass)
 {
     MarkedBlock::FreeCell* firstFreeCell = sizeClass.firstFreeCell;
     if (!firstFreeCell) {
-        // There are two possibilities for why we got here:
-        // 1) We've exhausted the allocation cache for currentBlock, in which case
-        //    currentBlock == nextBlock, and we know that there is no reason to
-        //    repeat a lazy sweep of nextBlock because we won't find anything.
-        // 2) Allocation caches have been cleared, in which case nextBlock may
-        //    have (and most likely does have) free cells, so we almost certainly
-        //    should do a lazySweep for nextBlock. This also implies that
-        //    currentBlock == 0.
-        
-        if (sizeClass.currentBlock) {
-            ASSERT(sizeClass.currentBlock == sizeClass.nextBlock);
-            m_waterMark += sizeClass.nextBlock->capacity();
-            sizeClass.nextBlock = sizeClass.nextBlock->next();
-            sizeClass.currentBlock = 0;
-        }
-        
-        for (MarkedBlock*& block = sizeClass.nextBlock ; block; block = block->next()) {
-            firstFreeCell = block->lazySweep();
-            if (firstFreeCell) {
-                sizeClass.firstFreeCell = firstFreeCell;
-                sizeClass.currentBlock = block;
+        for (MarkedBlock*& block = sizeClass.currentBlock; block; block = block->next()) {
+            firstFreeCell = block->sweep(MarkedBlock::SweepToFreeList);
+            if (firstFreeCell)
                 break;
-            }
-            
+
             m_waterMark += block->capacity();
+            block->didConsumeFreeList();
         }
-        
+
         if (!firstFreeCell)
             return 0;
     }
-    
+
     ASSERT(firstFreeCell);
-    
+
     sizeClass.firstFreeCell = firstFreeCell->next;
     return firstFreeCell;
 }
@@ -193,26 +174,23 @@ template <typename Functor> inline typename Functor::ReturnType MarkedSpace::for
 inline MarkedSpace::SizeClass::SizeClass()
     : firstFreeCell(0)
     , currentBlock(0)
-    , nextBlock(0)
     , cellSize(0)
 {
 }
 
 inline void MarkedSpace::SizeClass::resetAllocator()
 {
-    nextBlock = blockList.head();
+    currentBlock = blockList.head();
 }
 
-inline void MarkedSpace::SizeClass::canonicalizeBlock()
+inline void MarkedSpace::SizeClass::zapFreeList()
 {
-    if (currentBlock) {
-        currentBlock->canonicalizeBlock(firstFreeCell);
-        firstFreeCell = 0;
+    if (!currentBlock) {
+        ASSERT(!firstFreeCell);
+        return;
     }
-    
-    ASSERT(!firstFreeCell);
-    
-    currentBlock = 0;
+
+    currentBlock->zapFreeList(firstFreeCell);
     firstFreeCell = 0;
 }
 
diff --git a/Source/JavaScriptCore/heap/Weak.h b/Source/JavaScriptCore/heap/Weak.h
index a235a57..7dd03d2 100644
@@ -144,7 +144,6 @@ private:
     {
         ASSERT(slot());
         JSValue value = HandleTypes<T>::toJSValue(externalType);
-        ASSERT(!value || !value.isCell() || Heap::isMarked(value.asCell()));
         HandleHeap::heapFor(slot())->writeBarrier(slot(), value);
         *slot() = value;
     }
diff --git a/Source/JavaScriptCore/runtime/JSCell.h b/Source/JavaScriptCore/runtime/JSCell.h
index a7b3df5..feb7de4 100644
@@ -96,8 +96,11 @@ namespace JSC {
 
         virtual JSObject* toThisObject(ExecState*) const;
         JSValue getJSNumber() const;
-        void* vptr() { return *reinterpret_cast<void**>(this); }
-        void setVPtr(void* vptr) { *reinterpret_cast<void**>(this) = vptr; }
+
+        void* vptr() const { ASSERT(!isZapped()); return *reinterpret_cast<void* const*>(this); }
+        void setVptr(void* vptr) { *reinterpret_cast<void**>(this) = vptr; ASSERT(!isZapped()); }
+        void zap() { *reinterpret_cast<uintptr_t**>(this) = 0; }
+        bool isZapped() const { return !*reinterpret_cast<uintptr_t* const*>(this); }
 
         // FIXME: Rename getOwnPropertySlot to virtualGetOwnPropertySlot, and
         // fastGetOwnPropertySlot to getOwnPropertySlot. Callers should always
@@ -346,6 +349,11 @@ namespace JSC {
 #endif
         return heap.allocate(sizeof(T));
     }
+    
+    inline bool isZapped(const JSCell* cell)
+    {
+        return cell->isZapped();
+    }
 
 } // namespace JSC