Copying collection shouldn't require O(live bytes) memory overhead
author mhahnenberg@apple.com <mhahnenberg@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Fri, 12 Oct 2012 19:38:35 +0000 (19:38 +0000)
committer mhahnenberg@apple.com <mhahnenberg@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Fri, 12 Oct 2012 19:38:35 +0000 (19:38 +0000)
https://bugs.webkit.org/show_bug.cgi?id=98792

Reviewed by Filip Pizlo.

Currently our copying collection occurs simultaneously with the marking phase. We'd like
to be able to reuse CopiedBlocks as soon as they become fully evacuated, but this is not
currently possible because we don't know the liveness statistics of each old CopiedBlock
until marking/copying has already finished. Instead, we have to allocate additional memory
from the OS to use as our working set of CopiedBlocks while copying. We then return the
fully evacuated old CopiedBlocks back to the block allocator, thus giving our copying phase
an O(live bytes) overhead.

To fix this, we should instead split the copying phase apart from the marking phase. This
way we have full liveness data for each CopiedBlock during the copying phase so that we
can reuse them the instant they become fully evacuated. With the additional liveness data
that each CopiedBlock accumulates, we can add some additional heuristics to the collector.
For example, we can calculate our global Heap fragmentation and only choose to do a copying
phase if that fragmentation exceeds some limit. As another example, we can skip copying
blocks that are already above a particular fragmentation limit, which allows older objects
to coalesce into blocks that are rarely copied.
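The global fragmentation heuristic described above can be sketched roughly as follows. The names mirror the new Options constant and CopiedSpace function, but the threshold value and the exact arithmetic here are illustrative, not the actual JSC implementation:

```cpp
#include <cstddef>

// Hypothetical threshold: only run a copying phase when overall heap
// utilization falls below this fraction (the real value is in Options.h).
constexpr double minHeapUtilization = 0.70;

// Decide whether the copying phase should commence. MarkedSpace is treated
// as 0% fragmented (its free memory is directly reusable), so it contributes
// only "fully utilized" bytes to the ratio.
bool shouldDoCopyPhase(size_t copiedLiveBytes, size_t copiedCapacity,
                       size_t markedSpaceBytes)
{
    size_t totalUtilized = copiedLiveBytes + markedSpaceBytes;
    size_t totalCapacity = copiedCapacity + markedSpaceBytes;
    if (!totalCapacity)
        return false;
    double utilization = static_cast<double>(totalUtilized) / totalCapacity;
    return utilization < minHeapUtilization;
}
```

Note how folding MarkedSpace into the denominator and numerator equally dampens the ratio: a heap dominated by MarkedSpace rarely triggers copying.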

* JavaScriptCore.xcodeproj/project.pbxproj:
* heap/CopiedBlock.h:
(CopiedBlock):
(JSC::CopiedBlock::CopiedBlock): Added support for tracking live bytes in a CopiedBlock in a
thread-safe fashion.
(JSC::CopiedBlock::reportLiveBytes): Adds a number of live bytes to the block in a thread-safe
fashion using compare and swap.
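A minimal sketch of that compare-and-swap pattern, using std::atomic as a stand-in for the WTF atomics the patch actually uses; the structure and field name are borrowed from the description above, not copied from the real CopiedBlock:

```cpp
#include <atomic>
#include <cstddef>

struct LiveByteCounter {
    std::atomic<size_t> m_liveBytes { 0 };

    // Multiple marking threads may report live bytes for the same block
    // concurrently, so the increment retries until its CAS wins the race.
    void reportLiveBytes(size_t bytes)
    {
        size_t oldValue = m_liveBytes.load(std::memory_order_relaxed);
        // compare_exchange_weak reloads oldValue on failure, so each retry
        // recomputes oldValue + bytes against the freshest value.
        while (!m_liveBytes.compare_exchange_weak(oldValue, oldValue + bytes)) { }
    }
};
```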
(JSC):
(JSC::CopiedBlock::didSurviveGC): Called when a block survives a single GC without being
evacuated. This can happen for a couple of reasons: (a) the block was pinned, or (b) we
decided not to do any copying. A block can become pinned for a few reasons: (1) a pointer into
the block was found during the conservative scan, (2) the block was deemed full enough to
not warrant any copying, or (3) the block is oversize and was found to be live.
(JSC::CopiedBlock::didEvacuateBytes): Called when some number of bytes are copied from this
block. If the number of live bytes ever hits zero, the block will return itself to the
BlockAllocator to be recycled.
(JSC::CopiedBlock::canBeRecycled): Indicates that a block has no live bytes and can be
immediately recycled. This is used for blocks that are found to have zero live bytes at the
beginning of the copying phase.
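The bookkeeping behind didEvacuateBytes and canBeRecycled can be sketched like this. The recycle step is reduced to a flag standing in for handing the block back to the BlockAllocator, and the constructor is an invented convenience:

```cpp
#include <cassert>
#include <cstddef>

struct BlockCounter {
    explicit BlockCounter(size_t liveBytes) : m_liveBytes(liveBytes) { }

    // A block with no live bytes can go straight back to the allocator.
    bool canBeRecycled() const { return !m_liveBytes; }

    // Called as each object's backing store is copied out of this block.
    void didEvacuateBytes(size_t bytes)
    {
        assert(m_liveBytes >= bytes);
        m_liveBytes -= bytes;
        if (!m_liveBytes)
            m_recycled = true; // would return the block to BlockAllocator
    }

    size_t m_liveBytes;
    bool m_recycled { false };
};
```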
(JSC::CopiedBlock::shouldEvacuate): This function returns true if the current fragmentation
of the block is above our fragmentation threshold, and false otherwise.
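The per-block test is the same shape as the global one, just scoped to a single block. Again the threshold value is made up; the real one is the new minCopiedBlockUtilization constant in Options.h:

```cpp
#include <cstddef>

// Hypothetical value: blocks less than 90% utilized are worth evacuating.
constexpr double minCopiedBlockUtilization = 0.90;

// Returns true when the block is fragmented enough to justify copying it;
// denser blocks are skipped, letting long-lived objects settle in place.
bool shouldEvacuate(size_t liveBytes, size_t blockCapacity)
{
    double utilization = static_cast<double>(liveBytes) / blockCapacity;
    return utilization < minCopiedBlockUtilization;
}
```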
(JSC::CopiedBlock::isPinned): Added an accessor for the pinned flag.
(JSC::CopiedBlock::liveBytes):
* heap/CopiedSpace.cpp:
(JSC::CopiedSpace::CopiedSpace):
(JSC::CopiedSpace::doneFillingBlock): Changed so that a thread can exchange its filled block for a
fresh block. This avoids the situation where a thread returns its borrowed block, that block happens
to be the last borrowed block, and CopiedSpace therefore concludes that copying has completed and
starts doing all of the copying phase cleanup, when in actuality the thread wanted another block
after returning the current one. So we allow the thread to atomically exchange its block for another.
(JSC::CopiedSpace::startedCopying): Added the calculation of global Heap fragmentation to
determine if the copying phase should commence. We include the MarkedSpace in our fragmentation
calculation by assuming that the MarkedSpace is 0% fragmented, since we can reuse any currently
free memory in it (i.e. we ignore any internal fragmentation in the MarkedSpace). While we're
calculating the fragmentation of CopiedSpace, we also return any free blocks (i.e. blocks with
liveBytes() == 0) that we find along the way.
(JSC):
(JSC::CopiedSpace::doneCopying): We still have to iterate over all the blocks, regardless of
whether the copying phase took place or not, so that we can reset all of the live bytes counters
and un-pin any pinned blocks.
* heap/CopiedSpace.h:
(CopiedSpace):
(JSC::CopiedSpace::shouldDoCopyPhase):
* heap/CopiedSpaceInlineMethods.h:
(JSC::CopiedSpace::recycleEvacuatedBlock): This function is distinct from recycling a borrowed block
because a borrowed block hasn't been added to the CopiedSpace yet, but an evacuated block is still
currently in CopiedSpace, so we have to make sure we properly remove all traces of the block from
CopiedSpace before returning it to BlockAllocator.
(JSC::CopiedSpace::recycleBorrowedBlock): Renamed to indicate the distinction mentioned above.
* heap/CopyVisitor.cpp: Added.
(JSC):
(JSC::CopyVisitor::CopyVisitor):
(JSC::CopyVisitor::copyFromShared): Main function for any thread participating in the copying phase.
Grabs chunks of MarkedBlocks from the shared list and copies the backing store of any object that
needs it until there are no more chunks to copy.
* heap/CopyVisitor.h: Added.
(JSC):
(CopyVisitor):
* heap/CopyVisitorInlineMethods.h: Added.
(JSC):
(GCCopyPhaseFunctor):
(JSC::GCCopyPhaseFunctor::GCCopyPhaseFunctor):
(JSC::GCCopyPhaseFunctor::operator()):
(JSC::CopyVisitor::checkIfShouldCopy): We don't have to check shouldEvacuate() because all of those
checks are done during the marking phase.
(JSC::CopyVisitor::allocateNewSpace):
(JSC::CopyVisitor::allocateNewSpaceSlow):
(JSC::CopyVisitor::startCopying): Initialization function for a thread that is about to start copying.
(JSC::CopyVisitor::doneCopying):
(JSC::CopyVisitor::didCopy): This callback is called by an object that has just successfully copied its
backing store. It indicates to the CopiedBlock that somebody has just finished evacuating some number of
bytes from it and that, if the CopiedBlock now has no more live bytes, the block can be recycled immediately.
* heap/GCThread.cpp: Added.
(JSC):
(JSC::GCThread::GCThread): This is a new class that encapsulates a single thread responsible for participating
in a specific set of GC phases. Currently, that set of phases includes Mark, Copy, and Exit. Each thread
monitors a shared variable in its associated GCThreadSharedData. The main thread updates this m_currentPhase
variable as collection progresses through the various phases. Parallel marking still works exactly as it
did before. In other words, the "run loop" for each of the GC threads sits above any individual phase, thus
keeping the separate phases of the collector orthogonal.
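The phase hand-off between the main thread and the GC threads can be sketched with standard condition variables. The names mirror the patch (waitForNextPhase, the m_currentPhase idea), but the synchronization details below are illustrative, not JSC's actual code:

```cpp
#include <condition_variable>
#include <mutex>

enum class GCPhase { NoPhase, Mark, Copy, Exit };

struct SharedPhase {
    std::mutex lock;
    std::condition_variable condition;
    GCPhase current = GCPhase::NoPhase;

    // Main thread: advance the collection to the next phase and wake
    // every GC thread blocked in waitForNextPhase().
    void advanceTo(GCPhase phase)
    {
        {
            std::lock_guard<std::mutex> guard(lock);
            current = phase;
        }
        condition.notify_all();
    }

    // GC thread: block until the main thread announces a phase. The
    // returned value tells the thread's run loop which visitor to use
    // (SlotVisitor for Mark, CopyVisitor for Copy) or when to exit.
    GCPhase waitForNextPhase()
    {
        std::unique_lock<std::mutex> guard(lock);
        condition.wait(guard, [&] { return current != GCPhase::NoPhase; });
        return current;
    }
};
```

Keeping the wait loop above any individual phase is what lets marking and copying stay orthogonal: a thread returns to waitForNextPhase() between phases instead of one phase driving the next.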
(JSC::GCThread::threadID):
(JSC::GCThread::initializeThreadID):
(JSC::GCThread::slotVisitor):
(JSC::GCThread::copyVisitor):
(JSC::GCThread::waitForNextPhase):
(JSC::GCThread::gcThreadMain):
(JSC::GCThread::gcThreadStartFunc):
* heap/GCThread.h: Added.
(JSC):
(GCThread):
* heap/GCThreadSharedData.cpp: The GCThreadSharedData now has a list of GCThread objects rather than raw
ThreadIdentifiers.
(JSC::GCThreadSharedData::resetChildren):
(JSC::GCThreadSharedData::childVisitCount):
(JSC::GCThreadSharedData::GCThreadSharedData):
(JSC::GCThreadSharedData::~GCThreadSharedData):
(JSC::GCThreadSharedData::reset):
(JSC::GCThreadSharedData::didStartMarking): Callback to let the GCThreadSharedData know that marking
has started; it updates the m_currentPhase variable and notifies the GCThreads accordingly.
(JSC::GCThreadSharedData::didFinishMarking): Ditto for finishing marking.
(JSC::GCThreadSharedData::didStartCopying): Ditto for starting the copying phase.
(JSC::GCThreadSharedData::didFinishCopying): Ditto for finishing copying.
* heap/GCThreadSharedData.h:
(JSC):
(GCThreadSharedData):
(JSC::GCThreadSharedData::getNextBlocksToCopy): Atomically gets the next chunk of work for a copying thread.
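Handing out copying work in chunks can be sketched as below. The real getNextBlocksToCopy operates on the Heap's shared list of MarkedBlocks; this stand-in uses ints as work items and a mutex for the atomicity, and the chunk size is an invented value:

```cpp
#include <algorithm>
#include <cstddef>
#include <mutex>
#include <utility>
#include <vector>

struct CopyWorkList {
    std::mutex lock;
    std::vector<int> blocks; // stand-in for the shared MarkedBlock list
    size_t next = 0;
    static constexpr size_t chunkSize = 2; // illustrative chunk size

    // Atomically claim the next slice of work for one copying thread.
    // Returns [begin, end) indices; begin == end means no work is left.
    std::pair<size_t, size_t> getNextBlocksToCopy()
    {
        std::lock_guard<std::mutex> guard(lock);
        size_t begin = next;
        size_t end = std::min(blocks.size(), begin + chunkSize);
        next = end;
        return { begin, end };
    }
};
```

Each copying thread loops on getNextBlocksToCopy() inside copyFromShared() until it receives an empty range, so the threads naturally load-balance without any central scheduler.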
* heap/Heap.cpp:
(JSC::Heap::Heap):
(JSC::Heap::markRoots):
(JSC):
(JSC::Heap::copyBackingStores): Responsible for setting up the copying phase, notifying the copying threads,
and doing any copying work if necessary.
(JSC::Heap::collect):
* heap/Heap.h:
(Heap):
(JSC):
(JSC::CopyFunctor::CopyFunctor):
(CopyFunctor):
(JSC::CopyFunctor::operator()):
* heap/IncrementalSweeper.cpp: Changed the incremental sweeper to have a reference to the list of MarkedBlocks
that need sweeping, since this list now resides in the Heap so that it can be easily shared by the GCThreads.
(JSC::IncrementalSweeper::IncrementalSweeper):
(JSC::IncrementalSweeper::startSweeping):
* heap/IncrementalSweeper.h:
(JSC):
(IncrementalSweeper):
* heap/SlotVisitor.cpp:
(JSC::SlotVisitor::setup):
(JSC::SlotVisitor::drainFromShared): We no longer do any copying-related work here.
(JSC):
* heap/SlotVisitor.h:
(SlotVisitor):
* heap/SlotVisitorInlineMethods.h:
(JSC):
(JSC::SlotVisitor::copyLater): Notifies the CopiedBlock that there are some live bytes that may need
to be copied.
* runtime/Butterfly.h:
(JSC):
(Butterfly):
* runtime/ButterflyInlineMethods.h:
(JSC::Butterfly::createUninitializedDuringCollection): Uses the new CopyVisitor.
* runtime/ClassInfo.h:
(MethodTable): Added a new "virtual" function, copyBackingStore, to the method table.
(JSC):
* runtime/JSCell.cpp:
(JSC::JSCell::copyBackingStore): Default implementation that does nothing.
(JSC):
* runtime/JSCell.h:
(JSC):
(JSCell):
* runtime/JSObject.cpp:
(JSC::JSObject::copyButterfly): Does the actual copying of the butterfly.
(JSC):
(JSC::JSObject::visitButterfly): Calls copyLater for the butterfly.
(JSC::JSObject::copyBackingStore):
* runtime/JSObject.h:
(JSObject):
(JSC::JSCell::methodTable):
(JSC::JSCell::inherits):
* runtime/Options.h: Added two new constants, minHeapUtilization and minCopiedBlockUtilization,
to govern the amount of fragmentation we allow before doing copying.
(JSC):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@131213 268f45cc-cd09-0410-ab3c-d52691b4dbfc

33 files changed:
Source/JavaScriptCore/CMakeLists.txt
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/GNUmakefile.list.am
Source/JavaScriptCore/JavaScriptCore.vcproj/JavaScriptCore/JavaScriptCore.def
Source/JavaScriptCore/JavaScriptCore.vcproj/JavaScriptCore/JavaScriptCore.vcproj
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/Target.pri
Source/JavaScriptCore/heap/CopiedBlock.h
Source/JavaScriptCore/heap/CopiedSpace.cpp
Source/JavaScriptCore/heap/CopiedSpace.h
Source/JavaScriptCore/heap/CopiedSpaceInlineMethods.h
Source/JavaScriptCore/heap/CopyVisitor.cpp [new file with mode: 0644]
Source/JavaScriptCore/heap/CopyVisitor.h [new file with mode: 0644]
Source/JavaScriptCore/heap/CopyVisitorInlineMethods.h [new file with mode: 0644]
Source/JavaScriptCore/heap/GCThread.cpp [new file with mode: 0644]
Source/JavaScriptCore/heap/GCThread.h [new file with mode: 0644]
Source/JavaScriptCore/heap/GCThreadSharedData.cpp
Source/JavaScriptCore/heap/GCThreadSharedData.h
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/heap/Heap.h
Source/JavaScriptCore/heap/IncrementalSweeper.cpp
Source/JavaScriptCore/heap/IncrementalSweeper.h
Source/JavaScriptCore/heap/SlotVisitor.cpp
Source/JavaScriptCore/heap/SlotVisitor.h
Source/JavaScriptCore/heap/SlotVisitorInlineMethods.h
Source/JavaScriptCore/runtime/Butterfly.h
Source/JavaScriptCore/runtime/ButterflyInlineMethods.h
Source/JavaScriptCore/runtime/ClassInfo.h
Source/JavaScriptCore/runtime/JSCell.cpp
Source/JavaScriptCore/runtime/JSCell.h
Source/JavaScriptCore/runtime/JSObject.cpp
Source/JavaScriptCore/runtime/JSObject.h
Source/JavaScriptCore/runtime/Options.h

index f84d01d..4656c5a 100644 (file)
@@ -107,8 +107,10 @@ SET(JavaScriptCore_SOURCES
 
     heap/BlockAllocator.cpp
     heap/CopiedSpace.cpp
+    heap/CopyVisitor.cpp
     heap/ConservativeRoots.cpp
     heap/DFGCodeBlocks.cpp
+    heap/GCThread.cpp
     heap/GCThreadSharedData.cpp
     heap/HandleSet.cpp
     heap/HandleStack.cpp
index cb4ae18..c84d5b3 100644 (file)
@@ -1,3 +1,190 @@
+2012-10-09  Mark Hahnenberg  <mhahnenberg@apple.com>
+
+        Copying collection shouldn't require O(live bytes) memory overhead
+        https://bugs.webkit.org/show_bug.cgi?id=98792
+
+        Reviewed by Filip Pizlo.
+
+        Currently our copying collection occurs simultaneously with the marking phase. We'd like 
+        to be able to reuse CopiedBlocks as soon as they become fully evacuated, but this is not 
+        currently possible because we don't know the liveness statistics of each old CopiedBlock 
+        until marking/copying has already finished. Instead, we have to allocate additional memory 
+        from the OS to use as our working set of CopiedBlocks while copying. We then return the 
+        fully evacuated old CopiedBlocks back to the block allocator, thus giving our copying phase 
+        an O(live bytes) overhead.
+
+        To fix this, we should instead split the copying phase apart from the marking phase. This 
+        way we have full liveness data for each CopiedBlock during the copying phase so that we 
+        can reuse them the instant they become fully evacuated. With the additional liveness data 
+        that each CopiedBlock accumulates, we can add some additional heuristics to the collector. 
+        For example, we can calculate our global Heap fragmentation and only choose to do a copying 
+        phase if that fragmentation exceeds some limit. As another example, we can skip copying 
+        blocks that are already above a particular fragmentation limit, which allows older objects 
+        to coalesce into blocks that are rarely copied.
+
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * heap/CopiedBlock.h:
+        (CopiedBlock):
+        (JSC::CopiedBlock::CopiedBlock): Added support for tracking live bytes in a CopiedBlock in a 
+        thread-safe fashion.
+        (JSC::CopiedBlock::reportLiveBytes): Adds a number of live bytes to the block in a thread-safe 
+        fashion using compare and swap.
+        (JSC):
+        (JSC::CopiedBlock::didSurviveGC): Called when a block survives a single GC without being 
+        evacuated. This could be called for a couple reasons: (a) the block was pinned or (b) we 
+        decided not to do any copying. A block can become pinned for a few reasons: (1) a pointer into 
+        the block was found during the conservative scan. (2) the block was deemed full enough to 
+        not warrant any copying. (3) The block is oversize and was found to be live. 
+        (JSC::CopiedBlock::didEvacuateBytes): Called when some number of bytes are copied from this 
+        block. If the number of live bytes ever hits zero, the block will return itself to the 
+        BlockAllocator to be recycled.
+        (JSC::CopiedBlock::canBeRecycled): Indicates that a block has no live bytes and can be 
+        immediately recycled. This is used for blocks that are found to have zero live bytes at the 
+        beginning of the copying phase.
+        (JSC::CopiedBlock::shouldEvacuate): This function returns true if the current fragmentation 
+        of the block is above our fragmentation threshold, and false otherwise.
+        (JSC::CopiedBlock::isPinned): Added an accessor for the pinned flag
+        (JSC::CopiedBlock::liveBytes): 
+        * heap/CopiedSpace.cpp:
+        (JSC::CopiedSpace::CopiedSpace):
+        (JSC::CopiedSpace::doneFillingBlock): Changed so that we can exchange our filled block for a 
+        fresh block. This avoids the situation where a thread returns its borrowed block, it's the last 
+        borrowed block, so CopiedSpace thinks that copying has completed, and it starts doing all of the 
+        copying phase cleanup. In actuality, the thread wanted another block after returning the current 
+        block. So we allow the thread to atomically exchange its block for another block.
+        (JSC::CopiedSpace::startedCopying): Added the calculation of global Heap fragmentation to 
+        determine if the copying phase should commence. We include the MarkedSpace in our fragmentation 
+        calculation by assuming that the MarkedSpace is 0% fragmented since we can reuse any currently 
+        free memory in it (i.e. we ignore any internal fragmentation in the MarkedSpace). While we're 
+        calculating the fragmentation of CopiedSpace, we also return any free blocks we find along the 
+        way (meaning liveBytes() == 0).
+        (JSC):
+        (JSC::CopiedSpace::doneCopying): We still have to iterate over all the blocks, regardless of
+        whether the copying phase took place or not so that we can reset all of the live bytes counters 
+        and un-pin any pinned blocks.
+        * heap/CopiedSpace.h:
+        (CopiedSpace):
+        (JSC::CopiedSpace::shouldDoCopyPhase):
+        * heap/CopiedSpaceInlineMethods.h:
+        (JSC::CopiedSpace::recycleEvacuatedBlock): This function is distinct from recycling a borrowed block 
+        because a borrowed block hasn't been added to the CopiedSpace yet, but an evacuated block is still
+        currently in CopiedSpace, so we have to make sure we properly remove all traces of the block from 
+        CopiedSpace before returning it to BlockAllocator.
+        (JSC::CopiedSpace::recycleBorrowedBlock): Renamed to indicate the distinction mentioned above.
+        * heap/CopyVisitor.cpp: Added.
+        (JSC):
+        (JSC::CopyVisitor::CopyVisitor):
+        (JSC::CopyVisitor::copyFromShared): Main function for any thread participating in the copying phase.
+        Grabs chunks of MarkedBlocks from the shared list and copies the backing store of anybody who needs
+        it until there are no more chunks to copy.
+        * heap/CopyVisitor.h: Added.
+        (JSC):
+        (CopyVisitor):
+        * heap/CopyVisitorInlineMethods.h: Added.
+        (JSC):
+        (GCCopyPhaseFunctor):
+        (JSC::GCCopyPhaseFunctor::GCCopyPhaseFunctor):
+        (JSC::GCCopyPhaseFunctor::operator()):
+        (JSC::CopyVisitor::checkIfShouldCopy): We don't have to check shouldEvacuate() because all of those 
+        checks are done during the marking phase.
+        (JSC::CopyVisitor::allocateNewSpace): 
+        (JSC::CopyVisitor::allocateNewSpaceSlow):
+        (JSC::CopyVisitor::startCopying): Initialization function for a thread that is about to start copying.
+        (JSC::CopyVisitor::doneCopying):
+        (JSC::CopyVisitor::didCopy): This callback is called by an object that has just successfully copied its
+        backing store. It indicates to the CopiedBlock that somebody has just finished evacuating some number of 
+        bytes from it, and, if the CopiedBlock now has no more live bytes, can be recycled immediately.
+        * heap/GCThread.cpp: Added.
+        (JSC):
+        (JSC::GCThread::GCThread): This is a new class that encapsulates a single thread responsible for participating 
+        in a specific set of GC phases. Currently, that set of phases includes Mark, Copy, and Exit. Each thread 
+        monitors a shared variable in its associated GCThreadSharedData. The main thread updates this m_currentPhase
+        variable as collection progresses through the various phases. Parallel marking still works exactly like it 
+        has. In other words, the "run loop" for each of the GC threads sits above any individual phase, thus keeping 
+        the separate phases of the collector orthogonal.
+        (JSC::GCThread::threadID):
+        (JSC::GCThread::initializeThreadID):
+        (JSC::GCThread::slotVisitor):
+        (JSC::GCThread::copyVisitor):
+        (JSC::GCThread::waitForNextPhase):
+        (JSC::GCThread::gcThreadMain):
+        (JSC::GCThread::gcThreadStartFunc):
+        * heap/GCThread.h: Added.
+        (JSC):
+        (GCThread):
+        * heap/GCThreadSharedData.cpp: The GCThreadSharedData now has a list of GCThread objects rather than raw 
+        ThreadIdentifiers.
+        (JSC::GCThreadSharedData::resetChildren):
+        (JSC::GCThreadSharedData::childVisitCount):
+        (JSC::GCThreadSharedData::GCThreadSharedData):
+        (JSC::GCThreadSharedData::~GCThreadSharedData):
+        (JSC::GCThreadSharedData::reset):
+        (JSC::GCThreadSharedData::didStartMarking): Callback to let the GCThreadSharedData know that marking has 
+        started and updates the m_currentPhase variable and notifies the GCThreads accordingly.
+        (JSC::GCThreadSharedData::didFinishMarking): Ditto for finishing marking. 
+        (JSC::GCThreadSharedData::didStartCopying): Ditto for starting the copying phase.
+        (JSC::GCThreadSharedData::didFinishCopying): Ditto for finishing copying. 
+        * heap/GCThreadSharedData.h:
+        (JSC):
+        (GCThreadSharedData):
+        (JSC::GCThreadSharedData::getNextBlocksToCopy): Atomically gets the next chunk of work for a copying thread.
+        * heap/Heap.cpp:
+        (JSC::Heap::Heap):
+        (JSC::Heap::markRoots):
+        (JSC):
+        (JSC::Heap::copyBackingStores): Responsible for setting up the copying phase, notifying the copying threads, 
+        and doing any copying work if necessary.
+        (JSC::Heap::collect):
+        * heap/Heap.h:
+        (Heap):
+        (JSC):
+        (JSC::CopyFunctor::CopyFunctor):
+        (CopyFunctor):
+        (JSC::CopyFunctor::operator()):
+        * heap/IncrementalSweeper.cpp: Changed the incremental sweeper to have a reference to the list of MarkedBlocks 
+        that need sweeping, since this now resides in the Heap so that it can be easily shared by the GCThreads.
+        (JSC::IncrementalSweeper::IncrementalSweeper):
+        (JSC::IncrementalSweeper::startSweeping):
+        * heap/IncrementalSweeper.h:
+        (JSC):
+        (IncrementalSweeper):
+        * heap/SlotVisitor.cpp:
+        (JSC::SlotVisitor::setup):
+        (JSC::SlotVisitor::drainFromShared): We no longer do any copying-related work here.
+        (JSC):
+        * heap/SlotVisitor.h:
+        (SlotVisitor):
+        * heap/SlotVisitorInlineMethods.h:
+        (JSC):
+        (JSC::SlotVisitor::copyLater): Notifies the CopiedBlock that there are some live bytes that may need 
+        to be copied.
+        * runtime/Butterfly.h:
+        (JSC):
+        (Butterfly):
+        * runtime/ButterflyInlineMethods.h:
+        (JSC::Butterfly::createUninitializedDuringCollection): Uses the new CopyVisitor.
+        * runtime/ClassInfo.h:
+        (MethodTable): Added new "virtual" function copyBackingStore to method table.
+        (JSC):
+        * runtime/JSCell.cpp:
+        (JSC::JSCell::copyBackingStore): Default implementation that does nothing.
+        (JSC):
+        * runtime/JSCell.h:
+        (JSC):
+        (JSCell):
+        * runtime/JSObject.cpp:
+        (JSC::JSObject::copyButterfly): Does the actual copying of the butterfly.
+        (JSC):
+        (JSC::JSObject::visitButterfly): Calls copyLater for the butterfly.
+        (JSC::JSObject::copyBackingStore): 
+        * runtime/JSObject.h:
+        (JSObject):
+        (JSC::JSCell::methodTable):
+        (JSC::JSCell::inherits):
+        * runtime/Options.h: Added two new constants, minHeapUtilization and minCopiedBlockUtilization, 
+        to govern the amount of fragmentation we allow before doing copying.
+        (JSC):
+
 2012-10-12  Filip Pizlo  <fpizlo@apple.com>
 
         DFG array allocation calls should not return an encoded JSValue
index 752e570..235beb1 100644 (file)
@@ -256,6 +256,9 @@ javascriptcore_sources += \
        Source/JavaScriptCore/heap/CopiedSpace.cpp \
        Source/JavaScriptCore/heap/CopiedSpace.h \
        Source/JavaScriptCore/heap/CopiedSpaceInlineMethods.h \
+    Source/JavaScriptCore/heap/CopyVisitor.h \
+    Source/JavaScriptCore/heap/CopyVisitorInlineMethods.h \
+    Source/JavaScriptCore/heap/CopyVisitor.cpp \
        Source/JavaScriptCore/heap/CardSet.h \
        Source/JavaScriptCore/heap/ConservativeRoots.cpp \
        Source/JavaScriptCore/heap/ConservativeRoots.h \
@@ -280,6 +283,8 @@ javascriptcore_sources += \
        Source/JavaScriptCore/heap/BlockAllocator.h \
     Source/JavaScriptCore/heap/GCThreadSharedData.cpp \
     Source/JavaScriptCore/heap/GCThreadSharedData.h \
+    Source/JavaScriptCore/heap/GCThread.cpp \
+    Source/JavaScriptCore/heap/GCThread.h \
        Source/JavaScriptCore/heap/Heap.cpp \
        Source/JavaScriptCore/heap/Heap.h \
     Source/JavaScriptCore/heap/HeapStatistics.cpp \
index a8aa9a4..0724ca1 100755 (executable)
@@ -116,6 +116,7 @@ EXPORTS
     ?convertLatin1ToUTF8@Unicode@WTF@@YA?AW4ConversionResult@12@PAPBEPBEPAPADPAD@Z
     ?convertUTF16ToUTF8@Unicode@WTF@@YA?AW4ConversionResult@12@PAPB_WPB_WPAPADPAD_N@Z
     ?convertUTF8ToUTF16@Unicode@WTF@@YA?AW4ConversionResult@12@PAPBDPBDPAPA_WPA_W_N@Z
+    ?copyBackingStore@JSObject@JSC@@SAXPAVJSCell@2@AAVCopyVisitor@2@@Z
     ?create@JSFunction@JSC@@SAPAV12@PAVExecState@2@PAVJSGlobalObject@2@HABVString@WTF@@P6I_J0@ZW4Intrinsic@2@3@Z
     ?create@JSGlobalData@JSC@@SA?AV?$PassRefPtr@VJSGlobalData@JSC@@@WTF@@W4ThreadStackType@2@W4HeapType@2@@Z
     ?create@RegExp@JSC@@SAPAV12@AAVJSGlobalData@2@ABVString@WTF@@W4RegExpFlags@2@@Z
index 86f8d11..0f96a60 100644 (file)
                                >
                        </File>
                        <File
+                               RelativePath="..\..\heap\CopyVisitor.cpp"
+                               >
+                       </File>
+                       <File
+                               RelativePath="..\..\heap\CopyVisitor.h"
+                               >
+                       </File>
+                       <File
+                               RelativePath="..\..\heap\CopyVisitorInlineMethods.h"
+                               >
+                       </File>
+                       <File
                                RelativePath="..\..\heap\DFGCodeBlocks.cpp"
                                >
                        </File>
                                >
                        </File>
                        <File
+                               RelativePath="..\..\heap\GCThread.cpp"
+                               >
+                       </File>
+                       <File
+                               RelativePath="..\..\heap\GCThread.h"
+                               >
+                       </File>
+                       <File
                                RelativePath="..\..\heap\GCThreadSharedData.cpp"
                                >
                        </File>
index ce349da..6ebdb9a 100644 (file)
                C21122E215DD9AB300790E3A /* GCThreadSharedData.h in Headers */ = {isa = PBXBuildFile; fileRef = C21122DF15DD9AB300790E3A /* GCThreadSharedData.h */; settings = {ATTRIBUTES = (Private, ); }; };
                C21122E315DD9AB300790E3A /* MarkStackInlineMethods.h in Headers */ = {isa = PBXBuildFile; fileRef = C21122E015DD9AB300790E3A /* MarkStackInlineMethods.h */; settings = {ATTRIBUTES = (Private, ); }; };
                C2160FE715F7E95E00942DFC /* SlotVisitorInlineMethods.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FCB408515C0A3C30048932B /* SlotVisitorInlineMethods.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               C2239D1716262BDD005AC5FD /* CopyVisitor.cpp in Sources */ = {isa = PBXBuildFile; fileRef = C2239D1216262BDD005AC5FD /* CopyVisitor.cpp */; };
+               C2239D1816262BDD005AC5FD /* CopyVisitor.h in Headers */ = {isa = PBXBuildFile; fileRef = C2239D1316262BDD005AC5FD /* CopyVisitor.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               C2239D1916262BDD005AC5FD /* CopyVisitorInlineMethods.h in Headers */ = {isa = PBXBuildFile; fileRef = C2239D1416262BDD005AC5FD /* CopyVisitorInlineMethods.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               C2239D1A16262BDD005AC5FD /* GCThread.cpp in Sources */ = {isa = PBXBuildFile; fileRef = C2239D1516262BDD005AC5FD /* GCThread.cpp */; };
+               C2239D1B16262BDD005AC5FD /* GCThread.h in Headers */ = {isa = PBXBuildFile; fileRef = C2239D1616262BDD005AC5FD /* GCThread.h */; };
                C225494315F7DBAA0065E898 /* SlotVisitor.cpp in Sources */ = {isa = PBXBuildFile; fileRef = C225494215F7DBAA0065E898 /* SlotVisitor.cpp */; };
                C22B31B9140577D700DB475A /* SamplingCounter.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F77008E1402FDD60078EB39 /* SamplingCounter.h */; settings = {ATTRIBUTES = (Private, ); }; };
                C240305514B404E60079EB64 /* CopiedSpace.cpp in Sources */ = {isa = PBXBuildFile; fileRef = C240305314B404C90079EB64 /* CopiedSpace.cpp */; };
                C21122DE15DD9AB300790E3A /* GCThreadSharedData.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = GCThreadSharedData.cpp; sourceTree = "<group>"; };
                C21122DF15DD9AB300790E3A /* GCThreadSharedData.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GCThreadSharedData.h; sourceTree = "<group>"; };
                C21122E015DD9AB300790E3A /* MarkStackInlineMethods.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MarkStackInlineMethods.h; sourceTree = "<group>"; };
+               C2239D1216262BDD005AC5FD /* CopyVisitor.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CopyVisitor.cpp; sourceTree = "<group>"; };
+               C2239D1316262BDD005AC5FD /* CopyVisitor.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CopyVisitor.h; sourceTree = "<group>"; };
+               C2239D1416262BDD005AC5FD /* CopyVisitorInlineMethods.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CopyVisitorInlineMethods.h; sourceTree = "<group>"; };
+               C2239D1516262BDD005AC5FD /* GCThread.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = GCThread.cpp; sourceTree = "<group>"; };
+               C2239D1616262BDD005AC5FD /* GCThread.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GCThread.h; sourceTree = "<group>"; };
                C225494215F7DBAA0065E898 /* SlotVisitor.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = SlotVisitor.cpp; sourceTree = "<group>"; };
                C240305314B404C90079EB64 /* CopiedSpace.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CopiedSpace.cpp; sourceTree = "<group>"; };
                C24D31E0161CD695002AA4DB /* HeapStatistics.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = HeapStatistics.cpp; sourceTree = "<group>"; };
                142E312A134FF0A600AFADB5 /* heap */ = {
                        isa = PBXGroup;
                        children = (
+                               C2239D1216262BDD005AC5FD /* CopyVisitor.cpp */,
+                               C2239D1316262BDD005AC5FD /* CopyVisitor.h */,
+                               C2239D1416262BDD005AC5FD /* CopyVisitorInlineMethods.h */,
+                               C2239D1516262BDD005AC5FD /* GCThread.cpp */,
+                               C2239D1616262BDD005AC5FD /* GCThread.h */,
                                C24D31E0161CD695002AA4DB /* HeapStatistics.cpp */,
                                C24D31E1161CD695002AA4DB /* HeapStatistics.h */,
                                C225494215F7DBAA0065E898 /* SlotVisitor.cpp */,
                                86ADD1450FDDEA980006EEC2 /* ARMv7Assembler.h in Headers */,
                                C2EAD2FC14F0249800A4B159 /* CopiedAllocator.h in Headers */,
                                C2B916C214DA014E00CBAC86 /* MarkedAllocator.h in Headers */,
+                               C2239D1816262BDD005AC5FD /* CopyVisitor.h in Headers */,
+                               C2239D1916262BDD005AC5FD /* CopyVisitorInlineMethods.h in Headers */,
                                C24D31E3161CD695002AA4DB /* HeapStatistics.h in Headers */,
                                C2A7F688160432D400F76B98 /* JSDestructibleObject.h in Headers */,
                                FE20CE9E15F04A9500DF3430 /* LLIntCLoop.h in Headers */,
                                862553D216136E1A009F17D0 /* JSProxy.h in Headers */,
                                0F5541B21613C1FB00CE3E25 /* SpecialPointer.h in Headers */,
                                0FEB3ECD16237F4D00AB67AD /* TypedArrayDescriptor.h in Headers */,
+                               C2239D1B16262BDD005AC5FD /* GCThread.h in Headers */,
                        );
                        runOnlyForDeploymentPostprocessing = 0;
                };
                                0F5541B11613C1FB00CE3E25 /* SpecialPointer.cpp in Sources */,
                                0FEB3ECF16237F6C00AB67AD /* MacroAssembler.cpp in Sources */,
                                C24D31E2161CD695002AA4DB /* HeapStatistics.cpp in Sources */,
+                               C2239D1716262BDD005AC5FD /* CopyVisitor.cpp in Sources */,
+                               C2239D1A16262BDD005AC5FD /* GCThread.cpp in Sources */,
                        );
                        runOnlyForDeploymentPostprocessing = 0;
                };
index 7d75c9b..861dbc7 100644
@@ -73,6 +73,7 @@ SOURCES += \
     bytecompiler/BytecodeGenerator.cpp \
     bytecompiler/NodesCodegen.cpp \
     heap/CopiedSpace.cpp \
+    heap/CopyVisitor.cpp \
     heap/ConservativeRoots.cpp \
     heap/DFGCodeBlocks.cpp \
     heap/WeakSet.cpp \
@@ -82,6 +83,7 @@ SOURCES += \
     heap/HandleStack.cpp \
     heap/BlockAllocator.cpp \
     heap/GCThreadSharedData.cpp \
+    heap/GCThread.cpp \
     heap/Heap.cpp \
     heap/HeapStatistics.cpp \
     heap/HeapTimer.cpp \
index 582d1cc..eb57efc 100644
@@ -30,6 +30,8 @@
 #include "HeapBlock.h"
 #include "JSValue.h"
 #include "JSValueInlineMethods.h"
+#include "Options.h"
+#include <wtf/Atomics.h>
 
 namespace JSC {
 
@@ -42,6 +44,15 @@ public:
     static CopiedBlock* create(DeadBlock*);
     static CopiedBlock* createNoZeroFill(DeadBlock*);
 
+    bool isPinned();
+
+    unsigned liveBytes();
+    void reportLiveBytes(unsigned);
+    void didSurviveGC();
+    bool didEvacuateBytes(unsigned);
+    bool shouldEvacuate();
+    bool canBeRecycled();
+
     // The payload is the region of the block that is usable for allocations.
     char* payload();
     char* payloadEnd();
@@ -69,6 +80,7 @@ private:
 
     size_t m_remaining;
     uintptr_t m_isPinned;
+    unsigned m_liveBytes;
 };
 
 inline CopiedBlock* CopiedBlock::createNoZeroFill(DeadBlock* block)
@@ -100,10 +112,60 @@ inline CopiedBlock::CopiedBlock(Region* region)
     : HeapBlock<CopiedBlock>(region)
     , m_remaining(payloadCapacity())
     , m_isPinned(false)
+    , m_liveBytes(0)
 {
     ASSERT(is8ByteAligned(reinterpret_cast<void*>(m_remaining)));
 }
 
+inline void CopiedBlock::reportLiveBytes(unsigned bytes)
+{
+    unsigned oldValue = 0;
+    unsigned newValue = 0;
+    do {
+        oldValue = m_liveBytes;
+        newValue = oldValue + bytes;
+    } while (!WTF::weakCompareAndSwap(&m_liveBytes, oldValue, newValue));
+}
+
+inline void CopiedBlock::didSurviveGC()
+{
+    m_liveBytes = 0;
+    m_isPinned = false;
+}
+
+inline bool CopiedBlock::didEvacuateBytes(unsigned bytes)
+{
+    ASSERT(m_liveBytes >= bytes);
+    unsigned oldValue = 0;
+    unsigned newValue = 0;
+    do {
+        oldValue = m_liveBytes;
+        newValue = oldValue - bytes;
+    } while (!WTF::weakCompareAndSwap(&m_liveBytes, oldValue, newValue));
+    ASSERT(m_liveBytes < oldValue);
+    return !newValue;
+}
+
+inline bool CopiedBlock::canBeRecycled()
+{
+    return !m_liveBytes;
+}
+
+inline bool CopiedBlock::shouldEvacuate()
+{
+    return static_cast<double>(m_liveBytes) / static_cast<double>(payloadCapacity()) <= Options::minCopiedBlockUtilization();
+}
+
+inline bool CopiedBlock::isPinned()
+{
+    return m_isPinned;
+}
+
+inline unsigned CopiedBlock::liveBytes()
+{
+    return m_liveBytes;
+}
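The lock-free live-byte accounting above can be sketched in standalone form. This is a hypothetical illustration using `std::atomic` in place of `WTF::weakCompareAndSwap`; the `BlockCounter` name is invented for the example:

```cpp
#include <atomic>
#include <cassert>

// Hypothetical sketch of CopiedBlock's lock-free live-byte accounting:
// marking threads add live bytes with a CAS retry loop, and copying
// threads subtract evacuated bytes the same way.
struct BlockCounter {
    std::atomic<unsigned> liveBytes{0};

    void reportLiveBytes(unsigned bytes)
    {
        unsigned oldValue = liveBytes.load();
        // compare_exchange_weak reloads oldValue on failure, and the
        // desired value is recomputed each iteration, so we just retry.
        while (!liveBytes.compare_exchange_weak(oldValue, oldValue + bytes)) { }
    }

    // Returns true when the block becomes fully evacuated, mirroring
    // CopiedBlock::didEvacuateBytes().
    bool didEvacuateBytes(unsigned bytes)
    {
        unsigned oldValue = liveBytes.load();
        assert(oldValue >= bytes);
        while (!liveBytes.compare_exchange_weak(oldValue, oldValue - bytes)) { }
        return !(oldValue - bytes);
    }
};
```

On success, `compare_exchange_weak` leaves `oldValue` holding the value that was swapped out, so the zero check can be made on `oldValue - bytes` without re-reading the counter.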
+
 inline char* CopiedBlock::payload()
 {
     return reinterpret_cast<char*>(this) + ((sizeof(CopiedBlock) + 7) & ~7);
index bb641bd..cedafee 100644
@@ -28,6 +28,7 @@
 
 #include "CopiedSpaceInlineMethods.h"
 #include "GCActivityCallback.h"
+#include "Options.h"
 
 namespace JSC {
 
@@ -36,6 +37,7 @@ CopiedSpace::CopiedSpace(Heap* heap)
     , m_toSpace(0)
     , m_fromSpace(0)
     , m_inCopyingPhase(false)
+    , m_shouldDoCopyPhase(false)
     , m_numberOfLoanedBlocks(0)
 {
     m_toSpaceLock.Init();
@@ -144,15 +146,18 @@ CheckedBoolean CopiedSpace::tryReallocateOversize(void** ptr, size_t oldSize, si
     return true;
 }
 
-void CopiedSpace::doneFillingBlock(CopiedBlock* block)
+void CopiedSpace::doneFillingBlock(CopiedBlock* block, CopiedBlock** exchange)
 {
     ASSERT(m_inCopyingPhase);
     
+    if (exchange)
+        *exchange = allocateBlockForCopyingPhase();
+
     if (!block)
         return;
 
     if (!block->dataSize()) {
-        recycleBlock(block);
+        recycleBorrowedBlock(block);
         return;
     }
 
@@ -174,6 +179,38 @@ void CopiedSpace::doneFillingBlock(CopiedBlock* block)
     }
 }
 
+void CopiedSpace::startedCopying()
+{
+    std::swap(m_fromSpace, m_toSpace);
+
+    m_blockFilter.reset();
+    m_allocator.resetCurrentBlock();
+
+    CopiedBlock* next = 0;
+    size_t totalLiveBytes = 0;
+    size_t totalUsableBytes = 0;
+    for (CopiedBlock* block = m_fromSpace->head(); block; block = next) {
+        next = block->next();
+        if (!block->isPinned() && block->canBeRecycled()) {
+            recycleEvacuatedBlock(block);
+            continue;
+        }
+        totalLiveBytes += block->liveBytes();
+        totalUsableBytes += block->payloadCapacity();
+    }
+
+    double markedSpaceBytes = m_heap->objectSpace().capacity();
+    double totalFragmentation = ((double)totalLiveBytes + markedSpaceBytes) / ((double)totalUsableBytes + markedSpaceBytes);
+    m_shouldDoCopyPhase = totalFragmentation <= Options::minHeapUtilization();
+    if (!m_shouldDoCopyPhase)
+        return;
+
+    ASSERT(m_shouldDoCopyPhase);
+    ASSERT(!m_inCopyingPhase);
+    ASSERT(!m_numberOfLoanedBlocks);
+    m_inCopyingPhase = true;
+}
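The copy-phase decision above can be isolated into a small sketch. This is hypothetical: the function name and parameters are invented, and the commit's variable is named `totalFragmentation` even though the ratio it computes is really utilization (the threshold comes from `Options::minHeapUtilization()`):

```cpp
#include <cassert>

// Hypothetical sketch of the heuristic in CopiedSpace::startedCopying():
// global utilization is (copied live bytes + marked-space bytes) over
// (copied capacity + marked-space bytes). A copy phase only runs when
// utilization falls to or below the minimum threshold, i.e. when the
// heap is fragmented enough to be worth compacting.
static bool shouldDoCopyPhase(double totalLiveBytes, double totalUsableBytes,
                              double markedSpaceBytes, double minHeapUtilization)
{
    double utilization = (totalLiveBytes + markedSpaceBytes)
        / (totalUsableBytes + markedSpaceBytes);
    return utilization <= minHeapUtilization;
}
```

Folding the marked-space capacity into both numerator and denominator damps the ratio, so a small copied space can't trigger a copy phase for a heap that is mostly compact marked space.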
+
 void CopiedSpace::doneCopying()
 {
     {
@@ -182,12 +219,13 @@ void CopiedSpace::doneCopying()
             m_loanedBlocksCondition.wait(m_loanedBlocksLock);
     }
 
-    ASSERT(m_inCopyingPhase);
+    ASSERT(m_inCopyingPhase == m_shouldDoCopyPhase);
     m_inCopyingPhase = false;
+
     while (!m_fromSpace->isEmpty()) {
         CopiedBlock* block = m_fromSpace->removeHead();
-        if (block->m_isPinned) {
-            block->m_isPinned = false;
+        if (block->isPinned() || !m_shouldDoCopyPhase) {
+            block->didSurviveGC();
             // We don't add the block to the blockSet because it was never removed.
             ASSERT(m_blockSet.contains(block));
             m_blockFilter.add(reinterpret_cast<Bits>(block));
@@ -202,13 +240,13 @@ void CopiedSpace::doneCopying()
     CopiedBlock* curr = m_oversizeBlocks.head();
     while (curr) {
         CopiedBlock* next = curr->next();
-        if (!curr->m_isPinned) {
+        if (!curr->isPinned()) {
             m_oversizeBlocks.remove(curr);
             m_blockSet.remove(curr);
             m_heap->blockAllocator().deallocateCustomSize(CopiedBlock::destroy(curr));
         } else {
             m_blockFilter.add(reinterpret_cast<Bits>(curr));
-            curr->m_isPinned = false;
+            curr->didSurviveGC();
         }
         curr = next;
     }
@@ -217,6 +255,8 @@ void CopiedSpace::doneCopying()
         allocateBlock();
     else
         m_allocator.setCurrentBlock(m_toSpace->head());
+
+    m_shouldDoCopyPhase = false;
 }
 
 size_t CopiedSpace::size()
index e8a4f87..8a3710d 100644
@@ -46,6 +46,7 @@ class Heap;
 class CopiedBlock;
 
 class CopiedSpace {
+    friend class CopyVisitor;
     friend class SlotVisitor;
     friend class JIT;
 public:
@@ -74,6 +75,7 @@ public:
     size_t capacity();
 
     bool isPagedOut(double deadline);
+    bool shouldDoCopyPhase() { return m_shouldDoCopyPhase; }
 
     static CopiedBlock* blockFor(void*);
 
@@ -88,8 +90,9 @@ private:
     void allocateBlock();
     CopiedBlock* allocateBlockForCopyingPhase();
 
-    void doneFillingBlock(CopiedBlock*);
-    void recycleBlock(CopiedBlock*);
+    void doneFillingBlock(CopiedBlock*, CopiedBlock**);
+    void recycleEvacuatedBlock(CopiedBlock*);
+    void recycleBorrowedBlock(CopiedBlock*);
 
     Heap* m_heap;
 
@@ -108,6 +111,7 @@ private:
     DoublyLinkedList<CopiedBlock> m_oversizeBlocks;
    
     bool m_inCopyingPhase;
+    bool m_shouldDoCopyPhase;
 
     Mutex m_loanedBlocksLock; 
     ThreadCondition m_loanedBlocksCondition;
index 764ebe6..01e8167 100644
@@ -93,19 +93,20 @@ inline void CopiedSpace::pinIfNecessary(void* opaquePointer)
         pin(block);
 }
 
-inline void CopiedSpace::startedCopying()
+inline void CopiedSpace::recycleEvacuatedBlock(CopiedBlock* block)
 {
-    std::swap(m_fromSpace, m_toSpace);
-
-    m_blockFilter.reset();
-    m_allocator.resetCurrentBlock();
-
-    ASSERT(!m_inCopyingPhase);
-    ASSERT(!m_numberOfLoanedBlocks);
-    m_inCopyingPhase = true;
+    ASSERT(block);
+    ASSERT(block->canBeRecycled());
+    ASSERT(!block->m_isPinned);
+    {
+        SpinLockHolder locker(&m_toSpaceLock);
+        m_blockSet.remove(block);
+        m_fromSpace->remove(block);
+    }
+    m_heap->blockAllocator().deallocate(CopiedBlock::destroy(block));
 }
 
-inline void CopiedSpace::recycleBlock(CopiedBlock* block)
+inline void CopiedSpace::recycleBorrowedBlock(CopiedBlock* block)
 {
     m_heap->blockAllocator().deallocate(CopiedBlock::destroy(block));
 
diff --git a/Source/JavaScriptCore/heap/CopyVisitor.cpp b/Source/JavaScriptCore/heap/CopyVisitor.cpp
new file mode 100644
index 0000000..ae826f0
--- /dev/null
@@ -0,0 +1,57 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "CopyVisitor.h"
+
+#include "CopyVisitorInlineMethods.h"
+#include "GCThreadSharedData.h"
+#include "JSCell.h"
+#include "JSObject.h"
+#include <wtf/Threading.h>
+
+namespace JSC {
+
+CopyVisitor::CopyVisitor(GCThreadSharedData& shared)
+    : m_shared(shared)
+{
+}
+
+void CopyVisitor::copyFromShared()
+{
+    GCCopyPhaseFunctor functor(*this);
+    Vector<MarkedBlock*>& blocksToCopy = m_shared.m_blocksToCopy;
+    size_t startIndex, endIndex;
+
+    m_shared.getNextBlocksToCopy(startIndex, endIndex);
+    while (startIndex < endIndex) {
+        for (size_t i = startIndex; i < endIndex; i++)
+            blocksToCopy[i]->forEachLiveCell(functor);
+        m_shared.getNextBlocksToCopy(startIndex, endIndex);
+    }
+    ASSERT(startIndex == endIndex);
+}
+
+} // namespace JSC
diff --git a/Source/JavaScriptCore/heap/CopyVisitor.h b/Source/JavaScriptCore/heap/CopyVisitor.h
new file mode 100644
index 0000000..45a2e0a
--- /dev/null
@@ -0,0 +1,60 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef CopyVisitor_h
+#define CopyVisitor_h
+
+#include "CopiedSpace.h"
+
+namespace JSC {
+
+class GCThreadSharedData;
+
+class CopyVisitor {
+public:
+    CopyVisitor(GCThreadSharedData&);
+
+    void copyFromShared();
+
+    void startCopying();
+    void doneCopying();
+
+    // Low-level API for copying, appropriate for cases where the object's heap references
+    // are discontiguous or if the object occurs frequently enough that you need to focus on
+    // performance. Use this with care as it is easy to shoot yourself in the foot.
+    bool checkIfShouldCopy(void*, size_t);
+    void* allocateNewSpace(size_t);
+    void didCopy(void*, size_t);
+
+private:
+    void* allocateNewSpaceSlow(size_t);
+
+    GCThreadSharedData& m_shared;
+    CopiedAllocator m_copiedAllocator;
+};
+
+} // namespace JSC
+
+#endif
diff --git a/Source/JavaScriptCore/heap/CopyVisitorInlineMethods.h b/Source/JavaScriptCore/heap/CopyVisitorInlineMethods.h
new file mode 100644
index 0000000..7340075
--- /dev/null
@@ -0,0 +1,121 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef CopyVisitorInlineMethods_h
+#define CopyVisitorInlineMethods_h
+
+#include "ClassInfo.h"
+#include "CopyVisitor.h"
+#include "GCThreadSharedData.h"
+#include "JSCell.h"
+#include "JSDestructibleObject.h"
+
+namespace JSC {
+
+class GCCopyPhaseFunctor : public MarkedBlock::VoidFunctor {
+public:
+    GCCopyPhaseFunctor(CopyVisitor& visitor)
+        : m_visitor(visitor)
+    {
+    }
+
+    void operator()(JSCell* cell)
+    {
+        Structure* structure = cell->structure();
+        if (!structure->outOfLineCapacity() && !hasIndexedProperties(structure->indexingType()))
+            return;
+        ASSERT(structure->classInfo()->methodTable.copyBackingStore == JSObject::copyBackingStore);
+        JSObject::copyBackingStore(cell, m_visitor);
+    }
+
+private:
+    CopyVisitor& m_visitor;
+};
+
+inline bool CopyVisitor::checkIfShouldCopy(void* oldPtr, size_t bytes)
+{
+    if (CopiedSpace::isOversize(bytes)) {
+        ASSERT(CopiedSpace::oversizeBlockFor(oldPtr)->isPinned());
+        return false;
+    }
+
+    if (CopiedSpace::blockFor(oldPtr)->isPinned())
+        return false;
+
+    return true;
+}
+
+inline void* CopyVisitor::allocateNewSpace(size_t bytes)
+{
+    void* result = 0; // Compilers don't realize that this will be assigned.
+    if (LIKELY(m_copiedAllocator.tryAllocate(bytes, &result)))
+        return result;
+    
+    result = allocateNewSpaceSlow(bytes);
+    ASSERT(result);
+    return result;
+}       
+
+inline void* CopyVisitor::allocateNewSpaceSlow(size_t bytes)
+{
+    CopiedBlock* newBlock = 0;
+    m_shared.m_copiedSpace->doneFillingBlock(m_copiedAllocator.resetCurrentBlock(), &newBlock);
+    m_copiedAllocator.setCurrentBlock(newBlock);
+
+    void* result = 0;
+    CheckedBoolean didSucceed = m_copiedAllocator.tryAllocate(bytes, &result);
+    ASSERT(didSucceed);
+    return result;
+}
+
+inline void CopyVisitor::startCopying()
+{
+    ASSERT(!m_copiedAllocator.isValid());
+    CopiedBlock* block = 0;
+    m_shared.m_copiedSpace->doneFillingBlock(m_copiedAllocator.resetCurrentBlock(), &block);
+    m_copiedAllocator.setCurrentBlock(block);
+}
+
+inline void CopyVisitor::doneCopying()
+{
+    if (!m_copiedAllocator.isValid())
+        return;
+
+    m_shared.m_copiedSpace->doneFillingBlock(m_copiedAllocator.resetCurrentBlock(), 0);
+}
+
+inline void CopyVisitor::didCopy(void* ptr, size_t bytes)
+{
+    ASSERT(!CopiedSpace::isOversize(bytes));
+    CopiedBlock* block = CopiedSpace::blockFor(ptr);
+    ASSERT(!block->isPinned());
+
+    if (block->didEvacuateBytes(bytes))
+        m_shared.m_copiedSpace->recycleEvacuatedBlock(block);
+}
+
+} // namespace JSC
+
+#endif
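The fast/slow split in `allocateNewSpace()` follows a standard bump-allocator pattern; a hypothetical standalone sketch (the `BumpAllocator` class and its members are invented, standing in for `CopiedAllocator`):

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch of the allocation pattern behind
// CopyVisitor::allocateNewSpace(): the common case is a cheap bump
// inside the current block; only when the block is exhausted does the
// caller fall back to a slow path that exchanges it for a fresh one.
class BumpAllocator {
public:
    // Fast path: succeeds iff the current block has room.
    bool tryAllocate(size_t bytes, void** result)
    {
        if (m_remaining < bytes)
            return false;
        m_remaining -= bytes;
        *result = m_payload + m_remaining;
        return true;
    }

    // Installed by the slow path after a new block is borrowed.
    void setBlock(char* payload, size_t capacity)
    {
        m_payload = payload;
        m_remaining = capacity;
    }

private:
    char* m_payload { nullptr };
    size_t m_remaining { 0 };
};
```

In the commit, the slow path (`allocateNewSpaceSlow`) hands the filled block back through `doneFillingBlock()`, which doubles as the exchange point where a fresh block is loaned out.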
diff --git a/Source/JavaScriptCore/heap/GCThread.cpp b/Source/JavaScriptCore/heap/GCThread.cpp
new file mode 100644
index 0000000..5b74b2d
--- /dev/null
@@ -0,0 +1,131 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "GCThread.h"
+
+#include "CopyVisitor.h"
+#include "CopyVisitorInlineMethods.h"
+#include "GCThreadSharedData.h"
+#include "SlotVisitor.h"
+#include <wtf/MainThread.h>
+#include <wtf/PassOwnPtr.h>
+
+namespace JSC {
+
+GCThread::GCThread(GCThreadSharedData& shared, SlotVisitor* slotVisitor, CopyVisitor* copyVisitor, size_t index)
+    : m_threadID(0)
+    , m_shared(shared)
+    , m_slotVisitor(WTF::adoptPtr(slotVisitor))
+    , m_copyVisitor(WTF::adoptPtr(copyVisitor))
+    , m_index(index)
+{
+}
+
+ThreadIdentifier GCThread::threadID()
+{
+    ASSERT(m_threadID);
+    return m_threadID;
+}
+
+void GCThread::initializeThreadID(ThreadIdentifier threadID)
+{
+    ASSERT(!m_threadID);
+    m_threadID = threadID;
+}
+
+SlotVisitor* GCThread::slotVisitor()
+{
+    ASSERT(m_slotVisitor);
+    return m_slotVisitor.get();
+}
+
+CopyVisitor* GCThread::copyVisitor()
+{
+    ASSERT(m_copyVisitor);
+    return m_copyVisitor.get();
+}
+
+GCPhase GCThread::waitForNextPhase()
+{
+    MutexLocker locker(m_shared.m_phaseLock);
+    while (m_shared.m_currentPhase == NoPhase)
+        m_shared.m_phaseCondition.wait(m_shared.m_phaseLock);
+    return m_shared.m_currentPhase;
+}
+
+void GCThread::gcThreadMain()
+{
+    GCPhase currentPhase;
+#if ENABLE(PARALLEL_GC)
+    WTF::registerGCThread();
+#endif
+    // Wait for the main thread to finish creating and initializing us. The main thread grabs this lock before 
+    // creating this thread. We aren't guaranteed to have a valid threadID until the main thread releases this lock.
+    {
+        MutexLocker locker(m_shared.m_markingLock);
+    }
+    {
+        ParallelModeEnabler enabler(*m_slotVisitor);
+        while ((currentPhase = waitForNextPhase()) != Exit) {
+            // Note: Each phase is responsible for its own termination conditions. The comments below describe 
+            // how each phase reaches termination.
+            switch (currentPhase) {
+            case Mark:
+                m_slotVisitor->drainFromShared(SlotVisitor::SlaveDrain);
+                // GCThreads only return from drainFromShared() if the main thread sets the m_parallelMarkersShouldExit 
+                // flag in the GCThreadSharedData. The only way the main thread sets that flag is if it realizes 
+                // that all of the various subphases in Heap::markRoots() have been fully finished and there is 
+                // no more marking work to do and all of the GCThreads are idle, meaning no more work can be generated.
+                break;
+            case Copy:
+                // We don't have to call startCopying() because it's called for us on the main thread to avoid a 
+                // race condition.
+                m_copyVisitor->copyFromShared();
+                // We know we're done copying when we return from copyFromShared() because we would 
+                // only do so if there were no more chunks of copying work left to do. When there is no 
+                // more copying work to do, the main thread will wait in CopiedSpace::doneCopying() until 
+                // all of the blocks that the GCThreads borrowed have been returned. doneCopying() 
+                // returns our borrowed CopiedBlock, allowing the copying phase to finish.
+                m_copyVisitor->doneCopying();
+                break;
+            case NoPhase:
+                ASSERT_NOT_REACHED();
+                break;
+            case Exit:
+                ASSERT_NOT_REACHED();
+                break;
+            }
+        }
+    }
+}
+
+void GCThread::gcThreadStartFunc(void* data)
+{
+    GCThread* thread = static_cast<GCThread*>(data);
+    thread->gcThreadMain();
+}
+
+} // namespace JSC
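The phase hand-off between the main thread and the GCThreads can be sketched with standard primitives. This is hypothetical: the real code uses WTF's `Mutex`/`ThreadCondition`, and the `PhaseChannel` name is invented:

```cpp
#include <condition_variable>
#include <mutex>
#include <cassert>

// Hypothetical sketch of the GCThread phase protocol: worker threads
// sleep until the main thread publishes a phase other than NoPhase;
// publishing Exit tells them to shut down.
enum GCPhase { NoPhase, Mark, Copy, Exit };

struct PhaseChannel {
    std::mutex lock;
    std::condition_variable condition;
    GCPhase currentPhase { NoPhase };

    // Mirrors GCThread::waitForNextPhase().
    GCPhase waitForNextPhase()
    {
        std::unique_lock<std::mutex> locker(lock);
        condition.wait(locker, [this] { return currentPhase != NoPhase; });
        return currentPhase;
    }

    // Called by the main thread, as in didStartMarking()/didStartCopying().
    void publish(GCPhase phase)
    {
        {
            std::lock_guard<std::mutex> locker(lock);
            currentPhase = phase;
        }
        condition.notify_all();
    }
};
```

Publishing under the lock before broadcasting is what makes the hand-off race-free: a worker either sees the new phase immediately or is guaranteed to be woken by the broadcast.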
diff --git a/Source/JavaScriptCore/heap/GCThread.h b/Source/JavaScriptCore/heap/GCThread.h
new file mode 100644
index 0000000..07746dc
--- /dev/null
@@ -0,0 +1,64 @@
+/*
+ * Copyright (C) 2012 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef GCThread_h
+#define GCThread_h
+
+#include "GCThreadSharedData.h"
+#include <wtf/Deque.h>
+#include <wtf/OwnPtr.h>
+#include <wtf/Threading.h>
+
+namespace JSC {
+
+class CopyVisitor;
+class GCThreadSharedData;
+class SlotVisitor;
+
+class GCThread {
+public:
+    GCThread(GCThreadSharedData&, SlotVisitor*, CopyVisitor*, size_t);
+
+    SlotVisitor* slotVisitor();
+    CopyVisitor* copyVisitor();
+    ThreadIdentifier threadID();
+    void initializeThreadID(ThreadIdentifier);
+
+    static void gcThreadStartFunc(void*);
+
+private:
+    void gcThreadMain();
+    GCPhase waitForNextPhase();
+
+    ThreadIdentifier m_threadID;
+    GCThreadSharedData& m_shared;
+    OwnPtr<SlotVisitor> m_slotVisitor;
+    OwnPtr<CopyVisitor> m_copyVisitor;
+    size_t m_index;
+};
+
+} // namespace JSC
+
+#endif
index 23a6b97..12f3ef4 100644
 #include "config.h"
 #include "GCThreadSharedData.h"
 
+#include "CopyVisitor.h"
+#include "CopyVisitorInlineMethods.h"
+#include "GCThread.h"
 #include "JSGlobalData.h"
 #include "MarkStack.h"
 #include "SlotVisitor.h"
 #include "SlotVisitorInlineMethods.h"
-#include <wtf/MainThread.h>
 
 namespace JSC {
 
 #if ENABLE(PARALLEL_GC)
 void GCThreadSharedData::resetChildren()
 {
-    for (unsigned i = 0; i < m_markingThreadsMarkStack.size(); ++i)
-        m_markingThreadsMarkStack[i]->reset();
+    for (size_t i = 0; i < m_gcThreads.size(); ++i)
+        m_gcThreads[i]->slotVisitor()->reset();
 }
 
 size_t GCThreadSharedData::childVisitCount()
 {       
     unsigned long result = 0;
-    for (unsigned i = 0; i < m_markingThreadsMarkStack.size(); ++i)
-        result += m_markingThreadsMarkStack[i]->visitCount();
+    for (unsigned i = 0; i < m_gcThreads.size(); ++i)
+        result += m_gcThreads[i]->slotVisitor()->visitCount();
     return result;
 }
-
-void GCThreadSharedData::markingThreadMain(SlotVisitor* slotVisitor)
-{
-    WTF::registerGCThread();
-    {
-        ParallelModeEnabler enabler(*slotVisitor);
-        slotVisitor->drainFromShared(SlotVisitor::SlaveDrain);
-    }
-    delete slotVisitor;
-}
-
-void GCThreadSharedData::markingThreadStartFunc(void* myVisitor)
-{               
-    SlotVisitor* slotVisitor = static_cast<SlotVisitor*>(myVisitor);
-
-    slotVisitor->sharedData().markingThreadMain(slotVisitor);
-}
 #endif
 
 GCThreadSharedData::GCThreadSharedData(JSGlobalData* globalData)
@@ -74,13 +59,22 @@ GCThreadSharedData::GCThreadSharedData(JSGlobalData* globalData)
     , m_sharedMarkStack(m_segmentAllocator)
     , m_numberOfActiveParallelMarkers(0)
     , m_parallelMarkersShouldExit(false)
+    , m_blocksToCopy(globalData->heap.m_blockSnapshot)
+    , m_copyIndex(0)
+    , m_currentPhase(NoPhase)
 {
+    m_copyLock.Init();
 #if ENABLE(PARALLEL_GC)
+    // Grab the lock so the new GC threads can be properly initialized before they start running.
+    MutexLocker locker(m_markingLock);
     for (unsigned i = 1; i < Options::numberOfGCMarkers(); ++i) {
         SlotVisitor* slotVisitor = new SlotVisitor(*this);
-        m_markingThreadsMarkStack.append(slotVisitor);
-        m_markingThreads.append(createThread(markingThreadStartFunc, slotVisitor, "JavaScriptCore::Marking"));
-        ASSERT(m_markingThreads.last());
+        CopyVisitor* copyVisitor = new CopyVisitor(*this);
+        size_t index = m_gcThreads.size();
+        GCThread* newThread = new GCThread(*this, slotVisitor, copyVisitor, index);
+        ThreadIdentifier threadID = createThread(GCThread::gcThreadStartFunc, newThread, "JavaScriptCore::Marking");
+        newThread->initializeThreadID(threadID);
+        m_gcThreads.append(newThread);
     }
 #endif
 }
@@ -90,19 +84,22 @@ GCThreadSharedData::~GCThreadSharedData()
 #if ENABLE(PARALLEL_GC)    
     // Destroy our marking threads.
     {
-        MutexLocker locker(m_markingLock);
+        MutexLocker markingLocker(m_markingLock);
+        MutexLocker phaseLocker(m_phaseLock);
+        ASSERT(m_currentPhase == NoPhase);
         m_parallelMarkersShouldExit = true;
-        m_markingCondition.broadcast();
+        m_currentPhase = Exit;
+        m_phaseCondition.broadcast();
+    }
+    for (unsigned i = 0; i < m_gcThreads.size(); ++i) {
+        waitForThreadCompletion(m_gcThreads[i]->threadID());
+        delete m_gcThreads[i];
     }
-    for (unsigned i = 0; i < m_markingThreads.size(); ++i)
-        waitForThreadCompletion(m_markingThreads[i]);
 #endif
 }
     
 void GCThreadSharedData::reset()
 {
-    ASSERT(!m_numberOfActiveParallelMarkers);
-    ASSERT(!m_parallelMarkersShouldExit);
     ASSERT(m_sharedMarkStack.isEmpty());
     
 #if ENABLE(PARALLEL_GC)
@@ -119,4 +116,53 @@ void GCThreadSharedData::reset()
     }
 }
 
+void GCThreadSharedData::didStartMarking()
+{
+    MutexLocker markingLocker(m_markingLock);
+    MutexLocker phaseLocker(m_phaseLock);
+    ASSERT(m_currentPhase == NoPhase);
+    m_currentPhase = Mark;
+    m_parallelMarkersShouldExit = false;
+    m_phaseCondition.broadcast();
+}
+
+void GCThreadSharedData::didFinishMarking()
+{
+    MutexLocker markingLocker(m_markingLock);
+    MutexLocker phaseLocker(m_phaseLock);
+    ASSERT(m_currentPhase == Mark);
+    m_currentPhase = NoPhase;
+    m_parallelMarkersShouldExit = true;
+    m_markingCondition.broadcast();
+}
+
+void GCThreadSharedData::didStartCopying()
+{
+    {
+        SpinLockHolder locker(&m_copyLock);
+        m_blocksToCopy = m_globalData->heap.m_blockSnapshot;
+        m_copyIndex = 0;
+    }
+
+    // We do this here to avoid a race condition where the main thread could
+    // blow through all of the copying work before the GCThreads fully wake up.
+    // The GCThreads would then request a block from the CopiedSpace after the
+    // copying phase had completed, which isn't allowed.
+    for (size_t i = 0; i < m_gcThreads.size(); i++)
+        m_gcThreads[i]->copyVisitor()->startCopying();
+
+    MutexLocker locker(m_phaseLock);
+    ASSERT(m_currentPhase == NoPhase);
+    m_currentPhase = Copy;
+    m_phaseCondition.broadcast(); 
+}
+
+void GCThreadSharedData::didFinishCopying()
+{
+    MutexLocker locker(m_phaseLock);
+    ASSERT(m_currentPhase == Copy);
+    m_currentPhase = NoPhase;
+    m_phaseCondition.broadcast();
+}
+
 } // namespace JSC
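Copy work is handed out in fixed-size fragments of the block snapshot (the header declares `s_blockFragmentLength = 32`). A hypothetical standalone sketch of that partitioning, with `CopyWorkList` invented for the example and a `std::mutex` standing in for the spin lock:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <mutex>

// Hypothetical sketch of GCThreadSharedData::getNextBlocksToCopy():
// each caller atomically claims the next fragment of the shared block
// list; an empty range (start == end) means the work is exhausted.
class CopyWorkList {
public:
    explicit CopyWorkList(size_t blockCount)
        : m_blockCount(blockCount)
    {
    }

    void getNextBlocksToCopy(size_t& start, size_t& end)
    {
        static const size_t blockFragmentLength = 32;
        std::lock_guard<std::mutex> locker(m_lock);
        start = m_copyIndex;
        end = std::min(m_blockCount, m_copyIndex + blockFragmentLength);
        m_copyIndex = end;
    }

private:
    std::mutex m_lock;
    size_t m_copyIndex { 0 };
    size_t m_blockCount;
};
```

Fragmenting the snapshot keeps lock hold times short while still amortizing the lock acquisition over 32 blocks of copying work per request.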
index 3f09a28..bd48d92 100644
 
 #include "ListableHandler.h"
 #include "MarkStack.h"
+#include "MarkedBlock.h"
 #include "UnconditionalFinalizer.h"
 #include "WeakReferenceHarvester.h"
 #include <wtf/HashSet.h>
+#include <wtf/TCSpinLock.h>
 #include <wtf/Threading.h>
 #include <wtf/Vector.h>
 
 namespace JSC {
 
+class GCThread;
 class JSGlobalData;
 class CopiedSpace;
+class CopyVisitor;
+
+enum GCPhase {
+    NoPhase,
+    Mark,
+    Copy,
+    Exit
+};
 
 class GCThreadSharedData {
 public:
@@ -46,6 +57,11 @@ public:
     
     void reset();
 
+    void didStartMarking();
+    void didFinishMarking();
+    void didStartCopying();
+    void didFinishCopying();
+
 #if ENABLE(PARALLEL_GC)
     void resetChildren();
     size_t childVisitCount();
@@ -53,12 +69,11 @@ public:
 #endif
     
 private:
+    friend class GCThread;
     friend class SlotVisitor;
+    friend class CopyVisitor;
 
-#if ENABLE(PARALLEL_GC)
-    void markingThreadMain(SlotVisitor*);
-    static void markingThreadStartFunc(void* heap);
-#endif
+    void getNextBlocksToCopy(size_t&, size_t&);
 
     JSGlobalData* m_globalData;
     CopiedSpace* m_copiedSpace;
@@ -67,9 +82,8 @@ private:
     
     bool m_shouldHashConst;
 
-    Vector<ThreadIdentifier> m_markingThreads;
-    Vector<SlotVisitor*> m_markingThreadsMarkStack;
-    
+    Vector<GCThread*> m_gcThreads;
+
     Mutex m_markingLock;
     ThreadCondition m_markingCondition;
     MarkStackArray m_sharedMarkStack;
@@ -79,10 +93,27 @@ private:
     Mutex m_opaqueRootsLock;
     HashSet<void*> m_opaqueRoots;
 
+    SpinLock m_copyLock;
+    Vector<MarkedBlock*>& m_blocksToCopy;
+    size_t m_copyIndex;
+    static const size_t s_blockFragmentLength = 32;
+
+    Mutex m_phaseLock;
+    ThreadCondition m_phaseCondition;
+    GCPhase m_currentPhase;
+
     ListableHandler<WeakReferenceHarvester>::List m_weakReferenceHarvesters;
     ListableHandler<UnconditionalFinalizer>::List m_unconditionalFinalizers;
 };
 
+inline void GCThreadSharedData::getNextBlocksToCopy(size_t& start, size_t& end)
+{
+    SpinLockHolder locker(&m_copyLock);
+    start = m_copyIndex;
+    end = std::min(m_blocksToCopy.size(), m_copyIndex + s_blockFragmentLength);
+    m_copyIndex = end;
+}
+
 } // namespace JSC
 
 #endif
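getNextBlocksToCopy() hands each GC thread a fragment of s_blockFragmentLength = 32 snapshot indices at a time, so the spin lock is taken once per fragment rather than once per block. A standalone sketch of that distribution scheme (std::mutex stands in for the TCSpinLock; the class name is illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <mutex>

class BlockRangeDistributor {
public:
    explicit BlockRangeDistributor(size_t totalBlocks)
        : m_size(totalBlocks)
    {
    }

    static const size_t fragmentLength = 32; // mirrors s_blockFragmentLength

    // Threads repeatedly claim [start, end) index ranges over the shared
    // snapshot. Returns false once the snapshot is exhausted.
    bool claimNextRange(size_t& start, size_t& end)
    {
        std::lock_guard<std::mutex> locker(m_lock); // a spin lock in JSC
        start = m_index;
        end = std::min(m_size, m_index + fragmentLength);
        m_index = end;
        return start < end;
    }

private:
    std::mutex m_lock;
    size_t m_index = 0;
    size_t m_size;
};
```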
index 2d881eb..83a8d56 100644 (file)
 #include "config.h"
 #include "Heap.h"
 
-#include "CopiedSpace.h"
-#include "CopiedSpaceInlineMethods.h"
 #include "CodeBlock.h"
 #include "ConservativeRoots.h"
+#include "CopiedSpace.h"
+#include "CopiedSpaceInlineMethods.h"
+#include "CopyVisitorInlineMethods.h"
 #include "GCActivityCallback.h"
 #include "HeapRootVisitor.h"
 #include "HeapStatistics.h"
@@ -252,6 +253,7 @@ Heap::Heap(JSGlobalData* globalData, HeapType heapType)
     , m_machineThreads(this)
     , m_sharedData(globalData)
     , m_slotVisitor(m_sharedData)
+    , m_copyVisitor(m_sharedData)
     , m_handleSet(globalData)
     , m_isSafeToCollect(false)
     , m_globalData(globalData)
@@ -464,7 +466,7 @@ void Heap::markRoots(bool fullGC)
         m_objectSpace.clearMarks();
     }
 
-    m_storageSpace.startedCopying();
+    m_sharedData.didStartMarking();
     SlotVisitor& visitor = m_slotVisitor;
     visitor.setup();
     HeapRootVisitor heapRootVisitor(visitor);
@@ -589,7 +591,7 @@ void Heap::markRoots(bool fullGC)
 
     GCCOUNTER(VisitedValueCount, visitor.visitCount());
 
-    visitor.doneCopying();
+    m_sharedData.didFinishMarking();
 #if ENABLE(OBJECT_MARK_LOGGING)
     size_t visitCount = visitor.visitCount();
 #if ENABLE(PARALLEL_GC)
@@ -603,6 +605,19 @@ void Heap::markRoots(bool fullGC)
     m_sharedData.resetChildren();
 #endif
     m_sharedData.reset();
+}
+
+void Heap::copyBackingStores()
+{
+    m_storageSpace.startedCopying();
+    if (m_storageSpace.shouldDoCopyPhase()) {
+        m_sharedData.didStartCopying();
+        CopyVisitor& visitor = m_copyVisitor;
+        visitor.startCopying();
+        visitor.copyFromShared();
+        visitor.doneCopying();
+        m_sharedData.didFinishCopying();
+    } 
     m_storageSpace.doneCopying();
 }
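copyBackingStores() only runs the copy phase when CopiedSpace::shouldDoCopyPhase() says fragmentation warrants it. The ChangeLog describes the heuristic as comparing global heap utilization against a threshold (the minHeapUtilization option defaults to 0.8); the exact formula below is an assumption for illustration, not the real CopiedSpace implementation:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical sketch: copy only when live bytes fill less than
// minHeapUtilization of the copied space's capacity, i.e. when
// fragmentation exceeds the allowed limit.
bool shouldDoCopyPhase(size_t liveBytes, size_t capacityBytes,
                       double minHeapUtilization = 0.8)
{
    if (!capacityBytes)
        return false; // nothing allocated, nothing to compact
    double utilization = static_cast<double>(liveBytes) / capacityBytes;
    return utilization < minHeapUtilization;
}
```

Skipping the phase entirely when the space is already dense is what lets a collection finish without touching well-packed CopiedBlocks at all.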
 
@@ -734,6 +749,14 @@ void Heap::collect(SweepToggle sweepToggle)
     JAVASCRIPTCORE_GC_MARKED();
 
     {
+        m_blockSnapshot.resize(m_objectSpace.blocks().set().size());
+        MarkedBlockSnapshotFunctor functor(m_blockSnapshot);
+        m_objectSpace.forEachBlock(functor);
+    }
+
+    copyBackingStores();
+
+    {
         GCPHASE(FinalizeUnconditionalFinalizers);
         finalizeUnconditionalFinalizers();
     }
@@ -755,7 +778,7 @@ void Heap::collect(SweepToggle sweepToggle)
         m_objectSpace.shrink();
     }
 
-    m_sweeper->startSweeping(m_objectSpace.blocks().set());
+    m_sweeper->startSweeping(m_blockSnapshot);
     m_bytesAbandoned = 0;
 
     {
index c7254a8..88dc201 100644 (file)
@@ -23,6 +23,7 @@
 #define Heap_h
 
 #include "BlockAllocator.h"
+#include "CopyVisitor.h"
 #include "DFGCodeBlocks.h"
 #include "GCThreadSharedData.h"
 #include "HandleSet.h"
@@ -182,7 +183,9 @@ namespace JSC {
         friend class MarkedAllocator;
         friend class MarkedBlock;
         friend class CopiedSpace;
+        friend class CopyVisitor;
         friend class SlotVisitor;
+        friend class IncrementalSweeper;
         friend class HeapStatistics;
         template<typename T> friend void* allocateCell(Heap&);
         template<typename T> friend void* allocateCell(Heap&, size_t);
@@ -204,6 +207,7 @@ namespace JSC {
         void markRoots(bool fullGC);
         void markProtectedObjects(HeapRootVisitor&);
         void markTempSortVectors(HeapRootVisitor&);
+        void copyBackingStores();
         void harvestWeakReferences();
         void finalizeUnconditionalFinalizers();
         void deleteUnmarkedCompiledCode();
@@ -239,6 +243,7 @@ namespace JSC {
         
         GCThreadSharedData m_sharedData;
         SlotVisitor m_slotVisitor;
+        CopyVisitor m_copyVisitor;
 
         HandleSet m_handleSet;
         HandleStack m_handleStack;
@@ -256,6 +261,20 @@ namespace JSC {
         
         GCActivityCallback* m_activityCallback;
         IncrementalSweeper* m_sweeper;
+        Vector<MarkedBlock*> m_blockSnapshot;
+    };
+
+    struct MarkedBlockSnapshotFunctor : public MarkedBlock::VoidFunctor {
+        MarkedBlockSnapshotFunctor(Vector<MarkedBlock*>& blocks) 
+            : m_index(0) 
+            , m_blocks(blocks)
+        {
+        }
+    
+        void operator()(MarkedBlock* block) { m_blocks[m_index++] = block; }
+    
+        size_t m_index;
+        Vector<MarkedBlock*>& m_blocks;
     };
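MarkedBlockSnapshotFunctor above relies on the caller presizing m_blockSnapshot to the block count, so filling the snapshot performs no allocation while walking the block set. A self-contained sketch of the pattern, with a hypothetical forEachBlock() standing in for MarkedSpace::forEachBlock():

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct MarkedBlock { int id; }; // stand-in for JSC's MarkedBlock

// Same shape as MarkedBlockSnapshotFunctor: drop each visited block
// pointer into the next slot of a presized vector.
struct SnapshotFunctor {
    explicit SnapshotFunctor(std::vector<MarkedBlock*>& blocks)
        : m_index(0)
        , m_blocks(blocks)
    {
    }

    void operator()(MarkedBlock* block) { m_blocks[m_index++] = block; }

    size_t m_index;
    std::vector<MarkedBlock*>& m_blocks;
};

// Hypothetical stand-in for MarkedSpace::forEachBlock().
inline void forEachBlock(std::vector<MarkedBlock>& space, SnapshotFunctor& functor)
{
    for (MarkedBlock& block : space)
        functor(&block);
}
```

The same snapshot vector is then shared with both the copying phase (m_blocksToCopy) and the incremental sweeper (m_blocksToSweep), which is why the sweeper's own CopyFunctor could be deleted below.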
 
     inline bool Heap::shouldCollect()
index 9719c95..4aec4dd 100644 (file)
@@ -48,6 +48,7 @@ static const double sweepTimeMultiplier = 1.0 / sweepTimeTotal;
 IncrementalSweeper::IncrementalSweeper(Heap* heap, CFRunLoopRef runLoop)
     : HeapTimer(heap->globalData(), runLoop)
     , m_currentBlockToSweepIndex(0)
+    , m_blocksToSweep(heap->m_blockSnapshot)
 {
 }
 
@@ -127,11 +128,9 @@ void IncrementalSweeper::sweepNextBlock()
     }
 }
 
-void IncrementalSweeper::startSweeping(const HashSet<MarkedBlock*>& blockSnapshot)
+void IncrementalSweeper::startSweeping(Vector<MarkedBlock*>& blockSnapshot)
 {
-    m_blocksToSweep.resize(blockSnapshot.size());
-    CopyFunctor functor(m_blocksToSweep);
-    m_globalData->heap.objectSpace().forEachBlock(functor);
+    m_blocksToSweep = blockSnapshot;
     m_currentBlockToSweepIndex = 0;
     scheduleTimer();
 }
@@ -160,7 +159,7 @@ IncrementalSweeper* IncrementalSweeper::create(Heap* heap)
     return new IncrementalSweeper(heap->globalData());
 }
 
-void IncrementalSweeper::startSweeping(const HashSet<MarkedBlock*>&)
+void IncrementalSweeper::startSweeping(Vector<MarkedBlock*>&)
 {
 }
 
index e83447b..5b9267b 100644 (file)
@@ -37,23 +37,10 @@ namespace JSC {
 
 class Heap;
 
-struct CopyFunctor : public MarkedBlock::VoidFunctor {
-    CopyFunctor(Vector<MarkedBlock*>& blocks) 
-        : m_index(0) 
-        , m_blocks(blocks)
-    {
-    }
-
-    void operator()(MarkedBlock* block) { m_blocks[m_index++] = block; }
-
-    size_t m_index;
-    Vector<MarkedBlock*>& m_blocks;
-};
-
 class IncrementalSweeper : public HeapTimer {
 public:
     static IncrementalSweeper* create(Heap*);
-    void startSweeping(const HashSet<MarkedBlock*>& blockSnapshot);
+    void startSweeping(Vector<MarkedBlock*>&);
     virtual void doWork();
     void sweepNextBlock();
     void willFinishSweeping();
@@ -71,7 +58,7 @@ private:
     void cancelTimer();
     
     unsigned m_currentBlockToSweepIndex;
-    Vector<MarkedBlock*> m_blocksToSweep;
+    Vector<MarkedBlock*>& m_blocksToSweep;
 #else
     
     IncrementalSweeper(JSGlobalData*);
index 8dce086..26d056f 100644 (file)
@@ -4,6 +4,7 @@
 #include "ConservativeRoots.h"
 #include "CopiedSpace.h"
 #include "CopiedSpaceInlineMethods.h"
+#include "GCThread.h"
 #include "JSArray.h"
 #include "JSDestructibleObject.h"
 #include "JSGlobalData.h"
@@ -35,8 +36,8 @@ void SlotVisitor::setup()
     m_shared.m_shouldHashConst = m_shared.m_globalData->haveEnoughNewStringsToHashConst();
     m_shouldHashConst = m_shared.m_shouldHashConst;
 #if ENABLE(PARALLEL_GC)
-    for (unsigned i = 0; i < m_shared.m_markingThreadsMarkStack.size(); ++i)
-        m_shared.m_markingThreadsMarkStack[i]->m_shouldHashConst = m_shared.m_shouldHashConst;
+    for (unsigned i = 0; i < m_shared.m_gcThreads.size(); ++i)
+        m_shared.m_gcThreads[i]->slotVisitor()->m_shouldHashConst = m_shared.m_shouldHashConst;
 #endif
 }
 
@@ -181,7 +182,7 @@ void SlotVisitor::drainFromShared(SharedDrainMode sharedDrainMode)
                 while (true) {
                     // Did we reach termination?
                     if (!m_shared.m_numberOfActiveParallelMarkers && m_shared.m_sharedMarkStack.isEmpty()) {
-                        // Let any sleeping slaves know it's time for them to give their private CopiedBlocks back
+                        // Let any sleeping slaves know it's time for them to return;
                         m_shared.m_markingCondition.broadcast();
                         return;
                     }
@@ -200,17 +201,12 @@ void SlotVisitor::drainFromShared(SharedDrainMode sharedDrainMode)
                 if (!m_shared.m_numberOfActiveParallelMarkers && m_shared.m_sharedMarkStack.isEmpty())
                     m_shared.m_markingCondition.broadcast();
                 
-                while (m_shared.m_sharedMarkStack.isEmpty() && !m_shared.m_parallelMarkersShouldExit) {
-                    if (!m_shared.m_numberOfActiveParallelMarkers && m_shared.m_sharedMarkStack.isEmpty())
-                        doneCopying();
+                while (m_shared.m_sharedMarkStack.isEmpty() && !m_shared.m_parallelMarkersShouldExit)
                     m_shared.m_markingCondition.wait(m_shared.m_markingLock);
-                }
                 
-                // Is the VM exiting? If so, exit this thread.
-                if (m_shared.m_parallelMarkersShouldExit) {
-                    doneCopying();
+                // Is the current phase done? If so, return from this function.
+                if (m_shared.m_parallelMarkersShouldExit)
                     return;
-                }
             }
            
             size_t idleThreadCount = Options::numberOfGCMarkers() - m_shared.m_numberOfActiveParallelMarkers;
@@ -236,30 +232,6 @@ void SlotVisitor::mergeOpaqueRoots()
     m_opaqueRoots.clear();
 }
 
-void SlotVisitor::startCopying()
-{
-    ASSERT(!m_copiedAllocator.isValid());
-}
-
-void* SlotVisitor::allocateNewSpaceSlow(size_t bytes)
-{
-    m_shared.m_copiedSpace->doneFillingBlock(m_copiedAllocator.resetCurrentBlock());
-    m_copiedAllocator.setCurrentBlock(m_shared.m_copiedSpace->allocateBlockForCopyingPhase());
-
-    void* result = 0;
-    CheckedBoolean didSucceed = m_copiedAllocator.tryAllocate(bytes, &result);
-    ASSERT(didSucceed);
-    return result;
-}
-
-void* SlotVisitor::allocateNewSpaceOrPin(void* ptr, size_t bytes)
-{
-    if (!checkIfShouldCopyAndPinOtherwise(ptr, bytes))
-        return 0;
-    
-    return allocateNewSpace(bytes);
-}
-
 ALWAYS_INLINE bool JSString::tryHashConstLock()
 {
 #if ENABLE(PARALLEL_GC)
@@ -335,36 +307,6 @@ ALWAYS_INLINE void SlotVisitor::internalAppend(JSValue* slot)
     internalAppend(cell);
 }
 
-void SlotVisitor::copyAndAppend(void** ptr, size_t bytes, JSValue* values, unsigned length)
-{
-    void* oldPtr = *ptr;
-    void* newPtr = allocateNewSpaceOrPin(oldPtr, bytes);
-    if (newPtr) {
-        size_t jsValuesOffset = static_cast<size_t>(reinterpret_cast<char*>(values) - static_cast<char*>(oldPtr));
-
-        JSValue* newValues = reinterpret_cast_ptr<JSValue*>(static_cast<char*>(newPtr) + jsValuesOffset);
-        for (unsigned i = 0; i < length; i++) {
-            JSValue& value = values[i];
-            newValues[i] = value;
-            if (!value)
-                continue;
-            internalAppend(&newValues[i]);
-        }
-
-        memcpy(newPtr, oldPtr, jsValuesOffset);
-        *ptr = newPtr;
-    } else
-        append(values, length);
-}
-    
-void SlotVisitor::doneCopying()
-{
-    if (!m_copiedAllocator.isValid())
-        return;
-
-    m_shared.m_copiedSpace->doneFillingBlock(m_copiedAllocator.resetCurrentBlock());
-}
-
 void SlotVisitor::harvestWeakReferences()
 {
     for (WeakReferenceHarvester* current = m_shared.m_weakReferenceHarvesters.head(); current; current = current->next())
index 230ed33..dcd4b75 100644 (file)
@@ -26,7 +26,6 @@
 #ifndef SlotVisitor_h
 #define SlotVisitor_h
 
-#include "CopiedSpace.h"
 #include "HandleTypes.h"
 #include "MarkStackInlineMethods.h"
 
@@ -80,21 +79,8 @@ public:
     void harvestWeakReferences();
     void finalizeUnconditionalFinalizers();
 
-    void startCopying();
+    void copyLater(void*, size_t);
     
-    // High-level API for copying, appropriate for cases where the object's heap references
-    // fall into a contiguous region of the storage chunk and if the object for which you're
-    // doing copying does not occur frequently.
-    void copyAndAppend(void**, size_t, JSValue*, unsigned);
-    
-    // Low-level API for copying, appropriate for cases where the object's heap references
-    // are discontiguous or if the object occurs frequently enough that you need to focus on
-    // performance. Use this with care as it is easy to shoot yourself in the foot.
-    bool checkIfShouldCopyAndPinOtherwise(void* oldPtr, size_t);
-    void* allocateNewSpace(size_t);
-    
-    void doneCopying(); 
-        
 #if ENABLE(SIMPLE_HEAP_PROFILING)
     VTableSpectrum m_visitedTypeCounts;
 #endif
@@ -125,9 +111,6 @@ private:
     void mergeOpaqueRootsIfNecessary();
     void mergeOpaqueRootsIfProfitable();
     
-    void* allocateNewSpaceOrPin(void*, size_t);
-    void* allocateNewSpaceSlow(size_t);
-
     void donateKnownParallel();
 
     MarkStackArray m_stack;
@@ -146,8 +129,6 @@ private:
     unsigned m_logChildCount;
 #endif
 
-    CopiedAllocator m_copiedAllocator;
-
 public:
 #if !ASSERT_DISABLED
     bool m_isCheckingForDefaultMarkViolation;
index 540da3b..e5908bf 100644 (file)
@@ -136,30 +136,6 @@ inline void SlotVisitor::mergeOpaqueRootsIfProfitable()
     mergeOpaqueRoots();
 }
     
-ALWAYS_INLINE bool SlotVisitor::checkIfShouldCopyAndPinOtherwise(void* oldPtr, size_t bytes)
-{
-    if (CopiedSpace::isOversize(bytes)) {
-        m_shared.m_copiedSpace->pin(CopiedSpace::oversizeBlockFor(oldPtr));
-        return false;
-    }
-
-    if (m_shared.m_copiedSpace->isPinned(oldPtr))
-        return false;
-    
-    return true;
-}
-
-ALWAYS_INLINE void* SlotVisitor::allocateNewSpace(size_t bytes)
-{
-    void* result = 0; // Compilers don't realize that this will be assigned.
-    if (LIKELY(m_copiedAllocator.tryAllocate(bytes, &result)))
-        return result;
-    
-    result = allocateNewSpaceSlow(bytes);
-    ASSERT(result);
-    return result;
-}       
-
 inline void SlotVisitor::donate()
 {
     ASSERT(m_isInParallelMode);
@@ -175,6 +151,23 @@ inline void SlotVisitor::donateAndDrain()
     drain();
 }
 
+inline void SlotVisitor::copyLater(void* ptr, size_t bytes)
+{
+    if (CopiedSpace::isOversize(bytes)) {
+        m_shared.m_copiedSpace->pin(CopiedSpace::oversizeBlockFor(ptr));
+        return;
+    }
+
+    CopiedBlock* block = CopiedSpace::blockFor(ptr);
+    if (block->isPinned())
+        return;
+
+    block->reportLiveBytes(bytes);
+
+    if (!block->shouldEvacuate())
+        m_shared.m_copiedSpace->pin(block);
+}
+    
 } // namespace JSC
 
 #endif // SlotVisitorInlineMethods_h
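copyLater() feeds the per-block accounting this patch depends on: each marking thread reports live bytes into the CopiedBlock via compare-and-swap (per the ChangeLog, no lock is taken), and shouldEvacuate() compares the block's utilization against the minCopiedBlockUtilization option (0.9 by default) to decide whether the block is worth copying or should be pinned in place. A hedged sketch of that accounting; the class, the CAS loop shape, and the 32KB capacity are illustrative assumptions, with the real logic living in CopiedBlock:

```cpp
#include <atomic>
#include <cassert>
#include <cstddef>

class LiveBytesCounter {
public:
    static const size_t blockCapacity = 32 * 1024; // assumed block payload size

    // Thread-safe: multiple markers may report into the same block at once.
    void reportLiveBytes(size_t bytes)
    {
        size_t oldValue = m_liveBytes.load();
        // CAS loop standing in for a WTF-style weakCompareAndSwap.
        while (!m_liveBytes.compare_exchange_weak(oldValue, oldValue + bytes)) { }
    }

    // Sparse blocks are evacuated; dense ones stay put (and get pinned),
    // letting long-lived objects coalesce into rarely copied blocks.
    bool shouldEvacuate(double minUtilization = 0.9) const
    {
        return static_cast<double>(m_liveBytes.load()) / blockCapacity < minUtilization;
    }

private:
    std::atomic<size_t> m_liveBytes { 0 };
};
```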
index e014e55..cb93aea 100644 (file)
@@ -35,7 +35,7 @@
 namespace JSC {
 
 class JSGlobalData;
-class SlotVisitor;
+class CopyVisitor;
 struct ArrayStorage;
 
 class Butterfly {
@@ -73,7 +73,7 @@ public:
 
     static Butterfly* create(JSGlobalData&, size_t preCapacity, size_t propertyCapacity, bool hasIndexingHeader, const IndexingHeader&, size_t indexingPayloadSizeInBytes);
     static Butterfly* create(JSGlobalData&, Structure*);
-    static Butterfly* createUninitializedDuringCollection(SlotVisitor&, size_t preCapacity, size_t propertyCapacity, bool hasIndexingHeader, size_t indexingPayloadSizeInBytes);
+    static Butterfly* createUninitializedDuringCollection(CopyVisitor&, size_t preCapacity, size_t propertyCapacity, bool hasIndexingHeader, size_t indexingPayloadSizeInBytes);
     
     IndexingHeader* indexingHeader() { return IndexingHeader::from(this); }
     const IndexingHeader* indexingHeader() const { return IndexingHeader::from(this); }
index 9200259..86a836b 100644 (file)
@@ -29,8 +29,8 @@
 #include "ArrayStorage.h"
 #include "Butterfly.h"
 #include "CopiedSpaceInlineMethods.h"
+#include "CopyVisitor.h"
 #include "JSGlobalData.h"
-#include "SlotVisitor.h"
 #include "Structure.h"
 
 namespace JSC {
@@ -59,7 +59,7 @@ inline Butterfly* Butterfly::create(JSGlobalData& globalData, Structure* structu
     return create(globalData, 0, structure->outOfLineCapacity(), hasIndexingHeader(structure->indexingType()), IndexingHeader(), 0);
 }
 
-inline Butterfly* Butterfly::createUninitializedDuringCollection(SlotVisitor& visitor, size_t preCapacity, size_t propertyCapacity, bool hasIndexingHeader, size_t indexingPayloadSizeInBytes)
+inline Butterfly* Butterfly::createUninitializedDuringCollection(CopyVisitor& visitor, size_t preCapacity, size_t propertyCapacity, bool hasIndexingHeader, size_t indexingPayloadSizeInBytes)
 {
     Butterfly* result = fromBase(
         visitor.allocateNewSpace(totalSize(preCapacity, propertyCapacity, hasIndexingHeader, indexingPayloadSizeInBytes)),
index e8823d5..c918621 100644 (file)
@@ -39,6 +39,9 @@ namespace JSC {
         typedef void (*VisitChildrenFunctionPtr)(JSCell*, SlotVisitor&);
         VisitChildrenFunctionPtr visitChildren;
 
+        typedef void (*CopyBackingStoreFunctionPtr)(JSCell*, CopyVisitor&);
+        CopyBackingStoreFunctionPtr copyBackingStore;
+
         typedef CallType (*GetCallDataFunctionPtr)(JSCell*, CallData&);
         GetCallDataFunctionPtr getCallData;
 
@@ -116,6 +119,7 @@ struct MemberCheck##member { \
 #define CREATE_METHOD_TABLE(ClassName) { \
         &ClassName::destroy, \
         &ClassName::visitChildren, \
+        &ClassName::copyBackingStore, \
         &ClassName::getCallData, \
         &ClassName::getConstructData, \
         &ClassName::put, \
index 739247f..f6f4d71 100644 (file)
@@ -38,6 +38,10 @@ void JSCell::destroy(JSCell* cell)
     cell->JSCell::~JSCell();
 }
 
+void JSCell::copyBackingStore(JSCell*, CopyVisitor&)
+{
+}
+
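The method-table change follows the existing visitChildren pattern: JSCell supplies the no-op default above, subclasses such as JSObject override it, and CREATE_METHOD_TABLE wires the per-class function pointer in so the copy phase can dispatch without knowing concrete types. A simplified sketch of that dispatch (the structs here are stand-ins, not the real JSC classes):

```cpp
#include <cassert>

struct CopyVisitor { }; // placeholder for JSC::CopyVisitor

struct Cell;
typedef void (*CopyBackingStoreFunctionPtr)(Cell*, CopyVisitor&);

struct MethodTable {
    CopyBackingStoreFunctionPtr copyBackingStore;
};

struct Cell {
    const MethodTable* methodTable;
    bool backingStoreCopied = false;

    static void copyBackingStore(Cell*, CopyVisitor&) { } // default: nothing to copy
};

struct ObjectWithButterfly : Cell {
    static void copyBackingStore(Cell* cell, CopyVisitor&)
    {
        // A real implementation would evacuate the butterfly; just record the call.
        static_cast<ObjectWithButterfly*>(cell)->backingStoreCopied = true;
    }
};

static const MethodTable cellTable = { &Cell::copyBackingStore };
static const MethodTable objectTable = { &ObjectWithButterfly::copyBackingStore };
```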
 bool JSCell::getString(ExecState* exec, String& stringValue) const
 {
     if (!isString())
index 1cc5b81..a39af12 100644 (file)
@@ -38,6 +38,7 @@
 
 namespace JSC {
 
+    class CopyVisitor;
     class JSDestructibleObject;
     class JSGlobalObject;
     class LLIntOffsetsExtractor;
@@ -100,6 +101,7 @@ namespace JSC {
         JS_EXPORT_PRIVATE JSObject* toObject(ExecState*, JSGlobalObject*) const;
 
         static void visitChildren(JSCell*, SlotVisitor&);
+        JS_EXPORT_PRIVATE static void copyBackingStore(JSCell*, CopyVisitor&);
 
         // Object operations, with the toObject operation included.
         const ClassInfo* classInfo() const;
index fb69591..6a3fb84 100644 (file)
@@ -26,6 +26,8 @@
 
 #include "ButterflyInlineMethods.h"
 #include "CopiedSpaceInlineMethods.h"
+#include "CopyVisitor.h"
+#include "CopyVisitorInlineMethods.h"
 #include "DatePrototype.h"
 #include "ErrorConstructor.h"
 #include "GetterSetter.h"
@@ -90,7 +92,7 @@ static inline void getClassPropertyNames(ExecState* exec, const ClassInfo* class
     }
 }
 
-ALWAYS_INLINE void JSObject::visitButterfly(SlotVisitor& visitor, Butterfly* butterfly, size_t storageSize)
+ALWAYS_INLINE void JSObject::copyButterfly(CopyVisitor& visitor, Butterfly* butterfly, size_t storageSize)
 {
     ASSERT(butterfly);
     
@@ -107,26 +109,20 @@ ALWAYS_INLINE void JSObject::visitButterfly(SlotVisitor& visitor, Butterfly* but
         preCapacity = 0;
         indexingPayloadSizeInBytes = 0;
     }
-    size_t capacityInBytes = Butterfly::totalSize(
-        preCapacity, propertyCapacity, hasIndexingHeader, indexingPayloadSizeInBytes);
-    if (visitor.checkIfShouldCopyAndPinOtherwise(
-            butterfly->base(preCapacity, propertyCapacity), capacityInBytes)) {
+    size_t capacityInBytes = Butterfly::totalSize(preCapacity, propertyCapacity, hasIndexingHeader, indexingPayloadSizeInBytes);
+    if (visitor.checkIfShouldCopy(butterfly->base(preCapacity, propertyCapacity), capacityInBytes)) {
         Butterfly* newButterfly = Butterfly::createUninitializedDuringCollection(visitor, preCapacity, propertyCapacity, hasIndexingHeader, indexingPayloadSizeInBytes);
         
-        // Mark and copy the properties.
+        // Copy the properties.
         PropertyStorage currentTarget = newButterfly->propertyStorage();
         PropertyStorage currentSource = butterfly->propertyStorage();
-        for (size_t count = storageSize; count--;) {
-            JSValue value = (--currentSource)->get();
-            ASSERT(value);
-            visitor.appendUnbarrieredValue(&value);
-            (--currentTarget)->setWithoutWriteBarrier(value);
-        }
+        for (size_t count = storageSize; count--;)
+            (--currentTarget)->setWithoutWriteBarrier((--currentSource)->get());
         
         if (UNLIKELY(hasIndexingHeader)) {
             *newButterfly->indexingHeader() = *butterfly->indexingHeader();
             
-            // Mark and copy the array if appropriate.
+            // Copy the array if appropriate.
             
             WriteBarrier<Unknown>* currentTarget;
             WriteBarrier<Unknown>* currentSource;
@@ -146,8 +142,6 @@ ALWAYS_INLINE void JSObject::visitButterfly(SlotVisitor& visitor, Butterfly* but
                 currentTarget = newButterfly->arrayStorage()->m_vector;
                 currentSource = butterfly->arrayStorage()->m_vector;
                 count = newButterfly->arrayStorage()->vectorLength();
-                if (newButterfly->arrayStorage()->m_sparseMap)
-                    visitor.append(&newButterfly->arrayStorage()->m_sparseMap);
                 break;
             }
             default:
@@ -158,32 +152,50 @@ ALWAYS_INLINE void JSObject::visitButterfly(SlotVisitor& visitor, Butterfly* but
                 break;
             }
 
-            while (count--) {
-                JSValue value = (currentSource++)->get();
-                if (value)
-                    visitor.appendUnbarrieredValue(&value);
-                (currentTarget++)->setWithoutWriteBarrier(value);
-            }
+            while (count--)
+                (currentTarget++)->setWithoutWriteBarrier((currentSource++)->get());
         }
         
         m_butterfly = newButterfly;
+        visitor.didCopy(butterfly->base(preCapacity, propertyCapacity), capacityInBytes);
+    } 
+}
+
+ALWAYS_INLINE void JSObject::visitButterfly(SlotVisitor& visitor, Butterfly* butterfly, size_t storageSize)
+{
+    ASSERT(butterfly);
+    
+    Structure* structure = this->structure();
+    
+    size_t propertyCapacity = structure->outOfLineCapacity();
+    size_t preCapacity;
+    size_t indexingPayloadSizeInBytes;
+    bool hasIndexingHeader = JSC::hasIndexingHeader(structure->indexingType());
+    if (UNLIKELY(hasIndexingHeader)) {
+        preCapacity = butterfly->indexingHeader()->preCapacity(structure);
+        indexingPayloadSizeInBytes = butterfly->indexingHeader()->indexingPayloadSizeInBytes(structure);
     } else {
-        // Mark the properties.
-        visitor.appendValues(butterfly->propertyStorage() - storageSize, storageSize);
-        
-        // Mark the array if appropriate.
-        switch (structure->indexingType()) {
-        case ALL_CONTIGUOUS_INDEXING_TYPES:
-            visitor.appendValues(butterfly->contiguous(), butterfly->publicLength());
-            break;
-        case ALL_ARRAY_STORAGE_INDEXING_TYPES:
-            visitor.appendValues(butterfly->arrayStorage()->m_vector, butterfly->arrayStorage()->vectorLength());
-            if (butterfly->arrayStorage()->m_sparseMap)
-                visitor.append(&butterfly->arrayStorage()->m_sparseMap);
-            break;
-        default:
-            break;
-        }
+        preCapacity = 0;
+        indexingPayloadSizeInBytes = 0;
+    }
+    size_t capacityInBytes = Butterfly::totalSize(preCapacity, propertyCapacity, hasIndexingHeader, indexingPayloadSizeInBytes);
+
+    // Mark the properties.
+    visitor.appendValues(butterfly->propertyStorage() - storageSize, storageSize);
+    visitor.copyLater(butterfly->base(preCapacity, propertyCapacity), capacityInBytes);
+    
+    // Mark the array if appropriate.
+    switch (structure->indexingType()) {
+    case ALL_CONTIGUOUS_INDEXING_TYPES:
+        visitor.appendValues(butterfly->contiguous(), butterfly->publicLength());
+        break;
+    case ALL_ARRAY_STORAGE_INDEXING_TYPES:
+        visitor.appendValues(butterfly->arrayStorage()->m_vector, butterfly->arrayStorage()->vectorLength());
+        if (butterfly->arrayStorage()->m_sparseMap)
+            visitor.append(&butterfly->arrayStorage()->m_sparseMap);
+        break;
+    default:
+        break;
     }
 }
 
@@ -207,6 +219,16 @@ void JSObject::visitChildren(JSCell* cell, SlotVisitor& visitor)
 #endif
 }
 
+void JSObject::copyBackingStore(JSCell* cell, CopyVisitor& visitor)
+{
+    JSObject* thisObject = jsCast<JSObject*>(cell);
+    ASSERT_GC_OBJECT_INHERITS(thisObject, &s_info);
+    
+    Butterfly* butterfly = thisObject->butterfly();
+    if (butterfly)
+        thisObject->copyButterfly(visitor, butterfly, thisObject->structure()->outOfLineSize());
+}
+
 void JSFinalObject::visitChildren(JSCell* cell, SlotVisitor& visitor)
 {
     JSFinalObject* thisObject = jsCast<JSFinalObject*>(cell);
index eb09755..9204099 100644 (file)
@@ -115,6 +115,7 @@ namespace JSC {
         }
         
         JS_EXPORT_PRIVATE static void visitChildren(JSCell*, SlotVisitor&);
+        JS_EXPORT_PRIVATE static void copyBackingStore(JSCell*, CopyVisitor&);
 
         JS_EXPORT_PRIVATE static String className(const JSObject*);
 
@@ -639,6 +640,7 @@ namespace JSC {
         void resetInheritorID(JSGlobalData&);
         
         void visitButterfly(SlotVisitor&, Butterfly*, size_t storageSize);
+        void copyButterfly(CopyVisitor&, Butterfly*, size_t storageSize);
 
         // Call this if you know that the object is in a mode where it has array
         // storage. This will assert otherwise.
@@ -964,14 +966,14 @@ inline JSValue JSObject::prototype() const
     return structure()->storedPrototype();
 }
 
-inline bool JSCell::inherits(const ClassInfo* info) const
+inline const MethodTable* JSCell::methodTable() const
 {
-    return classInfo()->isSubClassOf(info);
+    return &classInfo()->methodTable;
 }
 
-inline const MethodTable* JSCell::methodTable() const
+inline bool JSCell::inherits(const ClassInfo* info) const
 {
-    return &classInfo()->methodTable;
+    return classInfo()->isSubClassOf(info);
 }
 
 // this method is here to be after the inline declaration of JSCell::inherits
index 6413512..d6d8c66 100644 (file)
@@ -115,6 +115,8 @@ namespace JSC {
     v(unsigned, gcMarkStackSegmentSize, pageSize()) \
     v(unsigned, numberOfGCMarkers, computeNumberOfGCMarkers(7)) \
     v(unsigned, opaqueRootMergeThreshold, 1000) \
+    v(double, minHeapUtilization, 0.8) \
+    v(double, minCopiedBlockUtilization, 0.9) \
     \
     v(bool, forceWeakRandomSeed, false) \
     v(unsigned, forcedWeakRandomSeed, 0) \