MarkedBlock should have a footer instead of a header
author fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Sun, 28 Jan 2018 02:23:25 +0000 (02:23 +0000)
committer fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Sun, 28 Jan 2018 02:23:25 +0000 (02:23 +0000)
https://bugs.webkit.org/show_bug.cgi?id=182217

Reviewed by JF Bastien.

This moves the MarkedBlock's meta-data from the header to the footer. This doesn't really
change anything except for some compile-time constants, so it should not affect performance.

This change is to help protect against Spectre attacks on structure checks, which allow for
small-offset out-of-bounds access. By putting the meta-data at the end of the block, small
OOBs will only get to other objects in the same block or the block footer. The block footer
is not super interesting. So, combined with the TLC change (r227617), we can use blocks as the
mechanism for achieving distance between objects from different origins.
We just need to avoid ever putting objects from different origins in the same block. That's
what bug 181636 is about.
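
For illustration, here is a minimal standalone C++ sketch of the new layout. FooterSketch and
footerFor() are hypothetical stand-ins rather than the real JSC types (so the computed endAtom
differs from the real one); only the blockSize/atomSize constants and the endAtom/offsetOfFooter
formulas mirror what this patch adds to MarkedBlock.h:

    #include <cstddef>
    #include <cstdint>

    // Sketch only: the footer lives at the end of the 16KB block; everything before it is payload.
    struct FooterSketch {
        void* handle;                  // stand-in for Handle& m_handle
        void* vm;                      // stand-in for VM* m_vm
        void* subspace;                // stand-in for Subspace* m_subspace
        std::uint32_t markingVersion;  // stand-in for HeapVersion m_markingVersion
        // lock, biased mark count, and mark bitmap omitted
    };

    constexpr std::size_t blockSize = 16 * 1024;
    constexpr std::size_t atomSize = 16;

    // Number of 16-byte atoms that fit before the footer; the footer starts right after them.
    constexpr std::size_t endAtom = (blockSize - sizeof(FooterSketch)) / atomSize;
    constexpr std::size_t offsetOfFooter = endAtom * atomSize;

    // A MarkedBlock is a page-aligned 16KB region, so the footer is found by offset from the
    // block base rather than through a member stored at the start of the block.
    inline FooterSketch* footerFor(char* blockBase)
    {
        return reinterpret_cast<FooterSketch*>(blockBase + offsetOfFooter);
    }

Because the footer is reached purely by adding a constant offset to the masked block base, the
LLInt can load the VM with "MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm" (see the .asm
changes below), and a small out-of-bounds offset from an object can only land on other payload
atoms in the same block or on this footer.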

* heap/BlockDirectory.cpp:
(JSC::blockHeaderSize): Deleted.
(JSC::BlockDirectory::blockSizeForBytes): Deleted.
* heap/BlockDirectory.h:
* heap/HeapUtil.h:
(JSC::HeapUtil::findGCObjectPointersForMarking):
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::MarkedBlock):
(JSC::MarkedBlock::~MarkedBlock):
(JSC::MarkedBlock::Footer::Footer):
(JSC::MarkedBlock::Footer::~Footer):
(JSC::MarkedBlock::Handle::stopAllocating):
(JSC::MarkedBlock::Handle::lastChanceToFinalize):
(JSC::MarkedBlock::Handle::resumeAllocating):
(JSC::MarkedBlock::aboutToMarkSlow):
(JSC::MarkedBlock::resetMarks):
(JSC::MarkedBlock::assertMarksNotStale):
(JSC::MarkedBlock::Handle::didConsumeFreeList):
(JSC::MarkedBlock::markCount):
(JSC::MarkedBlock::clearHasAnyMarked):
(JSC::MarkedBlock::Handle::didAddToDirectory):
(JSC::MarkedBlock::Handle::didRemoveFromDirectory):
(JSC::MarkedBlock::Handle::sweep):
* heap/MarkedBlock.h:
(JSC::MarkedBlock::markingVersion const):
(JSC::MarkedBlock::lock):
(JSC::MarkedBlock::subspace const):
(JSC::MarkedBlock::footer):
(JSC::MarkedBlock::footer const):
(JSC::MarkedBlock::handle):
(JSC::MarkedBlock::handle const):
(JSC::MarkedBlock::Handle::blockFooter):
(JSC::MarkedBlock::isAtomAligned):
(JSC::MarkedBlock::Handle::cellAlign):
(JSC::MarkedBlock::blockFor):
(JSC::MarkedBlock::vm const):
(JSC::MarkedBlock::weakSet):
(JSC::MarkedBlock::cellSize):
(JSC::MarkedBlock::attributes const):
(JSC::MarkedBlock::atomNumber):
(JSC::MarkedBlock::areMarksStale):
(JSC::MarkedBlock::aboutToMark):
(JSC::MarkedBlock::isMarkedRaw):
(JSC::MarkedBlock::isMarked):
(JSC::MarkedBlock::testAndSetMarked):
(JSC::MarkedBlock::marks const):
(JSC::MarkedBlock::isAtom):
(JSC::MarkedBlock::Handle::forEachCell):
(JSC::MarkedBlock::hasAnyMarked const):
(JSC::MarkedBlock::noteMarked):
(WTF::MarkedBlockHash::hash):
(JSC::MarkedBlock::firstAtom): Deleted.
* heap/MarkedBlockInlines.h:
(JSC::MarkedBlock::marksConveyLivenessDuringMarking):
(JSC::MarkedBlock::Handle::isLive):
(JSC::MarkedBlock::Handle::specializedSweep):
(JSC::MarkedBlock::Handle::forEachLiveCell):
(JSC::MarkedBlock::Handle::forEachDeadCell):
(JSC::MarkedBlock::Handle::forEachMarkedCell):
* heap/MarkedSpace.cpp:
* heap/MarkedSpace.h:
* llint/LowLevelInterpreter.asm:
* llint/LowLevelInterpreter32_64.asm:
* llint/LowLevelInterpreter64.asm:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@227717 268f45cc-cd09-0410-ab3c-d52691b4dbfc

12 files changed:
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/heap/BlockDirectory.cpp
Source/JavaScriptCore/heap/BlockDirectory.h
Source/JavaScriptCore/heap/HeapUtil.h
Source/JavaScriptCore/heap/MarkedBlock.cpp
Source/JavaScriptCore/heap/MarkedBlock.h
Source/JavaScriptCore/heap/MarkedBlockInlines.h
Source/JavaScriptCore/heap/MarkedSpace.cpp
Source/JavaScriptCore/heap/MarkedSpace.h
Source/JavaScriptCore/llint/LowLevelInterpreter.asm
Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
Source/JavaScriptCore/llint/LowLevelInterpreter64.asm

index 6e3b139..1b0aa08 100644
@@ -1,3 +1,86 @@
+2018-01-27  Filip Pizlo  <fpizlo@apple.com>
+
+        MarkedBlock should have a footer instead of a header
+        https://bugs.webkit.org/show_bug.cgi?id=182217
+
+        Reviewed by JF Bastien.
+        
+        This moves the MarkedBlock's meta-data from the header to the footer. This doesn't really
+        change anything except for some compile-time constants, so it should not affect performance.
+        
+        This change is to help protect against Spectre attacks on structure checks, which allow for
+        small-offset out-of-bounds access. By putting the meta-data at the end of the block, small
+        OOBs will only get to other objects in the same block or the block footer. The block footer
+        is not super interesting. So, combined with the TLC change (r227617), we can use blocks as the
+        mechanism for achieving distance between objects from different origins.
+        We just need to avoid ever putting objects from different origins in the same block. That's
+        what bug 181636 is about.
+        
+        * heap/BlockDirectory.cpp:
+        (JSC::blockHeaderSize): Deleted.
+        (JSC::BlockDirectory::blockSizeForBytes): Deleted.
+        * heap/BlockDirectory.h:
+        * heap/HeapUtil.h:
+        (JSC::HeapUtil::findGCObjectPointersForMarking):
+        * heap/MarkedBlock.cpp:
+        (JSC::MarkedBlock::MarkedBlock):
+        (JSC::MarkedBlock::~MarkedBlock):
+        (JSC::MarkedBlock::Footer::Footer):
+        (JSC::MarkedBlock::Footer::~Footer):
+        (JSC::MarkedBlock::Handle::stopAllocating):
+        (JSC::MarkedBlock::Handle::lastChanceToFinalize):
+        (JSC::MarkedBlock::Handle::resumeAllocating):
+        (JSC::MarkedBlock::aboutToMarkSlow):
+        (JSC::MarkedBlock::resetMarks):
+        (JSC::MarkedBlock::assertMarksNotStale):
+        (JSC::MarkedBlock::Handle::didConsumeFreeList):
+        (JSC::MarkedBlock::markCount):
+        (JSC::MarkedBlock::clearHasAnyMarked):
+        (JSC::MarkedBlock::Handle::didAddToDirectory):
+        (JSC::MarkedBlock::Handle::didRemoveFromDirectory):
+        (JSC::MarkedBlock::Handle::sweep):
+        * heap/MarkedBlock.h:
+        (JSC::MarkedBlock::markingVersion const):
+        (JSC::MarkedBlock::lock):
+        (JSC::MarkedBlock::subspace const):
+        (JSC::MarkedBlock::footer):
+        (JSC::MarkedBlock::footer const):
+        (JSC::MarkedBlock::handle):
+        (JSC::MarkedBlock::handle const):
+        (JSC::MarkedBlock::Handle::blockFooter):
+        (JSC::MarkedBlock::isAtomAligned):
+        (JSC::MarkedBlock::Handle::cellAlign):
+        (JSC::MarkedBlock::blockFor):
+        (JSC::MarkedBlock::vm const):
+        (JSC::MarkedBlock::weakSet):
+        (JSC::MarkedBlock::cellSize):
+        (JSC::MarkedBlock::attributes const):
+        (JSC::MarkedBlock::atomNumber):
+        (JSC::MarkedBlock::areMarksStale):
+        (JSC::MarkedBlock::aboutToMark):
+        (JSC::MarkedBlock::isMarkedRaw):
+        (JSC::MarkedBlock::isMarked):
+        (JSC::MarkedBlock::testAndSetMarked):
+        (JSC::MarkedBlock::marks const):
+        (JSC::MarkedBlock::isAtom):
+        (JSC::MarkedBlock::Handle::forEachCell):
+        (JSC::MarkedBlock::hasAnyMarked const):
+        (JSC::MarkedBlock::noteMarked):
+        (WTF::MarkedBlockHash::hash):
+        (JSC::MarkedBlock::firstAtom): Deleted.
+        * heap/MarkedBlockInlines.h:
+        (JSC::MarkedBlock::marksConveyLivenessDuringMarking):
+        (JSC::MarkedBlock::Handle::isLive):
+        (JSC::MarkedBlock::Handle::specializedSweep):
+        (JSC::MarkedBlock::Handle::forEachLiveCell):
+        (JSC::MarkedBlock::Handle::forEachDeadCell):
+        (JSC::MarkedBlock::Handle::forEachMarkedCell):
+        * heap/MarkedSpace.cpp:
+        * heap/MarkedSpace.h:
+        * llint/LowLevelInterpreter.asm:
+        * llint/LowLevelInterpreter32_64.asm:
+        * llint/LowLevelInterpreter64.asm:
+
 2018-01-27  Yusuke Suzuki  <utatane.tea@gmail.com>
 
         DFG strength reduction fails to convert NumberToStringWithValidRadixConstant for 0 to constant '0'
index e9a3f07..dd89707 100644
@@ -90,19 +90,6 @@ MarkedBlock::Handle* BlockDirectory::findBlockForAllocation()
     return m_blocks[m_allocationCursor];
 }
 
-static size_t blockHeaderSize()
-{
-    return WTF::roundUpToMultipleOf<MarkedBlock::atomSize>(sizeof(MarkedBlock));
-}
-
-size_t BlockDirectory::blockSizeForBytes(size_t bytes)
-{
-    size_t minBlockSize = MarkedBlock::blockSize;
-    size_t minAllocationSize = blockHeaderSize() + WTF::roundUpToMultipleOf<MarkedBlock::atomSize>(bytes);
-    minAllocationSize = WTF::roundUpToMultipleOf(WTF::pageSize(), minAllocationSize);
-    return std::max(minBlockSize, minAllocationSize);
-}
-
 MarkedBlock::Handle* BlockDirectory::tryAllocateBlock()
 {
     SuperSamplerScope superSamplerScope(false);
index 2dd63ba..00dcd23 100644
@@ -113,8 +113,6 @@ public:
 
     bool isPagedOut(double deadline);
     
-    static size_t blockSizeForBytes(size_t);
-    
     Lock& bitvectorLock() { return m_bitvectorLock; }
 
 #define BLOCK_DIRECTORY_BIT_ACCESSORS(lowerBitName, capitalBitName)     \
index 32c455a..01cfdbc 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2016-2018 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -124,7 +124,7 @@ public:
     
         // Also, a butterfly could point at the end of an object plus sizeof(IndexingHeader). In that
         // case, this is pointing to the object to the right of the one we should be marking.
-        if (candidate->atomNumber(alignedPointer) > MarkedBlock::firstAtom()
+        if (candidate->atomNumber(alignedPointer) > 0
             && pointer <= alignedPointer + sizeof(IndexingHeader))
             tryPointer(alignedPointer - candidate->cellSize());
     }
index dc7f579..7514993 100644
@@ -86,12 +86,26 @@ MarkedBlock::Handle::~Handle()
 }
 
 MarkedBlock::MarkedBlock(VM& vm, Handle& handle)
+{
+    new (&footer()) Footer(vm, handle);
+    if (false)
+        dataLog(RawPointer(this), ": Allocated.\n");
+}
+
+MarkedBlock::~MarkedBlock()
+{
+    footer().~Footer();
+}
+
+MarkedBlock::Footer::Footer(VM& vm, Handle& handle)
     : m_handle(handle)
     , m_vm(&vm)
     , m_markingVersion(MarkedSpace::nullVersion)
 {
-    if (false)
-        dataLog(RawPointer(this), ": Allocated.\n");
+}
+
+MarkedBlock::Footer::~Footer()
+{
 }
 
 void MarkedBlock::Handle::unsweepWithNoNewlyAllocated()
@@ -108,7 +122,7 @@ void MarkedBlock::Handle::setIsFreeListed()
 
 void MarkedBlock::Handle::stopAllocating(const FreeList& freeList)
 {
-    auto locker = holdLock(block().m_lock);
+    auto locker = holdLock(blockFooter().m_lock);
     
     if (false)
         dataLog(RawPointer(this), ": MarkedBlock::Handle::stopAllocating!\n");
@@ -155,9 +169,9 @@ void MarkedBlock::Handle::lastChanceToFinalize()
 {
     directory()->setIsAllocated(NoLockingNecessary, this, false);
     directory()->setIsDestructible(NoLockingNecessary, this, true);
-    m_block->m_marks.clearAll();
-    m_block->clearHasAnyMarked();
-    m_block->m_markingVersion = heap()->objectSpace().markingVersion();
+    blockFooter().m_marks.clearAll();
+    block().clearHasAnyMarked();
+    blockFooter().m_markingVersion = heap()->objectSpace().markingVersion();
     m_weakSet.lastChanceToFinalize();
     m_newlyAllocated.clearAll();
     m_newlyAllocatedVersion = heap()->objectSpace().newlyAllocatedVersion();
@@ -167,7 +181,7 @@ void MarkedBlock::Handle::lastChanceToFinalize()
 void MarkedBlock::Handle::resumeAllocating(FreeList& freeList)
 {
     {
-        auto locker = holdLock(block().m_lock);
+        auto locker = holdLock(blockFooter().m_lock);
         
         if (false)
             dataLog(RawPointer(this), ": MarkedBlock::Handle::resumeAllocating!\n");
@@ -200,7 +214,7 @@ void MarkedBlock::Handle::zap(const FreeList& freeList)
 void MarkedBlock::aboutToMarkSlow(HeapVersion markingVersion)
 {
     ASSERT(vm()->heap.objectSpace().isMarking());
-    auto locker = holdLock(m_lock);
+    auto locker = holdLock(footer().m_lock);
     
     if (!areMarksStale(markingVersion))
         return;
@@ -217,7 +231,7 @@ void MarkedBlock::aboutToMarkSlow(HeapVersion markingVersion)
         // date version! If it does, then we want to leave the newlyAllocated alone, since that
         // means that we had allocated in this previously empty block but did not fill it up, so
         // we created a newlyAllocated.
-        m_marks.clearAll();
+        footer().m_marks.clearAll();
     } else {
         if (false)
             dataLog(RawPointer(this), ": Doing things.\n");
@@ -230,16 +244,16 @@ void MarkedBlock::aboutToMarkSlow(HeapVersion markingVersion)
             // cannot be lastChanceToFinalize. So it must be stopAllocating. That means that we just
             // computed the newlyAllocated bits just before the start of an increment. When we are in that
             // mode, it seems as if newlyAllocated should subsume marks.
-            ASSERT(handle().m_newlyAllocated.subsumes(m_marks));
-            m_marks.clearAll();
+            ASSERT(handle().m_newlyAllocated.subsumes(footer().m_marks));
+            footer().m_marks.clearAll();
         } else {
-            handle().m_newlyAllocated.setAndClear(m_marks);
+            handle().m_newlyAllocated.setAndClear(footer().m_marks);
             handle().m_newlyAllocatedVersion = newlyAllocatedVersion;
         }
     }
     clearHasAnyMarked();
     WTF::storeStoreFence();
-    m_markingVersion = markingVersion;
+    footer().m_markingVersion = markingVersion;
     
     // This means we're the first ones to mark any object in this block.
     directory->setIsMarkingNotEmpty(holdLock(directory->bitvectorLock()), &handle(), true);
@@ -260,14 +274,14 @@ void MarkedBlock::resetMarks()
     // version is null, aboutToMarkSlow() will assume that the marks were not stale as of before
     // beginMarking(). Hence the need to whip the marks into shape.
     if (areMarksStale())
-        m_marks.clearAll();
-    m_markingVersion = MarkedSpace::nullVersion;
+        footer().m_marks.clearAll();
+    footer().m_markingVersion = MarkedSpace::nullVersion;
 }
 
 #if !ASSERT_DISABLED
 void MarkedBlock::assertMarksNotStale()
 {
-    ASSERT(m_markingVersion == vm()->heap.objectSpace().markingVersion());
+    ASSERT(footer().m_markingVersion == vm()->heap.objectSpace().markingVersion());
 }
 #endif // !ASSERT_DISABLED
 
@@ -288,7 +302,7 @@ bool MarkedBlock::isMarked(const void* p)
 
 void MarkedBlock::Handle::didConsumeFreeList()
 {
-    auto locker = holdLock(block().m_lock);
+    auto locker = holdLock(blockFooter().m_lock);
     if (false)
         dataLog(RawPointer(this), ": MarkedBlock::Handle::didConsumeFreeList!\n");
     ASSERT(isFreeListed());
@@ -298,12 +312,12 @@ void MarkedBlock::Handle::didConsumeFreeList()
 
 size_t MarkedBlock::markCount()
 {
-    return areMarksStale() ? 0 : m_marks.count();
+    return areMarksStale() ? 0 : footer().m_marks.count();
 }
 
 void MarkedBlock::clearHasAnyMarked()
 {
-    m_biasedMarkCount = m_markCountBias;
+    footer().m_biasedMarkCount = footer().m_markCountBias;
 }
 
 void MarkedBlock::noteMarkedSlow()
@@ -329,11 +343,11 @@ void MarkedBlock::Handle::didAddToDirectory(BlockDirectory* directory, size_t in
     
     m_index = index;
     m_directory = directory;
-    m_block->m_subspace = directory->subspace();
+    blockFooter().m_subspace = directory->subspace();
     
     size_t cellSize = directory->cellSize();
     m_atomsPerCell = (cellSize + atomSize - 1) / atomSize;
-    m_endAtom = atomsPerBlock - m_atomsPerCell + 1;
+    m_endAtom = endAtom - m_atomsPerCell + 1;
     
     m_attributes = directory->attributes();
 
@@ -347,7 +361,7 @@ void MarkedBlock::Handle::didAddToDirectory(BlockDirectory* directory, size_t in
     RELEASE_ASSERT(markCountBias < 0);
     
     // This means we haven't marked anything yet.
-    block().m_biasedMarkCount = block().m_markCountBias = static_cast<int16_t>(markCountBias);
+    blockFooter().m_biasedMarkCount = blockFooter().m_markCountBias = static_cast<int16_t>(markCountBias);
 }
 
 void MarkedBlock::Handle::didRemoveFromDirectory()
@@ -357,7 +371,7 @@ void MarkedBlock::Handle::didRemoveFromDirectory()
     
     m_index = std::numeric_limits<size_t>::max();
     m_directory = nullptr;
-    m_block->m_subspace = nullptr;
+    blockFooter().m_subspace = nullptr;
 }
 
 #if !ASSERT_DISABLED
@@ -410,7 +424,7 @@ void MarkedBlock::Handle::sweep(FreeList* freeList)
     }
     
     if (space()->isMarking())
-        block().m_lock.lock();
+        blockFooter().m_lock.lock();
     
     subspace()->didBeginSweepingToFreeList(this);
     
index 905b9d4..a1a3d5f 100644
@@ -43,7 +43,6 @@ class MarkedSpace;
 class SlotVisitor;
 class Subspace;
 
-typedef uintptr_t Bits;
 typedef uint32_t HeapVersion;
 
 // A marked block is a page-aligned container for heap-allocated objects.
@@ -60,16 +59,18 @@ class MarkedBlock {
     friend struct VerifyMarked;
 
 public:
+    class Footer;
     class Handle;
 private:
+    friend class Footer;
     friend class Handle;
 public:
-    static const size_t atomSize = 16; // bytes
-    static const size_t blockSize = 16 * KB;
-    static const size_t blockMask = ~(blockSize - 1); // blockSize must be a power of two.
-
-    static const size_t atomsPerBlock = blockSize / atomSize;
+    static constexpr size_t atomSize = 16; // bytes
+    static constexpr size_t blockSize = 16 * KB;
+    static constexpr size_t blockMask = ~(blockSize - 1); // blockSize must be a power of two.
 
+    static constexpr size_t atomsPerBlock = blockSize / atomSize;
+    
     static_assert(!(MarkedBlock::atomSize & (MarkedBlock::atomSize - 1)), "MarkedBlock::atomSize must be a power of two.");
     static_assert(!(MarkedBlock::blockSize & (MarkedBlock::blockSize - 1)), "MarkedBlock::blockSize must be a power of two.");
     
@@ -103,6 +104,7 @@ public:
         ~Handle();
             
         MarkedBlock& block();
+        MarkedBlock::Footer& blockFooter();
             
         void* cellAlign(void*);
             
@@ -244,10 +246,71 @@ public:
             
         MarkedBlock* m_block { nullptr };
     };
+
+private:    
+    static constexpr size_t atomAlignmentMask = atomSize - 1;
+
+    typedef char Atom[atomSize];
+
+public:
+    class Footer {
+    public:
+        Footer(VM&, Handle&);
+        ~Footer();
+        
+    private:
+        friend class LLIntOffsetsExtractor;
+        friend class MarkedBlock;
+        
+        Handle& m_handle;
+        VM* m_vm;
+        Subspace* m_subspace;
+
+        CountingLock m_lock;
+    
+        // The actual mark count can be computed by doing: m_biasedMarkCount - m_markCountBias. Note
+        // that this count is racy. It will accurately detect whether or not exactly zero things were
+        // marked, but if N things got marked, then this may report anything in the range [1, N] (or
+        // before unbiased, it would be [1 + m_markCountBias, N + m_markCountBias].)
+        int16_t m_biasedMarkCount;
+    
+        // We bias the mark count so that if m_biasedMarkCount >= 0 then the block should be retired.
+        // We go to all this trouble to make marking a bit faster: this way, marking knows when to
+        // retire a block using a js/jns on m_biasedMarkCount.
+        //
+        // For example, if a block has room for 100 objects and retirement happens whenever 90% are
+        // live, then m_markCountBias will be -90. This way, when marking begins, this will cause us to
+        // set m_biasedMarkCount to -90 as well, since:
+        //
+        //     m_biasedMarkCount = actualMarkCount + m_markCountBias.
+        //
+        // Marking an object will increment m_biasedMarkCount. Once 90 objects get marked, we will have
+        // m_biasedMarkCount = 0, which will trigger retirement. In other words, we want to set
+        // m_markCountBias like so:
+        //
+        //     m_markCountBias = -(minMarkedBlockUtilization * cellsPerBlock)
+        //
+        // All of this also means that you can detect if any objects are marked by doing:
+        //
+        //     m_biasedMarkCount != m_markCountBias
+        int16_t m_markCountBias;
+
+        HeapVersion m_markingVersion;
+
+        Bitmap<atomsPerBlock> m_marks;
+    };
         
+private:    
+    Footer& footer();
+    const Footer& footer() const;
+
+public:
+    static constexpr size_t endAtom = (blockSize - sizeof(Footer)) / atomSize;
+
     static MarkedBlock::Handle* tryCreate(Heap&, AlignedMemoryAllocator*);
         
     Handle& handle();
+    const Handle& handle() const;
         
     VM* vm() const;
     inline Heap* heap() const;
@@ -255,7 +318,6 @@ public:
 
     static bool isAtomAligned(const void*);
     static MarkedBlock* blockFor(const void*);
-    static size_t firstAtom();
     size_t atomNumber(const void*);
         
     size_t markCount();
@@ -295,20 +357,19 @@ public:
     void resetMarks();
     
     bool isMarkedRaw(const void* p);
-    HeapVersion markingVersion() const { return m_markingVersion; }
+    HeapVersion markingVersion() const { return footer().m_markingVersion; }
     
     const Bitmap<atomsPerBlock>& marks() const;
     
-    CountingLock& lock() { return m_lock; }
+    CountingLock& lock() { return footer().m_lock; }
+    
+    Subspace* subspace() const { return footer().m_subspace; }
     
-    Subspace* subspace() const { return m_subspace; }
+    static constexpr size_t offsetOfFooter = endAtom * atomSize;
 
 private:
-    static const size_t atomAlignmentMask = atomSize - 1;
-
-    typedef char Atom[atomSize];
-
     MarkedBlock(VM&, Handle&);
+    ~MarkedBlock();
     Atom* atoms();
         
     JS_EXPORT_PRIVATE void aboutToMarkSlow(HeapVersion markingVersion);
@@ -318,48 +379,26 @@ private:
     
     inline bool marksConveyLivenessDuringMarking(HeapVersion markingVersion);
     inline bool marksConveyLivenessDuringMarking(HeapVersion myMarkingVersion, HeapVersion markingVersion);
-        
-    Handle& m_handle;
-    VM* m_vm;
-    Subspace* m_subspace;
-
-    CountingLock m_lock;
-    
-    // The actual mark count can be computed by doing: m_biasedMarkCount - m_markCountBias. Note
-    // that this count is racy. It will accurately detect whether or not exactly zero things were
-    // marked, but if N things got marked, then this may report anything in the range [1, N] (or
-    // before unbiased, it would be [1 + m_markCountBias, N + m_markCountBias].)
-    int16_t m_biasedMarkCount;
-    
-    // We bias the mark count so that if m_biasedMarkCount >= 0 then the block should be retired.
-    // We go to all this trouble to make marking a bit faster: this way, marking knows when to
-    // retire a block using a js/jns on m_biasedMarkCount.
-    //
-    // For example, if a block has room for 100 objects and retirement happens whenever 90% are
-    // live, then m_markCountBias will be -90. This way, when marking begins, this will cause us to
-    // set m_biasedMarkCount to -90 as well, since:
-    //
-    //     m_biasedMarkCount = actualMarkCount + m_markCountBias.
-    //
-    // Marking an object will increment m_biasedMarkCount. Once 90 objects get marked, we will have
-    // m_biasedMarkCount = 0, which will trigger retirement. In other words, we want to set
-    // m_markCountBias like so:
-    //
-    //     m_markCountBias = -(minMarkedBlockUtilization * cellsPerBlock)
-    //
-    // All of this also means that you can detect if any objects are marked by doing:
-    //
-    //     m_biasedMarkCount != m_markCountBias
-    int16_t m_markCountBias;
-
-    HeapVersion m_markingVersion;
-
-    Bitmap<atomsPerBlock> m_marks;
 };
 
+inline MarkedBlock::Footer& MarkedBlock::footer()
+{
+    return *bitwise_cast<MarkedBlock::Footer*>(atoms() + endAtom);
+}
+
+inline const MarkedBlock::Footer& MarkedBlock::footer() const
+{
+    return const_cast<MarkedBlock*>(this)->footer();
+}
+
 inline MarkedBlock::Handle& MarkedBlock::handle()
 {
-    return m_handle;
+    return footer().m_handle;
+}
+
+inline const MarkedBlock::Handle& MarkedBlock::handle() const
+{
+    return const_cast<MarkedBlock*>(this)->handle();
 }
 
 inline MarkedBlock& MarkedBlock::Handle::block()
@@ -367,9 +406,9 @@ inline MarkedBlock& MarkedBlock::Handle::block()
     return *m_block;
 }
 
-inline size_t MarkedBlock::firstAtom()
+inline MarkedBlock::Footer& MarkedBlock::Handle::blockFooter()
 {
-    return WTF::roundUpToMultipleOf<atomSize>(sizeof(MarkedBlock)) / atomSize;
+    return block().footer();
 }
 
 inline MarkedBlock::Atom* MarkedBlock::atoms()
@@ -379,13 +418,13 @@ inline MarkedBlock::Atom* MarkedBlock::atoms()
 
 inline bool MarkedBlock::isAtomAligned(const void* p)
 {
-    return !(reinterpret_cast<Bits>(p) & atomAlignmentMask);
+    return !(reinterpret_cast<uintptr_t>(p) & atomAlignmentMask);
 }
 
 inline void* MarkedBlock::Handle::cellAlign(void* p)
 {
-    Bits base = reinterpret_cast<Bits>(block().atoms() + firstAtom());
-    Bits bits = reinterpret_cast<Bits>(p);
+    uintptr_t base = reinterpret_cast<uintptr_t>(block().atoms());
+    uintptr_t bits = reinterpret_cast<uintptr_t>(p);
     bits -= base;
     bits -= bits % cellSize();
     bits += base;
@@ -394,7 +433,7 @@ inline void* MarkedBlock::Handle::cellAlign(void* p)
 
 inline MarkedBlock* MarkedBlock::blockFor(const void* p)
 {
-    return reinterpret_cast<MarkedBlock*>(reinterpret_cast<Bits>(p) & blockMask);
+    return reinterpret_cast<MarkedBlock*>(reinterpret_cast<uintptr_t>(p) & blockMask);
 }
 
 inline BlockDirectory* MarkedBlock::Handle::directory() const
@@ -419,7 +458,7 @@ inline VM* MarkedBlock::Handle::vm() const
 
 inline VM* MarkedBlock::vm() const
 {
-    return m_vm;
+    return footer().m_vm;
 }
 
 inline WeakSet& MarkedBlock::Handle::weakSet()
@@ -429,7 +468,7 @@ inline WeakSet& MarkedBlock::Handle::weakSet()
 
 inline WeakSet& MarkedBlock::weakSet()
 {
-    return m_handle.weakSet();
+    return handle().weakSet();
 }
 
 inline void MarkedBlock::Handle::shrink()
@@ -454,7 +493,7 @@ inline size_t MarkedBlock::Handle::cellSize()
 
 inline size_t MarkedBlock::cellSize()
 {
-    return m_handle.cellSize();
+    return handle().cellSize();
 }
 
 inline const CellAttributes& MarkedBlock::Handle::attributes() const
@@ -464,7 +503,7 @@ inline const CellAttributes& MarkedBlock::Handle::attributes() const
 
 inline const CellAttributes& MarkedBlock::attributes() const
 {
-    return m_handle.attributes();
+    return handle().attributes();
 }
 
 inline bool MarkedBlock::Handle::needsDestruction() const
@@ -494,17 +533,17 @@ inline size_t MarkedBlock::Handle::size()
 
 inline size_t MarkedBlock::atomNumber(const void* p)
 {
-    return (reinterpret_cast<Bits>(p) - reinterpret_cast<Bits>(this)) / atomSize;
+    return (reinterpret_cast<uintptr_t>(p) - reinterpret_cast<uintptr_t>(this)) / atomSize;
 }
 
 inline bool MarkedBlock::areMarksStale(HeapVersion markingVersion)
 {
-    return markingVersion != m_markingVersion;
+    return markingVersion != footer().m_markingVersion;
 }
 
 inline Dependency MarkedBlock::aboutToMark(HeapVersion markingVersion)
 {
-    HeapVersion version = m_markingVersion;
+    HeapVersion version = footer().m_markingVersion;
     if (UNLIKELY(version != markingVersion))
         aboutToMarkSlow(markingVersion);
     return Dependency::fence(version);
@@ -517,32 +556,32 @@ inline void MarkedBlock::Handle::assertMarksNotStale()
 
 inline bool MarkedBlock::isMarkedRaw(const void* p)
 {
-    return m_marks.get(atomNumber(p));
+    return footer().m_marks.get(atomNumber(p));
 }
 
 inline bool MarkedBlock::isMarked(HeapVersion markingVersion, const void* p)
 {
-    HeapVersion version = m_markingVersion;
+    HeapVersion version = footer().m_markingVersion;
     if (UNLIKELY(version != markingVersion))
         return false;
-    return m_marks.get(atomNumber(p), Dependency::fence(version));
+    return footer().m_marks.get(atomNumber(p), Dependency::fence(version));
 }
 
 inline bool MarkedBlock::isMarked(const void* p, Dependency dependency)
 {
     assertMarksNotStale();
-    return m_marks.get(atomNumber(p), dependency);
+    return footer().m_marks.get(atomNumber(p), dependency);
 }
 
 inline bool MarkedBlock::testAndSetMarked(const void* p, Dependency dependency)
 {
     assertMarksNotStale();
-    return m_marks.concurrentTestAndSet(atomNumber(p), dependency);
+    return footer().m_marks.concurrentTestAndSet(atomNumber(p), dependency);
 }
 
 inline const Bitmap<MarkedBlock::atomsPerBlock>& MarkedBlock::marks() const
 {
-    return m_marks;
+    return footer().m_marks;
 }
 
 inline bool MarkedBlock::Handle::isNewlyAllocated(const void* p)
@@ -569,12 +608,9 @@ inline bool MarkedBlock::isAtom(const void* p)
 {
     ASSERT(MarkedBlock::isAtomAligned(p));
     size_t atomNumber = this->atomNumber(p);
-    size_t firstAtom = MarkedBlock::firstAtom();
-    if (atomNumber < firstAtom) // Filters pointers into MarkedBlock metadata.
-        return false;
-    if ((atomNumber - firstAtom) % m_handle.m_atomsPerCell) // Filters pointers into cell middles.
+    if (atomNumber % handle().m_atomsPerCell) // Filters pointers into cell middles.
         return false;
-    if (atomNumber >= m_handle.m_endAtom) // Filters pointers into invalid cells out of the range.
+    if (atomNumber >= handle().m_endAtom) // Filters pointers into invalid cells out of the range.
         return false;
     return true;
 }
@@ -583,7 +619,7 @@ template <typename Functor>
 inline IterationStatus MarkedBlock::Handle::forEachCell(const Functor& functor)
 {
     HeapCell::Kind kind = m_attributes.cellKind;
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+    for (size_t i = 0; i < m_endAtom; i += m_atomsPerCell) {
         HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
         if (functor(cell, kind) == IterationStatus::Done)
             return IterationStatus::Done;
@@ -593,15 +629,15 @@ inline IterationStatus MarkedBlock::Handle::forEachCell(const Functor& functor)
 
 inline bool MarkedBlock::hasAnyMarked() const
 {
-    return m_biasedMarkCount != m_markCountBias;
+    return footer().m_biasedMarkCount != footer().m_markCountBias;
 }
 
 inline void MarkedBlock::noteMarked()
 {
     // This is racy by design. We don't want to pay the price of an atomic increment!
-    int16_t biasedMarkCount = m_biasedMarkCount;
+    int16_t biasedMarkCount = footer().m_biasedMarkCount;
     ++biasedMarkCount;
-    m_biasedMarkCount = biasedMarkCount;
+    footer().m_biasedMarkCount = biasedMarkCount;
     if (UNLIKELY(!biasedMarkCount))
         noteMarkedSlow();
 }
@@ -616,7 +652,7 @@ struct MarkedBlockHash : PtrHash<JSC::MarkedBlock*> {
         // Aligned VM regions tend to be monotonically increasing integers,
         // which is a great hash function, but we have to remove the low bits,
         // since they're always zero, which is a terrible hash function!
-        return reinterpret_cast<JSC::Bits>(key) / JSC::MarkedBlock::blockSize;
+        return reinterpret_cast<uintptr_t>(key) / JSC::MarkedBlock::blockSize;
     }
 };
 
index 2f7a44b..295f8ab 100644
@@ -67,7 +67,7 @@ inline MarkedSpace* MarkedBlock::Handle::space() const
 
 inline bool MarkedBlock::marksConveyLivenessDuringMarking(HeapVersion markingVersion)
 {
-    return marksConveyLivenessDuringMarking(m_markingVersion, markingVersion);
+    return marksConveyLivenessDuringMarking(footer().m_markingVersion, markingVersion);
 }
 
 inline bool MarkedBlock::marksConveyLivenessDuringMarking(HeapVersion myMarkingVersion, HeapVersion markingVersion)
@@ -138,8 +138,9 @@ ALWAYS_INLINE bool MarkedBlock::Handle::isLive(HeapVersion markingVersion, HeapV
     // impact on perf - around 2% on splay if you get it wrong.
 
     MarkedBlock& block = this->block();
+    MarkedBlock::Footer& footer = block.footer();
     
-    auto count = block.m_lock.tryOptimisticFencelessRead();
+    auto count = footer.m_lock.tryOptimisticFencelessRead();
     if (count.value) {
         Dependency fenceBefore = Dependency::fence(count.input);
         MarkedBlock::Handle* fencedThis = fenceBefore.consume(this);
@@ -149,25 +150,26 @@ ALWAYS_INLINE bool MarkedBlock::Handle::isLive(HeapVersion markingVersion, HeapV
         HeapVersion myNewlyAllocatedVersion = fencedThis->m_newlyAllocatedVersion;
         if (myNewlyAllocatedVersion == newlyAllocatedVersion) {
             bool result = fencedThis->isNewlyAllocated(cell);
-            if (block.m_lock.fencelessValidate(count.value, Dependency::fence(result)))
+            if (footer.m_lock.fencelessValidate(count.value, Dependency::fence(result)))
                 return result;
         } else {
             MarkedBlock& fencedBlock = *fenceBefore.consume(&block);
+            MarkedBlock::Footer& fencedFooter = fencedBlock.footer();
             
-            HeapVersion myMarkingVersion = fencedBlock.m_markingVersion;
+            HeapVersion myMarkingVersion = fencedFooter.m_markingVersion;
             if (myMarkingVersion != markingVersion
                 && (!isMarking || !fencedBlock.marksConveyLivenessDuringMarking(myMarkingVersion, markingVersion))) {
-                if (block.m_lock.fencelessValidate(count.value, Dependency::fence(myMarkingVersion)))
+                if (footer.m_lock.fencelessValidate(count.value, Dependency::fence(myMarkingVersion)))
                     return false;
             } else {
-                bool result = fencedBlock.m_marks.get(block.atomNumber(cell));
-                if (block.m_lock.fencelessValidate(count.value, Dependency::fence(result)))
+                bool result = fencedFooter.m_marks.get(block.atomNumber(cell));
+                if (footer.m_lock.fencelessValidate(count.value, Dependency::fence(result)))
                     return result;
             }
         }
     }
     
-    auto locker = holdLock(block.m_lock);
+    auto locker = holdLock(footer.m_lock);
 
     ASSERT(!isFreeListed());
     
@@ -182,7 +184,7 @@ ALWAYS_INLINE bool MarkedBlock::Handle::isLive(HeapVersion markingVersion, HeapV
             return false;
     }
     
-    return block.m_marks.get(block.atomNumber(cell));
+    return footer.m_marks.get(block.atomNumber(cell));
 }
 
 inline bool MarkedBlock::Handle::isLiveCell(HeapVersion markingVersion, HeapVersion newlyAllocatedVersion, bool isMarking, const void* p)
@@ -240,6 +242,7 @@ void MarkedBlock::Handle::specializedSweep(FreeList* freeList, MarkedBlock::Hand
     SuperSamplerScope superSamplerScope(false);
 
     MarkedBlock& block = this->block();
+    MarkedBlock::Footer& footer = block.footer();
     
     if (false)
         dataLog(RawPointer(this), "/", RawPointer(&block), ": MarkedBlock::Handle::specializedSweep!\n");
@@ -262,12 +265,12 @@ void MarkedBlock::Handle::specializedSweep(FreeList* freeList, MarkedBlock::Hand
         && newlyAllocatedMode == DoesNotHaveNewlyAllocated) {
         
         // This is an incredibly powerful assertion that checks the sanity of our block bits.
-        if (marksMode == MarksNotStale && !block.m_marks.isEmpty()) {
+        if (marksMode == MarksNotStale && !footer.m_marks.isEmpty()) {
             WTF::dataFile().atomically(
                 [&] (PrintStream& out) {
                     out.print("Block ", RawPointer(&block), ": marks not empty!\n");
-                    out.print("Block lock is held: ", block.m_lock.isHeld(), "\n");
-                    out.print("Marking version of block: ", block.m_markingVersion, "\n");
+                    out.print("Block lock is held: ", footer.m_lock.isHeld(), "\n");
+                    out.print("Marking version of block: ", footer.m_markingVersion, "\n");
                     out.print("Marking version of heap: ", space()->markingVersion(), "\n");
                     UNREACHABLE_FOR_PLATFORM();
                 });
@@ -276,12 +279,12 @@ void MarkedBlock::Handle::specializedSweep(FreeList* freeList, MarkedBlock::Hand
         char* startOfLastCell = static_cast<char*>(cellAlign(block.atoms() + m_endAtom - 1));
         char* payloadEnd = startOfLastCell + cellSize;
         RELEASE_ASSERT(payloadEnd - MarkedBlock::blockSize <= bitwise_cast<char*>(&block));
-        char* payloadBegin = bitwise_cast<char*>(block.atoms() + firstAtom());
+        char* payloadBegin = bitwise_cast<char*>(block.atoms());
         
         if (sweepMode == SweepToFreeList)
             setIsFreeListed();
         if (space()->isMarking())
-            block.m_lock.unlock();
+            footer.m_lock.unlock();
         if (destructionMode != BlockHasNoDestructors) {
             for (char* cell = payloadBegin; cell < payloadEnd; cell += cellSize)
                 destroy(cell);
@@ -320,9 +323,9 @@ void MarkedBlock::Handle::specializedSweep(FreeList* freeList, MarkedBlock::Hand
             ++count;
         }
     };
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+    for (size_t i = 0; i < m_endAtom; i += m_atomsPerCell) {
         if (emptyMode == NotEmpty
-            && ((marksMode == MarksNotStale && block.m_marks.get(i))
+            && ((marksMode == MarksNotStale && footer.m_marks.get(i))
                 || (newlyAllocatedMode == HasNewlyAllocated && m_newlyAllocated.get(i)))) {
             isEmpty = false;
             continue;
@@ -340,7 +343,7 @@ void MarkedBlock::Handle::specializedSweep(FreeList* freeList, MarkedBlock::Hand
         m_newlyAllocatedVersion = MarkedSpace::nullVersion;
     
     if (space()->isMarking())
-        block.m_lock.unlock();
+        footer.m_lock.unlock();
     
     if (destructionMode == BlockHasDestructorsAndCollectorIsRunning) {
         for (size_t i : deadCells)
@@ -492,7 +495,7 @@ inline IterationStatus MarkedBlock::Handle::forEachLiveCell(const Functor& funct
     // https://bugs.webkit.org/show_bug.cgi?id=180315
     
     HeapCell::Kind kind = m_attributes.cellKind;
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+    for (size_t i = 0; i < m_endAtom; i += m_atomsPerCell) {
         HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
         if (!isLive(cell))
             continue;
@@ -507,7 +510,7 @@ template <typename Functor>
 inline IterationStatus MarkedBlock::Handle::forEachDeadCell(const Functor& functor)
 {
     HeapCell::Kind kind = m_attributes.cellKind;
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
+    for (size_t i = 0; i < m_endAtom; i += m_atomsPerCell) {
         HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
         if (isLive(cell))
             continue;
@@ -527,8 +530,8 @@ inline IterationStatus MarkedBlock::Handle::forEachMarkedCell(const Functor& fun
     WTF::loadLoadFence();
     if (areMarksStale)
         return IterationStatus::Continue;
-    for (size_t i = firstAtom(); i < m_endAtom; i += m_atomsPerCell) {
-        if (!block.m_marks.get(i))
+    for (size_t i = 0; i < m_endAtom; i += m_atomsPerCell) {
+        if (!block.footer().m_marks.get(i))
             continue;
 
         HeapCell* cell = reinterpret_cast_ptr<HeapCell*>(&m_block->atoms()[i]);
index 94aafd8..f18fd1c 100644
@@ -135,7 +135,6 @@ const Vector<size_t>& sizeClasses()
             // FIXME: All of these things should have IsoSubspaces.
             // https://bugs.webkit.org/show_bug.cgi?id=179876
             add(sizeof(UnlinkedFunctionCodeBlock));
-            add(sizeof(FunctionCodeBlock));
             add(sizeof(JSString));
             add(sizeof(JSFunction));
 
index 9c92cd0..cf6293a 100644
@@ -50,25 +50,25 @@ class MarkedSpace {
     WTF_MAKE_NONCOPYABLE(MarkedSpace);
 public:
     // sizeStep is really a synonym for atomSize; it's no accident that they are the same.
-    static const size_t sizeStep = MarkedBlock::atomSize;
+    static constexpr size_t sizeStep = MarkedBlock::atomSize;
     
     // Sizes up to this amount get a size class for each size step.
-    static const size_t preciseCutoff = 80;
+    static constexpr size_t preciseCutoff = 80;
     
-    // The amount of available payload in a block is the block's size minus the header. But the
+    // The amount of available payload in a block is the block's size minus the footer. But the
     // header size might not be atom size aligned, so we round down the result accordingly.
-    static const size_t blockPayload = (MarkedBlock::blockSize - sizeof(MarkedBlock)) & ~(MarkedBlock::atomSize - 1);
+    static constexpr size_t blockPayload = (MarkedBlock::blockSize - sizeof(MarkedBlock::Footer)) & ~(MarkedBlock::atomSize - 1);
     
     // The largest cell we're willing to allocate in a MarkedBlock the "normal way" (i.e. using size
     // classes, rather than a large allocation) is half the size of the payload, rounded down. This
     // ensures that we only use the size class approach if it means being able to pack two things
     // into one block.
-    static const size_t largeCutoff = (blockPayload / 2) & ~(sizeStep - 1);
+    static constexpr size_t largeCutoff = (blockPayload / 2) & ~(sizeStep - 1);
 
-    static const size_t numSizeClasses = largeCutoff / sizeStep;
+    static constexpr size_t numSizeClasses = largeCutoff / sizeStep;
     
-    static const HeapVersion nullVersion = 0; // The version of freshly allocated blocks.
-    static const HeapVersion initialVersion = 2; // The version that the heap starts out with. Set to make sure that nextVersion(nullVersion) != initialVersion.
+    static constexpr HeapVersion nullVersion = 0; // The version of freshly allocated blocks.
+    static constexpr HeapVersion initialVersion = 2; // The version that the heap starts out with. Set to make sure that nextVersion(nullVersion) != initialVersion.
     
     static HeapVersion nextVersion(HeapVersion version)
     {
index e0bf298..7227518 100644
@@ -436,6 +436,7 @@ const NotInitialization = constexpr InitializationMode::NotInitialization
 
 const MarkedBlockSize = constexpr MarkedBlock::blockSize
 const MarkedBlockMask = ~(MarkedBlockSize - 1)
+const MarkedBlockFooterOffset = constexpr MarkedBlock::offsetOfFooter
 
 const BlackThreshold = constexpr blackThreshold
 
index 5816866..0e17ab1 100644
@@ -307,7 +307,7 @@ end
 _handleUncaughtException:
     loadp Callee + PayloadOffset[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
     restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(t3, t0)
     loadp VM::callFrameForCatch[t3], cfr
     storep 0, VM::callFrameForCatch[t3]
@@ -634,7 +634,7 @@ end
 macro branchIfException(label)
     loadp Callee + PayloadOffset[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
     btiz VM::m_exception[t3], .noException
     jmp label
 .noException:
@@ -2000,7 +2000,7 @@ _llint_op_catch:
     # and have set VM::targetInterpreterPCForThrow.
     loadp Callee + PayloadOffset[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
     restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(t3, t0)
     loadp VM::callFrameForCatch[t3], cfr
     storep 0, VM::callFrameForCatch[t3]
@@ -2015,7 +2015,7 @@ _llint_op_catch:
 .isCatchableException:
     loadp Callee + PayloadOffset[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
 
     loadi VM::m_exception[t3], t0
     storei 0, VM::m_exception[t3]
@@ -2053,7 +2053,7 @@ _llint_throw_from_slow_path_trampoline:
     # This essentially emulates the JIT's throwing protocol.
     loadp Callee[cfr], t1
     andp MarkedBlockMask, t1
-    loadp MarkedBlock::m_vm[t1], t1
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
     copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(t1, t2)
     jmp VM::targetMachinePCForThrow[t1]
 
@@ -2072,7 +2072,7 @@ macro nativeCallTrampoline(executableOffsetToFunction)
     if X86 or X86_WIN
         subp 8, sp # align stack pointer
         andp MarkedBlockMask, t1
-        loadp MarkedBlock::m_vm[t1], t3
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t3
         storep cfr, VM::topCallFrame[t3]
         move cfr, a0  # a0 = ecx
         storep a0, [sp]
@@ -2082,7 +2082,7 @@ macro nativeCallTrampoline(executableOffsetToFunction)
         call executableOffsetToFunction[t1]
         loadp Callee + PayloadOffset[cfr], t3
         andp MarkedBlockMask, t3
-        loadp MarkedBlock::m_vm[t3], t3
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
         addp 8, sp
     elsif ARM or ARMv7 or ARMv7_TRADITIONAL or C_LOOP or MIPS
         if MIPS
@@ -2095,7 +2095,7 @@ macro nativeCallTrampoline(executableOffsetToFunction)
         end
         # t1 already contains the Callee.
         andp MarkedBlockMask, t1
-        loadp MarkedBlock::m_vm[t1], t1
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
         storep cfr, VM::topCallFrame[t1]
         move cfr, a0
         loadi Callee + PayloadOffset[cfr], t1
@@ -2108,7 +2108,7 @@ macro nativeCallTrampoline(executableOffsetToFunction)
         end
         loadp Callee + PayloadOffset[cfr], t3
         andp MarkedBlockMask, t3
-        loadp MarkedBlock::m_vm[t3], t3
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
         if MIPS
             addp 24, sp
         else
@@ -2140,7 +2140,7 @@ macro internalFunctionCallTrampoline(offsetOfFunction)
     if X86 or X86_WIN
         subp 8, sp # align stack pointer
         andp MarkedBlockMask, t1
-        loadp MarkedBlock::m_vm[t1], t3
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t3
         storep cfr, VM::topCallFrame[t3]
         move cfr, a0  # a0 = ecx
         storep a0, [sp]
@@ -2149,13 +2149,13 @@ macro internalFunctionCallTrampoline(offsetOfFunction)
         call offsetOfFunction[t1]
         loadp Callee + PayloadOffset[cfr], t3
         andp MarkedBlockMask, t3
-        loadp MarkedBlock::m_vm[t3], t3
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
         addp 8, sp
     elsif ARM or ARMv7 or ARMv7_TRADITIONAL or C_LOOP or MIPS
         subp 8, sp # align stack pointer
         # t1 already contains the Callee.
         andp MarkedBlockMask, t1
-        loadp MarkedBlock::m_vm[t1], t1
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
         storep cfr, VM::topCallFrame[t1]
         move cfr, a0
         loadi Callee + PayloadOffset[cfr], t1
@@ -2167,7 +2167,7 @@ macro internalFunctionCallTrampoline(offsetOfFunction)
         end
         loadp Callee + PayloadOffset[cfr], t3
         andp MarkedBlockMask, t3
-        loadp MarkedBlock::m_vm[t3], t3
+        loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
         addp 8, sp
     else
         error
index b3a7246..dd27a91 100644
@@ -280,7 +280,7 @@ end
 _handleUncaughtException:
     loadp Callee[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
     restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(t3, t0)
     loadp VM::callFrameForCatch[t3], cfr
     storep 0, VM::callFrameForCatch[t3]
@@ -561,7 +561,7 @@ end
 macro branchIfException(label)
     loadp Callee[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
     btqz VM::m_exception[t3], .noException
     jmp label
 .noException:
@@ -2002,7 +2002,7 @@ _llint_op_catch:
     # and have set VM::targetInterpreterPCForThrow.
     loadp Callee[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
     restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(t3, t0)
     loadp VM::callFrameForCatch[t3], cfr
     storep 0, VM::callFrameForCatch[t3]
@@ -2022,7 +2022,7 @@ _llint_op_catch:
 .isCatchableException:
     loadp Callee[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
 
     loadq VM::m_exception[t3], t0
     storeq 0, VM::m_exception[t3]
@@ -2052,7 +2052,7 @@ _llint_op_end:
 _llint_throw_from_slow_path_trampoline:
     loadp Callee[cfr], t1
     andp MarkedBlockMask, t1
-    loadp MarkedBlock::m_vm[t1], t1
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
     copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(t1, t2)
 
     callSlowPath(_llint_slow_path_handle_exception)
@@ -2062,7 +2062,7 @@ _llint_throw_from_slow_path_trampoline:
     # This essentially emulates the JIT's throwing protocol.
     loadp Callee[cfr], t1
     andp MarkedBlockMask, t1
-    loadp MarkedBlock::m_vm[t1], t1
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
     jmp VM::targetMachinePCForThrow[t1]
 
 
@@ -2077,7 +2077,7 @@ macro nativeCallTrampoline(executableOffsetToFunction)
     storep 0, CodeBlock[cfr]
     loadp Callee[cfr], t0
     andp MarkedBlockMask, t0, t1
-    loadp MarkedBlock::m_vm[t1], t1
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
     storep cfr, VM::topCallFrame[t1]
     if ARM64 or C_LOOP
         storep lr, ReturnPC[cfr]
@@ -2104,7 +2104,7 @@ macro nativeCallTrampoline(executableOffsetToFunction)
 
     loadp Callee[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
 
     btqnz VM::m_exception[t3], .handleException
 
@@ -2121,7 +2121,7 @@ macro internalFunctionCallTrampoline(offsetOfFunction)
     storep 0, CodeBlock[cfr]
     loadp Callee[cfr], t0
     andp MarkedBlockMask, t0, t1
-    loadp MarkedBlock::m_vm[t1], t1
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t1], t1
     storep cfr, VM::topCallFrame[t1]
     if ARM64 or C_LOOP
         storep lr, ReturnPC[cfr]
@@ -2147,7 +2147,7 @@ macro internalFunctionCallTrampoline(offsetOfFunction)
 
     loadp Callee[cfr], t3
     andp MarkedBlockMask, t3
-    loadp MarkedBlock::m_vm[t3], t3
+    loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
 
     btqnz VM::m_exception[t3], .handleException