[JSC] Butterfly allocation from LargeAllocation should try "realloc" behavior if collector thread is not active
author    ysuzuki@apple.com <ysuzuki@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Mon, 1 Apr 2019 06:51:11 +0000 (06:51 +0000)
committer ysuzuki@apple.com <ysuzuki@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Mon, 1 Apr 2019 06:51:11 +0000 (06:51 +0000)
https://bugs.webkit.org/show_bug.cgi?id=196160

Reviewed by Saam Barati.

Source/JavaScriptCore:

"realloc" can reduce the peak and current memory footprint when it succeeds because:

1. It does not allocate additional memory while expanding a vector
2. It does not deallocate the old memory; it expands the current allocation in place, so the memory footprint stays tight even before scavenging (see the sketch below)
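
To make the two footprint behaviors concrete, here is a minimal plain-C sketch (hypothetical helper names, error checking elided; not WebKit code):

    #include <stdlib.h> // malloc, realloc, free
    #include <string.h> // memcpy

    // Alternative A: grow by malloc + memcpy. Both blocks are live at the peak, so
    // peak footprint is oldSize + newSize, and the old block occupies memory until
    // it is freed and the allocator's scavenger returns it to the OS.
    void* growByCopy(void* oldBlock, size_t oldSize, size_t newSize)
    {
        void* newBlock = malloc(newSize);
        memcpy(newBlock, oldBlock, oldSize);
        free(oldBlock);
        return newBlock;
    }

    // Alternative B: grow by realloc. The allocator can often extend the block in
    // place, so no second block is allocated and nothing is left behind to scavenge.
    void* growInPlace(void* oldBlock, size_t newSize)
    {
        return realloc(oldBlock, newSize);
    }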

We found that we can "realloc" large butterflies when certain conditions are met because:

1. If the allocation goes to LargeAllocation, its memory region is never reused until the GC sweeps it.
2. Butterflies are owned by their owner JSObjects, so we know the lifetime of butterflies.

This patch attempts to use "realloc" on butterflies if:

1. The butterfly is allocated in the LargeAllocation kind
2. The concurrent collector is not active
3. The butterfly does not have property storage

Condition (2) is required to avoid deallocating a butterfly while the concurrent collector is scanning it. Condition (3) is
required to avoid deallocating a butterfly while a concurrent compiler is reading its property storage.
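
A condensed sketch of the resulting gate, lifted from Butterfly::reallocArrayRightIfPossible in the diff below (the surrounding copy-fallback path is elided):

    // Conditions (1)-(3) above, as checked in the patch:
    //   isLargeAllocation()        => (1) the butterfly is backed by a LargeAllocation
    //   !mutatorShouldBeFenced()   => (2) the concurrent collector is not active
    //   !propertyCapacity          => (3) the butterfly has no property storage
    bool canRealloc = !propertyCapacity && !vm.heap.mutatorShouldBeFenced() && bitwise_cast<HeapCell*>(theBase)->isLargeAllocation();
    if (canRealloc) {
        void* newBase = vm.jsValueGigacageAuxiliarySpace.reallocateLargeAllocationNonVirtual(
            vm, bitwise_cast<HeapCell*>(theBase), newSize, &deferralContext, AllocationFailureMode::ReturnNull);
        if (!newBase)
            return nullptr; // "realloc" failed; the old butterfly is still valid
        return fromBase(newBase, 0, propertyCapacity);
    }
    // Otherwise fall back to allocate + memcpy as before.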

We also change the LargeAllocation mechanism to use "malloc" and "free" instead of "posix_memalign". This allows us to use "realloc"
safely on all platforms. Since LargeAllocation uses pointer alignment to distinguish a LargeAllocation from a MarkedBlock, we manually
establish 16B alignment by allocating 8B of extra memory in "malloc" and shifting the result when needed.
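
A minimal sketch of that adjustment, condensed from LargeAllocation::tryCreate and LargeAllocation::basePointer in the diff below:

    // halfAlignment == 8: "malloc" only guarantees >= 8B alignment, but a
    // LargeAllocation must be 16B-aligned so it can be told apart from a MarkedBlock.
    size_t adjustedAlignmentAllocationSize = headerSize() + size + halfAlignment;
    void* space = subspace->alignedMemoryAllocator()->tryAllocateMemory(adjustedAlignmentAllocationSize);

    bool adjustedAlignment = false;
    if (!isAlignedForLargeAllocation(space)) {
        // Only 8B-aligned: shift forward by 8B to reach 16B alignment, and remember
        // the shift so it can be undone when freeing or reallocating.
        space = bitwise_cast<void*>(bitwise_cast<uintptr_t>(space) + halfAlignment);
        adjustedAlignment = true;
    }

    // basePointer() recovers the pointer "malloc" actually returned, which is what
    // must be passed to "free" and "realloc":
    //     if (m_adjustedAlignment)
    //         return bitwise_cast<char*>(this) - halfAlignment;
    //     return bitwise_cast<void*>(this);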

Speedometer2 and JetStream2 are neutral. RAMification shows about a 1% progression (even in some of the JIT tests).

* heap/AlignedMemoryAllocator.h:
* heap/CompleteSubspace.cpp:
(JSC::CompleteSubspace::tryAllocateSlow):
(JSC::CompleteSubspace::reallocateLargeAllocationNonVirtual):
* heap/CompleteSubspace.h:
* heap/FastMallocAlignedMemoryAllocator.cpp:
(JSC::FastMallocAlignedMemoryAllocator::tryAllocateMemory):
(JSC::FastMallocAlignedMemoryAllocator::freeMemory):
(JSC::FastMallocAlignedMemoryAllocator::tryReallocateMemory):
* heap/FastMallocAlignedMemoryAllocator.h:
* heap/GigacageAlignedMemoryAllocator.cpp:
(JSC::GigacageAlignedMemoryAllocator::tryAllocateMemory):
(JSC::GigacageAlignedMemoryAllocator::freeMemory):
(JSC::GigacageAlignedMemoryAllocator::tryReallocateMemory):
* heap/GigacageAlignedMemoryAllocator.h:
* heap/IsoAlignedMemoryAllocator.cpp:
(JSC::IsoAlignedMemoryAllocator::tryAllocateMemory):
(JSC::IsoAlignedMemoryAllocator::freeMemory):
(JSC::IsoAlignedMemoryAllocator::tryReallocateMemory):
* heap/IsoAlignedMemoryAllocator.h:
* heap/LargeAllocation.cpp:
(JSC::isAlignedForLargeAllocation):
(JSC::LargeAllocation::tryCreate):
(JSC::LargeAllocation::tryReallocate):
(JSC::LargeAllocation::LargeAllocation):
(JSC::LargeAllocation::destroy):
* heap/LargeAllocation.h:
(JSC::LargeAllocation::indexInSpace):
(JSC::LargeAllocation::setIndexInSpace):
(JSC::LargeAllocation::basePointer const):
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::sweepLargeAllocations):
(JSC::MarkedSpace::prepareForConservativeScan):
* heap/WeakSet.h:
(JSC::WeakSet::isTriviallyDestructible const):
* runtime/Butterfly.h:
* runtime/ButterflyInlines.h:
(JSC::Butterfly::reallocArrayRightIfPossible):
* runtime/JSObject.cpp:
(JSC::JSObject::ensureLengthSlow):

Source/WTF:

* wtf/FastMalloc.h:
(WTF::FastMalloc::tryRealloc):
* wtf/Gigacage.cpp:
(Gigacage::tryRealloc):
* wtf/Gigacage.h:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@243688 268f45cc-cd09-0410-ab3c-d52691b4dbfc

21 files changed:
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/heap/AlignedMemoryAllocator.h
Source/JavaScriptCore/heap/CompleteSubspace.cpp
Source/JavaScriptCore/heap/CompleteSubspace.h
Source/JavaScriptCore/heap/FastMallocAlignedMemoryAllocator.cpp
Source/JavaScriptCore/heap/FastMallocAlignedMemoryAllocator.h
Source/JavaScriptCore/heap/GigacageAlignedMemoryAllocator.cpp
Source/JavaScriptCore/heap/GigacageAlignedMemoryAllocator.h
Source/JavaScriptCore/heap/IsoAlignedMemoryAllocator.cpp
Source/JavaScriptCore/heap/IsoAlignedMemoryAllocator.h
Source/JavaScriptCore/heap/LargeAllocation.cpp
Source/JavaScriptCore/heap/LargeAllocation.h
Source/JavaScriptCore/heap/MarkedSpace.cpp
Source/JavaScriptCore/heap/WeakSet.h
Source/JavaScriptCore/runtime/Butterfly.h
Source/JavaScriptCore/runtime/ButterflyInlines.h
Source/JavaScriptCore/runtime/JSObject.cpp
Source/WTF/ChangeLog
Source/WTF/wtf/FastMalloc.h
Source/WTF/wtf/Gigacage.cpp
Source/WTF/wtf/Gigacage.h

diff --git a/Source/JavaScriptCore/ChangeLog b/Source/JavaScriptCore/ChangeLog
index 3b151b0..e52c7ec 100644
@@ -1,3 +1,76 @@
+2019-03-31  Yusuke Suzuki  <ysuzuki@apple.com>
+
+        [JSC] Butterfly allocation from LargeAllocation should try "realloc" behavior if collector thread is not active
+        https://bugs.webkit.org/show_bug.cgi?id=196160
+
+        Reviewed by Saam Barati.
+
+        "realloc" can reduce the peak and current memory footprint when it succeeds because:
+
+        1. It does not allocate additional memory while expanding a vector
+        2. It does not deallocate the old memory; it expands the current allocation in place, so the memory footprint stays tight even before scavenging
+
+        We found that we can "realloc" large butterflies when certain conditions are met because:
+
+        1. If the allocation goes to LargeAllocation, its memory region is never reused until the GC sweeps it.
+        2. Butterflies are owned by their owner JSObjects, so we know the lifetime of butterflies.
+
+        This patch attempts to use "realloc" on butterflies if:
+
+        1. The butterfly is allocated in the LargeAllocation kind
+        2. The concurrent collector is not active
+        3. The butterfly does not have property storage
+
+        Condition (2) is required to avoid deallocating a butterfly while the concurrent collector is scanning it. Condition (3) is
+        required to avoid deallocating a butterfly while a concurrent compiler is reading its property storage.
+
+        We also change the LargeAllocation mechanism to use "malloc" and "free" instead of "posix_memalign". This allows us to use "realloc"
+        safely on all platforms. Since LargeAllocation uses pointer alignment to distinguish a LargeAllocation from a MarkedBlock, we manually
+        establish 16B alignment by allocating 8B of extra memory in "malloc" and shifting the result when needed.
+
+        Speedometer2 and JetStream2 are neutral. RAMification shows about a 1% progression (even in some of the JIT tests).
+
+        * heap/AlignedMemoryAllocator.h:
+        * heap/CompleteSubspace.cpp:
+        (JSC::CompleteSubspace::tryAllocateSlow):
+        (JSC::CompleteSubspace::reallocateLargeAllocationNonVirtual):
+        * heap/CompleteSubspace.h:
+        * heap/FastMallocAlignedMemoryAllocator.cpp:
+        (JSC::FastMallocAlignedMemoryAllocator::tryAllocateMemory):
+        (JSC::FastMallocAlignedMemoryAllocator::freeMemory):
+        (JSC::FastMallocAlignedMemoryAllocator::tryReallocateMemory):
+        * heap/FastMallocAlignedMemoryAllocator.h:
+        * heap/GigacageAlignedMemoryAllocator.cpp:
+        (JSC::GigacageAlignedMemoryAllocator::tryAllocateMemory):
+        (JSC::GigacageAlignedMemoryAllocator::freeMemory):
+        (JSC::GigacageAlignedMemoryAllocator::tryReallocateMemory):
+        * heap/GigacageAlignedMemoryAllocator.h:
+        * heap/IsoAlignedMemoryAllocator.cpp:
+        (JSC::IsoAlignedMemoryAllocator::tryAllocateMemory):
+        (JSC::IsoAlignedMemoryAllocator::freeMemory):
+        (JSC::IsoAlignedMemoryAllocator::tryReallocateMemory):
+        * heap/IsoAlignedMemoryAllocator.h:
+        * heap/LargeAllocation.cpp:
+        (JSC::isAlignedForLargeAllocation):
+        (JSC::LargeAllocation::tryCreate):
+        (JSC::LargeAllocation::tryReallocate):
+        (JSC::LargeAllocation::LargeAllocation):
+        (JSC::LargeAllocation::destroy):
+        * heap/LargeAllocation.h:
+        (JSC::LargeAllocation::indexInSpace):
+        (JSC::LargeAllocation::setIndexInSpace):
+        (JSC::LargeAllocation::basePointer const):
+        * heap/MarkedSpace.cpp:
+        (JSC::MarkedSpace::sweepLargeAllocations):
+        (JSC::MarkedSpace::prepareForConservativeScan):
+        * heap/WeakSet.h:
+        (JSC::WeakSet::isTriviallyDestructible const):
+        * runtime/Butterfly.h:
+        * runtime/ButterflyInlines.h:
+        (JSC::Butterfly::reallocArrayRightIfPossible):
+        * runtime/JSObject.cpp:
+        (JSC::JSObject::ensureLengthSlow):
+
 2019-03-31  Sam Weinig  <weinig@apple.com>
 
         Remove more i386 specific configurations
diff --git a/Source/JavaScriptCore/heap/AlignedMemoryAllocator.h b/Source/JavaScriptCore/heap/AlignedMemoryAllocator.h
index 47bf001..66442ad 100644
@@ -50,6 +50,12 @@ public:
 
     void registerSubspace(Subspace*);
 
+    // Some derived memory allocators do not implement these functions because they never use them.
+    // For example, IsoAlignedMemoryAllocator does not support "realloc" since it never extends / shrinks the allocated memory region.
+    virtual void* tryAllocateMemory(size_t) = 0;
+    virtual void freeMemory(void*) = 0;
+    virtual void* tryReallocateMemory(void*, size_t) = 0;
+
 private:
     SinglyLinkedListWithTail<BlockDirectory> m_directories;
     SinglyLinkedListWithTail<Subspace> m_subspaces;
diff --git a/Source/JavaScriptCore/heap/CompleteSubspace.cpp b/Source/JavaScriptCore/heap/CompleteSubspace.cpp
index 6df63e1..22d7fd6 100644
@@ -140,11 +140,12 @@ void* CompleteSubspace::tryAllocateSlow(VM& vm, size_t size, GCDeferralContext*
     vm.heap.collectIfNecessaryOrDefer(deferralContext);
     
     size = WTF::roundUpToMultipleOf<MarkedSpace::sizeStep>(size);
-    LargeAllocation* allocation = LargeAllocation::tryCreate(vm.heap, size, this);
+    LargeAllocation* allocation = LargeAllocation::tryCreate(vm.heap, size, this, m_space.m_largeAllocations.size());
     if (!allocation)
         return nullptr;
     
     m_space.m_largeAllocations.append(allocation);
+    ASSERT(allocation->indexInSpace() == m_space.m_largeAllocations.size() - 1);
     vm.heap.didAllocate(size);
     m_space.m_capacity += size;
     
@@ -153,5 +154,54 @@ void* CompleteSubspace::tryAllocateSlow(VM& vm, size_t size, GCDeferralContext*
     return allocation->cell();
 }
 
+void* CompleteSubspace::reallocateLargeAllocationNonVirtual(VM& vm, HeapCell* oldCell, size_t size, GCDeferralContext* deferralContext, AllocationFailureMode failureMode)
+{
+    if (validateDFGDoesGC)
+        RELEASE_ASSERT(vm.heap.expectDoesGC());
+
+    // The following conditions are met in Butterfly for example.
+    ASSERT(oldCell->isLargeAllocation());
+
+    LargeAllocation* oldAllocation = &oldCell->largeAllocation();
+    ASSERT(oldAllocation->cellSize() <= size);
+    ASSERT(oldAllocation->weakSet().isTriviallyDestructible());
+    ASSERT(oldAllocation->attributes().destruction == DoesNotNeedDestruction);
+    ASSERT(oldAllocation->attributes().cellKind == HeapCell::Auxiliary);
+    ASSERT(size > MarkedSpace::largeCutoff);
+
+    sanitizeStackForVM(&vm);
+
+    if (size <= Options::largeAllocationCutoff()
+        && size <= MarkedSpace::largeCutoff) {
+        dataLog("FATAL: attempting to allocate small object using large allocation.\n");
+        dataLog("Requested allocation size: ", size, "\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+
+    vm.heap.collectIfNecessaryOrDefer(deferralContext);
+
+    size = WTF::roundUpToMultipleOf<MarkedSpace::sizeStep>(size);
+    size_t difference = size - oldAllocation->cellSize();
+    unsigned oldIndexInSpace = oldAllocation->indexInSpace();
+    if (oldAllocation->isOnList())
+        oldAllocation->remove();
+
+    LargeAllocation* allocation = oldAllocation->tryReallocate(size, this);
+    if (!allocation) {
+        RELEASE_ASSERT(failureMode != AllocationFailureMode::Assert);
+        m_largeAllocations.append(oldAllocation);
+        return nullptr;
+    }
+    ASSERT(oldIndexInSpace == allocation->indexInSpace());
+
+    m_space.m_largeAllocations[oldIndexInSpace] = allocation;
+    vm.heap.didAllocate(difference);
+    m_space.m_capacity += difference;
+
+    m_largeAllocations.append(allocation);
+
+    return allocation->cell();
+}
+
 } // namespace JSC
 
diff --git a/Source/JavaScriptCore/heap/CompleteSubspace.h b/Source/JavaScriptCore/heap/CompleteSubspace.h
index 28ae34d..070f65a 100644
@@ -44,6 +44,7 @@ public:
     
     void* allocate(VM&, size_t, GCDeferralContext*, AllocationFailureMode) override;
     void* allocateNonVirtual(VM&, size_t, GCDeferralContext*, AllocationFailureMode);
+    void* reallocateLargeAllocationNonVirtual(VM&, HeapCell*, size_t, GCDeferralContext*, AllocationFailureMode);
     
     static ptrdiff_t offsetOfAllocatorForSizeStep() { return OBJECT_OFFSETOF(CompleteSubspace, m_allocatorForSizeStep); }
     
diff --git a/Source/JavaScriptCore/heap/FastMallocAlignedMemoryAllocator.cpp b/Source/JavaScriptCore/heap/FastMallocAlignedMemoryAllocator.cpp
index 45f6bb7..cee66b0 100644
@@ -54,5 +54,20 @@ void FastMallocAlignedMemoryAllocator::dump(PrintStream& out) const
     out.print("FastMalloc");
 }
 
+void* FastMallocAlignedMemoryAllocator::tryAllocateMemory(size_t size)
+{
+    return FastMalloc::tryMalloc(size);
+}
+
+void FastMallocAlignedMemoryAllocator::freeMemory(void* pointer)
+{
+    FastMalloc::free(pointer);
+}
+
+void* FastMallocAlignedMemoryAllocator::tryReallocateMemory(void* pointer, size_t size)
+{
+    return FastMalloc::tryRealloc(pointer, size);
+}
+
 } // namespace JSC
 
diff --git a/Source/JavaScriptCore/heap/FastMallocAlignedMemoryAllocator.h b/Source/JavaScriptCore/heap/FastMallocAlignedMemoryAllocator.h
index bdd57b7..cfa770b 100644
@@ -38,6 +38,10 @@ public:
     void freeAlignedMemory(void*) override;
     
     void dump(PrintStream&) const override;
+
+    void* tryAllocateMemory(size_t) override;
+    void freeMemory(void*) override;
+    void* tryReallocateMemory(void*, size_t) override;
 };
 
 } // namespace JSC
diff --git a/Source/JavaScriptCore/heap/GigacageAlignedMemoryAllocator.cpp b/Source/JavaScriptCore/heap/GigacageAlignedMemoryAllocator.cpp
index d6796e6..8a1f636 100644
@@ -52,5 +52,20 @@ void GigacageAlignedMemoryAllocator::dump(PrintStream& out) const
     out.print(Gigacage::name(m_kind), "Gigacage");
 }
 
+void* GigacageAlignedMemoryAllocator::tryAllocateMemory(size_t size)
+{
+    return Gigacage::tryMalloc(m_kind, size);
+}
+
+void GigacageAlignedMemoryAllocator::freeMemory(void* pointer)
+{
+    Gigacage::free(m_kind, pointer);
+}
+
+void* GigacageAlignedMemoryAllocator::tryReallocateMemory(void* pointer, size_t size)
+{
+    return Gigacage::tryRealloc(m_kind, pointer, size);
+}
+
 } // namespace JSC
 
diff --git a/Source/JavaScriptCore/heap/GigacageAlignedMemoryAllocator.h b/Source/JavaScriptCore/heap/GigacageAlignedMemoryAllocator.h
index 4d119a9..129d008 100644
@@ -40,6 +40,10 @@ public:
     
     void dump(PrintStream&) const override;
 
+    void* tryAllocateMemory(size_t) override;
+    void freeMemory(void*) override;
+    void* tryReallocateMemory(void*, size_t) override;
+
 private:
     Gigacage::Kind m_kind;
 };
diff --git a/Source/JavaScriptCore/heap/IsoAlignedMemoryAllocator.cpp b/Source/JavaScriptCore/heap/IsoAlignedMemoryAllocator.cpp
index abddece..1a8d957 100644
@@ -88,5 +88,20 @@ void IsoAlignedMemoryAllocator::dump(PrintStream& out) const
     out.print("Iso(", RawPointer(this), ")");
 }
 
+void* IsoAlignedMemoryAllocator::tryAllocateMemory(size_t)
+{
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
+void IsoAlignedMemoryAllocator::freeMemory(void*)
+{
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
+void* IsoAlignedMemoryAllocator::tryReallocateMemory(void*, size_t)
+{
+    RELEASE_ASSERT_NOT_REACHED();
+}
+
 } // namespace JSC
 
diff --git a/Source/JavaScriptCore/heap/IsoAlignedMemoryAllocator.h b/Source/JavaScriptCore/heap/IsoAlignedMemoryAllocator.h
index 336e9b8..2651658 100644
@@ -39,6 +39,10 @@ public:
 
     void dump(PrintStream&) const override;
 
+    void* tryAllocateMemory(size_t) override;
+    void freeMemory(void*) override;
+    void* tryReallocateMemory(void*, size_t) override;
+
 private:
     Vector<void*> m_blocks;
     HashMap<void*, unsigned> m_blockIndices;
diff --git a/Source/JavaScriptCore/heap/LargeAllocation.cpp b/Source/JavaScriptCore/heap/LargeAllocation.cpp
index 8cf62a4..bda5808 100644
 
 namespace JSC {
 
-LargeAllocation* LargeAllocation::tryCreate(Heap& heap, size_t size, Subspace* subspace)
+static inline bool isAlignedForLargeAllocation(void* memory)
+{
+    uintptr_t allocatedPointer = bitwise_cast<uintptr_t>(memory);
+    return !(allocatedPointer & (LargeAllocation::alignment - 1));
+}
+
+LargeAllocation* LargeAllocation::tryCreate(Heap& heap, size_t size, Subspace* subspace, unsigned indexInSpace)
 {
     if (validateDFGDoesGC)
         RELEASE_ASSERT(heap.expectDoesGC());
 
-    size_t allocationSize = headerSize() + size;
+    size_t adjustedAlignmentAllocationSize = headerSize() + size + halfAlignment;
+    static_assert(halfAlignment == 8, "We assume that memory returned by malloc has alignment >= 8.");
     
-    void* space = subspace->alignedMemoryAllocator()->tryAllocateAlignedMemory(alignment, allocationSize);
+    // We must use tryAllocateMemory instead of tryAllocateAlignedMemory since we want to use the "realloc" feature.
+    void* space = subspace->alignedMemoryAllocator()->tryAllocateMemory(adjustedAlignmentAllocationSize);
     if (!space)
         return nullptr;
+
+    bool adjustedAlignment = false;
+    if (!isAlignedForLargeAllocation(space)) {
+        space = bitwise_cast<void*>(bitwise_cast<uintptr_t>(space) + halfAlignment);
+        adjustedAlignment = true;
+        ASSERT(isAlignedForLargeAllocation(space));
+    }
     
     if (scribbleFreeCells())
         scribble(space, size);
-    return new (NotNull, space) LargeAllocation(heap, size, subspace);
+    return new (NotNull, space) LargeAllocation(heap, size, subspace, indexInSpace, adjustedAlignment);
+}
+
+LargeAllocation* LargeAllocation::tryReallocate(size_t size, Subspace* subspace)
+{
+    size_t adjustedAlignmentAllocationSize = headerSize() + size + halfAlignment;
+    static_assert(halfAlignment == 8, "We assume that memory returned by malloc has alignment >= 8.");
+
+    ASSERT(subspace == m_subspace);
+
+    unsigned oldCellSize = m_cellSize;
+    bool oldAdjustedAlignment = m_adjustedAlignment;
+    void* oldBasePointer = basePointer();
+
+    void* newBasePointer = subspace->alignedMemoryAllocator()->tryReallocateMemory(oldBasePointer, adjustedAlignmentAllocationSize);
+    if (!newBasePointer)
+        return nullptr;
+
+    LargeAllocation* newAllocation = bitwise_cast<LargeAllocation*>(newBasePointer);
+    bool newAdjustedAlignment = false;
+    if (!isAlignedForLargeAllocation(newBasePointer)) {
+        newAdjustedAlignment = true;
+        newAllocation = bitwise_cast<LargeAllocation*>(bitwise_cast<uintptr_t>(newBasePointer) + halfAlignment);
+        ASSERT(isAlignedForLargeAllocation(static_cast<void*>(newAllocation)));
+    }
+
+    // We have 4 patterns.
+    // oldAdjustedAlignment = true  newAdjustedAlignment = true  => Do nothing.
+    // oldAdjustedAlignment = true  newAdjustedAlignment = false => Shift forward by halfAlignment
+    // oldAdjustedAlignment = false newAdjustedAlignment = true  => Shift backward by halfAlignment
+    // oldAdjustedAlignment = false newAdjustedAlignment = false => Do nothing.
+
+    if (oldAdjustedAlignment != newAdjustedAlignment) {
+        if (oldAdjustedAlignment) {
+            ASSERT(!newAdjustedAlignment);
+            ASSERT(newAllocation == newBasePointer);
+            // Old   [ 8 ][  content  ]
+            // Now   [   ][  content  ]
+            // New   [  content  ]...
+            memmove(newBasePointer, bitwise_cast<char*>(newBasePointer) + halfAlignment, oldCellSize + LargeAllocation::headerSize());
+        } else {
+            ASSERT(newAdjustedAlignment);
+            ASSERT(newAllocation != newBasePointer);
+            ASSERT(newAllocation == bitwise_cast<void*>(bitwise_cast<char*>(newBasePointer) + halfAlignment));
+            // Old   [  content  ]
+            // Now   [  content  ][   ]
+            // New   [ 8 ][  content  ]
+            memmove(bitwise_cast<char*>(newBasePointer) + halfAlignment, newBasePointer, oldCellSize + LargeAllocation::headerSize());
+        }
+    }
+
+    newAllocation->m_cellSize = size;
+    newAllocation->m_adjustedAlignment = newAdjustedAlignment;
+    return newAllocation;
 }
 
-LargeAllocation::LargeAllocation(Heap& heap, size_t size, Subspace* subspace)
+LargeAllocation::LargeAllocation(Heap& heap, size_t size, Subspace* subspace, unsigned indexInSpace, bool adjustedAlignment)
     : m_cellSize(size)
+    , m_indexInSpace(indexInSpace)
     , m_isNewlyAllocated(true)
     , m_hasValidCell(true)
+    , m_adjustedAlignment(adjustedAlignment)
     , m_attributes(subspace->attributes())
     , m_subspace(subspace)
     , m_weakSet(heap.vm(), *this)
@@ -115,8 +185,9 @@ void LargeAllocation::sweep()
 void LargeAllocation::destroy()
 {
     AlignedMemoryAllocator* allocator = m_subspace->alignedMemoryAllocator();
+    void* basePointer = this->basePointer();
     this->~LargeAllocation();
-    allocator->freeAlignedMemory(this);
+    allocator->freeMemory(basePointer);
 }
 
 void LargeAllocation::dump(PrintStream& out) const
diff --git a/Source/JavaScriptCore/heap/LargeAllocation.h b/Source/JavaScriptCore/heap/LargeAllocation.h
index fe943f9..231cb21 100644
@@ -39,7 +39,9 @@ class SlotVisitor;
 
 class LargeAllocation : public BasicRawSentinelNode<LargeAllocation> {
 public:
-    static LargeAllocation* tryCreate(Heap&, size_t, Subspace*);
+    static LargeAllocation* tryCreate(Heap&, size_t, Subspace*, unsigned indexInSpace);
+
+    LargeAllocation* tryReallocate(size_t, Subspace*);
     
     ~LargeAllocation();
     
@@ -65,6 +67,9 @@ public:
     Heap* heap() const { return m_weakSet.heap(); }
     VM* vm() const { return m_weakSet.vm(); }
     WeakSet& weakSet() { return m_weakSet; }
+
+    unsigned indexInSpace() { return m_indexInSpace; }
+    void setIndexInSpace(unsigned indexInSpace) { m_indexInSpace = indexInSpace; }
     
     void shrink();
     
@@ -140,17 +145,21 @@ public:
     
     void dump(PrintStream&) const;
     
-private:
-    LargeAllocation(Heap&, size_t, Subspace*);
-    
     static const unsigned alignment = MarkedBlock::atomSize;
     static const unsigned halfAlignment = alignment / 2;
 
+private:
+    LargeAllocation(Heap&, size_t, Subspace*, unsigned indexInSpace, bool adjustedAlignment);
+    
     static unsigned headerSize();
+
+    void* basePointer() const;
     
     size_t m_cellSize;
-    bool m_isNewlyAllocated;
-    bool m_hasValidCell;
+    unsigned m_indexInSpace { 0 };
+    bool m_isNewlyAllocated : 1;
+    bool m_hasValidCell : 1;
+    bool m_adjustedAlignment : 1;
     Atomic<bool> m_isMarked;
     CellAttributes m_attributes;
     Subspace* m_subspace;
@@ -162,5 +171,12 @@ inline unsigned LargeAllocation::headerSize()
     return ((sizeof(LargeAllocation) + halfAlignment - 1) & ~(halfAlignment - 1)) | halfAlignment;
 }
 
+inline void* LargeAllocation::basePointer() const
+{
+    if (m_adjustedAlignment)
+        return bitwise_cast<char*>(this) - halfAlignment;
+    return bitwise_cast<void*>(this);
+}
+
 } // namespace JSC
 
diff --git a/Source/JavaScriptCore/heap/MarkedSpace.cpp b/Source/JavaScriptCore/heap/MarkedSpace.cpp
index 16be50f..7ebac36 100644
@@ -250,6 +250,7 @@ void MarkedSpace::sweepLargeAllocations()
             allocation->destroy();
             continue;
         }
+        allocation->setIndexInSpace(dstIndex);
         m_largeAllocations[dstIndex++] = allocation;
     }
     m_largeAllocations.shrink(dstIndex);
@@ -327,6 +328,12 @@ void MarkedSpace::prepareForConservativeScan()
         [&] (LargeAllocation* a, LargeAllocation* b) {
             return a < b;
         });
+    unsigned index = m_largeAllocationsOffsetForThisCollection;
+    for (auto* start = m_largeAllocationsForThisCollectionBegin; start != m_largeAllocationsForThisCollectionEnd; ++start, ++index) {
+        (*start)->setIndexInSpace(index);
+        ASSERT(m_largeAllocations[index] == *start);
+        ASSERT(m_largeAllocations[index]->indexInSpace() == index);
+    }
 }
 
 void MarkedSpace::prepareForMarking()
diff --git a/Source/JavaScriptCore/heap/WeakSet.h b/Source/JavaScriptCore/heap/WeakSet.h
index ddcf743..b080203 100644
@@ -52,6 +52,7 @@ public:
     VM* vm() const;
 
     bool isEmpty() const;
+    bool isTriviallyDestructible() const;
 
     void visit(SlotVisitor&);
 
@@ -96,6 +97,15 @@ inline bool WeakSet::isEmpty() const
     return true;
 }
 
+inline bool WeakSet::isTriviallyDestructible() const
+{
+    if (!m_blocks.isEmpty())
+        return false;
+    if (isOnList())
+        return false;
+    return true;
+}
+
 inline void WeakSet::deallocate(WeakImpl* weakImpl)
 {
     weakImpl->setState(WeakImpl::Deallocated);
diff --git a/Source/JavaScriptCore/runtime/Butterfly.h b/Source/JavaScriptCore/runtime/Butterfly.h
index 50ff8f0..2eb23bc 100644
@@ -222,6 +222,9 @@ public:
     static Butterfly* createOrGrowPropertyStorage(Butterfly*, VM&, JSObject* intendedOwner, Structure*, size_t oldPropertyCapacity, size_t newPropertyCapacity);
     Butterfly* growArrayRight(VM&, JSObject* intendedOwner, Structure* oldStructure, size_t propertyCapacity, bool hadIndexingHeader, size_t oldIndexingPayloadSizeInBytes, size_t newIndexingPayloadSizeInBytes); // Assumes that preCapacity is zero, and asserts as much.
     Butterfly* growArrayRight(VM&, JSObject* intendedOwner, Structure*, size_t newIndexingPayloadSizeInBytes);
+
+    Butterfly* reallocArrayRightIfPossible(VM&, GCDeferralContext&, JSObject* intendedOwner, Structure* oldStructure, size_t propertyCapacity, bool hadIndexingHeader, size_t oldIndexingPayloadSizeInBytes, size_t newIndexingPayloadSizeInBytes); // Assumes that preCapacity is zero, and asserts as much.
+
     Butterfly* resizeArray(VM&, JSObject* intendedOwner, size_t propertyCapacity, bool oldHasIndexingHeader, size_t oldIndexingPayloadSizeInBytes, size_t newPreCapacity, bool newHasIndexingHeader, size_t newIndexingPayloadSizeInBytes);
     Butterfly* resizeArray(VM&, JSObject* intendedOwner, Structure*, size_t newPreCapacity, size_t newIndexingPayloadSizeInBytes); // Assumes that you're not changing whether or not the object has an indexing header.
     Butterfly* unshift(Structure*, size_t numberOfSlots);
diff --git a/Source/JavaScriptCore/runtime/ButterflyInlines.h b/Source/JavaScriptCore/runtime/ButterflyInlines.h
index 03a5118..8960df0 100644
@@ -194,6 +194,37 @@ inline Butterfly* Butterfly::growArrayRight(
         newIndexingPayloadSizeInBytes);
 }
 
+inline Butterfly* Butterfly::reallocArrayRightIfPossible(
+    VM& vm, GCDeferralContext& deferralContext, JSObject* intendedOwner, Structure* oldStructure, size_t propertyCapacity,
+    bool hadIndexingHeader, size_t oldIndexingPayloadSizeInBytes,
+    size_t newIndexingPayloadSizeInBytes)
+{
+    ASSERT_UNUSED(oldStructure, !indexingHeader()->preCapacity(oldStructure));
+    ASSERT_UNUSED(intendedOwner, hadIndexingHeader == oldStructure->hasIndexingHeader(intendedOwner));
+
+    void* theBase = base(0, propertyCapacity);
+    size_t oldSize = totalSize(0, propertyCapacity, hadIndexingHeader, oldIndexingPayloadSizeInBytes);
+    size_t newSize = totalSize(0, propertyCapacity, true, newIndexingPayloadSizeInBytes);
+    ASSERT(newSize >= oldSize);
+
+    // We can eagerly destroy a butterfly backed by LargeAllocation if (1) the concurrent collector is not active and (2) the butterfly does not contain any property storage.
+    // This is because, during deallocation, the concurrent collector can access the butterfly, and the concurrent DFG compilers access its property storage.
+    // Objects with no properties are common in arrays, and we are focusing on very large arrays crafted by repeated Array#push, so... that's fine!
+    bool canRealloc = !propertyCapacity && !vm.heap.mutatorShouldBeFenced() && bitwise_cast<HeapCell*>(theBase)->isLargeAllocation();
+    if (canRealloc) {
+        void* newBase = vm.jsValueGigacageAuxiliarySpace.reallocateLargeAllocationNonVirtual(vm, bitwise_cast<HeapCell*>(theBase), newSize, &deferralContext, AllocationFailureMode::ReturnNull);
+        if (!newBase)
+            return nullptr;
+        return fromBase(newBase, 0, propertyCapacity);
+    }
+
+    void* newBase = vm.jsValueGigacageAuxiliarySpace.allocateNonVirtual(vm, newSize, &deferralContext, AllocationFailureMode::ReturnNull);
+    if (!newBase)
+        return nullptr;
+    memcpy(newBase, theBase, oldSize);
+    return fromBase(newBase, 0, propertyCapacity);
+}
+
 inline Butterfly* Butterfly::resizeArray(
     VM& vm, JSObject* intendedOwner, size_t propertyCapacity, bool oldHasIndexingHeader,
     size_t oldIndexingPayloadSizeInBytes, size_t newPreCapacity, bool newHasIndexingHeader,
diff --git a/Source/JavaScriptCore/runtime/JSObject.cpp b/Source/JavaScriptCore/runtime/JSObject.cpp
index ecd7b84..8d3a0e7 100644
@@ -30,6 +30,7 @@
 #include "DatePrototype.h"
 #include "ErrorConstructor.h"
 #include "Exception.h"
+#include "GCDeferralContextInlines.h"
 #include "GetterSetter.h"
 #include "HeapSnapshotBuilder.h"
 #include "IndexingHeaderInlines.h"
@@ -3357,6 +3358,8 @@ bool JSObject::ensureLengthSlow(VM& vm, unsigned length)
     Structure* structure = this->structure(vm);
     unsigned propertyCapacity = structure->outOfLineCapacity();
     
+    GCDeferralContext deferralContext(vm.heap);
+    DisallowGC disallowGC;
     unsigned availableOldLength =
         Butterfly::availableContiguousVectorLength(propertyCapacity, oldVectorLength);
     Butterfly* newButterfly = nullptr;
@@ -3368,8 +3371,8 @@ bool JSObject::ensureLengthSlow(VM& vm, unsigned length)
     } else {
         newVectorLength = Butterfly::optimalContiguousVectorLength(
             propertyCapacity, std::min(length * 2, MAX_STORAGE_VECTOR_LENGTH));
-        butterfly = butterfly->growArrayRight(
-            vm, this, structure, propertyCapacity, true,
+        butterfly = butterfly->reallocArrayRightIfPossible(
+            vm, deferralContext, this, structure, propertyCapacity, true,
             oldVectorLength * sizeof(EncodedJSValue),
             newVectorLength * sizeof(EncodedJSValue));
         if (!butterfly)
diff --git a/Source/WTF/ChangeLog b/Source/WTF/ChangeLog
index f4b33b0..a5cad88 100644
@@ -1,3 +1,16 @@
+2019-03-31  Yusuke Suzuki  <ysuzuki@apple.com>
+
+        [JSC] Butterfly allocation from LargeAllocation should try "realloc" behavior if collector thread is not active
+        https://bugs.webkit.org/show_bug.cgi?id=196160
+
+        Reviewed by Saam Barati.
+
+        * wtf/FastMalloc.h:
+        (WTF::FastMalloc::tryRealloc):
+        * wtf/Gigacage.cpp:
+        (Gigacage::tryRealloc):
+        * wtf/Gigacage.h:
+
 2019-03-31  Andy Estes  <aestes@apple.com>
 
         [iOS] WebKit should consult the navigation response policy delegate before previewing a QuickLook document
diff --git a/Source/WTF/wtf/FastMalloc.h b/Source/WTF/wtf/FastMalloc.h
index b796817..efefb3a 100644
@@ -201,6 +201,15 @@ struct FastMalloc {
     }
     
     static void* realloc(void* p, size_t size) { return fastRealloc(p, size); }
+
+    static void* tryRealloc(void* p, size_t size)
+    {
+        auto result = tryFastRealloc(p, size);
+        void* realResult;
+        if (result.getValue(realResult))
+            return realResult;
+        return nullptr;
+    }
     
     static void free(void* p) { fastFree(p); }
 };
diff --git a/Source/WTF/wtf/Gigacage.cpp b/Source/WTF/wtf/Gigacage.cpp
index f7c2a5d..1b10e33 100644
@@ -41,6 +41,11 @@ void* tryMalloc(Kind, size_t size)
     return FastMalloc::tryMalloc(size);
 }
 
+void* tryRealloc(Kind, void* pointer, size_t size)
+{
+    return FastMalloc::tryRealloc(pointer, size);
+}
+
 void* tryAllocateZeroedVirtualPages(Kind, size_t requestedSize)
 {
     size_t size = roundUpToMultipleOf(WTF::pageSize(), requestedSize);
@@ -93,6 +98,13 @@ void* tryMalloc(Kind kind, size_t size)
     return result;
 }
 
+void* tryRealloc(Kind kind, void* pointer, size_t size)
+{
+    void* result = bmalloc::api::tryRealloc(pointer, size, bmalloc::heapKind(kind));
+    WTF::compilerFence();
+    return result;
+}
+
 void free(Kind kind, void* p)
 {
     if (!p)
diff --git a/Source/WTF/wtf/Gigacage.h b/Source/WTF/wtf/Gigacage.h
index 2b86fea..233f56e 100644
@@ -120,6 +120,7 @@ inline bool isCaged(Kind, const void*) { return false; }
 inline void* tryAlignedMalloc(Kind, size_t alignment, size_t size) { return tryFastAlignedMalloc(alignment, size); }
 inline void alignedFree(Kind, void* p) { fastAlignedFree(p); }
 WTF_EXPORT_PRIVATE void* tryMalloc(Kind, size_t size);
+WTF_EXPORT_PRIVATE void* tryRealloc(Kind, void*, size_t);
 inline void free(Kind, void* p) { fastFree(p); }
 
 WTF_EXPORT_PRIVATE void* tryAllocateZeroedVirtualPages(Kind, size_t size);
@@ -134,6 +135,7 @@ namespace Gigacage {
 WTF_EXPORT_PRIVATE void* tryAlignedMalloc(Kind, size_t alignment, size_t size);
 WTF_EXPORT_PRIVATE void alignedFree(Kind, void*);
 WTF_EXPORT_PRIVATE void* tryMalloc(Kind, size_t);
+WTF_EXPORT_PRIVATE void* tryRealloc(Kind, void*, size_t);
 WTF_EXPORT_PRIVATE void free(Kind, void*);
 
 WTF_EXPORT_PRIVATE void* tryAllocateZeroedVirtualPages(Kind, size_t size);