Bmalloc and GC should put auxiliaries (butterflies, typed array backing stores) in a gigacage (separate multi-GB VM region)
author fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 2 Aug 2017 01:50:16 +0000 (01:50 +0000)
committer fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 2 Aug 2017 01:50:16 +0000 (01:50 +0000)
https://bugs.webkit.org/show_bug.cgi?id=174727

Reviewed by Mark Lam.
Source/bmalloc:

This adds a mechanism for managing multiple isolated heaps in bmalloc. For now, these isoheaps
(isolated heaps) have a very simple relationship with each other and with the rest of bmalloc:

- You have to choose how many isoheaps you will have statically. See numHeaps in HeapKind.h.

- Because numHeaps is static, each isoheap gets fast thread-local allocation. Basically, we have a
  Cache for each heap kind.

- Each isoheap gets its own Heap.

- Each Heap gets a scavenger thread.

- Some things, like Zone/VMHeap/Scavenger, are per-process.

Most of the per-HeapKind functionality is handled by PerHeapKind<>.
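
As a rough sketch of the shape of this (simplified; the real PerHeapKind<> in PerHeapKind.h deals
with lazy construction and static storage, so the details differ):

    enum class HeapKind { Primary, Gigacage };
    constexpr unsigned numHeaps = 2;

    template<typename T>
    struct PerHeapKind {
        T& at(HeapKind kind) { return m_data[static_cast<unsigned>(kind)]; }
        T m_data[numHeaps]; // one instance per statically-known heap kind
    };

Because the set of heap kinds is fixed at compile time, each thread can hold a PerHeapKind<Cache>,
so every isoheap keeps the fast thread-local allocation path.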

This approach is ideal for supporting special per-HeapKind behaviors. For now we have two heaps:
the Primary heap for normal malloc and the Gigacage. The Gigacage is a 64GB-aligned 64GB virtual
region that we now use for variable-length random-access allocations. No Primary allocations will
go into the Gigacage.
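
For example, with the HeapKind parameter that this patch adds to the bmalloc API (a sketch; the
default-argument form shown is an assumption):

    #include <bmalloc/bmalloc.h>

    void* allocateAuxiliary(size_t size)
    {
        // Ordinary allocations stay in the Primary heap and never land in the cage...
        void* primary = bmalloc::api::tryMalloc(size);
        bmalloc::api::free(primary);
        // ...while Gigacage allocations always land inside the 64GB region.
        return bmalloc::api::tryMalloc(size, bmalloc::HeapKind::Gigacage);
    }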

* CMakeLists.txt:
* bmalloc.xcodeproj/project.pbxproj:
* bmalloc/AllocationKind.h: Added.
* bmalloc/Allocator.cpp:
(bmalloc::Allocator::Allocator):
(bmalloc::Allocator::tryAllocate):
(bmalloc::Allocator::allocateImpl):
(bmalloc::Allocator::reallocate):
(bmalloc::Allocator::refillAllocatorSlowCase):
(bmalloc::Allocator::allocateLarge):
* bmalloc/Allocator.h:
* bmalloc/BExport.h: Added.
* bmalloc/Cache.cpp:
(bmalloc::Cache::scavenge):
(bmalloc::Cache::Cache):
(bmalloc::Cache::tryAllocateSlowCaseNullCache):
(bmalloc::Cache::allocateSlowCaseNullCache):
(bmalloc::Cache::deallocateSlowCaseNullCache):
(bmalloc::Cache::reallocateSlowCaseNullCache):
(bmalloc::Cache::operator new): Deleted.
(bmalloc::Cache::operator delete): Deleted.
* bmalloc/Cache.h:
(bmalloc::Cache::tryAllocate):
(bmalloc::Cache::allocate):
(bmalloc::Cache::deallocate):
(bmalloc::Cache::reallocate):
* bmalloc/Deallocator.cpp:
(bmalloc::Deallocator::Deallocator):
(bmalloc::Deallocator::scavenge):
(bmalloc::Deallocator::processObjectLog):
(bmalloc::Deallocator::deallocateSlowCase):
* bmalloc/Deallocator.h:
* bmalloc/Gigacage.cpp: Added.
(Gigacage::Callback::Callback):
(Gigacage::Callback::function):
(Gigacage::Callbacks::Callbacks):
(Gigacage::ensureGigacage):
(Gigacage::disableGigacage):
(Gigacage::addDisableCallback):
(Gigacage::removeDisableCallback):
* bmalloc/Gigacage.h: Added.
(Gigacage::caged):
(Gigacage::isCaged):
* bmalloc/Heap.cpp:
(bmalloc::Heap::Heap):
(bmalloc::Heap::usingGigacage):
(bmalloc::Heap::concurrentScavenge):
(bmalloc::Heap::splitAndAllocate):
(bmalloc::Heap::tryAllocateLarge):
(bmalloc::Heap::allocateLarge):
(bmalloc::Heap::shrinkLarge):
(bmalloc::Heap::deallocateLarge):
* bmalloc/Heap.h:
(bmalloc::Heap::mutex):
(bmalloc::Heap::kind const):
(bmalloc::Heap::setScavengerThreadQOSClass): Deleted.
* bmalloc/HeapKind.h: Added.
* bmalloc/ObjectType.cpp:
(bmalloc::objectType):
* bmalloc/ObjectType.h:
* bmalloc/PerHeapKind.h: Added.
(bmalloc::PerHeapKindBase::PerHeapKindBase):
(bmalloc::PerHeapKindBase::size):
(bmalloc::PerHeapKindBase::at):
(bmalloc::PerHeapKindBase::at const):
(bmalloc::PerHeapKindBase::operator[]):
(bmalloc::PerHeapKindBase::operator[] const):
(bmalloc::StaticPerHeapKind::StaticPerHeapKind):
(bmalloc::PerHeapKind::PerHeapKind):
(bmalloc::PerHeapKind::~PerHeapKind):
* bmalloc/PerThread.h:
(bmalloc::PerThread<T>::destructor):
(bmalloc::PerThread<T>::getSlowCase):
(bmalloc::PerThreadStorage<Cache>::get): Deleted.
(bmalloc::PerThreadStorage<Cache>::init): Deleted.
* bmalloc/Scavenger.cpp: Added.
(bmalloc::Scavenger::Scavenger):
(bmalloc::Scavenger::scavenge):
* bmalloc/Scavenger.h: Added.
(bmalloc::Scavenger::setScavengerThreadQOSClass):
(bmalloc::Scavenger::requestedScavengerThreadQOSClass const):
* bmalloc/VMHeap.cpp:
(bmalloc::VMHeap::VMHeap):
(bmalloc::VMHeap::tryAllocateLargeChunk):
* bmalloc/VMHeap.h:
* bmalloc/Zone.cpp:
(bmalloc::Zone::Zone):
* bmalloc/Zone.h:
* bmalloc/bmalloc.h:
(bmalloc::api::tryMalloc):
(bmalloc::api::malloc):
(bmalloc::api::tryMemalign):
(bmalloc::api::memalign):
(bmalloc::api::realloc):
(bmalloc::api::tryLargeMemalignVirtual):
(bmalloc::api::free):
(bmalloc::api::freeLargeVirtual):
(bmalloc::api::scavengeThisThread):
(bmalloc::api::scavenge):
(bmalloc::api::isEnabled):
(bmalloc::api::setScavengerThreadQOSClass):
* bmalloc/mbmalloc.cpp:

Source/JavaScriptCore:

This adopts the Gigacage for the GigacageSubspace, which we use for Auxiliary allocations. Also, in
one place in the code - the FTL codegen for butterfly and typed array access - we "cage" the accesses
themselves. Basically, we do masking to ensure that the pointer points into the gigacage.
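
Roughly, the caging operation is (a sketch built from the names this patch uses, g_gigacageBasePtr
and GIGACAGE_MASK; see LowerDFGToB3::caged in the diff below for the real FTL lowering):

    #include <stdint.h>

    template<typename T>
    T* caged(T* ptr)
    {
        // Keep only the pointer's offset within the 64GB region, then re-base it
        // at the cage's base address, so the result always points into the cage.
        uintptr_t masked = reinterpret_cast<uintptr_t>(ptr) & GIGACAGE_MASK;
        return reinterpret_cast<T*>(reinterpret_cast<uintptr_t>(g_gigacageBasePtr) + masked);
    }

In the FTL we only emit this while the VM's gigacageEnabled watchpoint is still valid; otherwise
the pointer is used unmasked.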

This is neutral on JetStream.

* CMakeLists.txt:
* JavaScriptCore.xcodeproj/project.pbxproj:
* b3/B3InsertionSet.cpp:
(JSC::B3::InsertionSet::execute):
* dfg/DFGAbstractInterpreterInlines.h:
(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
* dfg/DFGArgumentsEliminationPhase.cpp:
* dfg/DFGClobberize.cpp:
(JSC::DFG::readsOverlap):
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGDoesGC.cpp:
(JSC::DFG::doesGC):
* dfg/DFGFixedButterflyAccessUncagingPhase.cpp: Added.
(JSC::DFG::performFixedButterflyAccessUncaging):
* dfg/DFGFixedButterflyAccessUncagingPhase.h: Added.
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::fixupNode):
* dfg/DFGHeapLocation.cpp:
(WTF::printInternal):
* dfg/DFGHeapLocation.h:
* dfg/DFGNodeType.h:
* dfg/DFGPlan.cpp:
(JSC::DFG::Plan::compileInThreadImpl):
* dfg/DFGPredictionPropagationPhase.cpp:
* dfg/DFGSafeToExecute.h:
(JSC::DFG::safeToExecute):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::compileGetButterfly):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGTypeCheckHoistingPhase.cpp:
(JSC::DFG::TypeCheckHoistingPhase::identifyRedundantStructureChecks):
(JSC::DFG::TypeCheckHoistingPhase::identifyRedundantArrayChecks):
* ftl/FTLCapabilities.cpp:
(JSC::FTL::canCompile):
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::compileNode):
(JSC::FTL::DFG::LowerDFGToB3::compileGetButterfly):
(JSC::FTL::DFG::LowerDFGToB3::compileGetIndexedPropertyStorage):
(JSC::FTL::DFG::LowerDFGToB3::compileGetByVal):
(JSC::FTL::DFG::LowerDFGToB3::compileStringCharAt):
(JSC::FTL::DFG::LowerDFGToB3::compileStringCharCodeAt):
(JSC::FTL::DFG::LowerDFGToB3::compileGetMapBucket):
(JSC::FTL::DFG::LowerDFGToB3::compileGetDirectPname):
(JSC::FTL::DFG::LowerDFGToB3::compileToLowerCase):
(JSC::FTL::DFG::LowerDFGToB3::caged):
* heap/GigacageSubspace.cpp: Added.
(JSC::GigacageSubspace::GigacageSubspace):
(JSC::GigacageSubspace::~GigacageSubspace):
(JSC::GigacageSubspace::tryAllocateAlignedMemory):
(JSC::GigacageSubspace::freeAlignedMemory):
(JSC::GigacageSubspace::canTradeBlocksWith):
* heap/GigacageSubspace.h: Added.
* heap/Heap.cpp:
(JSC::Heap::Heap):
(JSC::Heap::lastChanceToFinalize):
(JSC::Heap::finalize):
(JSC::Heap::sweepInFinalize):
(JSC::Heap::updateAllocationLimits):
(JSC::Heap::shouldDoFullCollection):
(JSC::Heap::collectIfNecessaryOrDefer):
(JSC::Heap::reportWebAssemblyFastMemoriesAllocated): Deleted.
(JSC::Heap::webAssemblyFastMemoriesThisCycleAtThreshold const): Deleted.
(JSC::Heap::sweepLargeAllocations): Deleted.
(JSC::Heap::didAllocateWebAssemblyFastMemories): Deleted.
* heap/Heap.h:
* heap/LargeAllocation.cpp:
(JSC::LargeAllocation::tryCreate):
(JSC::LargeAllocation::destroy):
* heap/MarkedAllocator.cpp:
(JSC::MarkedAllocator::tryAllocateWithoutCollecting):
(JSC::MarkedAllocator::tryAllocateBlock):
* heap/MarkedBlock.cpp:
(JSC::MarkedBlock::tryCreate):
(JSC::MarkedBlock::Handle::Handle):
(JSC::MarkedBlock::Handle::~Handle):
(JSC::MarkedBlock::Handle::didAddToAllocator):
(JSC::MarkedBlock::Handle::subspace const): Deleted.
* heap/MarkedBlock.h:
(JSC::MarkedBlock::Handle::subspace const):
* heap/MarkedSpace.cpp:
(JSC::MarkedSpace::~MarkedSpace):
(JSC::MarkedSpace::freeMemory):
(JSC::MarkedSpace::prepareForAllocation):
(JSC::MarkedSpace::addMarkedAllocator):
(JSC::MarkedSpace::findEmptyBlockToSteal): Deleted.
* heap/MarkedSpace.h:
(JSC::MarkedSpace::firstAllocator const):
(JSC::MarkedSpace::allocatorForEmptyAllocation const): Deleted.
* heap/Subspace.cpp:
(JSC::Subspace::Subspace):
(JSC::Subspace::canTradeBlocksWith):
(JSC::Subspace::tryAllocateAlignedMemory):
(JSC::Subspace::freeAlignedMemory):
(JSC::Subspace::prepareForAllocation):
(JSC::Subspace::findEmptyBlockToSteal):
* heap/Subspace.h:
(JSC::Subspace::didCreateFirstAllocator):
* heap/SubspaceInlines.h:
(JSC::Subspace::forEachAllocator):
(JSC::Subspace::forEachMarkedBlock):
(JSC::Subspace::forEachNotEmptyMarkedBlock):
* jit/JITPropertyAccess.cpp:
(JSC::JIT::emitDoubleLoad):
(JSC::JIT::emitContiguousLoad):
(JSC::JIT::emitArrayStorageLoad):
(JSC::JIT::emitGenericContiguousPutByVal):
(JSC::JIT::emitArrayStoragePutByVal):
(JSC::JIT::emit_op_get_from_scope):
(JSC::JIT::emit_op_put_to_scope):
(JSC::JIT::emitIntTypedArrayGetByVal):
(JSC::JIT::emitFloatTypedArrayGetByVal):
(JSC::JIT::emitIntTypedArrayPutByVal):
(JSC::JIT::emitFloatTypedArrayPutByVal):
* jsc.cpp:
(fillBufferWithContentsOfFile):
(functionReadFile):
(gigacageDisabled):
(jscmain):
* llint/LowLevelInterpreter64.asm:
* runtime/ArrayBuffer.cpp:
(JSC::ArrayBufferContents::tryAllocate):
(JSC::ArrayBuffer::createAdopted):
(JSC::ArrayBuffer::createFromBytes):
(JSC::ArrayBuffer::tryCreate):
* runtime/IndexingHeader.h:
* runtime/InitializeThreading.cpp:
(JSC::initializeThreading):
* runtime/JSArrayBuffer.cpp:
* runtime/JSArrayBufferView.cpp:
(JSC::JSArrayBufferView::ConstructionContext::ConstructionContext):
(JSC::JSArrayBufferView::finalize):
* runtime/JSLock.cpp:
(JSC::JSLock::didAcquireLock):
* runtime/JSObject.h:
* runtime/Options.cpp:
(JSC::recomputeDependentOptions):
* runtime/Options.h:
* runtime/ScopedArgumentsTable.h:
* runtime/VM.cpp:
(JSC::VM::VM):
(JSC::VM::~VM):
(JSC::VM::gigacageDisabledCallback):
(JSC::VM::gigacageDisabled):
* runtime/VM.h:
(JSC::VM::fireGigacageEnabledIfNecessary):
(JSC::VM::gigacageEnabled):
* wasm/WasmB3IRGenerator.cpp:
(JSC::Wasm::B3IRGenerator::B3IRGenerator):
(JSC::Wasm::B3IRGenerator::emitCheckAndPreparePointer):
* wasm/WasmCodeBlock.cpp:
(JSC::Wasm::CodeBlock::isSafeToRun):
* wasm/WasmMemory.cpp:
(JSC::Wasm::makeString):
(JSC::Wasm::Memory::create):
(JSC::Wasm::Memory::~Memory):
(JSC::Wasm::Memory::addressIsInActiveFastMemory):
(JSC::Wasm::Memory::grow):
(JSC::Wasm::Memory::initializePreallocations): Deleted.
(JSC::Wasm::Memory::maxFastMemoryCount): Deleted.
* wasm/WasmMemory.h:
* wasm/js/JSWebAssemblyInstance.cpp:
(JSC::JSWebAssemblyInstance::create):
* wasm/js/JSWebAssemblyMemory.cpp:
(JSC::JSWebAssemblyMemory::grow):
(JSC::JSWebAssemblyMemory::finishCreation):
* wasm/js/JSWebAssemblyMemory.h:
(JSC::JSWebAssemblyMemory::subspaceFor):

Source/WebCore:

No new tests because no change in behavior.

Needed to teach Metal how to allocate in the Gigacage.

* platform/graphics/cocoa/GPUBufferMetal.mm:
(WebCore::GPUBuffer::GPUBuffer):
(WebCore::GPUBuffer::contents):

Source/WebKit:

The WebProcess should never disable the Gigacage, which is what would happen if it allocated typed
arrays outside the Gigacage. So, we add a disable callback that crashes the process.
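
A sketch of that hook-up (the callback signature follows Gigacage::addDisableCallback from bmalloc;
the crash primitive shown is a stand-in for whatever the real code uses):

    static void gigacageDisabled(void*)
    {
        // If anything ever disables the Gigacage in the WebProcess, crash
        // immediately rather than keep running without the cage.
        RELEASE_ASSERT_NOT_REACHED();
    }

    // During WebProcess initialization:
    Gigacage::addDisableCallback(gigacageDisabled, nullptr);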

* WebProcess/WebProcess.cpp:
(WebKit::gigacageDisabled):
(WebKit::m_webSQLiteDatabaseTracker):

Source/WTF:

For the Gigacage project to have minimal impact, we need an abstraction that lets code avoid
guarding itself with #if's. This adds a Gigacage abstraction that overlays the Gigacage namespace
from bmalloc, so you can always call things like Gigacage::caged and Gigacage::tryMalloc, even in
configurations where the Gigacage is not available.

Because so many places may need to allocate in a gigacage, or perform caged accesses, it's better
to hide the question of whether or not it's enabled inside this API.
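
A minimal sketch of the overlay pattern (the guard macro name and the fallback bodies here are
assumptions; the real wtf/Gigacage.h covers more entry points):

    #include <stdlib.h>

    #if GIGACAGE_ENABLED // hypothetical guard: bmalloc's Gigacage is available
    #include <bmalloc/Gigacage.h> // re-export the real namespace
    #else
    namespace Gigacage {
    // No cage in this configuration: same API, degenerate behavior, so callers
    // never need their own #if's.
    inline void ensureGigacage() { }
    inline void* tryMalloc(size_t size) { return malloc(size); } // stand-in for the fast-malloc fallback
    inline void free(void* p) { ::free(p); }
    template<typename T> T* caged(T* ptr) { return ptr; }
    inline bool isCaged(const void*) { return false; }
    }
    #endif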

* WTF.xcodeproj/project.pbxproj:
* wtf/CMakeLists.txt:
* wtf/FastMalloc.cpp:
* wtf/Gigacage.cpp: Added.
(Gigacage::tryMalloc):
(Gigacage::tryAllocateVirtualPages):
(Gigacage::freeVirtualPages):
(Gigacage::tryAlignedMalloc):
(Gigacage::alignedFree):
(Gigacage::free):
* wtf/Gigacage.h: Added.
(Gigacage::ensureGigacage):
(Gigacage::disableGigacage):
(Gigacage::addDisableCallback):
(Gigacage::removeDisableCallback):
(Gigacage::caged):
(Gigacage::isCaged):
(Gigacage::tryAlignedMalloc):
(Gigacage::alignedFree):
(Gigacage::free):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@220118 268f45cc-cd09-0410-ab3c-d52691b4dbfc

98 files changed:
JSTests/wasm/stress/oom.js
Source/JavaScriptCore/CMakeLists.txt
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/b3/B3InsertionSet.cpp
Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp
Source/JavaScriptCore/dfg/DFGClobberize.h
Source/JavaScriptCore/dfg/DFGDoesGC.cpp
Source/JavaScriptCore/dfg/DFGFixedButterflyAccessUncagingPhase.cpp [new file with mode: 0644]
Source/JavaScriptCore/dfg/DFGFixedButterflyAccessUncagingPhase.h [new file with mode: 0644]
Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
Source/JavaScriptCore/dfg/DFGHeapLocation.cpp
Source/JavaScriptCore/dfg/DFGHeapLocation.h
Source/JavaScriptCore/dfg/DFGNodeType.h
Source/JavaScriptCore/dfg/DFGPlan.cpp
Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
Source/JavaScriptCore/dfg/DFGSafeToExecute.h
Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
Source/JavaScriptCore/dfg/DFGTypeCheckHoistingPhase.cpp
Source/JavaScriptCore/ftl/FTLCapabilities.cpp
Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
Source/JavaScriptCore/heap/GigacageSubspace.cpp [new file with mode: 0644]
Source/JavaScriptCore/heap/GigacageSubspace.h [new file with mode: 0644]
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/heap/Heap.h
Source/JavaScriptCore/heap/LargeAllocation.cpp
Source/JavaScriptCore/heap/MarkedAllocator.cpp
Source/JavaScriptCore/heap/MarkedBlock.cpp
Source/JavaScriptCore/heap/MarkedBlock.h
Source/JavaScriptCore/heap/MarkedSpace.cpp
Source/JavaScriptCore/heap/MarkedSpace.h
Source/JavaScriptCore/heap/Subspace.cpp
Source/JavaScriptCore/heap/Subspace.h
Source/JavaScriptCore/heap/SubspaceInlines.h
Source/JavaScriptCore/jit/JITPropertyAccess.cpp
Source/JavaScriptCore/jsc.cpp
Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
Source/JavaScriptCore/runtime/ArrayBuffer.cpp
Source/JavaScriptCore/runtime/IndexingHeader.h
Source/JavaScriptCore/runtime/InitializeThreading.cpp
Source/JavaScriptCore/runtime/JSArrayBuffer.cpp
Source/JavaScriptCore/runtime/JSArrayBufferView.cpp
Source/JavaScriptCore/runtime/JSLock.cpp
Source/JavaScriptCore/runtime/JSObject.h
Source/JavaScriptCore/runtime/Options.cpp
Source/JavaScriptCore/runtime/Options.h
Source/JavaScriptCore/runtime/ScopedArgumentsTable.h
Source/JavaScriptCore/runtime/VM.cpp
Source/JavaScriptCore/runtime/VM.h
Source/JavaScriptCore/wasm/WasmB3IRGenerator.cpp
Source/JavaScriptCore/wasm/WasmCodeBlock.cpp
Source/JavaScriptCore/wasm/WasmMemory.cpp
Source/JavaScriptCore/wasm/WasmMemory.h
Source/JavaScriptCore/wasm/js/JSWebAssemblyInstance.cpp
Source/JavaScriptCore/wasm/js/JSWebAssemblyMemory.cpp
Source/JavaScriptCore/wasm/js/JSWebAssemblyMemory.h
Source/WTF/ChangeLog
Source/WTF/WTF.xcodeproj/project.pbxproj
Source/WTF/wtf/CMakeLists.txt
Source/WTF/wtf/FastMalloc.cpp
Source/WTF/wtf/Gigacage.cpp [new file with mode: 0644]
Source/WTF/wtf/Gigacage.h [new file with mode: 0644]
Source/WebCore/ChangeLog
Source/WebCore/platform/graphics/cocoa/GPUBufferMetal.mm
Source/WebKit/ChangeLog
Source/WebKit/WebProcess/WebProcess.cpp
Source/bmalloc/CMakeLists.txt
Source/bmalloc/ChangeLog
Source/bmalloc/bmalloc.xcodeproj/project.pbxproj
Source/bmalloc/bmalloc/AllocationKind.h [new file with mode: 0644]
Source/bmalloc/bmalloc/Allocator.cpp
Source/bmalloc/bmalloc/Allocator.h
Source/bmalloc/bmalloc/BExport.h [new file with mode: 0644]
Source/bmalloc/bmalloc/Cache.cpp
Source/bmalloc/bmalloc/Cache.h
Source/bmalloc/bmalloc/Deallocator.cpp
Source/bmalloc/bmalloc/Deallocator.h
Source/bmalloc/bmalloc/Gigacage.cpp [new file with mode: 0644]
Source/bmalloc/bmalloc/Gigacage.h [new file with mode: 0644]
Source/bmalloc/bmalloc/Heap.cpp
Source/bmalloc/bmalloc/Heap.h
Source/bmalloc/bmalloc/HeapKind.h [new file with mode: 0644]
Source/bmalloc/bmalloc/ObjectType.cpp
Source/bmalloc/bmalloc/ObjectType.h
Source/bmalloc/bmalloc/PerHeapKind.h [new file with mode: 0644]
Source/bmalloc/bmalloc/PerThread.h
Source/bmalloc/bmalloc/Scavenger.cpp [new file with mode: 0644]
Source/bmalloc/bmalloc/Scavenger.h [new file with mode: 0644]
Source/bmalloc/bmalloc/VMHeap.cpp
Source/bmalloc/bmalloc/VMHeap.h
Source/bmalloc/bmalloc/Zone.cpp
Source/bmalloc/bmalloc/Zone.h
Source/bmalloc/bmalloc/bmalloc.h
Source/bmalloc/bmalloc/mbmalloc.cpp
Tools/Scripts/run-jsc-stress-tests

index c50f715..076ea76 100644
@@ -1,3 +1,6 @@
+// We don't need N versions of this simultaneously filling up RAM.
+//@ runDefault
+
 const verbose = false;
 
 // Use a full 4GiB so that exhaustion is likely to occur faster. We're not
index 0293987..9c11413 100644
@@ -336,6 +336,7 @@ set(JavaScriptCore_SOURCES
     dfg/DFGEpoch.cpp
     dfg/DFGFailedFinalizer.cpp
     dfg/DFGFinalizer.cpp
+    dfg/DFGFixedButterflyAccessUncagingPhase.cpp
     dfg/DFGFixupPhase.cpp
     dfg/DFGFlowIndexing.cpp
     dfg/DFGFlushFormat.cpp
@@ -504,6 +505,7 @@ set(JavaScriptCore_SOURCES
     heap/GCConductor.cpp
     heap/GCLogging.cpp
     heap/GCRequest.cpp
+    heap/GigacageSubspace.cpp
     heap/HandleSet.cpp
     heap/HandleStack.cpp
     heap/Heap.cpp
index abd19f6..deef0f6 100644
@@ -1,3 +1,188 @@
+2017-08-01  Filip Pizlo  <fpizlo@apple.com>
+
+        Bmalloc and GC should put auxiliaries (butterflies, typed array backing stores) in a gigacage (separate multi-GB VM region)
+        https://bugs.webkit.org/show_bug.cgi?id=174727
+
+        Reviewed by Mark Lam.
+        
+        This adopts the Gigacage for the GigacageSubspace, which we use for Auxiliary allocations. Also, in
+        one place in the code - the FTL codegen for butterfly and typed array access - we "cage" the accesses
+        themselves. Basically, we do masking to ensure that the pointer points into the gigacage.
+        
+        This is neutral on JetStream.
+
+        * CMakeLists.txt:
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * b3/B3InsertionSet.cpp:
+        (JSC::B3::InsertionSet::execute):
+        * dfg/DFGAbstractInterpreterInlines.h:
+        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
+        * dfg/DFGArgumentsEliminationPhase.cpp:
+        * dfg/DFGClobberize.cpp:
+        (JSC::DFG::readsOverlap):
+        * dfg/DFGClobberize.h:
+        (JSC::DFG::clobberize):
+        * dfg/DFGDoesGC.cpp:
+        (JSC::DFG::doesGC):
+        * dfg/DFGFixedButterflyAccessUncagingPhase.cpp: Added.
+        (JSC::DFG::performFixedButterflyAccessUncaging):
+        * dfg/DFGFixedButterflyAccessUncagingPhase.h: Added.
+        * dfg/DFGFixupPhase.cpp:
+        (JSC::DFG::FixupPhase::fixupNode):
+        * dfg/DFGHeapLocation.cpp:
+        (WTF::printInternal):
+        * dfg/DFGHeapLocation.h:
+        * dfg/DFGNodeType.h:
+        * dfg/DFGPlan.cpp:
+        (JSC::DFG::Plan::compileInThreadImpl):
+        * dfg/DFGPredictionPropagationPhase.cpp:
+        * dfg/DFGSafeToExecute.h:
+        (JSC::DFG::safeToExecute):
+        * dfg/DFGSpeculativeJIT.cpp:
+        (JSC::DFG::SpeculativeJIT::compileGetButterfly):
+        * dfg/DFGSpeculativeJIT32_64.cpp:
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGSpeculativeJIT64.cpp:
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGTypeCheckHoistingPhase.cpp:
+        (JSC::DFG::TypeCheckHoistingPhase::identifyRedundantStructureChecks):
+        (JSC::DFG::TypeCheckHoistingPhase::identifyRedundantArrayChecks):
+        * ftl/FTLCapabilities.cpp:
+        (JSC::FTL::canCompile):
+        * ftl/FTLLowerDFGToB3.cpp:
+        (JSC::FTL::DFG::LowerDFGToB3::compileNode):
+        (JSC::FTL::DFG::LowerDFGToB3::compileGetButterfly):
+        (JSC::FTL::DFG::LowerDFGToB3::compileGetIndexedPropertyStorage):
+        (JSC::FTL::DFG::LowerDFGToB3::compileGetByVal):
+        (JSC::FTL::DFG::LowerDFGToB3::compileStringCharAt):
+        (JSC::FTL::DFG::LowerDFGToB3::compileStringCharCodeAt):
+        (JSC::FTL::DFG::LowerDFGToB3::compileGetMapBucket):
+        (JSC::FTL::DFG::LowerDFGToB3::compileGetDirectPname):
+        (JSC::FTL::DFG::LowerDFGToB3::compileToLowerCase):
+        (JSC::FTL::DFG::LowerDFGToB3::caged):
+        * heap/GigacageSubspace.cpp: Added.
+        (JSC::GigacageSubspace::GigacageSubspace):
+        (JSC::GigacageSubspace::~GigacageSubspace):
+        (JSC::GigacageSubspace::tryAllocateAlignedMemory):
+        (JSC::GigacageSubspace::freeAlignedMemory):
+        (JSC::GigacageSubspace::canTradeBlocksWith):
+        * heap/GigacageSubspace.h: Added.
+        * heap/Heap.cpp:
+        (JSC::Heap::Heap):
+        (JSC::Heap::lastChanceToFinalize):
+        (JSC::Heap::finalize):
+        (JSC::Heap::sweepInFinalize):
+        (JSC::Heap::updateAllocationLimits):
+        (JSC::Heap::shouldDoFullCollection):
+        (JSC::Heap::collectIfNecessaryOrDefer):
+        (JSC::Heap::reportWebAssemblyFastMemoriesAllocated): Deleted.
+        (JSC::Heap::webAssemblyFastMemoriesThisCycleAtThreshold const): Deleted.
+        (JSC::Heap::sweepLargeAllocations): Deleted.
+        (JSC::Heap::didAllocateWebAssemblyFastMemories): Deleted.
+        * heap/Heap.h:
+        * heap/LargeAllocation.cpp:
+        (JSC::LargeAllocation::tryCreate):
+        (JSC::LargeAllocation::destroy):
+        * heap/MarkedAllocator.cpp:
+        (JSC::MarkedAllocator::tryAllocateWithoutCollecting):
+        (JSC::MarkedAllocator::tryAllocateBlock):
+        * heap/MarkedBlock.cpp:
+        (JSC::MarkedBlock::tryCreate):
+        (JSC::MarkedBlock::Handle::Handle):
+        (JSC::MarkedBlock::Handle::~Handle):
+        (JSC::MarkedBlock::Handle::didAddToAllocator):
+        (JSC::MarkedBlock::Handle::subspace const): Deleted.
+        * heap/MarkedBlock.h:
+        (JSC::MarkedBlock::Handle::subspace const):
+        * heap/MarkedSpace.cpp:
+        (JSC::MarkedSpace::~MarkedSpace):
+        (JSC::MarkedSpace::freeMemory):
+        (JSC::MarkedSpace::prepareForAllocation):
+        (JSC::MarkedSpace::addMarkedAllocator):
+        (JSC::MarkedSpace::findEmptyBlockToSteal): Deleted.
+        * heap/MarkedSpace.h:
+        (JSC::MarkedSpace::firstAllocator const):
+        (JSC::MarkedSpace::allocatorForEmptyAllocation const): Deleted.
+        * heap/Subspace.cpp:
+        (JSC::Subspace::Subspace):
+        (JSC::Subspace::canTradeBlocksWith):
+        (JSC::Subspace::tryAllocateAlignedMemory):
+        (JSC::Subspace::freeAlignedMemory):
+        (JSC::Subspace::prepareForAllocation):
+        (JSC::Subspace::findEmptyBlockToSteal):
+        * heap/Subspace.h:
+        (JSC::Subspace::didCreateFirstAllocator):
+        * heap/SubspaceInlines.h:
+        (JSC::Subspace::forEachAllocator):
+        (JSC::Subspace::forEachMarkedBlock):
+        (JSC::Subspace::forEachNotEmptyMarkedBlock):
+        * jit/JITPropertyAccess.cpp:
+        (JSC::JIT::emitDoubleLoad):
+        (JSC::JIT::emitContiguousLoad):
+        (JSC::JIT::emitArrayStorageLoad):
+        (JSC::JIT::emitGenericContiguousPutByVal):
+        (JSC::JIT::emitArrayStoragePutByVal):
+        (JSC::JIT::emit_op_get_from_scope):
+        (JSC::JIT::emit_op_put_to_scope):
+        (JSC::JIT::emitIntTypedArrayGetByVal):
+        (JSC::JIT::emitFloatTypedArrayGetByVal):
+        (JSC::JIT::emitIntTypedArrayPutByVal):
+        (JSC::JIT::emitFloatTypedArrayPutByVal):
+        * jsc.cpp:
+        (fillBufferWithContentsOfFile):
+        (functionReadFile):
+        (gigacageDisabled):
+        (jscmain):
+        * llint/LowLevelInterpreter64.asm:
+        * runtime/ArrayBuffer.cpp:
+        (JSC::ArrayBufferContents::tryAllocate):
+        (JSC::ArrayBuffer::createAdopted):
+        (JSC::ArrayBuffer::createFromBytes):
+        (JSC::ArrayBuffer::tryCreate):
+        * runtime/IndexingHeader.h:
+        * runtime/InitializeThreading.cpp:
+        (JSC::initializeThreading):
+        * runtime/JSArrayBuffer.cpp:
+        * runtime/JSArrayBufferView.cpp:
+        (JSC::JSArrayBufferView::ConstructionContext::ConstructionContext):
+        (JSC::JSArrayBufferView::finalize):
+        * runtime/JSLock.cpp:
+        (JSC::JSLock::didAcquireLock):
+        * runtime/JSObject.h:
+        * runtime/Options.cpp:
+        (JSC::recomputeDependentOptions):
+        * runtime/Options.h:
+        * runtime/ScopedArgumentsTable.h:
+        * runtime/VM.cpp:
+        (JSC::VM::VM):
+        (JSC::VM::~VM):
+        (JSC::VM::gigacageDisabledCallback):
+        (JSC::VM::gigacageDisabled):
+        * runtime/VM.h:
+        (JSC::VM::fireGigacageEnabledIfNecessary):
+        (JSC::VM::gigacageEnabled):
+        * wasm/WasmB3IRGenerator.cpp:
+        (JSC::Wasm::B3IRGenerator::B3IRGenerator):
+        (JSC::Wasm::B3IRGenerator::emitCheckAndPreparePointer):
+        * wasm/WasmCodeBlock.cpp:
+        (JSC::Wasm::CodeBlock::isSafeToRun):
+        * wasm/WasmMemory.cpp:
+        (JSC::Wasm::makeString):
+        (JSC::Wasm::Memory::create):
+        (JSC::Wasm::Memory::~Memory):
+        (JSC::Wasm::Memory::addressIsInActiveFastMemory):
+        (JSC::Wasm::Memory::grow):
+        (JSC::Wasm::Memory::initializePreallocations): Deleted.
+        (JSC::Wasm::Memory::maxFastMemoryCount): Deleted.
+        * wasm/WasmMemory.h:
+        * wasm/js/JSWebAssemblyInstance.cpp:
+        (JSC::JSWebAssemblyInstance::create):
+        * wasm/js/JSWebAssemblyMemory.cpp:
+        (JSC::JSWebAssemblyMemory::grow):
+        (JSC::JSWebAssemblyMemory::finishCreation):
+        * wasm/js/JSWebAssemblyMemory.h:
+        (JSC::JSWebAssemblyMemory::subspaceFor):
+
 2017-07-31  Mark Lam  <mark.lam@apple.com>
 
         Added some UNLIKELYs to operationOptimize().
index ba6946f..c449f73 100644
                0F5A6284188C98D40072C9DF /* FTLValueRange.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5A6282188C98D40072C9DF /* FTLValueRange.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F5AE2C41DF4F2800066EFE1 /* VMInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = FE90BB3A1B7CF64E006B3F03 /* VMInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F5B4A331C84F0D600F1B17E /* SlowPathReturnType.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5B4A321C84F0D600F1B17E /* SlowPathReturnType.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F5BF1561F22EB170029D91D /* GigacageSubspace.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5BF1541F22EB170029D91D /* GigacageSubspace.cpp */; };
+               0F5BF1571F22EB170029D91D /* GigacageSubspace.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5BF1551F22EB170029D91D /* GigacageSubspace.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F5BF1631F2317120029D91D /* B3HoistLoopInvariantValues.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5BF1611F2317120029D91D /* B3HoistLoopInvariantValues.cpp */; };
                0F5BF1641F2317120029D91D /* B3HoistLoopInvariantValues.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5BF1621F2317120029D91D /* B3HoistLoopInvariantValues.h */; };
                0F5BF1671F23A0980029D91D /* B3BackwardsCFG.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5BF1661F23A0980029D91D /* B3BackwardsCFG.h */; };
                0FD8A32A17D51F5700CA2C40 /* DFGToFTLDeferredCompilationCallback.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD8A32217D51F5700CA2C40 /* DFGToFTLDeferredCompilationCallback.h */; };
                0FD8A32B17D51F5700CA2C40 /* DFGToFTLForOSREntryDeferredCompilationCallback.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD8A32317D51F5700CA2C40 /* DFGToFTLForOSREntryDeferredCompilationCallback.cpp */; };
                0FD8A32C17D51F5700CA2C40 /* DFGToFTLForOSREntryDeferredCompilationCallback.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD8A32417D51F5700CA2C40 /* DFGToFTLForOSREntryDeferredCompilationCallback.h */; };
+               0FD9EA881F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FD9EA861F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.cpp */; };
+               0FD9EA891F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FD9EA871F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.h */; };
                0FDB2CC9173DA520007B3C1B /* FTLAbbreviatedTypes.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FDB2CC7173DA51E007B3C1B /* FTLAbbreviatedTypes.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FDB2CCA173DA523007B3C1B /* FTLValueFromBlock.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FDB2CC8173DA51E007B3C1B /* FTLValueFromBlock.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FDB2CE7174830A2007B3C1B /* DFGWorklist.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FDB2CE5174830A2007B3C1B /* DFGWorklist.cpp */; };
                0F5A6281188C98D40072C9DF /* FTLValueRange.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLValueRange.cpp; path = ftl/FTLValueRange.cpp; sourceTree = "<group>"; };
                0F5A6282188C98D40072C9DF /* FTLValueRange.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = FTLValueRange.h; path = ftl/FTLValueRange.h; sourceTree = "<group>"; };
                0F5B4A321C84F0D600F1B17E /* SlowPathReturnType.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SlowPathReturnType.h; sourceTree = "<group>"; };
+               0F5BF1541F22EB170029D91D /* GigacageSubspace.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; path = GigacageSubspace.cpp; sourceTree = "<group>"; };
+               0F5BF1551F22EB170029D91D /* GigacageSubspace.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = GigacageSubspace.h; sourceTree = "<group>"; };
                0F5BF1611F2317120029D91D /* B3HoistLoopInvariantValues.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; name = B3HoistLoopInvariantValues.cpp; path = b3/B3HoistLoopInvariantValues.cpp; sourceTree = "<group>"; };
                0F5BF1621F2317120029D91D /* B3HoistLoopInvariantValues.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = B3HoistLoopInvariantValues.h; path = b3/B3HoistLoopInvariantValues.h; sourceTree = "<group>"; };
                0F5BF1661F23A0980029D91D /* B3BackwardsCFG.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = B3BackwardsCFG.h; path = b3/B3BackwardsCFG.h; sourceTree = "<group>"; };
                0FD8A32217D51F5700CA2C40 /* DFGToFTLDeferredCompilationCallback.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGToFTLDeferredCompilationCallback.h; path = dfg/DFGToFTLDeferredCompilationCallback.h; sourceTree = "<group>"; };
                0FD8A32317D51F5700CA2C40 /* DFGToFTLForOSREntryDeferredCompilationCallback.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGToFTLForOSREntryDeferredCompilationCallback.cpp; path = dfg/DFGToFTLForOSREntryDeferredCompilationCallback.cpp; sourceTree = "<group>"; };
                0FD8A32417D51F5700CA2C40 /* DFGToFTLForOSREntryDeferredCompilationCallback.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGToFTLForOSREntryDeferredCompilationCallback.h; path = dfg/DFGToFTLForOSREntryDeferredCompilationCallback.h; sourceTree = "<group>"; };
+               0FD9EA861F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; name = DFGFixedButterflyAccessUncagingPhase.cpp; path = dfg/DFGFixedButterflyAccessUncagingPhase.cpp; sourceTree = "<group>"; };
+               0FD9EA871F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = DFGFixedButterflyAccessUncagingPhase.h; path = dfg/DFGFixedButterflyAccessUncagingPhase.h; sourceTree = "<group>"; };
                0FDB2CC7173DA51E007B3C1B /* FTLAbbreviatedTypes.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = FTLAbbreviatedTypes.h; path = ftl/FTLAbbreviatedTypes.h; sourceTree = "<group>"; };
                0FDB2CC8173DA51E007B3C1B /* FTLValueFromBlock.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = FTLValueFromBlock.h; path = ftl/FTLValueFromBlock.h; sourceTree = "<group>"; };
                0FDB2CE5174830A2007B3C1B /* DFGWorklist.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGWorklist.cpp; path = dfg/DFGWorklist.cpp; sourceTree = "<group>"; };
                                2A343F7418A1748B0039B085 /* GCSegmentedArray.h */,
                                2A343F7718A1749D0039B085 /* GCSegmentedArrayInlines.h */,
                                0F86A26E1D6F7B3100CB0C92 /* GCTypeMap.h */,
+                               0F5BF1541F22EB170029D91D /* GigacageSubspace.cpp */,
+                               0F5BF1551F22EB170029D91D /* GigacageSubspace.h */,
                                142E312B134FF0A600AFADB5 /* Handle.h */,
                                C28318FF16FE4B7D00157BFD /* HandleBlock.h */,
                                C283190116FE533E00157BFD /* HandleBlockInlines.h */,
                                A7BFF3BF179868940002F462 /* DFGFiltrationResult.h */,
                                A78A976E179738B8009DF744 /* DFGFinalizer.cpp */,
                                A78A976F179738B8009DF744 /* DFGFinalizer.h */,
+                               0FD9EA861F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.cpp */,
+                               0FD9EA871F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.h */,
                                0F2BDC12151C5D4A00CD8910 /* DFGFixupPhase.cpp */,
                                0F2BDC13151C5D4A00CD8910 /* DFGFixupPhase.h */,
                                0F20177D1DCADC3000EA5950 /* DFGFlowIndexing.cpp */,
                                0F0B83A714BCF50700885B4F /* CodeType.h in Headers */,
                                0FD0E5F21E46C8AF0006AB08 /* CollectingScope.h in Headers */,
                                0FA762051DB9242900B7A2FD /* CollectionScope.h in Headers */,
+                               0FD9EA891F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.h in Headers */,
                                0FD0E5E91E43D3490006AB08 /* CollectorPhase.h in Headers */,
                                A53243981856A489002ED692 /* CombinedDomains.json in Headers */,
                                BC18C3F30E16F5CD00B34460 /* CommonIdentifiers.h in Headers */,
                                0F5AE2C41DF4F2800066EFE1 /* VMInlines.h in Headers */,
                                FE3022D71E42857300BAC493 /* VMInspector.h in Headers */,
                                FE6F56DE1E64EAD600D17801 /* VMTraps.h in Headers */,
+                               0F5BF1571F22EB170029D91D /* GigacageSubspace.h in Headers */,
                                53F40E931D5A4AB30099A1B6 /* WasmB3IRGenerator.h in Headers */,
                                53CA730A1EA533D80076049D /* WasmBBQPlan.h in Headers */,
                                53F8D2001E8387D400D21116 /* WasmBBQPlanInlines.h in Headers */,
                                0F2017861DCAE14C00EA5950 /* DFGNodeFlowProjection.cpp in Sources */,
                                0F5D085D1B8CF99D001143B4 /* DFGNodeOrigin.cpp in Sources */,
                                0F2B9CE619D0BA7D00B1D1B5 /* DFGObjectAllocationSinkingPhase.cpp in Sources */,
+                               0FD9EA881F29162C00F32BEE /* DFGFixedButterflyAccessUncagingPhase.cpp in Sources */,
                                0F2B9CE819D0BA7D00B1D1B5 /* DFGObjectMaterializationData.cpp in Sources */,
                                86EC9DCF1328DF82002B2AD7 /* DFGOperations.cpp in Sources */,
                                A7D89CFD17A0B8CC00773AD8 /* DFGOSRAvailabilityAnalysisPhase.cpp in Sources */,
                                70EC0EC61AA0D7DA00B6AAFA /* StringIteratorPrototype.cpp in Sources */,
                                14469DEC107EC7E700650446 /* StringObject.cpp in Sources */,
                                14469DED107EC7E700650446 /* StringPrototype.cpp in Sources */,
+                               0F5BF1561F22EB170029D91D /* GigacageSubspace.cpp in Sources */,
                                9335F24D12E6765B002B5553 /* StringRecursionChecker.cpp in Sources */,
                                BCDE3B430E6C832D001453A7 /* Structure.cpp in Sources */,
                                7E4EE70F0EBB7A5B005934AA /* StructureChain.cpp in Sources */,
index f583c20..15dd9a6 100644
@@ -65,6 +65,8 @@ Value* InsertionSet::insertClone(size_t index, Value* value)
 
 void InsertionSet::execute(BasicBlock* block)
 {
+    for (Insertion& insertion : m_insertions)
+        insertion.element()->owner = block;
     bubbleSort(m_insertions.begin(), m_insertions.end());
     executeInsertions(block->m_values, m_insertions);
     m_bottomForType = TypeMap<Value*>();
index 5004e49..b93debc 100644
@@ -2384,6 +2384,7 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
         }
         break;
     case GetButterfly:
+    case GetButterflyWithoutCaging:
     case AllocatePropertyStorage:
     case ReallocatePropertyStorage:
     case NukeStructureAndSetButterfly:
index baa0bce..c3cf829 100644
@@ -358,6 +358,7 @@ private:
                     break;
                     
                 case GetButterfly:
+                case GetButterflyWithoutCaging:
                     // This barely works. The danger is that the GetButterfly is used by something that
                     // does something escaping to a candidate. Fortunately, the only butterfly-using ops
                     // that we exempt here also use the candidate directly. If there ever was a
index f5b08d0..1429b72 100644
@@ -1011,6 +1011,11 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
         def(HeapLocation(ButterflyLoc, JSObject_butterfly, node->child1()), LazyNode(node));
         return;
 
+    case GetButterflyWithoutCaging:
+        read(JSObject_butterfly);
+        def(HeapLocation(ButterflyWithoutCagingLoc, JSObject_butterfly, node->child1()), LazyNode(node));
+        return;
+
     case CheckSubClass:
         def(PureValue(node, node->classInfo()));
         return;
index 5c30bd8..0571038 100644
@@ -115,6 +115,7 @@ bool doesGC(Graph& graph, Node* node)
     case CheckStructure:
     case GetExecutable:
     case GetButterfly:
+    case GetButterflyWithoutCaging:
     case CheckSubClass:
     case CheckArray:
     case GetScope:
diff --git a/Source/JavaScriptCore/dfg/DFGFixedButterflyAccessUncagingPhase.cpp b/Source/JavaScriptCore/dfg/DFGFixedButterflyAccessUncagingPhase.cpp
new file mode 100644
index 0000000..f9aff5f
--- /dev/null
@@ -0,0 +1,114 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "DFGFixedButterflyAccessUncagingPhase.h"
+
+#if ENABLE(DFG_JIT)
+
+#include "DFGClobberize.h"
+#include "DFGGraph.h"
+#include "DFGPhase.h"
+#include "JSCInlines.h"
+#include <wtf/IndexSet.h>
+
+namespace JSC { namespace DFG {
+
+namespace {
+
+class FixedButterflyAccessUncagingPhase : public Phase {
+public:
+    FixedButterflyAccessUncagingPhase(Graph& graph)
+        : Phase(graph, "fixed butterfly access uncaging")
+    {
+    }
+    
+    bool run()
+    {
+        IndexSet<Node*> needCaging;
+        
+        bool changed = true;
+        while (changed) {
+            changed = false;
+            for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
+                for (Node* node : *block) {
+                    switch (node->op()) {
+                    // FIXME: Check again how badly we need this. It might not be worth it.
+                    // https://bugs.webkit.org/show_bug.cgi?id=175044
+                    case GetByOffset:
+                    case PutByOffset:
+                    case GetGetterSetterByOffset:
+                    case GetArrayLength:
+                    case GetVectorLength:
+                        break;
+                        
+                    case Upsilon:
+                        if (needCaging.contains(node->phi()))
+                            changed |= needCaging.add(node->child1().node());
+                        break;
+                        
+                    default:
+                        // FIXME: We could possibly make this more precise. We really only care about whether
+                        // this can read/write butterfly contents.
+                        // https://bugs.webkit.org/show_bug.cgi?id=174926
+                        if (!accessesOverlap(m_graph, node, Heap))
+                            break;
+                    
+                        m_graph.doToChildren(
+                            node,
+                            [&] (Edge& edge) {
+                                changed |= needCaging.add(edge.node());
+                            });
+                        break;
+                    }
+                }
+            }
+        }
+        
+        bool didOptimize = false;
+        for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
+            for (Node* node : *block) {
+                if (node->op() == GetButterfly && !needCaging.contains(node)) {
+                    node->setOp(GetButterflyWithoutCaging);
+                    didOptimize = true;
+                }
+            }
+        }
+        
+        return didOptimize;
+    }
+};
+
+} // anonymous namespace
+
+bool performFixedButterflyAccessUncaging(Graph& graph)
+{
+    return runPhase<FixedButterflyAccessUncagingPhase>(graph);
+}
+
+} } // namespace JSC::DFG
+
+#endif // ENABLE(DFG_JIT)
+
diff --git a/Source/JavaScriptCore/dfg/DFGFixedButterflyAccessUncagingPhase.h b/Source/JavaScriptCore/dfg/DFGFixedButterflyAccessUncagingPhase.h
new file mode 100644
index 0000000..0cf37d7
--- /dev/null
@@ -0,0 +1,40 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#if ENABLE(DFG_JIT)
+
+namespace JSC { namespace DFG {
+
+class Graph;
+
+// Turns GetButterfly into GetButterflyWithoutCaging if all of the accesses are fixed-offset.
+bool performFixedButterflyAccessUncaging(Graph&);
+
+} } // namespace JSC::DFG
+
+#endif // ENABLE(DFG_JIT)
+
index 9661b8b..fc7d99a 100644
@@ -1398,7 +1398,8 @@ private:
         case CheckStructure:
         case CheckCell:
         case CreateThis:
-        case GetButterfly: {
+        case GetButterfly:
+        case GetButterflyWithoutCaging: {
             fixEdge<CellUse>(node->child1());
             break;
         }
index 7b62a9f..6a5c950 100644
@@ -96,6 +96,10 @@ void printInternal(PrintStream& out, LocationKind kind)
         out.print("ButterflyLoc");
         return;
         
+    case ButterflyWithoutCagingLoc:
+        out.print("ButterflyWithoutCagingLoc");
+        return;
+        
     case CheckTypeInfoFlagsLoc:
         out.print("CheckTypeInfoFlagsLoc");
         return;
index 1dd8286..7c87f20 100644
@@ -39,6 +39,7 @@ enum LocationKind {
     ArrayLengthLoc,
     VectorLengthLoc,
     ButterflyLoc,
+    ButterflyWithoutCagingLoc,
     CheckTypeInfoFlagsLoc,
     OverridesHasInstanceLoc,
     ClosureVariableLoc,
index cabc5cb..0613adc 100644
@@ -205,6 +205,7 @@ namespace JSC { namespace DFG {
     macro(AllocatePropertyStorage, NodeMustGenerate | NodeResultStorage) \
     macro(ReallocatePropertyStorage, NodeMustGenerate | NodeResultStorage) \
     macro(GetButterfly, NodeResultStorage) \
+    macro(GetButterflyWithoutCaging, NodeResultStorage) \
     macro(NukeStructureAndSetButterfly, NodeMustGenerate) \
     macro(CheckArray, NodeMustGenerate) \
     macro(Arrayify, NodeMustGenerate) \
index 2d14dfb..3973212 100644
@@ -41,6 +41,7 @@
 #include "DFGCriticalEdgeBreakingPhase.h"
 #include "DFGDCEPhase.h"
 #include "DFGFailedFinalizer.h"
+#include "DFGFixedButterflyAccessUncagingPhase.h"
 #include "DFGFixupPhase.h"
 #include "DFGGraphSafepoint.h"
 #include "DFGIntegerCheckCombiningPhase.h"
@@ -468,6 +469,7 @@ Plan::CompilationPath Plan::compileInThreadImpl()
         RUN_PHASE(performCFA);
         RUN_PHASE(performGlobalStoreBarrierInsertion);
         RUN_PHASE(performStoreBarrierClustering);
+        RUN_PHASE(performFixedButterflyAccessUncaging);
         if (Options::useMovHintRemoval())
             RUN_PHASE(performMovHintRemoval);
         RUN_PHASE(performCleanUp);
index 3d25eb0..a8bf2e8 100644
@@ -839,6 +839,7 @@ private:
             break;
         }
         case GetButterfly:
+        case GetButterflyWithoutCaging:
         case GetIndexedPropertyStorage:
         case AllocatePropertyStorage:
         case ReallocatePropertyStorage: {
index f93a69e..5885f9f 100644
@@ -215,6 +215,7 @@ bool safeToExecute(AbstractStateType& state, Graph& graph, Node* node)
     case CheckStructure:
     case GetExecutable:
     case GetButterfly:
+    case GetButterflyWithoutCaging:
     case CallDOMGetter:
     case CallDOM:
     case CheckSubClass:
index 79ba70a..b19b418 100644
@@ -7983,6 +7983,9 @@ void SpeculativeJIT::compileGetButterfly(Node* node)
     GPRReg resultGPR = result.gpr();
     
     m_jit.loadPtr(JITCompiler::Address(baseGPR, JSObject::butterflyOffset()), resultGPR);
+    
+    // FIXME: Implement caging!
+    // https://bugs.webkit.org/show_bug.cgi?id=174918
 
     storageResult(resultGPR, node);
 }
index db59f63..dabd7cc 100644
@@ -4470,6 +4470,7 @@ void SpeculativeJIT::compile(Node* node)
         break;
         
     case GetButterfly:
+    case GetButterflyWithoutCaging:
         compileGetButterfly(node);
         break;
 
index 63bab5f..e7e9de2 100644
@@ -4656,6 +4656,7 @@ void SpeculativeJIT::compile(Node* node)
         break;
         
     case GetButterfly:
+    case GetButterflyWithoutCaging:
         compileGetButterfly(node);
         break;
 
index 93bf265..39fe710 100644
@@ -248,6 +248,7 @@ private:
                 case ReallocatePropertyStorage:
                 case NukeStructureAndSetButterfly:
                 case GetButterfly:
+                case GetButterflyWithoutCaging:
                 case GetByVal:
                 case PutByValDirect:
                 case PutByVal:
@@ -324,6 +325,7 @@ private:
                 case PutStructure:
                 case ReallocatePropertyStorage:
                 case GetButterfly:
+                case GetButterflyWithoutCaging:
                 case GetByVal:
                 case PutByValDirect:
                 case PutByVal:
index e659244..0b45794 100644
@@ -69,6 +69,7 @@ inline CapabilityLevel canCompile(Node* node)
     case ArrayifyToStructure:
     case PutStructure:
     case GetButterfly:
+    case GetButterflyWithoutCaging:
     case NewObject:
     case NewArray:
     case NewArrayWithSpread:
index 9aae4a2..df0da4c 100644
@@ -87,6 +87,7 @@
 #include <atomic>
 #include <unordered_set>
 #include <wtf/Box.h>
+#include <wtf/Gigacage.h>
 
 namespace JSC { namespace FTL {
 
@@ -664,6 +665,7 @@ private:
             compilePutAccessorByVal();
             break;
         case GetButterfly:
+        case GetButterflyWithoutCaging:
             compileGetButterfly();
             break;
         case ConstantStoragePointer:
@@ -3231,7 +3233,10 @@ private:
     
     void compileGetButterfly()
     {
-        setStorage(m_out.loadPtr(lowCell(m_node->child1()), m_heaps.JSObject_butterfly));
+        LValue butterfly = m_out.loadPtr(lowCell(m_node->child1()), m_heaps.JSObject_butterfly);
+        if (m_node->op() != GetButterflyWithoutCaging)
+            butterfly = caged(butterfly);
+        setStorage(butterfly);
     }
 
     void compileConstantStoragePointer()
@@ -3267,7 +3272,7 @@ private:
         }
 
         DFG_ASSERT(m_graph, m_node, isTypedView(m_node->arrayMode().typedArrayType()));
-        setStorage(m_out.loadPtr(cell, m_heaps.JSArrayBufferView_vector));
+        setStorage(caged(m_out.loadPtr(cell, m_heaps.JSArrayBufferView_vector)));
     }
     
     void compileCheckArray()
@@ -3509,6 +3514,8 @@ private:
                     index,
                     m_out.load32NonNegative(base, m_heaps.DirectArguments_length)));
 
+            // FIXME: I guess we need to cage DirectArguments?
+            // https://bugs.webkit.org/show_bug.cgi?id=174920
             TypedPointer address = m_out.baseIndex(
                 m_heaps.DirectArguments_storage, base, m_out.zeroExtPtr(index));
             setJSValue(m_out.load64(address));
@@ -3540,6 +3547,8 @@ private:
             LValue scope = m_out.loadPtr(base, m_heaps.ScopedArguments_scope);
             LValue arguments = m_out.loadPtr(table, m_heaps.ScopedArgumentsTable_arguments);
             
+            // FIXME: I guess we need to cage ScopedArguments?
+            // https://bugs.webkit.org/show_bug.cgi?id=174921
             TypedPointer address = m_out.baseIndex(
                 m_heaps.scopedArgumentsTableArguments, arguments, m_out.zeroExtPtr(index));
             LValue scopeOffset = m_out.load32(address);
@@ -3548,6 +3557,8 @@ private:
                 ExoticObjectMode, noValue(), nullptr,
                 m_out.equal(scopeOffset, m_out.constInt32(ScopeOffset::invalidOffset)));
             
+            // FIXME: I guess we need to cage JSEnvironmentRecord?
+            // https://bugs.webkit.org/show_bug.cgi?id=174922
             address = m_out.baseIndex(
                 m_heaps.JSEnvironmentRecord_variables, scope, m_out.zeroExtPtr(scopeOffset));
             ValueFromBlock namedResult = m_out.anchor(m_out.load64(address));
@@ -3555,6 +3566,8 @@ private:
             
             m_out.appendTo(overflowCase, continuation);
             
+            // FIXME: I guess we need to cage overflow storage?
+            // https://bugs.webkit.org/show_bug.cgi?id=174923
             address = m_out.baseIndex(
                 m_heaps.ScopedArguments_overflowStorage, base,
                 m_out.zeroExtPtr(m_out.sub(index, namedLength)));
@@ -5378,6 +5391,8 @@ private:
             
         m_out.appendTo(is8Bit, is16Bit);
             
+        // FIXME: Need to cage strings!
+        // https://bugs.webkit.org/show_bug.cgi?id=174924
         ValueFromBlock char8Bit = m_out.anchor(
             m_out.load8ZeroExt32(m_out.baseIndex(
                 m_heaps.characters8, storage, m_out.zeroExtPtr(index),
@@ -5479,6 +5494,8 @@ private:
             
         LBasicBlock lastNext = m_out.appendTo(is8Bit, is16Bit);
             
+        // FIXME: need to cage strings!
+        // https://bugs.webkit.org/show_bug.cgi?id=174924
         ValueFromBlock char8Bit = m_out.anchor(
             m_out.load8ZeroExt32(m_out.baseIndex(
                 m_heaps.characters8, storage, m_out.zeroExtPtr(index),
@@ -8075,6 +8092,8 @@ private:
         m_out.appendTo(loopStart, notEmptyValue);
         LValue unmaskedIndex = m_out.phi(Int32, indexStart);
         LValue index = m_out.bitAnd(mask, unmaskedIndex);
+        // FIXME: I think these buffers are caged?
+        // https://bugs.webkit.org/show_bug.cgi?id=174925
         LValue hashMapBucket = m_out.load64(m_out.baseIndex(m_heaps.properties.atAnyNumber(), buffer, m_out.zeroExt(index, Int64), ScaleEight));
         ValueFromBlock bucketResult = m_out.anchor(hashMapBucket);
         m_out.branch(m_out.equal(hashMapBucket, m_out.constIntPtr(bitwise_cast<intptr_t>(HashMapImpl<HashMapBucket<HashMapBucketDataKey>>::emptyValue()))),
@@ -8850,7 +8869,7 @@ private:
             m_out.neg(m_out.sub(index, m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_cachedInlineCapacity))));
         int32_t offsetOfFirstProperty = static_cast<int32_t>(offsetInButterfly(firstOutOfLineOffset)) * sizeof(EncodedJSValue);
         ValueFromBlock outOfLineResult = m_out.anchor(
-            m_out.load64(m_out.baseIndex(m_heaps.properties.atAnyNumber(), storage, realIndex, ScaleEight, offsetOfFirstProperty)));
+            m_out.load64(m_out.baseIndex(m_heaps.properties.atAnyNumber(), caged(storage), realIndex, ScaleEight, offsetOfFirstProperty)));
         m_out.jump(continuation);
 
         m_out.appendTo(slowCase, continuation);
@@ -10268,6 +10287,8 @@ private:
 
         m_out.appendTo(loopBody, slowPath);
 
+        // FIXME: Strings needs to be caged.
+        // https://bugs.webkit.org/show_bug.cgi?id=174924
         LValue byte = m_out.load8ZeroExt32(m_out.baseIndex(m_heaps.characters8, buffer, m_out.zeroExtPtr(index)));
         LValue isInvalidAsciiRange = m_out.bitAnd(byte, m_out.constInt32(~0x7F));
         LValue isUpperCase = m_out.belowOrEqual(m_out.sub(byte, m_out.constInt32('A')), m_out.constInt32('Z' - 'A'));
@@ -11593,6 +11614,36 @@ private:
         }
     }
     
+    LValue caged(LValue ptr)
+    {
+        if (vm().gigacageEnabled().isStillValid()) {
+            m_graph.watchpoints().addLazily(vm().gigacageEnabled());
+            
+            LValue basePtr = m_out.constIntPtr(g_gigacageBasePtr);
+            LValue mask = m_out.constIntPtr(GIGACAGE_MASK);
+            
+            // We don't have to worry about B3 messing up the bitAnd. Also, we want to get B3's excellent
+            // codegen for 2-operand andq on x86-64.
+            LValue masked = m_out.bitAnd(ptr, mask);
+            
+            // But B3 will currently mess up the code generation of this add. Basically, any offset from what we
+            // compute here will get reassociated and folded with g_gigacageBasePtr. There's a world in which
+            // moveConstants() observes that it needs to reassociate in order to hoist the big constants. But
+            // it's much easier to just block B3's badness here. That's what we do for now.
+            PatchpointValue* patchpoint = m_out.patchpoint(pointerType());
+            patchpoint->appendSomeRegister(basePtr);
+            patchpoint->appendSomeRegister(masked);
+            patchpoint->setGenerator(
+                [] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+                    jit.addPtr(params[1].gpr(), params[2].gpr(), params[0].gpr());
+                });
+            patchpoint->effects = Effects::none();
+            return patchpoint;
+        }
+        
+        return ptr;
+    }
+    
     void buildSwitch(SwitchData* data, LType type, LValue switchValue)
     {
         ASSERT(type == pointerType() || type == Int32);
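Stripped of the B3 plumbing, caged() above computes g_gigacageBasePtr + (ptr & GIGACAGE_MASK). A standalone sketch of the same arithmetic (hypothetical helper name; assumes a 64-bit target and that the mask is the 64GB cage size minus one):

    #include <cstdint>

    // Sketch only: the single addPtr that the patchpoint emits.
    static inline void* cagedSketch(uintptr_t gigacageBasePtr, const void* ptr)
    {
        const uintptr_t mask = (UINT64_C(64) << 30) - 1; // assumed: 64GB cage
        uintptr_t offset = reinterpret_cast<uintptr_t>(ptr) & mask;
        return reinterpret_cast<void*>(gigacageBasePtr + offset);
    }

Whatever value ptr holds, the result stays inside the cage, so a corrupted auxiliary pointer can at worst alias other caged data rather than arbitrary process memory.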
diff --git a/Source/JavaScriptCore/heap/GigacageSubspace.cpp b/Source/JavaScriptCore/heap/GigacageSubspace.cpp
new file mode 100644 (file)
index 0000000..5c7c49f
--- /dev/null
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "GigacageSubspace.h"
+
+#include <wtf/Gigacage.h>
+
+namespace JSC {
+
+GigacageSubspace::GigacageSubspace(CString name, Heap& heap, AllocatorAttributes attributes)
+    : Subspace(name, heap, attributes)
+{
+    Gigacage::ensureGigacage();
+}
+
+GigacageSubspace::~GigacageSubspace()
+{
+}
+
+void* GigacageSubspace::tryAllocateAlignedMemory(size_t alignment, size_t size)
+{
+    return Gigacage::tryAlignedMalloc(alignment, size);
+}
+
+void GigacageSubspace::freeAlignedMemory(void* basePtr)
+{
+    Gigacage::alignedFree(basePtr);
+    WTF::compilerFence();
+}
+
+bool GigacageSubspace::canTradeBlocksWith(Subspace* other)
+{
+    return this == other;
+}
+
+} // namespace JSC
+
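Block stealing between allocators (see Subspace::findEmptyBlockToSteal and MarkedAllocator::tryAllocateWithoutCollecting later in this patch) is gated by a two-way handshake: the base Subspace agrees to trade with anyone, while the override above only agrees with itself. A sketch of the resulting invariant, with a hypothetical helper name:

    // Sketch: a block may migrate only if both sides consent, so caged
    // blocks never leave the Gigacage and uncaged blocks never enter it.
    static bool canMigrateBlock(JSC::Subspace* from, JSC::Subspace* to)
    {
        return from->canTradeBlocksWith(to) && to->canTradeBlocksWith(from);
    }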
diff --git a/Source/JavaScriptCore/heap/GigacageSubspace.h b/Source/JavaScriptCore/heap/GigacageSubspace.h
new file mode 100644 (file)
index 0000000..4081d8a
--- /dev/null
@@ -0,0 +1,45 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "Subspace.h"
+#include <wtf/Gigacage.h>
+
+namespace JSC {
+
+// We use a GigacageSubspace for the auxiliary space.
+class GigacageSubspace : public Subspace {
+public:
+    GigacageSubspace(CString name, Heap&, AllocatorAttributes);
+    ~GigacageSubspace();
+    
+    bool canTradeBlocksWith(Subspace* other) override;
+    void* tryAllocateAlignedMemory(size_t alignment, size_t size) override;
+    void freeAlignedMemory(void*) override;
+};
+
+} // namespace JSC
+
index f65ef84..b2a97ce 100644 (file)
@@ -268,7 +268,6 @@ Heap::Heap(VM* vm, HeapType heapType)
     , m_sizeAfterLastEdenCollect(0)
     , m_sizeBeforeLastEdenCollect(0)
     , m_bytesAllocatedThisCycle(0)
-    , m_webAssemblyFastMemoriesAllocatedThisCycle(0)
     , m_bytesAbandonedSinceLastFullCollect(0)
     , m_maxEdenSize(m_minBytesPerCycle)
     , m_maxHeapSize(m_minBytesPerCycle)
@@ -436,6 +435,8 @@ void Heap::lastChanceToFinalize()
 
     sweepAllLogicallyEmptyWeakBlocks();
     
+    m_objectSpace.freeMemory();
+    
     if (Options::logGC())
         dataLog((MonotonicTime::now() - before).milliseconds(), "ms]\n");
 }
@@ -486,23 +487,6 @@ void Heap::deprecatedReportExtraMemorySlowCase(size_t size)
     reportExtraMemoryAllocatedSlowCase(size);
 }
 
-void Heap::reportWebAssemblyFastMemoriesAllocated(size_t count)
-{
-    didAllocateWebAssemblyFastMemories(count);
-    collectIfNecessaryOrDefer();
-}
-
-bool Heap::webAssemblyFastMemoriesThisCycleAtThreshold() const
-{
-    // WebAssembly fast memories use large amounts of virtual memory and we
-    // don't know how many can exist in this process. We keep track of the most
-    // fast memories that have existed at any point in time. The GC uses this
-    // top watermark as an indication of whether recent allocations should cause
-    // a collection: get too close and we may be close to the actual limit.
-    size_t fastMemoryThreshold = std::max<size_t>(1, Wasm::Memory::maxFastMemoryCount() / 2);
-    return m_webAssemblyFastMemoriesAllocatedThisCycle > fastMemoryThreshold;
-}
-
 bool Heap::overCriticalMemoryThreshold(MemoryThresholdCallType memoryThresholdCallType)
 {
 #if PLATFORM(IOS)
@@ -1997,10 +1981,10 @@ void Heap::finalize()
     }
     
     {
-        SweepingScope helpingGCScope(*this);
+        SweepingScope sweepingScope(*this);
         deleteUnmarkedCompiledCode();
         deleteSourceProviderCaches();
-        sweepLargeAllocations();
+        sweepInFinalize();
     }
     
     if (HasOwnPropertyCache* cache = vm()->hasOwnPropertyCache())
@@ -2051,9 +2035,15 @@ void Heap::waitForCollection(Ticket ticket)
         });
 }
 
-void Heap::sweepLargeAllocations()
+void Heap::sweepInFinalize()
 {
     m_objectSpace.sweepLargeAllocations();
+    
+    auto sweepBlock = [&] (MarkedBlock::Handle* handle) {
+        handle->sweep(nullptr);
+    };
+    
+    vm()->eagerlySweptDestructibleObjectSpace.forEachMarkedBlock(sweepBlock);
 }
 
 void Heap::suspendCompilerThreads()
@@ -2160,7 +2150,6 @@ void Heap::updateAllocationLimits()
     if (verbose) {
         dataLog("\n");
         dataLog("bytesAllocatedThisCycle = ", m_bytesAllocatedThisCycle, "\n");
-        dataLog("webAssemblyFastMemoriesAllocatedThisCycle = ", m_webAssemblyFastMemoriesAllocatedThisCycle, "\n");
     }
     
     // Calculate our current heap size threshold for the purpose of figuring out when we should
@@ -2243,7 +2232,6 @@ void Heap::updateAllocationLimits()
     if (verbose)
         dataLog("sizeAfterLastCollect = ", m_sizeAfterLastCollect, "\n");
     m_bytesAllocatedThisCycle = 0;
-    m_webAssemblyFastMemoriesAllocatedThisCycle = 0;
 
     if (Options::logGC())
         dataLog("=> ", currentHeapSize / 1024, "kb, ");
@@ -2317,11 +2305,6 @@ void Heap::didAllocate(size_t bytes)
     performIncrement(bytes);
 }
 
-void Heap::didAllocateWebAssemblyFastMemories(size_t count)
-{
-    m_webAssemblyFastMemoriesAllocatedThisCycle += count;
-}
-
 bool Heap::isValidAllocation(size_t)
 {
     if (!isValidThreadState(m_vm))
@@ -2374,7 +2357,7 @@ bool Heap::shouldDoFullCollection()
         return true;
 
     if (!m_currentRequest.scope)
-        return m_shouldDoFullCollection || webAssemblyFastMemoriesThisCycleAtThreshold() || overCriticalMemoryThreshold();
+        return m_shouldDoFullCollection || overCriticalMemoryThreshold();
     return *m_currentRequest.scope == CollectionScope::Full;
 }
 
@@ -2532,8 +2515,7 @@ void Heap::collectIfNecessaryOrDefer(GCDeferralContext* deferralContext)
             bytesAllowedThisCycle = std::min(m_maxEdenSizeWhenCritical, bytesAllowedThisCycle);
 #endif
 
-        if (!webAssemblyFastMemoriesThisCycleAtThreshold()
-            && m_bytesAllocatedThisCycle <= bytesAllowedThisCycle)
+        if (m_bytesAllocatedThisCycle <= bytesAllowedThisCycle)
             return;
     }
 
index bd21f35..fcc0733 100644 (file)
@@ -204,17 +204,6 @@ public:
     void reportExtraMemoryAllocated(size_t);
     JS_EXPORT_PRIVATE void reportExtraMemoryVisited(size_t);
 
-    // Same as above, but for uncommitted virtual memory allocations caused by
-    // WebAssembly fast memories. This is counted separately because virtual
-    // memory is logically a different type of resource than committed physical
-    // memory. We can often allocate huge amounts of virtual memory (think
-    // gigabytes) without adversely affecting regular GC'd memory. At some point
-    // though, too much virtual memory becomes prohibitive and we want to
-    // collect GC-able objects which keep this virtual memory alive.
-    // This is counted in number of fast memories, not bytes.
-    void reportWebAssemblyFastMemoriesAllocated(size_t);
-    bool webAssemblyFastMemoriesThisCycleAtThreshold() const;
-
 #if ENABLE(RESOURCE_USAGE)
     // Use this API to report the subset of extra memory that lives outside this process.
     JS_EXPORT_PRIVATE void reportExternalMemoryVisited(size_t);
@@ -264,7 +253,6 @@ public:
     void deleteAllUnlinkedCodeBlocks(DeleteAllCodeEffort);
 
     void didAllocate(size_t);
-    void didAllocateWebAssemblyFastMemories(size_t);
     bool isPagedOut(double deadline);
     
     const JITStubRoutineSet& jitStubRoutines() { return *m_jitStubRoutines; }
@@ -501,7 +489,7 @@ private:
     void gatherExtraHeapSnapshotData(HeapProfiler&);
     void removeDeadHeapSnapshotNodes(HeapProfiler&);
     void finalize();
-    void sweepLargeAllocations();
+    void sweepInFinalize();
     
     void sweepAllLogicallyEmptyWeakBlocks();
     bool sweepNextLogicallyEmptyWeakBlock();
@@ -548,7 +536,6 @@ private:
     size_t m_sizeBeforeLastEdenCollect;
 
     size_t m_bytesAllocatedThisCycle;
-    size_t m_webAssemblyFastMemoriesAllocatedThisCycle;
     size_t m_bytesAbandonedSinceLastFullCollect;
     size_t m_maxEdenSize;
     size_t m_maxEdenSizeWhenCritical;
index 839c616..cdd694e 100644 (file)
@@ -34,7 +34,7 @@ namespace JSC {
 
 LargeAllocation* LargeAllocation::tryCreate(Heap& heap, size_t size, Subspace* subspace)
 {
-    void* space = tryFastAlignedMalloc(alignment, headerSize() + size);
+    void* space = subspace->tryAllocateAlignedMemory(alignment, headerSize() + size);
     if (!space)
         return nullptr;
     if (scribbleFreeCells())
@@ -106,8 +106,9 @@ void LargeAllocation::sweep()
 
 void LargeAllocation::destroy()
 {
+    Subspace* subspace = m_subspace;
     this->~LargeAllocation();
-    fastAlignedFree(this);
+    subspace->freeAlignedMemory(this);
 }
 
 void LargeAllocation::dump(PrintStream& out) const
index 59162c1..7352289 100644 (file)
@@ -103,7 +103,10 @@ void* MarkedAllocator::tryAllocateWithoutCollecting()
     }
     
     if (Options::stealEmptyBlocksFromOtherAllocators()) {
-        if (MarkedBlock::Handle* block = markedSpace().findEmptyBlockToSteal()) {
+        if (MarkedBlock::Handle* block = m_subspace->findEmptyBlockToSteal()) {
+            RELEASE_ASSERT(block->subspace()->canTradeBlocksWith(m_subspace));
+            RELEASE_ASSERT(m_subspace->canTradeBlocksWith(block->subspace()));
+            
             block->sweep(nullptr);
             
             // It's good that this clears canAllocateButNotEmpty as well as all other bits,
@@ -240,7 +243,7 @@ MarkedBlock::Handle* MarkedAllocator::tryAllocateBlock()
 {
     SuperSamplerScope superSamplerScope(false);
     
-    MarkedBlock::Handle* handle = MarkedBlock::tryCreate(*m_heap);
+    MarkedBlock::Handle* handle = MarkedBlock::tryCreate(*m_heap, subspace());
     if (!handle)
         return nullptr;
     
index 731c6d8..a7c8593 100644 (file)
@@ -43,23 +43,24 @@ const size_t MarkedBlock::blockSize;
 static const bool computeBalance = false;
 static size_t balance;
 
-MarkedBlock::Handle* MarkedBlock::tryCreate(Heap& heap)
+MarkedBlock::Handle* MarkedBlock::tryCreate(Heap& heap, Subspace* subspace)
 {
     if (computeBalance) {
         balance++;
         if (!(balance % 10))
             dataLog("MarkedBlock Balance: ", balance, "\n");
     }
-    void* blockSpace = tryFastAlignedMalloc(blockSize, blockSize);
+    void* blockSpace = subspace->tryAllocateAlignedMemory(blockSize, blockSize);
     if (!blockSpace)
         return nullptr;
     if (scribbleFreeCells())
         scribble(blockSpace, blockSize);
-    return new Handle(heap, blockSpace);
+    return new Handle(heap, subspace, blockSpace);
 }
 
-MarkedBlock::Handle::Handle(Heap& heap, void* blockSpace)
-    : m_weakSet(heap.vm(), CellContainer())
+MarkedBlock::Handle::Handle(Heap& heap, Subspace* subspace, void* blockSpace)
+    : m_subspace(subspace)
+    , m_weakSet(heap.vm(), CellContainer())
     , m_newlyAllocatedVersion(MarkedSpace::nullVersion)
 {
     m_block = new (NotNull, blockSpace) MarkedBlock(*heap.vm(), *this);
@@ -72,6 +73,7 @@ MarkedBlock::Handle::Handle(Heap& heap, void* blockSpace)
 MarkedBlock::Handle::~Handle()
 {
     Heap& heap = *this->heap();
+    Subspace* subspace = this->subspace();
     if (computeBalance) {
         balance--;
         if (!(balance % 10))
@@ -79,7 +81,7 @@ MarkedBlock::Handle::~Handle()
     }
     removeFromAllocator();
     m_block->~MarkedBlock();
-    fastAlignedFree(m_block);
+    subspace->freeAlignedMemory(m_block);
     heap.didFreeBlock(blockSize);
 }
 
@@ -332,6 +334,11 @@ void MarkedBlock::Handle::didAddToAllocator(MarkedAllocator* allocator, size_t i
     m_index = index;
     m_allocator = allocator;
     
+    RELEASE_ASSERT(m_subspace->canTradeBlocksWith(allocator->subspace()));
+    RELEASE_ASSERT(allocator->subspace()->canTradeBlocksWith(m_subspace));
+    
+    m_subspace = allocator->subspace();
+    
     size_t cellSize = allocator->cellSize();
     m_atomsPerCell = (cellSize + atomSize - 1) / atomSize;
     m_endAtom = atomsPerBlock - m_atomsPerCell + 1;
@@ -390,11 +397,6 @@ void MarkedBlock::Handle::dumpState(PrintStream& out)
         });
 }
 
-Subspace* MarkedBlock::Handle::subspace() const
-{
-    return allocator()->subspace();
-}
-
 void MarkedBlock::Handle::sweep(FreeList* freeList)
 {
     SweepingScope sweepingScope(*heap());
index 14ad1b1..879faee 100644 (file)
@@ -199,7 +199,7 @@ public:
         void dumpState(PrintStream&);
         
     private:
-        Handle(Heap&, void*);
+        Handle(Heap&, Subspace*, void*);
         
         enum SweepDestructionMode { BlockHasNoDestructors, BlockHasDestructors, BlockHasDestructorsAndCollectorIsRunning };
         enum ScribbleMode { DontScribble, Scribble };
@@ -218,8 +218,8 @@ public:
         
         void setIsFreeListed();
         
-        MarkedBlock::Handle* m_prev;
-        MarkedBlock::Handle* m_next;
+        MarkedBlock::Handle* m_prev { nullptr };
+        MarkedBlock::Handle* m_next { nullptr };
             
         size_t m_atomsPerCell { std::numeric_limits<size_t>::max() };
         size_t m_endAtom { std::numeric_limits<size_t>::max() }; // This is a fuzzy end. Always test for < m_endAtom.
@@ -228,7 +228,8 @@ public:
             
         AllocatorAttributes m_attributes;
         bool m_isFreeListed { false };
-            
+
+        Subspace* m_subspace { nullptr };
         MarkedAllocator* m_allocator { nullptr };
         size_t m_index { std::numeric_limits<size_t>::max() };
         WeakSet m_weakSet;
@@ -238,7 +239,7 @@ public:
         MarkedBlock* m_block { nullptr };
     };
         
-    static MarkedBlock::Handle* tryCreate(Heap&);
+    static MarkedBlock::Handle* tryCreate(Heap&, Subspace*);
         
     Handle& handle();
         
@@ -395,6 +396,11 @@ inline MarkedAllocator* MarkedBlock::Handle::allocator() const
     return m_allocator;
 }
 
+inline Subspace* MarkedBlock::Handle::subspace() const
+{
+    return m_subspace;
+}
+
 inline Heap* MarkedBlock::Handle::heap() const
 {
     return m_weakSet.heap();
index 0f9cb67..2714e1b 100644 (file)
@@ -203,13 +203,17 @@ MarkedSpace::MarkedSpace(Heap* heap)
 
 MarkedSpace::~MarkedSpace()
 {
+    ASSERT(!m_blocks.set().size());
+}
+
+void MarkedSpace::freeMemory()
+{
     forEachBlock(
         [&] (MarkedBlock::Handle* block) {
             freeBlock(block);
         });
     for (LargeAllocation* allocation : m_largeAllocations)
         allocation->destroy();
-    ASSERT(!m_blocks.set().size());
 }
 
 void MarkedSpace::lastChanceToFinalize()
@@ -254,11 +258,8 @@ void MarkedSpace::sweepLargeAllocations()
 
 void MarkedSpace::prepareForAllocation()
 {
-    forEachAllocator(
-        [&] (MarkedAllocator& allocator) -> IterationStatus {
-            allocator.prepareForAllocation();
-            return IterationStatus::Continue;
-        });
+    for (Subspace* subspace : m_subspaces)
+        subspace->prepareForAllocation();
 
     m_activeWeakSets.takeFrom(m_newActiveWeakSets);
     
@@ -267,8 +268,6 @@ void MarkedSpace::prepareForAllocation()
     else
         m_largeAllocationsNurseryOffsetForSweep = 0;
     m_largeAllocationsNurseryOffset = m_largeAllocations.size();
-    
-    m_allocatorForEmptyAllocation = m_firstAllocator;
 }
 
 void MarkedSpace::visitWeakSets(SlotVisitor& visitor)
@@ -514,15 +513,6 @@ void MarkedSpace::didAllocateInBlock(MarkedBlock::Handle* block)
     }
 }
 
-MarkedBlock::Handle* MarkedSpace::findEmptyBlockToSteal()
-{
-    for (; m_allocatorForEmptyAllocation; m_allocatorForEmptyAllocation = m_allocatorForEmptyAllocation->nextAllocator()) {
-        if (MarkedBlock::Handle* block = m_allocatorForEmptyAllocation->findEmptyBlockToSteal())
-            return block;
-    }
-    return nullptr;
-}
-
 void MarkedSpace::snapshotUnswept()
 {
     if (m_heap->collectionScope() == CollectionScope::Eden) {
@@ -572,7 +562,8 @@ MarkedAllocator* MarkedSpace::addMarkedAllocator(
     if (!m_firstAllocator) {
         m_firstAllocator = allocator;
         m_lastAllocator = allocator;
-        m_allocatorForEmptyAllocation = allocator;
+        for (Subspace* subspace : m_subspaces)
+            subspace->didCreateFirstAllocator(allocator);
     } else {
         m_lastAllocator->setNextAllocator(allocator);
         m_lastAllocator = allocator;
index 5f0491e..5013eaf 100644 (file)
@@ -93,6 +93,7 @@ public:
     Heap* heap() const { return m_heap; }
     
     void lastChanceToFinalize(); // You must call stopAllocating before you call this.
+    void freeMemory();
 
     static size_t optimalSizeFor(size_t);
     
@@ -155,9 +156,6 @@ public:
     unsigned largeAllocationsForThisCollectionSize() const { return m_largeAllocationsForThisCollectionSize; }
     
     MarkedAllocator* firstAllocator() const { return m_firstAllocator; }
-    MarkedAllocator* allocatorForEmptyAllocation() const { return m_allocatorForEmptyAllocation; }
-    
-    MarkedBlock::Handle* findEmptyBlockToSteal();
     
     Lock& allocatorLock() { return m_allocatorLock; }
     MarkedAllocator* addMarkedAllocator(const AbstractLocker&, Subspace*, size_t cellSize);
@@ -215,7 +213,6 @@ private:
     Bag<MarkedAllocator> m_bagOfAllocators;
     MarkedAllocator* m_firstAllocator { nullptr };
     MarkedAllocator* m_lastAllocator { nullptr };
-    MarkedAllocator* m_allocatorForEmptyAllocation { nullptr };
 
     friend class HeapVerifier;
 };
index 83a4fdc..623a95f 100644 (file)
@@ -58,6 +58,7 @@ Subspace::Subspace(CString name, Heap& heap, AllocatorAttributes attributes)
     : m_space(heap.objectSpace())
     , m_name(name)
     , m_attributes(attributes)
+    , m_allocatorForEmptyAllocation(m_space.firstAllocator())
 {
     // It's remotely possible that we're GCing right now even if the client is careful to only
     // create subspaces right after VM creation, since collectContinuously (and probably other
@@ -87,6 +88,23 @@ void Subspace::destroy(VM& vm, JSCell* cell)
     DestroyFunc()(vm, cell);
 }
 
+bool Subspace::canTradeBlocksWith(Subspace*)
+{
+    return true;
+}
+
+void* Subspace::tryAllocateAlignedMemory(size_t alignment, size_t size)
+{
+    return tryFastAlignedMalloc(alignment, size);
+}
+
+void Subspace::freeAlignedMemory(void* basePtr)
+{
+    fastAlignedFree(basePtr);
+    WTF::compilerFence();
+}
+
 // The reason why we distinguish between allocate and tryAllocate is to minimize the number of
 // checks on the allocation path in both cases. Likewise, the reason why we have overloads with and
 // without deferralContext is to minimize the amount of code for calling allocate when you don't
@@ -135,6 +153,31 @@ void* Subspace::tryAllocate(GCDeferralContext* deferralContext, size_t size)
     return result;
 }
 
+void Subspace::prepareForAllocation()
+{
+    forEachAllocator(
+        [&] (MarkedAllocator& allocator) {
+            allocator.prepareForAllocation();
+        });
+
+    m_allocatorForEmptyAllocation = m_space.firstAllocator();
+}
+
+MarkedBlock::Handle* Subspace::findEmptyBlockToSteal()
+{
+    for (; m_allocatorForEmptyAllocation; m_allocatorForEmptyAllocation = m_allocatorForEmptyAllocation->nextAllocator()) {
+        Subspace* otherSubspace = m_allocatorForEmptyAllocation->subspace();
+        if (!canTradeBlocksWith(otherSubspace))
+            continue;
+        if (!otherSubspace->canTradeBlocksWith(this))
+            continue;
+        
+        if (MarkedBlock::Handle* block = m_allocatorForEmptyAllocation->findEmptyBlockToSteal())
+            return block;
+    }
+    return nullptr;
+}
+
 MarkedAllocator* Subspace::allocatorForSlow(size_t size)
 {
     size_t index = MarkedSpace::sizeClassToIndex(size);
index 9e9ecf5..4fa7517 100644 (file)
@@ -59,6 +59,10 @@ public:
     // These get called for large objects.
     virtual void destroy(VM&, JSCell*);
     
+    virtual bool canTradeBlocksWith(Subspace* other);
+    virtual void* tryAllocateAlignedMemory(size_t alignment, size_t size);
+    virtual void freeAlignedMemory(void*);
+    
     MarkedAllocator* tryAllocatorFor(size_t);
     MarkedAllocator* allocatorFor(size_t);
     
@@ -68,6 +72,16 @@ public:
     JS_EXPORT_PRIVATE void* tryAllocate(size_t);
     JS_EXPORT_PRIVATE void* tryAllocate(GCDeferralContext*, size_t);
     
+    void prepareForAllocation();
+    
+    void didCreateFirstAllocator(MarkedAllocator* allocator) { m_allocatorForEmptyAllocation = allocator; }
+    
+    // Finds an empty block from any Subspace that agrees to trade blocks with us.
+    MarkedBlock::Handle* findEmptyBlockToSteal();
+    
+    template<typename Func>
+    void forEachAllocator(const Func&);
+    
     template<typename Func>
     void forEachMarkedBlock(const Func&);
     
@@ -103,6 +117,7 @@ private:
     
     std::array<MarkedAllocator*, MarkedSpace::numSizeClasses> m_allocatorForSizeStep;
     MarkedAllocator* m_firstAllocator { nullptr };
+    MarkedAllocator* m_allocatorForEmptyAllocation { nullptr }; // Cursor into the MarkedSpace linked list of allocators.
     SentinelLinkedList<LargeAllocation, BasicRawSentinelNode<LargeAllocation>> m_largeAllocations;
 };
 
index e346e18..ce817f8 100644 (file)
 namespace JSC {
 
 template<typename Func>
-void Subspace::forEachMarkedBlock(const Func& func)
+void Subspace::forEachAllocator(const Func& func)
 {
     for (MarkedAllocator* allocator = m_firstAllocator; allocator; allocator = allocator->nextAllocatorInSubspace())
-        allocator->forEachBlock(func);
+        func(*allocator);
+}
+
+template<typename Func>
+void Subspace::forEachMarkedBlock(const Func& func)
+{
+    forEachAllocator(
+        [&] (MarkedAllocator& allocator) {
+            allocator.forEachBlock(func);
+        });
 }
 
 template<typename Func>
 void Subspace::forEachNotEmptyMarkedBlock(const Func& func)
 {
-    for (MarkedAllocator* allocator = m_firstAllocator; allocator; allocator = allocator->nextAllocatorInSubspace())
-        allocator->forEachNotEmptyBlock(func);
+    forEachAllocator(
+        [&] (MarkedAllocator& allocator) {
+            allocator.forEachNotEmptyBlock(func);
+        });
 }
 
 template<typename Func>
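forEachAllocator is now the single iteration primitive; the block iterators above are thin wrappers over it. A hypothetical use, tallying a subspace's blocks:

    // Sketch: count the marked blocks a subspace currently owns.
    size_t countBlocks(JSC::Subspace& subspace)
    {
        size_t result = 0;
        subspace.forEachMarkedBlock(
            [&] (JSC::MarkedBlock::Handle*) {
                result++;
            });
        return result;
    }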
index db9617b..34206ce 100644 (file)
@@ -172,6 +172,8 @@ JIT::JumpList JIT::emitDoubleLoad(Instruction*, PatchableJump& badType)
     JumpList slowCases;
     
     badType = patchableBranch32(NotEqual, regT2, TrustedImm32(DoubleShape));
+    // FIXME: Should do caging.
+    // https://bugs.webkit.org/show_bug.cgi?id=175037
     loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2);
     slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, Butterfly::offsetOfPublicLength())));
     loadDouble(BaseIndex(regT2, regT1, TimesEight), fpRegT0);
@@ -185,6 +187,8 @@ JIT::JumpList JIT::emitContiguousLoad(Instruction*, PatchableJump& badType, Inde
     JumpList slowCases;
     
     badType = patchableBranch32(NotEqual, regT2, TrustedImm32(expectedShape));
+    // FIXME: Should do caging.
+    // https://bugs.webkit.org/show_bug.cgi?id=175037
     loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2);
     slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, Butterfly::offsetOfPublicLength())));
     load64(BaseIndex(regT2, regT1, TimesEight), regT0);
@@ -200,6 +204,8 @@ JIT::JumpList JIT::emitArrayStorageLoad(Instruction*, PatchableJump& badType)
     add32(TrustedImm32(-ArrayStorageShape), regT2, regT3);
     badType = patchableBranch32(Above, regT3, TrustedImm32(SlowPutArrayStorageShape - ArrayStorageShape));
 
+    // FIXME: Should do caging.
+    // https://bugs.webkit.org/show_bug.cgi?id=175037
     loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2);
     slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, ArrayStorage::vectorLengthOffset())));
 
@@ -347,6 +353,8 @@ JIT::JumpList JIT::emitGenericContiguousPutByVal(Instruction* currentInstruction
 
     badType = patchableBranch32(NotEqual, regT2, TrustedImm32(indexingShape));
     
+    // FIXME: Should do caging.
+    // https://bugs.webkit.org/show_bug.cgi?id=175037
     loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2);
     Jump outOfBounds = branch32(AboveOrEqual, regT1, Address(regT2, Butterfly::offsetOfPublicLength()));
 
@@ -402,6 +410,8 @@ JIT::JumpList JIT::emitArrayStoragePutByVal(Instruction* currentInstruction, Pat
     JumpList slowCases;
     
     badType = patchableBranch32(NotEqual, regT2, TrustedImm32(ArrayStorageShape));
+    // FIXME: Should do caging.
+    // https://bugs.webkit.org/show_bug.cgi?id=175037
     loadPtr(Address(regT0, JSObject::butterflyOffset()), regT2);
     slowCases.append(branch32(AboveOrEqual, regT1, Address(regT2, ArrayStorage::vectorLengthOffset())));
 
@@ -913,6 +923,8 @@ void JIT::emit_op_get_from_scope(Instruction* currentInstruction)
                 abortWithReason(JITOffsetIsNotOutOfLine);
                 isOutOfLine.link(this);
             }
+            // FIXME: Should do caging.
+            // https://bugs.webkit.org/show_bug.cgi?id=175037
             loadPtr(Address(base, JSObject::butterflyOffset()), scratch);
             neg32(offset);
             signExtend32ToPtr(offset, offset);
@@ -1054,6 +1066,8 @@ void JIT::emit_op_put_to_scope(Instruction* currentInstruction)
             emitLoadWithStructureCheck(scope, structureSlot); // Structure check covers var injection.
             emitGetVirtualRegister(value, regT2);
             
+            // FIXME: Should do caging.
+            // https://bugs.webkit.org/show_bug.cgi?id=175037
             loadPtr(Address(regT0, JSObject::butterflyOffset()), regT0);
             loadPtr(operandSlot, regT1);
             negPtr(regT1);
@@ -1575,6 +1589,8 @@ JIT::JumpList JIT::emitIntTypedArrayGetByVal(Instruction*, PatchableJump& badTyp
     load8(Address(base, JSCell::typeInfoTypeOffset()), scratch);
     badType = patchableBranch32(NotEqual, scratch, TrustedImm32(typeForTypedArrayType(type)));
     slowCases.append(branch32(AboveOrEqual, property, Address(base, JSArrayBufferView::offsetOfLength())));
+    // FIXME: Should do caging.
+    // https://bugs.webkit.org/show_bug.cgi?id=175037
     loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), scratch);
     
     switch (elementSize(type)) {
@@ -1646,6 +1662,8 @@ JIT::JumpList JIT::emitFloatTypedArrayGetByVal(Instruction*, PatchableJump& badT
     load8(Address(base, JSCell::typeInfoTypeOffset()), scratch);
     badType = patchableBranch32(NotEqual, scratch, TrustedImm32(typeForTypedArrayType(type)));
     slowCases.append(branch32(AboveOrEqual, property, Address(base, JSArrayBufferView::offsetOfLength())));
+    // FIXME: Should do caging.
+    // https://bugs.webkit.org/show_bug.cgi?id=175037
     loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), scratch);
     
     switch (elementSize(type)) {
@@ -1713,6 +1731,8 @@ JIT::JumpList JIT::emitIntTypedArrayPutByVal(Instruction* currentInstruction, Pa
     
     // We would be loading this into base as in get_by_val, except that the slow
     // path expects the base to be unclobbered.
+    // FIXME: Should do caging.
+    // https://bugs.webkit.org/show_bug.cgi?id=175037
     loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), lateScratch);
     
     if (isClamped(type)) {
@@ -1796,6 +1816,8 @@ JIT::JumpList JIT::emitFloatTypedArrayPutByVal(Instruction* currentInstruction,
     
     // We would be loading this into base as in get_by_val, except that the slow
     // path expects the base to be unclobbered.
+    // FIXME: Should do caging.
+    // https://bugs.webkit.org/show_bug.cgi?id=175037
     loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), lateScratch);
     
     switch (elementSize(type)) {
index 6617088..9f093ce 100644 (file)
@@ -78,6 +78,7 @@
 #include "TestRunnerUtils.h"
 #include "TypeProfiler.h"
 #include "TypeProfilerLog.h"
+#include "TypedArrayInlines.h"
 #include "WasmContext.h"
 #include "WasmFaultSignalHandler.h"
 #include "WasmMemory.h"
@@ -984,6 +985,7 @@ void Element::finishCreation(VM& vm, Root* root)
 }
 
 static bool fillBufferWithContentsOfFile(const String& fileName, Vector<char>& buffer);
+static RefPtr<Uint8Array> fillBufferWithContentsOfFile(const String& fileName);
 
 class CommandLine;
 class GlobalObject;
@@ -1708,6 +1710,32 @@ static void convertShebangToJSComment(Vector<char>& buffer)
     }
 }
 
+static RefPtr<Uint8Array> fillBufferWithContentsOfFile(FILE* file)
+{
+    fseek(file, 0, SEEK_END);
+    long fileSize = ftell(file);
+    if (fileSize < 0)
+        return nullptr;
+    fseek(file, 0, SEEK_SET);
+    size_t bufferCapacity = static_cast<size_t>(fileSize);
+    RefPtr<Uint8Array> result = Uint8Array::create(bufferCapacity);
+    if (!result)
+        return nullptr;
+    size_t readSize = fread(result->data(), 1, bufferCapacity, file);
+    if (readSize != bufferCapacity)
+        return nullptr;
+    return result;
+}
+
+static RefPtr<Uint8Array> fillBufferWithContentsOfFile(const String& fileName)
+{
+    FILE* f = fopen(fileName.utf8().data(), "rb");
+    if (!f) {
+        fprintf(stderr, "Could not open file: %s\n", fileName.utf8().data());
+        return nullptr;
+    }
+
+    RefPtr<Uint8Array> result = fillBufferWithContentsOfFile(f);
+    fclose(f);
+
+    return result;
+}
+
 static bool fillBufferWithContentsOfFile(FILE* file, Vector<char>& buffer)
 {
     // We might have injected "use strict"; at the top.
@@ -2276,16 +2304,15 @@ EncodedJSValue JSC_HOST_CALL functionReadFile(ExecState* exec)
         isBinary = true;
     }
 
-    Vector<char> content;
-    if (!fillBufferWithContentsOfFile(fileName, content))
+    RefPtr<Uint8Array> content = fillBufferWithContentsOfFile(fileName);
+    if (!content)
         return throwVMError(exec, scope, "Could not open file.");
 
     if (!isBinary)
-        return JSValue::encode(jsString(exec, stringFromUTF(content)));
+        return JSValue::encode(jsString(exec, String::fromUTF8WithLatin1Fallback(content->data(), content->length())));
 
     Structure* structure = exec->lexicalGlobalObject()->typedArrayStructure(TypeUint8);
-    auto length = content.size();
-    JSObject* result = createUint8TypedArray(exec, structure, ArrayBuffer::createFromBytes(content.releaseBuffer().leakPtr(), length, [] (void* p) { fastFree(p); }), 0, length);
+    JSObject* result = JSUint8Array::create(vm, structure, WTFMove(content));
     RETURN_IF_EXCEPTION(scope, encodedJSValue());
 
     return JSValue::encode(result);
@@ -3775,6 +3802,12 @@ int runJSC(CommandLine options, bool isWorker, const Func& func)
     return result;
 }
 
+static void gigacageDisabled(void*)
+{
+    dataLog("Gigacage disabled! Aborting.\n");
+    UNREACHABLE_FOR_PLATFORM();
+}
+
 int jscmain(int argc, char** argv)
 {
     // Need to override and enable restricted options before we start parsing options below.
@@ -3793,6 +3826,8 @@ int jscmain(int argc, char** argv)
 #if ENABLE(WEBASSEMBLY)
     JSC::Wasm::enableFastMemory();
 #endif
+    if (GIGACAGE_ENABLED)
+        Gigacage::addDisableCallback(gigacageDisabled, nullptr);
 
     int result;
     result = runJSC(
index 2a8103e..4f16652 100644 (file)
@@ -1198,6 +1198,8 @@ _llint_op_is_object:
 
 macro loadPropertyAtVariableOffset(propertyOffsetAsInt, objectAndStorage, value)
     bilt propertyOffsetAsInt, firstOutOfLineOffset, .isInline
+    # FIXME: Should do caging
+    # https://bugs.webkit.org/show_bug.cgi?id=175036
     loadp JSObject::m_butterfly[objectAndStorage], objectAndStorage
     negi propertyOffsetAsInt
     sxi2q propertyOffsetAsInt, propertyOffsetAsInt
@@ -1211,6 +1213,8 @@ end
 
 macro storePropertyAtVariableOffset(propertyOffsetAsInt, objectAndStorage, value)
     bilt propertyOffsetAsInt, firstOutOfLineOffset, .isInline
+    # FIXME: Should do caging
+    # https://bugs.webkit.org/show_bug.cgi?id=175036
     loadp JSObject::m_butterfly[objectAndStorage], objectAndStorage
     negi propertyOffsetAsInt
     sxi2q propertyOffsetAsInt, propertyOffsetAsInt
@@ -1287,6 +1291,8 @@ _llint_op_get_array_length:
     btiz t2, IsArray, .opGetArrayLengthSlow
     btiz t2, IndexingShapeMask, .opGetArrayLengthSlow
     loadisFromInstruction(1, t1)
+    # FIXME: Should do caging
+    # https://bugs.webkit.org/show_bug.cgi?id=175036
     loadp JSObject::m_butterfly[t3], t0
     loadi -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0], t0
     bilt t0, 0, .opGetArrayLengthSlow
@@ -1470,6 +1476,8 @@ _llint_op_get_by_val:
     loadisFromInstruction(3, t3)
     loadConstantOrVariableInt32(t3, t1, .opGetByValSlow)
     sxi2q t1, t1
+    # FIXME: Should do caging
+    # https://bugs.webkit.org/show_bug.cgi?id=175036
     loadp JSObject::m_butterfly[t0], t3
     andi IndexingShapeMask, t2
     bieq t2, Int32Shape, .opGetByValIsContiguous
@@ -1517,6 +1525,8 @@ _llint_op_get_by_val:
     bia t2, LastArrayType - FirstArrayType, .opGetByValSlow
     
     # Sweet, now we know that we have a typed array. Do some basic things now.
+    # FIXME: Should do caging
+    # https://bugs.webkit.org/show_bug.cgi?id=175036
     loadp JSArrayBufferView::m_vector[t0], t3
     biaeq t1, JSArrayBufferView::m_length[t0], .opGetByValSlow
     
@@ -1608,6 +1618,8 @@ macro putByVal(slowPath)
     loadisFromInstruction(2, t0)
     loadConstantOrVariableInt32(t0, t3, .opPutByValSlow)
     sxi2q t3, t3
+    # FIXME: Should do caging
+    # https://bugs.webkit.org/show_bug.cgi?id=175036
     loadp JSObject::m_butterfly[t1], t0
     andi IndexingShapeMask, t2
     bineq t2, Int32Shape, .opPutByValNotInt32
index 41d6949..19191d9 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2009, 2013, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2009-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -29,6 +29,7 @@
 #include "ArrayBufferNeuteringWatchpoint.h"
 #include "JSArrayBufferView.h"
 #include "JSCInlines.h"
+#include <wtf/Gigacage.h>
 
 namespace JSC {
 
@@ -102,20 +103,20 @@ void ArrayBufferContents::tryAllocate(unsigned numElements, unsigned elementByte
             return;
         }
     }
-    bool allocationSucceeded = false;
-    if (policy == ZeroInitialize)
-        allocationSucceeded = WTF::tryFastCalloc(numElements, elementByteSize).getValue(m_data);
-    else {
-        ASSERT(policy == DontInitialize);
-        allocationSucceeded = WTF::tryFastMalloc(numElements * elementByteSize).getValue(m_data);
-    }
-
-    if (allocationSucceeded) {
-        m_sizeInBytes = numElements * elementByteSize;
-        m_destructor = [] (void* p) { fastFree(p); };
+    size_t size = static_cast<size_t>(numElements) * static_cast<size_t>(elementByteSize);
+    if (!size)
+        size = 1; // Make sure malloc actually allocates something, but not too much. We use null to mean that the buffer is neutered.
+    m_data = Gigacage::tryMalloc(size);
+    if (!m_data) {
+        reset();
         return;
     }
-    reset();
+    
+    if (policy == ZeroInitialize)
+        memset(m_data, 0, size);
+
+    m_sizeInBytes = numElements * elementByteSize;
+    m_destructor = [] (void* p) { Gigacage::free(p); };
 }
 
 void ArrayBufferContents::makeShared()
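Note the asymmetry the rewrite preserves: size is bumped to 1 before Gigacage::tryMalloc so that an empty-but-live buffer still has a non-null m_data, while m_sizeInBytes records the true length. A null m_data is reserved for neutered buffers. A sketch of the invariant (not code from the patch):

    ArrayBufferContents contents;
    contents.tryAllocate(0, 1, ArrayBufferContents::ZeroInitialize);
    ASSERT(contents.m_data);          // non-null: caged 1-byte allocation
    ASSERT(!contents.m_sizeInBytes);  // the logical length is still zero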
@@ -180,13 +181,26 @@ Ref<ArrayBuffer> ArrayBuffer::create(ArrayBufferContents&& contents)
     return adoptRef(*new ArrayBuffer(WTFMove(contents)));
 }
 
+// FIXME: We can only use this if the memory comes from the cage.
+// Currently this is only used from:
+// - JSGenericTypedArrayView<>::slowDownAndWasteMemory. But in that case, the memory should have already come
+//   from the cage.
 Ref<ArrayBuffer> ArrayBuffer::createAdopted(const void* data, unsigned byteLength)
 {
-    return createFromBytes(data, byteLength, [] (void* p) { fastFree(p); });
+    return createFromBytes(data, byteLength, [] (void* p) { Gigacage::free(p); });
 }
 
+// FIXME: We can only use this if the memory comes from the cage.
+// Currently this is only used from:
+// - The C API. We could support that by either having the system switch to a mode where typed arrays are no
+//   longer caged, or we could introduce a new set of typed array types that are uncaged and get accessed
+//   differently.
+// - WebAssembly. Wasm should allocate from the cage.
 Ref<ArrayBuffer> ArrayBuffer::createFromBytes(const void* data, unsigned byteLength, ArrayBufferDestructorFunction&& destructor)
 {
+    if (!Gigacage::isCaged(data) && data && byteLength)
+        Gigacage::disableGigacage();
+    
     ArrayBufferContents contents(const_cast<void*>(data), byteLength, WTFMove(destructor));
     return create(WTFMove(contents));
 }
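Gigacage::isCaged (from the wtf/Gigacage.h included above) presumably reduces to a round trip through the caging arithmetic; a sketch under that assumption, with a hypothetical name:

    // Sketch: a pointer is caged iff rebasing it into the cage is a no-op.
    inline bool isCagedSketch(const void* ptr)
    {
        return Gigacage::caged(const_cast<void*>(ptr)) == ptr;
    }

Handing createFromBytes an uncaged buffer therefore flips the whole process into uncaged mode via Gigacage::disableGigacage(), which in turn fires every VM's gigacageEnabled watchpoint (see VM::gigacageDisabled below).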
@@ -204,7 +218,7 @@ RefPtr<ArrayBuffer> ArrayBuffer::tryCreate(ArrayBuffer& other)
 RefPtr<ArrayBuffer> ArrayBuffer::tryCreate(const void* source, unsigned byteLength)
 {
     ArrayBufferContents contents;
-    contents.tryAllocate(byteLength, 1, ArrayBufferContents::ZeroInitialize);
+    contents.tryAllocate(byteLength, 1, ArrayBufferContents::DontInitialize);
     if (!contents.m_data)
         return nullptr;
     return createInternal(WTFMove(contents), source, byteLength);
index 6a7f687..5d7e6de 100644 (file)
@@ -122,6 +122,9 @@ private:
 
     union {
         struct {
+            // FIXME: vectorLength should be least significant, so that it's really hard to craft a pointer by
+            // mucking with the butterfly.
+            // https://bugs.webkit.org/show_bug.cgi?id=174927
             uint32_t publicLength; // The meaning of this field depends on the array type, but for all JSArrays we rely on this being the publicly visible length (array.length).
             uint32_t vectorLength; // The length of the indexed property storage. The actual size of the storage depends on this, and the type.
         } lengths;
index 9ef143a..3bd8eba 100644 (file)
@@ -40,7 +40,6 @@
 #include "Options.h"
 #include "StructureIDTable.h"
 #include "SuperSampler.h"
-#include "WasmMemory.h"
 #include "WasmThunks.h"
 #include "WriteBarrier.h"
 #include <mutex>
@@ -60,9 +59,6 @@ void initializeThreading()
     std::call_once(initializeThreadingOnceFlag, []{
         WTF::initializeThreading();
         Options::initialize();
-#if ENABLE(WEBASSEMBLY)
-        Wasm::Memory::initializePreallocations();
-#endif
 #if ENABLE(WRITE_BARRIER_PROFILING)
         WriteBarrierCounters::initialize();
 #endif
index 79ec803..6798da5 100644 (file)
@@ -29,6 +29,7 @@
 #include "JSCInlines.h"
 #include "TypeError.h"
 #include "TypedArrayController.h"
+#include <wtf/Gigacage.h>
 
 namespace JSC {
 
index 31114b3..30183dd 100644 (file)
@@ -30,6 +30,7 @@
 #include "JSCInlines.h"
 #include "TypeError.h"
 #include "TypedArrayController.h"
+#include <wtf/Gigacage.h>
 
 namespace JSC {
 
@@ -88,13 +89,12 @@ JSArrayBufferView::ConstructionContext::ConstructionContext(
     if (length > static_cast<unsigned>(INT_MAX) / elementSize)
         return;
     
-    if (mode == ZeroFill) {
-        if (!tryFastCalloc(length, elementSize).getValue(m_vector))
-            return;
-    } else {
-        if (!tryFastMalloc(length * elementSize).getValue(m_vector))
-            return;
-    }
+    size_t size = static_cast<size_t>(length) * static_cast<size_t>(elementSize);
+    m_vector = Gigacage::tryMalloc(size);
+    if (!m_vector)
+        return;
+    if (mode == ZeroFill)
+        memset(m_vector, 0, size);
     
     vm.heap.reportExtraMemoryAllocated(static_cast<size_t>(length) * elementSize);
     
@@ -192,7 +192,7 @@ void JSArrayBufferView::finalize(JSCell* cell)
     JSArrayBufferView* thisObject = static_cast<JSArrayBufferView*>(cell);
     ASSERT(thisObject->m_mode == OversizeTypedArray || thisObject->m_mode == WastefulTypedArray);
     if (thisObject->m_mode == OversizeTypedArray)
-        fastFree(thisObject->m_vector.get());
+        Gigacage::free(thisObject->m_vector.get());
 }
 
 JSArrayBuffer* JSArrayBufferView::unsharedJSBuffer(ExecState* exec)
index ba6c44a..92edee3 100644 (file)
@@ -156,6 +156,8 @@ void JSLock::didAcquireLock()
 
     // Note: everything below must come after addCurrentThread().
     m_vm->traps().notifyGrabAllLocks();
+    
+    m_vm->fireGigacageEnabledIfNecessary();
 
 #if ENABLE(SAMPLING_PROFILER)
     if (SamplingProfiler* samplingProfiler = m_vm->samplingProfiler())
index 35060cc..6501277 100644 (file)
@@ -1045,6 +1045,8 @@ private:
     PropertyOffset prepareToPutDirectWithoutTransition(VM&, PropertyName, unsigned attributes, StructureID, Structure*);
 
 protected:
+    // FIXME: This should do caging.
+    // https://bugs.webkit.org/show_bug.cgi?id=175039
     AuxiliaryBarrier<Butterfly*> m_butterfly;
 #if USE(JSVALUE32_64)
 private:
index 430eacf..38735d5 100644 (file)
@@ -405,10 +405,8 @@ static void recomputeDependentOptions()
     if (!Options::useJIT())
         Options::useWebAssembly() = false;
 
-    if (!Options::useWebAssembly()) {
-        Options::webAssemblyFastMemoryPreallocateCount() = 0;
+    if (!Options::useWebAssembly())
         Options::useWebAssemblyFastTLS() = false;
-    }
     
     if (Options::dumpDisassembly()
         || Options::dumpDFGDisassembly()
index 6737f75..fefe14e 100644 (file)
@@ -460,9 +460,10 @@ typedef const char* optionString;
     \
     /* FIXME: enable fast memories on iOS and pre-allocate them. https://bugs.webkit.org/show_bug.cgi?id=170774 */ \
     v(bool, useWebAssemblyFastMemory, !isIOS(), Normal, "If true, we will try to use a 32-bit address space with a signal handler to bounds check wasm memory.") \
+    v(bool, logWebAssemblyMemory, false, Normal, "If true, log WebAssembly fast memory allocations and the memory manager's state.") \
     v(unsigned, webAssemblyFastMemoryRedzonePages, 128, Normal, "WebAssembly fast memories use 4GiB virtual allocations, plus a redzone (counted as multiple of 64KiB WebAssembly pages) at the end to catch reg+imm accesses which exceed 32-bit, anything beyond the redzone is explicitly bounds-checked") \
     v(bool, crashIfWebAssemblyCantFastMemory, false, Normal, "If true, we will crash if we can't obtain fast memory for wasm.") \
-    v(unsigned, webAssemblyFastMemoryPreallocateCount, 0, Normal, "WebAssembly fast memories can be pre-allocated at program startup and remain cached to avoid fragmentation leading to bounds-checked memory. This number is an upper bound on initial allocation as well as total count of fast memories. Zero means no pre-allocation, no caching, and no limit to the number of runtime allocations.") \
+    v(unsigned, maxNumWebAssemblyFastMemories, 10, Normal, "Maximum number of WebAssembly fast memories that may be live at once.") \
     v(bool, useWebAssemblyFastTLS, true, Normal, "If true, we will try to use fast thread-local storage if available on the current platform.") \
     v(bool, useFastTLSForWasmContext, true, Normal, "If true (and fast TLS is enabled), we will store context in fast TLS. If false, we will pin it to a register.") \
     v(bool, useCallICsForWebAssemblyToJSCalls, true, Normal, "If true, we will use CallLinkInfo to inline cache Wasm to JS calls.") \
index f1df980..ba1a122 100644 (file)
@@ -86,6 +86,8 @@ private:
     
     uint32_t m_length;
     bool m_locked; // Being locked means that there are multiple references to this object and none of them expect to see the others' modifications. This means that modifications need to make a copy first.
+    // FIXME: Allocate this in the primitive gigacage
+    // https://bugs.webkit.org/show_bug.cgi?id=174921
     std::unique_ptr<ScopeOffset[]> m_arguments;
 };
 
index 1251024..360fe64 100644 (file)
@@ -167,6 +167,7 @@ VM::VM(VMType vmType, HeapType heapType)
     , destructibleCellSpace("Destructible JSCell", heap, AllocatorAttributes(NeedsDestruction, HeapCell::JSCell))
     , stringSpace("JSString", heap)
     , destructibleObjectSpace("JSDestructibleObject", heap)
+    , eagerlySweptDestructibleObjectSpace("Eagerly Swept JSDestructibleObject", heap)
     , segmentedVariableObjectSpace("JSSegmentedVariableObjectSpace", heap)
 #if ENABLE(WEBASSEMBLY)
     , webAssemblyCodeBlockSpace("JSWebAssemblyCodeBlockSpace", heap)
@@ -207,6 +208,7 @@ VM::VM(VMType vmType, HeapType heapType)
     , m_codeCache(std::make_unique<CodeCache>())
     , m_builtinExecutables(std::make_unique<BuiltinExecutables>(*this))
     , m_typeProfilerEnabledCount(0)
+    , m_gigacageEnabled(IsWatched)
     , m_controlFlowProfilerEnabledCount(0)
     , m_shadowChicken(std::make_unique<ShadowChicken>())
 {
@@ -284,6 +286,8 @@ VM::VM(VMType vmType, HeapType heapType)
 #if ENABLE(JIT)
     initializeHostCallReturnValue(); // This is needed to convince the linker not to drop host call return support.
 #endif
+    
+    Gigacage::addDisableCallback(gigacageDisabledCallback, this);
 
     heap.notifyIsSafeToCollect();
     
@@ -338,6 +342,7 @@ VM::VM(VMType vmType, HeapType heapType)
 
 VM::~VM()
 {
+    Gigacage::removeDisableCallback(gigacageDisabledCallback, this);
     promiseDeferredTimer->stopRunningTasks();
 #if ENABLE(WEBASSEMBLY)
     if (Wasm::existingWorklistOrNull())
@@ -406,6 +411,23 @@ VM::~VM()
 #endif
 }
 
+void VM::gigacageDisabledCallback(void* argument)
+{
+    static_cast<VM*>(argument)->gigacageDisabled();
+}
+
+void VM::gigacageDisabled()
+{
+    if (m_apiLock->currentThreadIsHoldingLock()) {
+        m_gigacageEnabled.fireAll(*this, "Gigacage disabled");
+        return;
+    }
+    // This is totally racy, and that's OK. The point is, it's up to the user to ensure that they pass the
+    // uncaged buffer in a nicely synchronized manner.
+    m_needToFireGigacageEnabled = true;
+}
+
 void VM::setLastStackTop(void* lastStackTop)
 { 
     m_lastStackTop = lastStackTop;
index ee232a7..aef8276 100644 (file)
@@ -36,6 +36,7 @@
 #include "ExceptionEventLocation.h"
 #include "ExecutableAllocator.h"
 #include "FunctionHasExecutedCache.h"
+#include "GigacageSubspace.h"
 #include "Heap.h"
 #include "Intrinsic.h"
 #include "JITThunks.h"
@@ -286,13 +287,14 @@ private:
 public:
     Heap heap;
     
-    Subspace auxiliarySpace;
+    GigacageSubspace auxiliarySpace;
     
     // Whenever possible, use subspaceFor<CellType>(vm) to get one of these subspaces.
     Subspace cellSpace;
     Subspace destructibleCellSpace;
     JSStringSubspace stringSpace;
     JSDestructibleObjectSubspace destructibleObjectSpace;
+    JSDestructibleObjectSubspace eagerlySweptDestructibleObjectSpace;
     JSSegmentedVariableObjectSubspace segmentedVariableObjectSpace;
 #if ENABLE(WEBASSEMBLY)
     JSWebAssemblyCodeBlockSubspace webAssemblyCodeBlockSpace;
@@ -523,6 +525,14 @@ public:
 
     void* lastStackTop() { return m_lastStackTop; }
     void setLastStackTop(void*);
+    
+    void fireGigacageEnabledIfNecessary()
+    {
+        if (m_needToFireGigacageEnabled) {
+            m_needToFireGigacageEnabled = false;
+            m_gigacageEnabled.fireAll(*this, "Gigacage disabled asynchronously");
+        }
+    }
 
     JSValue hostCallReturnValue;
     unsigned varargsLength;
@@ -624,6 +634,8 @@ public:
     
     // FIXME: Use AtomicString once it got merged with Identifier.
     JS_EXPORT_PRIVATE void addImpureProperty(const String&);
+    
+    InlineWatchpointSet& gigacageEnabled() { return m_gigacageEnabled; }
 
     BuiltinExecutables* builtinExecutables() { return m_builtinExecutables.get(); }
 
@@ -730,6 +742,9 @@ private:
 #if ENABLE(EXCEPTION_SCOPE_VERIFICATION)
     void verifyExceptionCheckNeedIsSatisfied(unsigned depth, ExceptionEventLocation&);
 #endif
+    
+    static void gigacageDisabledCallback(void*);
+    void gigacageDisabled();
 
 #if ENABLE(ASSEMBLER)
     bool m_canUseAssembler;
@@ -774,6 +789,8 @@ private:
     std::unique_ptr<TypeProfiler> m_typeProfiler;
     std::unique_ptr<TypeProfilerLog> m_typeProfilerLog;
     unsigned m_typeProfilerEnabledCount;
+    bool m_needToFireGigacageEnabled { false };
+    InlineWatchpointSet m_gigacageEnabled;
     FunctionHasExecutedCache m_functionHasExecutedCache;
     std::unique_ptr<ControlFlowProfiler> m_controlFlowProfiler;
     unsigned m_controlFlowProfilerEnabledCount;
index 34826b0..00ac3c1 100644 (file)
@@ -358,8 +358,6 @@ B3IRGenerator::B3IRGenerator(const ModuleInformation& info, Procedure& procedure
             case MemoryMode::Signaling:
                 ASSERT_UNUSED(pinnedGPR, InvalidGPRReg == pinnedGPR);
                 break;
-            case MemoryMode::NumberOfMemoryModes:
-                ASSERT_NOT_REACHED();
             }
             this->emitExceptionCheck(jit, ExceptionType::OutOfBoundsMemoryAccess);
         });
@@ -637,9 +635,6 @@ inline Value* B3IRGenerator::emitCheckAndPreparePointer(ExpressionType pointer,
             m_currentBlock->appendNew<WasmBoundsCheckValue>(m_proc, origin(), pointer, sizeOfOperation + offset - 1, maximum);
         }
         break;
-
-    case MemoryMode::NumberOfMemoryModes:
-        RELEASE_ASSERT_NOT_REACHED();
     }
     pointer = m_currentBlock->appendNew<Value>(m_proc, ZExt32, origin(), pointer);
     return m_currentBlock->appendNew<WasmAddressValue>(m_proc, origin(), pointer, m_memoryBaseGPR);
index 323b868..70fefb0 100644 (file)
@@ -125,8 +125,6 @@ bool CodeBlock::isSafeToRun(MemoryMode memoryMode)
         // Its memory, even if empty, absolutely must also be in Signaling mode
         // because the page protection detects out-of-bounds accesses.
         return memoryMode == Wasm::MemoryMode::Signaling;
-    case Wasm::MemoryMode::NumberOfMemoryModes:
-        break;
     }
     RELEASE_ASSERT_NOT_REACHED();
     return false;
index ed8c5d7..db7731f 100644 (file)
 
 #include "VM.h"
 #include "WasmThunks.h"
-
-#include <atomic>
-#include <wtf/MonotonicTime.h>
+#include <wtf/Gigacage.h>
+#include <wtf/Lock.h>
 #include <wtf/Platform.h>
 #include <wtf/PrintStream.h>
-#include <wtf/VMTags.h>
+#include <wtf/RAMSize.h>
 
 namespace JSC { namespace Wasm {
 
@@ -44,308 +43,198 @@ namespace JSC { namespace Wasm {
 // FIXME: Limit slow memory size. https://bugs.webkit.org/show_bug.cgi?id=170825
 
 namespace {
+
 constexpr bool verbose = false;
 
 NEVER_INLINE NO_RETURN_DUE_TO_CRASH void webAssemblyCouldntGetFastMemory() { CRASH(); }
-NEVER_INLINE NO_RETURN_DUE_TO_CRASH void webAssemblyCouldntUnmapMemory() { CRASH(); }
-NEVER_INLINE NO_RETURN_DUE_TO_CRASH void webAssemblyCouldntUnprotectMemory() { CRASH(); }
-
-void* mmapBytes(size_t bytes)
-{
-    void* location = mmap(nullptr, bytes, PROT_NONE, MAP_PRIVATE | MAP_ANON, VM_TAG_FOR_WEBASSEMBLY_MEMORY, 0);
-    return location == MAP_FAILED ? nullptr : location;
-}
 
-void munmapBytes(void* memory, size_t size)
-{
-    if (UNLIKELY(munmap(memory, size)))
-        webAssemblyCouldntUnmapMemory();
-}
-
-void zeroAndUnprotectBytes(void* start, size_t bytes)
-{
-    if (bytes) {
-        dataLogLnIf(verbose, "Zeroing and unprotecting ", bytes, " from ", RawPointer(start));
-        // FIXME: We could be smarter about memset / mmap / madvise. Here, we may not need to act synchronously, or maybe we can memset+unprotect smaller ranges of memory (which would pay off if not all the writable memory was actually physically backed: memset forces physical backing only to unprotect it right after). https://bugs.webkit.org/show_bug.cgi?id=170343
-        memset(start, 0, bytes);
-        if (UNLIKELY(mprotect(start, bytes, PROT_NONE)))
-            webAssemblyCouldntUnprotectMemory();
+struct MemoryResult {
+    enum Kind {
+        Success,
+        SuccessAndAsyncGC,
+        SyncGCAndRetry
+    };
+    
+    static const char* toString(Kind kind)
+    {
+        switch (kind) {
+        case Success:
+            return "Success";
+        case SuccessAndAsyncGC:
+            return "SuccessAndAsyncGC";
+        case SyncGCAndRetry:
+            return "SyncGCAndRetry";
+        }
+        RELEASE_ASSERT_NOT_REACHED();
+        return nullptr;
     }
-}
-
-// Allocate fast memories very early at program startup and cache them. The fast memories use significant amounts of virtual uncommitted address space, reducing the likelihood that we'll obtain any if we wait to allocate them.
-// We still try to allocate fast memories at runtime, and will cache them when relinquished up to the preallocation limit.
-// Note that this state is per-process, not per-VM.
-// We use simple static globals which don't allocate to avoid early fragmentation and to keep management to the bare minimum. We avoid locking because fast memories use segfault signal handling to handle out-of-bounds accesses. This requires identifying if the faulting address is in a fast memory range, which should avoid acquiring a lock lest the actual signal was caused by this very code while it already held the lock.
-// Speed and contention don't really matter here, but simplicity does. We therefore use straightforward FIFOs for our cache, and linear traversal for the list of currently active fast memories.
-constexpr size_t fastMemoryCacheHardLimit { 16 };
-constexpr size_t fastMemoryAllocationSoftLimit { 32 }; // Prevents filling up the virtual address space.
-static_assert(fastMemoryAllocationSoftLimit >= fastMemoryCacheHardLimit, "The cache shouldn't be bigger than the total number we'll ever allocate");
-size_t fastMemoryPreallocateCount { 0 };
-std::atomic<void*> fastMemoryCache[fastMemoryCacheHardLimit] = { ATOMIC_VAR_INIT(nullptr) };
-std::atomic<void*> currentlyActiveFastMemories[fastMemoryAllocationSoftLimit] = { ATOMIC_VAR_INIT(nullptr) };
-std::atomic<size_t> currentlyAllocatedFastMemories = ATOMIC_VAR_INIT(0);
-std::atomic<size_t> observedMaximumFastMemory = ATOMIC_VAR_INIT(0);
-std::atomic<size_t> currentSlowMemoryCapacity = ATOMIC_VAR_INIT(0);
-
-size_t fastMemoryAllocatedBytesSoftLimit()
-{
-    return fastMemoryAllocationSoftLimit * Memory::fastMappedBytes();
-}
-
-void* tryGetCachedFastMemory()
-{
-    for (unsigned idx = 0; idx < fastMemoryPreallocateCount; ++idx) {
-        if (void* previous = fastMemoryCache[idx].exchange(nullptr, std::memory_order_acq_rel))
-            return previous;
+    
+    MemoryResult() { }
+    
+    MemoryResult(void* basePtr, Kind kind)
+        : basePtr(basePtr)
+        , kind(kind)
+    {
     }
-    return nullptr;
-}
-
-bool tryAddToCachedFastMemory(void* memory)
-{
-    for (unsigned i = 0; i < fastMemoryPreallocateCount; ++i) {
-        void* expected = nullptr;
-        if (fastMemoryCache[i].compare_exchange_strong(expected, memory, std::memory_order_acq_rel)) {
-            dataLogLnIf(verbose, "Cached fast memory ", RawPointer(memory));
-            return true;
-        }
+    
+    void dump(PrintStream& out) const
+    {
+        out.print("{basePtr = ", RawPointer(basePtr), ", kind = ", toString(kind), "}");
     }
-    return false;
-}
-
-bool tryAddToCurrentlyActiveFastMemories(void* memory)
-{
-    for (size_t idx = 0; idx < fastMemoryAllocationSoftLimit; ++idx) {
-        void* expected = nullptr;
-        if (currentlyActiveFastMemories[idx].compare_exchange_strong(expected, memory, std::memory_order_acq_rel))
-            return true;
+    
+    void* basePtr;
+    Kind kind;
+};
+
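+// Process-wide bookkeeping for wasm memories: the set of fast-memory virtual
+// reservations (capped at Options::maxNumWebAssemblyFastMemories()) and the total
+// physical bytes committed across all wasm memories.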
+class MemoryManager {
+public:
+    MemoryManager()
+        : m_maxCount(Options::maxNumWebAssemblyFastMemories())
+    {
     }
-    return false;
-}
-
-void removeFromCurrentlyActiveFastMemories(void* memory)
-{
-    for (size_t idx = 0; idx < fastMemoryAllocationSoftLimit; ++idx) {
-        void* expected = memory;
-        if (currentlyActiveFastMemories[idx].compare_exchange_strong(expected, nullptr, std::memory_order_acq_rel))
-            return;
+    
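+    // Reserves Memory::fastMappedBytes() of uncommitted virtual address space in the
+    // Gigacage. Once half of the allowed memories are live, callers are advised to
+    // schedule an asynchronous full GC.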
+    MemoryResult tryAllocateVirtualPages()
+    {
+        MemoryResult result = [&] {
+            auto holder = holdLock(m_lock);
+            if (m_memories.size() >= m_maxCount)
+                return MemoryResult(nullptr, MemoryResult::SyncGCAndRetry);
+            
+            void* result = Gigacage::tryAllocateVirtualPages(Memory::fastMappedBytes());
+            if (!result)
+                return MemoryResult(nullptr, MemoryResult::SyncGCAndRetry);
+            
+            m_memories.append(result);
+            
+            return MemoryResult(
+                result,
+                m_memories.size() >= m_maxCount / 2 ? MemoryResult::SuccessAndAsyncGC : MemoryResult::Success);
+        }();
+        
+        if (Options::logWebAssemblyMemory())
+            dataLog("Allocated virtual: ", result, "; state: ", *this, "\n");
+        
+        return result;
     }
-    RELEASE_ASSERT_NOT_REACHED();
-}
-
-void* tryGetFastMemory(VM& vm)
-{
-    void* memory = nullptr;
-
-    if (LIKELY(Options::useWebAssemblyFastMemory())) {
-        memory = tryGetCachedFastMemory();
-        if (memory)
-            dataLogLnIf(verbose, "tryGetFastMemory re-using ", RawPointer(memory));
-        else if (currentlyAllocatedFastMemories.load(std::memory_order_acquire) >= 1) {
-            // No memory was available in the cache, but we know there's at least one currently live. Maybe GC will find a free one.
-            // FIXME collectSync(Full) and custom eager destruction of wasm memories could be better. For now use collectNow. Also, nothing tells us the current VM is holding onto fast memories. https://bugs.webkit.org/show_bug.cgi?id=170748
-            dataLogLnIf(verbose, "tryGetFastMemory waiting on GC and retrying");
-            vm.heap.collectNow(Sync, CollectionScope::Full);
-            memory = tryGetCachedFastMemory();
-            dataLogLnIf(verbose, "tryGetFastMemory waited on GC and retried ", memory? "successfully" : "unseccessfully");
-        }
-
-        // The soft limit is inherently racy because checking+allocation isn't atomic. Exceeding it slightly is fine.
-        bool atAllocationSoftLimit = currentlyAllocatedFastMemories.load(std::memory_order_acquire) >= fastMemoryAllocationSoftLimit;
-        dataLogLnIf(verbose && atAllocationSoftLimit, "tryGetFastMemory reached allocation soft limit of ", fastMemoryAllocationSoftLimit);
-
-        if (!memory && !atAllocationSoftLimit) {
-            memory = mmapBytes(Memory::fastMappedBytes());
-            if (memory) {
-                size_t currentlyAllocated = 1 + currentlyAllocatedFastMemories.fetch_add(1, std::memory_order_acq_rel);
-                size_t currentlyObservedMaximum = observedMaximumFastMemory.load(std::memory_order_acquire);
-                if (currentlyAllocated > currentlyObservedMaximum) {
-                    size_t expected = currentlyObservedMaximum;
-                    bool success = observedMaximumFastMemory.compare_exchange_strong(expected, currentlyAllocated, std::memory_order_acq_rel);
-                    if (success)
-                        dataLogLnIf(verbose, "tryGetFastMemory currently observed maximum is now ", currentlyAllocated);
-                    else
-                        // We lost the update race, but the counter is monotonic so the winner must have updated the value to what we were going to update it to, or multiple winners did so.
-                        ASSERT(expected >= currentlyAllocated);
-                }
-                dataLogLnIf(verbose, "tryGetFastMemory allocated ", RawPointer(memory), ", currently allocated is ", currentlyAllocated);
-            }
+    
+    void freeVirtualPages(void* basePtr)
+    {
+        {
+            auto holder = holdLock(m_lock);
+            Gigacage::freeVirtualPages(basePtr, Memory::fastMappedBytes());
+            m_memories.removeFirst(basePtr);
         }
+        
+        if (Options::logWebAssemblyMemory())
+            dataLog("Freed virtual; state: ", *this, "\n");
     }
-
-    if (memory) {
-        if (UNLIKELY(!tryAddToCurrentlyActiveFastMemories(memory))) {
-            // We got a memory, but reached the allocation soft limit *and* all of the allocated memories are active, none are cached. That's a bummer, we have to get rid of our memory. We can't just hold on to it because the list of active fast memories must be precise.
-            dataLogLnIf(verbose, "tryGetFastMemory found a fast memory but had to give it up");
-            munmapBytes(memory, Memory::fastMappedBytes());
-            currentlyAllocatedFastMemories.fetch_sub(1, std::memory_order_acq_rel);
-            memory = nullptr;
+    
+    bool containsAddress(void* address)
+    {
+        // NOTE: This can be called from a signal handler, but only after we proved that we're in JIT code.
+        auto holder = holdLock(m_lock);
+        for (void* memory : m_memories) {
+            char* start = static_cast<char*>(memory);
+            if (start <= address && address <= start + Memory::fastMappedBytes())
+                return true;
         }
+        return false;
     }
-
-    if (!memory) {
-        dataLogLnIf(verbose, "tryGetFastMemory couldn't re-use or allocate a fast memory");
-        if (UNLIKELY(Options::crashIfWebAssemblyCantFastMemory()))
-            webAssemblyCouldntGetFastMemory();
-    }
-
-    return memory;
-}
-
-bool slowMemoryCapacitySoftMaximumExceeded()
-{
-    // The limit on slow memory capacity is arbitrary. Its purpose is to limit
-    // virtual memory allocation. We choose to set the limit at the same virtual
-    // memory limit imposed on fast memories.
-    size_t maximum = fastMemoryAllocatedBytesSoftLimit();
-    size_t currentCapacity = currentSlowMemoryCapacity.load(std::memory_order_acquire);
-    if (UNLIKELY(currentCapacity > maximum)) {
-        dataLogLnIf(verbose, "Slow memory capacity limit reached");
-        return true;
+    
+    // FIXME: Ideally, bmalloc would have this kind of mechanism. Then, we would just forward to that
+    // mechanism here.
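+    // Charges the requested bytes against a process-wide budget of ramSize(): going
+    // over budget means retry after a sync GC; past half of RAM, the allocation
+    // succeeds but an async full GC is advised.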
+    MemoryResult::Kind tryAllocatePhysicalBytes(size_t bytes)
+    {
+        MemoryResult::Kind result = [&] {
+            auto holder = holdLock(m_lock);
+            if (m_physicalBytes + bytes > ramSize())
+                return MemoryResult::SyncGCAndRetry;
+            
+            m_physicalBytes += bytes;
+            
+            if (m_physicalBytes >= ramSize() / 2)
+                return MemoryResult::SuccessAndAsyncGC;
+            
+            return MemoryResult::Success;
+        }();
+        
+        if (Options::logWebAssemblyMemory())
+            dataLog("Allocated physical: ", bytes, ", ", MemoryResult::toString(result), "; state: ", *this, "\n");
+        
+        return result;
     }
-    return false;
-}
-
-void* tryGetSlowMemory(size_t bytes)
-{
-    if (slowMemoryCapacitySoftMaximumExceeded())
-        return nullptr;
-    void* memory = mmapBytes(bytes);
-    if (memory)
-        currentSlowMemoryCapacity.fetch_add(bytes, std::memory_order_acq_rel);
-    dataLogLnIf(memory && verbose, "Obtained slow memory ", RawPointer(memory), " with capacity ", bytes);
-    dataLogLnIf(!memory && verbose, "Failed obtaining slow memory with capacity ", bytes);
-    return memory;
-}
-
-void relinquishMemory(void* memory, size_t writableSize, size_t mappedCapacity, MemoryMode mode)
-{
-    switch (mode) {
-    case MemoryMode::Signaling: {
-        RELEASE_ASSERT(Options::useWebAssemblyFastMemory());
-        RELEASE_ASSERT(mappedCapacity == Memory::fastMappedBytes());
-
-        // This memory cannot cause a trap anymore.
-        removeFromCurrentlyActiveFastMemories(memory);
-
-        // We may cache fast memories. Assuming we will, we have to reset them before inserting them into the cache.
-        zeroAndUnprotectBytes(memory, writableSize);
-
-        if (tryAddToCachedFastMemory(memory))
-            return;
-
-        dataLogLnIf(verbose, "relinquishMemory unable to cache fast memory, freeing instead ", RawPointer(memory));
-        munmapBytes(memory, Memory::fastMappedBytes());
-        currentlyAllocatedFastMemories.fetch_sub(1, std::memory_order_acq_rel);
-
-        return;
+    
+    void freePhysicalBytes(size_t bytes)
+    {
+        {
+            auto holder = holdLock(m_lock);
+            m_physicalBytes -= bytes;
+        }
+        
+        if (Options::logWebAssemblyMemory())
+            dataLog("Freed physical: ", bytes, "; state: ", *this, "\n");
     }
-
-    case MemoryMode::BoundsChecking:
-        dataLogLnIf(verbose, "relinquishFastMemory freeing slow memory ", RawPointer(memory));
-        munmapBytes(memory, mappedCapacity);
-        currentSlowMemoryCapacity.fetch_sub(mappedCapacity, std::memory_order_acq_rel);
-        return;
-
-    case MemoryMode::NumberOfMemoryModes:
-        break;
+    
+    void dump(PrintStream& out) const
+    {
+        out.print("memories =  ", m_memories.size(), "/", m_maxCount, ", bytes = ", m_physicalBytes, "/", ramSize());
     }
-
-    RELEASE_ASSERT_NOT_REACHED();
+    
+private:
+    Lock m_lock;
+    unsigned m_maxCount { 0 };
+    Vector<void*> m_memories;
+    size_t m_physicalBytes { 0 };
+};
+
+static MemoryManager& memoryManager()
+{
+    static std::once_flag onceFlag;
+    static MemoryManager* manager;
+    std::call_once(
+        onceFlag,
+        [] {
+            manager = new MemoryManager();
+        });
+    return *manager;
 }
 
-bool makeNewMemoryReadWriteOrRelinquish(void* memory, size_t initialBytes, size_t mappedCapacityBytes, MemoryMode mode)
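+// Runs the allocation attempt up to two times: SyncGCAndRetry triggers a synchronous
+// full GC before the second (final) attempt, while SuccessAndAsyncGC schedules an
+// asynchronous full GC and counts as success.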
+template<typename Func>
+bool tryAndGC(VM& vm, const Func& allocate)
 {
-    ASSERT(memory && initialBytes <= mappedCapacityBytes);
-    if (initialBytes) {
-        dataLogLnIf(verbose, "Marking WebAssembly memory's ", RawPointer(memory), "'s initial ", initialBytes, " bytes as read+write");
-        if (mprotect(memory, initialBytes, PROT_READ | PROT_WRITE)) {
-            const char* why = strerror(errno);
-            dataLogLnIf(verbose, "Failed making memory ", RawPointer(memory), " readable and writable: ", why);
-            relinquishMemory(memory, 0, mappedCapacityBytes, mode);
-            return false;
+    unsigned numTries = 2;
+    bool done = false;
+    for (unsigned i = 0; i < numTries && !done; ++i) {
+        switch (allocate()) {
+        case MemoryResult::Success:
+            done = true;
+            break;
+        case MemoryResult::SuccessAndAsyncGC:
+            vm.heap.collectAsync(CollectionScope::Full);
+            done = true;
+            break;
+        case MemoryResult::SyncGCAndRetry:
+            if (i + 1 == numTries)
+                break;
+            vm.heap.collectSync(CollectionScope::Full);
+            break;
         }
     }
-    return true;
+    return done;
 }
 
 } // anonymous namespace
 
-
 const char* makeString(MemoryMode mode)
 {
     switch (mode) {
     case MemoryMode::BoundsChecking: return "BoundsChecking";
     case MemoryMode::Signaling: return "Signaling";
-    case MemoryMode::NumberOfMemoryModes: break;
     }
     RELEASE_ASSERT_NOT_REACHED();
     return "";
 }
 
-void Memory::initializePreallocations()
-{
-    if (UNLIKELY(!Options::useWebAssemblyFastMemory()))
-        return;
-
-    // Races cannot occur in this function: it is only called at program initialization, before WebAssembly can be invoked.
-
-    MonotonicTime startTime;
-    if (verbose)
-        startTime = MonotonicTime::now();
-
-    const size_t desiredFastMemories = std::min<size_t>(Options::webAssemblyFastMemoryPreallocateCount(), fastMemoryCacheHardLimit);
-
-    // Start off trying to allocate fast memories contiguously so they don't fragment each other. This can fail if the address space is otherwise fragmented. In that case, go for smaller contiguous allocations. We'll eventually get individual non-contiguous fast memories allocated, or we'll just be unable to fit a single one at which point we give up.
-    auto allocateContiguousFastMemories = [&] (size_t numContiguous) -> bool {
-        if (void *memory = mmapBytes(Memory::fastMappedBytes() * numContiguous)) {
-            for (size_t subMemory = 0; subMemory < numContiguous; ++subMemory) {
-                void* startAddress = reinterpret_cast<char*>(memory) + Memory::fastMappedBytes() * subMemory;
-                bool inserted = false;
-                for (size_t cacheEntry = 0; cacheEntry < fastMemoryCacheHardLimit; ++cacheEntry) {
-                    if (fastMemoryCache[cacheEntry].load(std::memory_order_relaxed) == nullptr) {
-                        fastMemoryCache[cacheEntry].store(startAddress, std::memory_order_relaxed);
-                        inserted = true;
-                        break;
-                    }
-                }
-                RELEASE_ASSERT(inserted);
-            }
-            return true;
-        }
-        return false;
-    };
-
-    size_t fragments = 0;
-    size_t numFastMemories = 0;
-    size_t contiguousMemoryAllocationAttempt = desiredFastMemories;
-    while (numFastMemories != desiredFastMemories && contiguousMemoryAllocationAttempt != 0) {
-        if (allocateContiguousFastMemories(contiguousMemoryAllocationAttempt)) {
-            numFastMemories += contiguousMemoryAllocationAttempt;
-            contiguousMemoryAllocationAttempt = std::min(contiguousMemoryAllocationAttempt - 1, desiredFastMemories - numFastMemories);
-        } else
-            --contiguousMemoryAllocationAttempt;
-        ++fragments;
-    }
-
-    fastMemoryPreallocateCount = numFastMemories;
-    currentlyAllocatedFastMemories.store(fastMemoryPreallocateCount, std::memory_order_relaxed);
-    observedMaximumFastMemory.store(fastMemoryPreallocateCount, std::memory_order_relaxed);
-
-    if (verbose) {
-        MonotonicTime endTime = MonotonicTime::now();
-
-        for (size_t cacheEntry = 0; cacheEntry < fastMemoryPreallocateCount; ++cacheEntry) {
-            void* startAddress = fastMemoryCache[cacheEntry].load(std::memory_order_relaxed);
-            ASSERT(startAddress);
-            dataLogLn("Pre-allocation of WebAssembly fast memory at ", RawPointer(startAddress));
-        }
-
-        dataLogLn("Pre-allocated ", fastMemoryPreallocateCount, " WebAssembly fast memories in ", fastMemoryPreallocateCount == 0 ? 0 : fragments, fragments == 1 ? " fragment, took " : " fragments, took ", endTime - startTime);
-    }
-}
-
 Memory::Memory(PageCount initial, PageCount maximum)
     : m_initial(initial)
     , m_maximum(maximum)
@@ -373,8 +262,6 @@ RefPtr<Memory> Memory::create(VM& vm, PageCount initial, PageCount maximum)
 
     const size_t initialBytes = initial.bytes();
     const size_t maximumBytes = maximum ? maximum.bytes() : 0;
-    size_t mappedCapacityBytes = 0;
-    MemoryMode mode;
 
     // We need to be sure we have a stub prior to running code.
     if (UNLIKELY(!Thunks::singleton().stub(throwExceptionFromWasmThunkGenerator)))
@@ -385,50 +272,68 @@ RefPtr<Memory> Memory::create(VM& vm, PageCount initial, PageCount maximum)
         RELEASE_ASSERT(!initialBytes);
         return adoptRef(new Memory(initial, maximum));
     }
-
-    void* memory = nullptr;
-
-    // First try fast memory, because they're fast. Fast memory is suitable for any initial / maximum.
-    memory = tryGetFastMemory(vm);
-    if (memory) {
-        mappedCapacityBytes = Memory::fastMappedBytes();
-        mode = MemoryMode::Signaling;
-    }
-
-    // If we can't get a fast memory but the user expressed the intent to grow memory up to a certain maximum then we should try to honor that desire. It'll mean that grow is more likely to succeed, and won't require remapping.
-    if (!memory && maximum) {
-        memory = tryGetSlowMemory(maximumBytes);
-        if (memory) {
-            mappedCapacityBytes = maximumBytes;
-            mode = MemoryMode::BoundsChecking;
-        }
+    
+    bool done = tryAndGC(
+        vm,
+        [&] () -> MemoryResult::Kind {
+            return memoryManager().tryAllocatePhysicalBytes(initialBytes);
+        });
+    if (!done)
+        return nullptr;
+        
+    char* fastMemory = nullptr;
+    if (Options::useWebAssemblyFastMemory()) {
+        tryAndGC(
+            vm,
+            [&] () -> MemoryResult::Kind {
+                auto result = memoryManager().tryAllocateVirtualPages();
+                fastMemory = bitwise_cast<char*>(result.basePtr);
+                return result.kind;
+            });
     }
-
-    // We're stuck with a slow memory which may be slower or impossible to grow.
-    if (!memory) {
-        if (!initialBytes)
-            return adoptRef(new Memory(initial, maximum));
-        memory = tryGetSlowMemory(initialBytes);
-        if (memory) {
-            mappedCapacityBytes = initialBytes;
-            mode = MemoryMode::BoundsChecking;
+    
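+    // A fast memory commits only the initial bytes; the rest of the fastMappedBytes()
+    // reservation stays PROT_NONE so out-of-bounds accesses trap and can be handled
+    // by the signaling machinery.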
+    if (fastMemory) {
+        bool writable = true;
+        bool executable = false;
+        OSAllocator::commit(fastMemory, initialBytes, writable, executable);
+        
+        if (mprotect(fastMemory + initialBytes, Memory::fastMappedBytes() - initialBytes, PROT_NONE)) {
+            dataLog("mprotect failed: ", strerror(errno), "\n");
+            RELEASE_ASSERT_NOT_REACHED();
         }
+        
+        memset(fastMemory, 0, initialBytes);
+        return adoptRef(new Memory(fastMemory, initial, maximum, Memory::fastMappedBytes(), MemoryMode::Signaling));
     }
+    
+    if (UNLIKELY(Options::crashIfWebAssemblyCantFastMemory()))
+        webAssemblyCouldntGetFastMemory();
 
-    if (!memory)
-        return nullptr;
-
-    if (!makeNewMemoryReadWriteOrRelinquish(memory, initialBytes, mappedCapacityBytes, mode))
+    if (!initialBytes)
+        return adoptRef(new Memory(initial, maximum));
+    
+    void* slowMemory = Gigacage::tryAlignedMalloc(WTF::pageSize(), initialBytes);
+    if (!slowMemory) {
+        memoryManager().freePhysicalBytes(initialBytes);
         return nullptr;
-
-    return adoptRef(new Memory(memory, initial, maximum, mappedCapacityBytes, mode));
+    }
+    memset(slowMemory, 0, initialBytes);
+    return adoptRef(new Memory(slowMemory, initial, maximum, initialBytes, MemoryMode::BoundsChecking));
 }
 
 Memory::~Memory()
 {
     if (m_memory) {
-        dataLogLnIf(verbose, "Memory::~Memory ", *this);
-        relinquishMemory(m_memory, m_size, m_mappedCapacity, m_mode);
+        memoryManager().freePhysicalBytes(m_size);
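+        // Restore read+write on the whole Signaling reservation before returning it,
+        // so the range can be recycled for future allocations.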
+        switch (m_mode) {
+        case MemoryMode::Signaling:
+            mprotect(m_memory, Memory::fastMappedBytes(), PROT_READ | PROT_WRITE);
+            memoryManager().freeVirtualPages(m_memory);
+            break;
+        case MemoryMode::BoundsChecking:
+            Gigacage::alignedFree(m_memory);
+            break;
+        }
     }
 }
 
@@ -443,24 +348,12 @@ size_t Memory::fastMappedBytes()
     return static_cast<size_t>(std::numeric_limits<uint32_t>::max()) + fastMappedRedzoneBytes();
 }
 
-size_t Memory::maxFastMemoryCount()
-{
-    // The order can be relaxed here because we provide a monotonically-increasing estimate. A concurrent observer could see a slightly out-of-date value but can't tell that they did.
-    return observedMaximumFastMemory.load(std::memory_order_relaxed);
-}
-
 bool Memory::addressIsInActiveFastMemory(void* address)
 {
-    // This cannot race in any meaningful way: the thread which calls this function wants to know if a fault it received at a particular address is in a fast memory. That fast memory must therefore be active in that thread. It cannot be added or removed from the list of currently active fast memories. Other memories being added / removed concurrently are inconsequential.
-    for (size_t idx = 0; idx < fastMemoryAllocationSoftLimit; ++idx) {
-        char* start = static_cast<char*>(currentlyActiveFastMemories[idx].load(std::memory_order_acquire));
-        if (start <= address && address <= start + fastMappedBytes())
-            return true;
-    }
-    return false;
+    return memoryManager().containsAddress(address);
 }
 
-bool Memory::grow(PageCount newSize)
+bool Memory::grow(VM& vm, PageCount newSize)
 {
     RELEASE_ASSERT(newSize > PageCount::fromBytes(m_size));
 
@@ -470,58 +363,50 @@ bool Memory::grow(PageCount newSize)
         return false;
 
     size_t desiredSize = newSize.bytes();
-
+    RELEASE_ASSERT(desiredSize > m_size);
+    size_t extraBytes = desiredSize - m_size;
+    RELEASE_ASSERT(extraBytes);
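+    // Account for the extra physical footprint up front; if even a synchronous GC
+    // cannot bring the process under the ramSize() budget, fail the grow before
+    // touching any mappings.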
+    bool success = tryAndGC(
+        vm,
+        [&] () -> MemoryResult::Kind {
+            return memoryManager().tryAllocatePhysicalBytes(extraBytes);
+        });
+    if (!success)
+        return false;
+        
     switch (mode()) {
-    case MemoryMode::BoundsChecking:
+    case MemoryMode::BoundsChecking: {
         RELEASE_ASSERT(maximum().bytes() != 0);
-        break;
-    case MemoryMode::Signaling:
-        // Signaling memory must have been pre-allocated virtually.
-        RELEASE_ASSERT(m_memory);
-        break;
-    case MemoryMode::NumberOfMemoryModes:
-        RELEASE_ASSERT_NOT_REACHED();
+        
+        void* newMemory = Gigacage::tryAlignedMalloc(WTF::pageSize(), desiredSize);
+        if (!newMemory)
+            return false;
+        memcpy(newMemory, m_memory, m_size);
+        memset(static_cast<char*>(newMemory) + m_size, 0, desiredSize - m_size);
+        if (m_memory)
+            Gigacage::alignedFree(m_memory);
+        m_memory = newMemory;
+        m_mappedCapacity = desiredSize;
+        m_size = desiredSize;
+        return true;
     }
-
-    if (m_memory && desiredSize <= m_mappedCapacity) {
+    case MemoryMode::Signaling: {
+        RELEASE_ASSERT(m_memory);
+        // Signaling memory must have been pre-allocated virtually.
         uint8_t* startAddress = static_cast<uint8_t*>(m_memory) + m_size;
-        size_t extraBytes = desiredSize - m_size;
-        RELEASE_ASSERT(extraBytes);
+        
         dataLogLnIf(verbose, "Marking WebAssembly memory's ", RawPointer(m_memory), " as read+write in range [", RawPointer(startAddress), ", ", RawPointer(startAddress + extraBytes), ")");
         if (mprotect(startAddress, extraBytes, PROT_READ | PROT_WRITE)) {
             dataLogLnIf(verbose, "Memory::grow in-place failed ", *this);
             return false;
         }
-
+        memset(startAddress, 0, extraBytes);
         m_size = desiredSize;
-        dataLogLnIf(verbose, "Memory::grow in-place ", *this);
         return true;
-    }
-
-    // Signaling memory can't grow past its already-mapped size.
-    RELEASE_ASSERT(mode() != MemoryMode::Signaling);
-
-    // Otherwise, let's try to make some new memory.
-    // FIXME mremap would be nice https://bugs.webkit.org/show_bug.cgi?id=170557
-    // FIXME should we over-allocate here? https://bugs.webkit.org/show_bug.cgi?id=170826
-    void* newMemory = tryGetSlowMemory(desiredSize);
-    if (!newMemory)
-        return false;
-
-    if (!makeNewMemoryReadWriteOrRelinquish(newMemory, desiredSize, desiredSize, mode()))
-        return false;
-
-    if (m_memory) {
-        memcpy(newMemory, m_memory, m_size);
-        relinquishMemory(m_memory, m_size, m_size, m_mode);
-    }
-
-    m_memory = newMemory;
-    m_mappedCapacity = desiredSize;
-    m_size = desiredSize;
-
-    dataLogLnIf(verbose, "Memory::grow ", *this);
-    return true;
+    } }
+    
+    RELEASE_ASSERT_NOT_REACHED();
+    return false;
 }
 
 void Memory::dump(PrintStream& out) const
index 55345cf..60a9bb7 100644 (file)
@@ -45,10 +45,9 @@ namespace Wasm {
 // FIXME: We should support other modes. see: https://bugs.webkit.org/show_bug.cgi?id=162693
 enum class MemoryMode : uint8_t {
     BoundsChecking,
-    Signaling,
-    NumberOfMemoryModes
+    Signaling
 };
-static constexpr size_t NumberOfMemoryModes = static_cast<size_t>(MemoryMode::NumberOfMemoryModes);
+static constexpr size_t NumberOfMemoryModes = 2;
 JS_EXPORT_PRIVATE const char* makeString(MemoryMode);
 
 class Memory : public RefCounted<Memory> {
@@ -58,16 +57,13 @@ public:
     void dump(WTF::PrintStream&) const;
 
     explicit operator bool() const { return !!m_memory; }
-
-    static void initializePreallocations();
+    
     static RefPtr<Memory> create(VM&, PageCount initial, PageCount maximum);
 
-    Memory() = default;
     ~Memory();
 
     static size_t fastMappedRedzoneBytes();
     static size_t fastMappedBytes(); // Includes redzone.
-    static size_t maxFastMemoryCount();
     static bool addressIsInActiveFastMemory(void*);
 
     void* memory() const { return m_memory; }
@@ -81,7 +77,7 @@ public:
 
     // grow() should only be called from the JSWebAssemblyMemory object since that object needs to update internal
     // pointers with the current base and size.
-    bool grow(PageCount);
+    bool grow(VM&, PageCount);
 
     void check() {  ASSERT(!deletionHasBegun()); }
 private:
index 3f7440a..c77fa05 100644 (file)
@@ -348,7 +348,7 @@ JSWebAssemblyInstance* JSWebAssemblyInstance::create(VM& vm, ExecState* exec, JS
     
     if (!instance->memory()) {
         // Make sure we have a dummy memory, so that wasm -> wasm thunks avoid checking for a nullptr Memory when trying to set pinned registers.
-        instance->m_memory.set(vm, instance, JSWebAssemblyMemory::create(exec, vm, exec->lexicalGlobalObject()->WebAssemblyMemoryStructure(), adoptRef(*(new Wasm::Memory()))));
+        instance->m_memory.set(vm, instance, JSWebAssemblyMemory::create(exec, vm, exec->lexicalGlobalObject()->WebAssemblyMemoryStructure(), Wasm::Memory::create(vm, 0, 0).releaseNonNull()));
         RETURN_IF_EXCEPTION(throwScope, nullptr);
     }
     
index 3ebee14..223c58b 100644 (file)
@@ -106,7 +106,7 @@ Wasm::PageCount JSWebAssemblyMemory::grow(VM& vm, ExecState* exec, uint32_t delt
     }
 
     if (delta) {
-        bool success = memory().grow(newSize);
+        bool success = memory().grow(vm, newSize);
         if (!success) {
             ASSERT(m_memoryBase == memory().memory());
             ASSERT(m_memorySize == memory().size());
@@ -138,7 +138,6 @@ void JSWebAssemblyMemory::finishCreation(VM& vm)
     Base::finishCreation(vm);
     ASSERT(inherits(vm, info()));
     heap()->reportExtraMemoryAllocated(memory().size());
-    vm.heap.reportWebAssemblyFastMemoriesAllocated(1);
 }
 
 void JSWebAssemblyMemory::destroy(JSCell* cell)
index 1096baa..1d06c4c 100644 (file)
@@ -41,6 +41,13 @@ class JSWebAssemblyMemory : public JSDestructibleObject {
 public:
     typedef JSDestructibleObject Base;
 
+    template<typename CellType>
+    static Subspace* subspaceFor(VM& vm)
+    {
+        // We hold onto a lot of memory, so it makes a lot of sense to be swept eagerly.
+        return &vm.eagerlySweptDestructibleObjectSpace;
+    }
+
     static JSWebAssemblyMemory* create(ExecState*, VM&, Structure*, Ref<Wasm::Memory>&&);
     static Structure* createStructure(VM&, JSGlobalObject*, JSValue);
 
index d5f02ab..a98e7ec 100644 (file)
@@ -1,3 +1,38 @@
+2017-08-01  Filip Pizlo  <fpizlo@apple.com>
+
+        Bmalloc and GC should put auxiliaries (butterflies, typed array backing stores) in a gigacage (separate multi-GB VM region)
+        https://bugs.webkit.org/show_bug.cgi?id=174727
+
+        Reviewed by Mark Lam.
+        
+        For the Gigacage project to have minimal impact, we need to have some abstraction that allows code to
+        avoid having to guard itself with #if's. This adds a Gigacage abstraction that overlays the Gigacage
+        namespace from bmalloc, which always lets you call things like Gigacage::caged and Gigacage::tryMalloc.
+        
+        Because so many places need to allocate in a gigacage or perform caged accesses,
+        it's better to hide the question of whether the Gigacage is enabled inside this API.
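+        
+        For example (an illustrative sketch, not code from this patch; the size is
+        arbitrary), a caller can write the following without caring whether the cage
+        is compiled in:
+        
+            if (void* p = Gigacage::tryMalloc(1024)) {
+                // Caged allocation when GIGACAGE_ENABLED, plain fastMalloc otherwise;
+                // either way it is released the same way.
+                Gigacage::free(p);
+            }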
+
+        * WTF.xcodeproj/project.pbxproj:
+        * wtf/CMakeLists.txt:
+        * wtf/FastMalloc.cpp:
+        * wtf/Gigacage.cpp: Added.
+        (Gigacage::tryMalloc):
+        (Gigacage::tryAllocateVirtualPages):
+        (Gigacage::freeVirtualPages):
+        (Gigacage::tryAlignedMalloc):
+        (Gigacage::alignedFree):
+        (Gigacage::free):
+        * wtf/Gigacage.h: Added.
+        (Gigacage::ensureGigacage):
+        (Gigacage::disableGigacage):
+        (Gigacage::addDisableCallback):
+        (Gigacage::removeDisableCallback):
+        (Gigacage::caged):
+        (Gigacage::isCaged):
+        (Gigacage::tryAlignedMalloc):
+        (Gigacage::alignedFree):
+        (Gigacage::free):
+
 2017-07-31  Matt Lewis  <jlewis3@apple.com>
 
         Unreviewed, rolling out r220060.
index fb83fa5..417b7f6 100644 (file)
@@ -23,6 +23,7 @@
 /* Begin PBXBuildFile section */
                0F30BA901E78708E002CA847 /* GlobalVersion.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F30BA8A1E78708E002CA847 /* GlobalVersion.cpp */; };
                0F43D8F11DB5ADDC00108FB6 /* AutomaticThread.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F43D8EF1DB5ADDC00108FB6 /* AutomaticThread.cpp */; };
+               0F5BF1761F23D49A0029D91D /* Gigacage.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5BF1741F23D49A0029D91D /* Gigacage.cpp */; };
                0F60F32F1DFCBD1B00416D6C /* LockedPrintStream.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F60F32D1DFCBD1B00416D6C /* LockedPrintStream.cpp */; };
                0F66B28A1DC97BAB004A1D3F /* ClockType.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F66B2801DC97BAB004A1D3F /* ClockType.cpp */; };
                0F66B28C1DC97BAB004A1D3F /* MonotonicTime.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F66B2821DC97BAB004A1D3F /* MonotonicTime.cpp */; };
                0F43D8F01DB5ADDC00108FB6 /* AutomaticThread.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AutomaticThread.h; sourceTree = "<group>"; };
                0F4570421BE5B58F0062A629 /* Dominators.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Dominators.h; sourceTree = "<group>"; };
                0F4570441BE834410062A629 /* BubbleSort.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BubbleSort.h; sourceTree = "<group>"; };
+               0F5BF1741F23D49A0029D91D /* Gigacage.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; path = Gigacage.cpp; sourceTree = "<group>"; };
+               0F5BF1751F23D49A0029D91D /* Gigacage.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = Gigacage.h; sourceTree = "<group>"; };
                0F5BF1651F2317830029D91D /* NaturalLoops.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = NaturalLoops.h; sourceTree = "<group>"; };
                0F60F32D1DFCBD1B00416D6C /* LockedPrintStream.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = LockedPrintStream.cpp; sourceTree = "<group>"; };
                0F60F32E1DFCBD1B00416D6C /* LockedPrintStream.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LockedPrintStream.h; sourceTree = "<group>"; };
                                1A1D8B9D1731879800141DA4 /* FunctionDispatcher.cpp */,
                                1A1D8B9B173186CE00141DA4 /* FunctionDispatcher.h */,
                                A8A472A8151A825A004123FF /* GetPtr.h */,
+                               0F5BF1741F23D49A0029D91D /* Gigacage.cpp */,
+                               0F5BF1751F23D49A0029D91D /* Gigacage.h */,
                                0F30BA8A1E78708E002CA847 /* GlobalVersion.cpp */,
                                0F30BA8B1E78708E002CA847 /* GlobalVersion.h */,
                                0FEC84AE1BD825310080FF74 /* GraphNodeWorklist.h */,
                                A8A47440151A825B004123FF /* StringImpl.cpp in Sources */,
                                A5BA15FC182435A600A82E69 /* StringImplCF.cpp in Sources */,
                                A5BA15F51824348000A82E69 /* StringImplMac.mm in Sources */,
+                               0F5BF1761F23D49A0029D91D /* Gigacage.cpp in Sources */,
                                A5BA15F3182433A900A82E69 /* StringMac.mm in Sources */,
                                0FDDBFA71666DFA300C55FEF /* StringPrintStream.cpp in Sources */,
                                93F1993E19D7958D00C2390B /* StringView.cpp in Sources */,
index e15d45c..dafe822 100644 (file)
@@ -38,6 +38,7 @@ set(WTF_HEADERS
     Forward.h
     FunctionDispatcher.h
     GetPtr.h
+    Gigacage.h
     GlobalVersion.h
     GraphNodeWorklist.h
     GregorianDateTime.h
@@ -219,6 +220,7 @@ set(WTF_SOURCES
     FastMalloc.cpp
     FilePrintStream.cpp
     FunctionDispatcher.cpp
+    Gigacage.cpp
     GlobalVersion.cpp
     GregorianDateTime.cpp
     HashTable.cpp
index fa1d4b0..9a0f08b 100644 (file)
@@ -1,6 +1,6 @@
 /*
  * Copyright (c) 2005, 2007, Google Inc. All rights reserved.
- * Copyright (C) 2005-2009, 2011, 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2005-2017 Apple Inc. All rights reserved.
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * are met:
diff --git a/Source/WTF/wtf/Gigacage.cpp b/Source/WTF/wtf/Gigacage.cpp
new file mode 100644 (file)
index 0000000..be3115c
--- /dev/null
@@ -0,0 +1,111 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "Gigacage.h"
+
+#include <wtf/Atomics.h>
+#include <wtf/PageBlock.h>
+#include <wtf/OSAllocator.h>
+
+#if defined(USE_SYSTEM_MALLOC) && USE_SYSTEM_MALLOC
+
+extern "C" {
+const void* g_gigacageBasePtr;
+}
+
+namespace Gigacage {
+
+void* tryMalloc(size_t size)
+{
+    auto result = tryFastMalloc(size);
+    void* realResult;
+    if (result.getValue(realResult))
+        return realResult;
+    return nullptr;
+}
+
+void* tryAllocateVirtualPages(size_t size)
+{
+    return OSAllocator::reserveUncommitted(size);
+}
+
+void freeVirtualPages(void* basePtr, size_t size)
+{
+    OSAllocator::releaseDecommitted(basePtr, size);
+}
+
+} // namespace Gigacage
+#else
+#include <bmalloc/bmalloc.h>
+
+namespace Gigacage {
+
+// FIXME: Pointers into the primitive gigacage must be scrambled right after being returned from malloc,
+// and stay scrambled except just before use.
+// https://bugs.webkit.org/show_bug.cgi?id=175035
+
+void* tryAlignedMalloc(size_t alignment, size_t size)
+{
+    void* result = bmalloc::api::tryMemalign(alignment, size, bmalloc::HeapKind::Gigacage);
+    WTF::compilerFence();
+    return result;
+}
+
+void alignedFree(void* p)
+{
+    bmalloc::api::free(p, bmalloc::HeapKind::Gigacage);
+    WTF::compilerFence();
+}
+
+void* tryMalloc(size_t size)
+{
+    void* result = bmalloc::api::tryMalloc(size, bmalloc::HeapKind::Gigacage);
+    WTF::compilerFence();
+    return result;
+}
+
+void free(void* p)
+{
+    bmalloc::api::free(p, bmalloc::HeapKind::Gigacage);
+    WTF::compilerFence();
+}
+
+void* tryAllocateVirtualPages(size_t size)
+{
+    void* result = bmalloc::api::tryLargeMemalignVirtual(WTF::pageSize(), size, bmalloc::HeapKind::Gigacage);
+    WTF::compilerFence();
+    return result;
+}
+
+void freeVirtualPages(void* basePtr, size_t)
+{
+    bmalloc::api::freeLargeVirtual(basePtr, bmalloc::HeapKind::Gigacage);
+    WTF::compilerFence();
+}
+
+} // namespace Gigacage
+#endif
+
diff --git a/Source/WTF/wtf/Gigacage.h b/Source/WTF/wtf/Gigacage.h
new file mode 100644 (file)
index 0000000..f289ef9
--- /dev/null
@@ -0,0 +1,75 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include <wtf/FastMalloc.h>
+
+#if defined(USE_SYSTEM_MALLOC) && USE_SYSTEM_MALLOC
+#define GIGACAGE_MASK 0
+#define GIGACAGE_ENABLED 0
+
+extern "C" {
+extern WTF_EXPORTDATA const void* g_gigacageBasePtr;
+}
+
+namespace Gigacage {
+
+inline void ensureGigacage() { }
+inline void disableGigacage() { }
+
+inline void addDisableCallback(void (*)(void*), void*) { }
+inline void removeDisableCallback(void (*)(void*), void*) { }
+
+template<typename T>
+inline T* caged(T* ptr) { return ptr; }
+
+inline bool isCaged(const void*) { return false; }
+
+inline void* tryAlignedMalloc(size_t alignment, size_t size) { return tryFastAlignedMalloc(alignment, size); }
+inline void alignedFree(void* p) { fastAlignedFree(p); }
+WTF_EXPORT_PRIVATE void* tryMalloc(size_t size);
+inline void free(void* p) { fastFree(p); }
+
+WTF_EXPORT_PRIVATE void* tryAllocateVirtualPages(size_t size);
+WTF_EXPORT_PRIVATE void freeVirtualPages(void* basePtr, size_t size);
+
+} // namespace Gigacage
+#else
+#include <bmalloc/Gigacage.h>
+
+namespace Gigacage {
+
+WTF_EXPORT_PRIVATE void* tryAlignedMalloc(size_t alignment, size_t size);
+WTF_EXPORT_PRIVATE void alignedFree(void*);
+WTF_EXPORT_PRIVATE void* tryMalloc(size_t);
+WTF_EXPORT_PRIVATE void free(void*);
+
+WTF_EXPORT_PRIVATE void* tryAllocateVirtualPages(size_t size);
+WTF_EXPORT_PRIVATE void freeVirtualPages(void* basePtr, size_t size);
+
+} // namespace Gigacage
+#endif
+
index c947366..179901e 100644 (file)
@@ -1,3 +1,18 @@
+2017-08-01  Filip Pizlo  <fpizlo@apple.com>
+
+        Bmalloc and GC should put auxiliaries (butterflies, typed array backing stores) in a gigacage (separate multi-GB VM region)
+        https://bugs.webkit.org/show_bug.cgi?id=174727
+
+        Reviewed by Mark Lam.
+
+        No new tests because no change in behavior.
+        
+        Needed to teach Metal how to allocate in the Gigacage.
+
+        * platform/graphics/cocoa/GPUBufferMetal.mm:
+        (WebCore::GPUBuffer::GPUBuffer):
+        (WebCore::GPUBuffer::contents):
+
 2017-08-01  Fujii Hironori  <Hironori.Fujii@sony.com>
 
         [WinCairo] Implement Font::platformBoundsForGlyph
index 786c9bf..c7ab1a5 100644 (file)
@@ -30,8 +30,9 @@
 
 #import "GPUDevice.h"
 #import "Logging.h"
-
 #import <Metal/Metal.h>
+#import <wtf/Gigacage.h>
+#import <wtf/PageBlock.h>
 
 namespace WebCore {
 
@@ -41,8 +42,21 @@ GPUBuffer::GPUBuffer(GPUDevice* device, ArrayBufferView* data)
 
     if (!device || !device->platformDevice() || !data)
         return;
-
-    m_buffer = adoptNS((MTLBuffer *)[device->platformDevice() newBufferWithBytes:data->baseAddress() length:data->byteLength() options:MTLResourceOptionCPUCacheModeDefault]);
+    
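+    // newBufferWithBytesNoCopy requires page-aligned storage, so copy the data into a
+    // page-aligned Gigacage allocation and let Metal borrow it without copying; the
+    // ArrayBuffer ref/deref pair keeps the bytes alive until Metal's deallocator runs.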
+    size_t pageSize = WTF::pageSize();
+    size_t pageAlignedSize = roundUpToMultipleOf(pageSize, data->byteLength());
+    void* pageAlignedCopy = Gigacage::tryAlignedMalloc(pageSize, pageAlignedSize);
+    if (!pageAlignedCopy)
+        return;
+    memcpy(pageAlignedCopy, data->baseAddress(), data->byteLength());
+    m_contents = ArrayBuffer::createFromBytes(pageAlignedCopy, data->byteLength(), [] (void* ptr) { Gigacage::alignedFree(ptr); });
+    m_contents->ref();
+    ArrayBuffer* capturedContents = m_contents.get();
+    m_buffer = adoptNS((MTLBuffer *)[device->platformDevice() newBufferWithBytesNoCopy:m_contents->data() length:pageAlignedSize options:MTLResourceOptionCPUCacheModeDefault deallocator:^(void*, NSUInteger) { capturedContents->deref(); }]);
+    if (!m_buffer) {
+        m_contents->deref();
+        m_contents = nullptr;
+    }
 }
 
 unsigned long GPUBuffer::length() const
@@ -55,13 +69,6 @@ unsigned long GPUBuffer::length() const
 
 RefPtr<ArrayBuffer> GPUBuffer::contents()
 {
-    if (m_contents)
-        return m_contents;
-
-    if (!m_buffer)
-        return nullptr;
-
-    m_contents = ArrayBuffer::createFromBytes([m_buffer contents], [m_buffer length], [] (void*) { });
     return m_contents;
 }
 
index cb72cd6..32c6981 100644 (file)
@@ -1,3 +1,17 @@
+2017-08-01  Filip Pizlo  <fpizlo@apple.com>
+
+        Bmalloc and GC should put auxiliaries (butterflies, typed array backing stores) in a gigacage (separate multi-GB VM region)
+        https://bugs.webkit.org/show_bug.cgi?id=174727
+
+        Reviewed by Mark Lam.
+        
+        The WebProcess should never disable the Gigacage by allocating typed arrays outside the Gigacage. So,
+        we add a callback that crashes the process.
+
+        * WebProcess/WebProcess.cpp:
+        (WebKit::gigacageDisabled):
+        (WebKit::m_webSQLiteDatabaseTracker):
+
 2017-08-01  Brian Burg  <bburg@apple.com>
 
         Web Automation: add event to notify service when a page's main frame window object has cleared
index 6bab545..695ec04 100644 (file)
@@ -146,6 +146,11 @@ static const Seconds nonVisibleProcessCleanupDelay { 10_s };
 
 namespace WebKit {
 
+static void gigacageDisabled(void*)
+{
+    UNREACHABLE_FOR_PLATFORM();
+}
+
 WebProcess& WebProcess::singleton()
 {
     static WebProcess& process = *new WebProcess;
@@ -196,6 +201,9 @@ WebProcess::WebProcess()
         ASSERT(!statistics.isEmpty());
         parentProcessConnection()->send(Messages::WebResourceLoadStatisticsStore::ResourceLoadStatisticsUpdated(WTFMove(statistics)), 0);
     });
+
+    if (GIGACAGE_ENABLED)
+        Gigacage::addDisableCallback(gigacageDisabled, nullptr);
 }
 
 WebProcess::~WebProcess()
index 80e135c..bf604ce 100644 (file)
@@ -11,10 +11,12 @@ set(bmalloc_SOURCES
     bmalloc/Deallocator.cpp
     bmalloc/DebugHeap.cpp
     bmalloc/Environment.cpp
+    bmalloc/Gigacage.cpp
     bmalloc/Heap.cpp
     bmalloc/LargeMap.cpp
     bmalloc/Logging.cpp
     bmalloc/ObjectType.cpp
+    bmalloc/Scavenger.cpp
     bmalloc/StaticMutex.cpp
     bmalloc/VMHeap.cpp
     bmalloc/mbmalloc.cpp
index 3fda7ba..0c3e569 100644 (file)
@@ -1,3 +1,134 @@
+2017-08-01  Filip Pizlo  <fpizlo@apple.com>
+
+        Bmalloc and GC should put auxiliaries (butterflies, typed array backing stores) in a gigacage (separate multi-GB VM region)
+        https://bugs.webkit.org/show_bug.cgi?id=174727
+
+        Reviewed by Mark Lam.
+        
+        This adds a mechanism for managing multiple isolated heaps in bmalloc. For now, these isoheaps
+        (isolated heaps) have a very simple relationship with each other and with the rest of bmalloc:
+        
+        - You have to choose how many isoheaps you will have statically. See numHeaps in HeapKind.h.
+        
+        - Because numHeaps is static, each isoheap gets fast thread-local allocation. Basically, we have a
+          Cache for each heap kind.
+        
+        - Each isoheap gets its own Heap.
+        
+        - Each Heap gets a scavenger thread.
+        
+        - Some things, like Zone/VMHeap/Scavenger, are per-process.
+        
+        Most of the per-HeapKind functionality is handled by PerHeapKind<>.
+        
+        This approach is ideal for supporting special per-HeapKind behaviors. For now we have two heaps:
+        the Primary heap for normal malloc and the Gigacage. The gigacage is a 64GB-aligned 64GB virtual
+        region that we now use for variable-length random-access allocations. No Primary allocations will
+        go into the Gigacage.
+
+        * CMakeLists.txt:
+        * bmalloc.xcodeproj/project.pbxproj:
+        * bmalloc/AllocationKind.h: Added.
+        * bmalloc/Allocator.cpp:
+        (bmalloc::Allocator::Allocator):
+        (bmalloc::Allocator::tryAllocate):
+        (bmalloc::Allocator::allocateImpl):
+        (bmalloc::Allocator::reallocate):
+        (bmalloc::Allocator::refillAllocatorSlowCase):
+        (bmalloc::Allocator::allocateLarge):
+        * bmalloc/Allocator.h:
+        * bmalloc/BExport.h: Added.
+        * bmalloc/Cache.cpp:
+        (bmalloc::Cache::scavenge):
+        (bmalloc::Cache::Cache):
+        (bmalloc::Cache::tryAllocateSlowCaseNullCache):
+        (bmalloc::Cache::allocateSlowCaseNullCache):
+        (bmalloc::Cache::deallocateSlowCaseNullCache):
+        (bmalloc::Cache::reallocateSlowCaseNullCache):
+        (bmalloc::Cache::operator new): Deleted.
+        (bmalloc::Cache::operator delete): Deleted.
+        * bmalloc/Cache.h:
+        (bmalloc::Cache::tryAllocate):
+        (bmalloc::Cache::allocate):
+        (bmalloc::Cache::deallocate):
+        (bmalloc::Cache::reallocate):
+        * bmalloc/Deallocator.cpp:
+        (bmalloc::Deallocator::Deallocator):
+        (bmalloc::Deallocator::scavenge):
+        (bmalloc::Deallocator::processObjectLog):
+        (bmalloc::Deallocator::deallocateSlowCase):
+        * bmalloc/Deallocator.h:
+        * bmalloc/Gigacage.cpp: Added.
+        (Gigacage::Callback::Callback):
+        (Gigacage::Callback::function):
+        (Gigacage::Callbacks::Callbacks):
+        (Gigacage::ensureGigacage):
+        (Gigacage::disableGigacage):
+        (Gigacage::addDisableCallback):
+        (Gigacage::removeDisableCallback):
+        * bmalloc/Gigacage.h: Added.
+        (Gigacage::caged):
+        (Gigacage::isCaged):
+        * bmalloc/Heap.cpp:
+        (bmalloc::Heap::Heap):
+        (bmalloc::Heap::usingGigacage):
+        (bmalloc::Heap::concurrentScavenge):
+        (bmalloc::Heap::splitAndAllocate):
+        (bmalloc::Heap::tryAllocateLarge):
+        (bmalloc::Heap::allocateLarge):
+        (bmalloc::Heap::shrinkLarge):
+        (bmalloc::Heap::deallocateLarge):
+        * bmalloc/Heap.h:
+        (bmalloc::Heap::mutex):
+        (bmalloc::Heap::kind const):
+        (bmalloc::Heap::setScavengerThreadQOSClass): Deleted.
+        * bmalloc/HeapKind.h: Added.
+        * bmalloc/ObjectType.cpp:
+        (bmalloc::objectType):
+        * bmalloc/ObjectType.h:
+        * bmalloc/PerHeapKind.h: Added.
+        (bmalloc::PerHeapKindBase::PerHeapKindBase):
+        (bmalloc::PerHeapKindBase::size):
+        (bmalloc::PerHeapKindBase::at):
+        (bmalloc::PerHeapKindBase::at const):
+        (bmalloc::PerHeapKindBase::operator[]):
+        (bmalloc::PerHeapKindBase::operator[] const):
+        (bmalloc::StaticPerHeapKind::StaticPerHeapKind):
+        (bmalloc::PerHeapKind::PerHeapKind):
+        (bmalloc::PerHeapKind::~PerHeapKind):
+        * bmalloc/PerThread.h:
+        (bmalloc::PerThread<T>::destructor):
+        (bmalloc::PerThread<T>::getSlowCase):
+        (bmalloc::PerThreadStorage<Cache>::get): Deleted.
+        (bmalloc::PerThreadStorage<Cache>::init): Deleted.
+        * bmalloc/Scavenger.cpp: Added.
+        (bmalloc::Scavenger::Scavenger):
+        (bmalloc::Scavenger::scavenge):
+        * bmalloc/Scavenger.h: Added.
+        (bmalloc::Scavenger::setScavengerThreadQOSClass):
+        (bmalloc::Scavenger::requestedScavengerThreadQOSClass const):
+        * bmalloc/VMHeap.cpp:
+        (bmalloc::VMHeap::VMHeap):
+        (bmalloc::VMHeap::tryAllocateLargeChunk):
+        * bmalloc/VMHeap.h:
+        * bmalloc/Zone.cpp:
+        (bmalloc::Zone::Zone):
+        * bmalloc/Zone.h:
+        * bmalloc/bmalloc.h:
+        (bmalloc::api::tryMalloc):
+        (bmalloc::api::malloc):
+        (bmalloc::api::tryMemalign):
+        (bmalloc::api::memalign):
+        (bmalloc::api::realloc):
+        (bmalloc::api::tryLargeMemalignVirtual):
+        (bmalloc::api::free):
+        (bmalloc::api::freeLargeVirtual):
+        (bmalloc::api::scavengeThisThread):
+        (bmalloc::api::scavenge):
+        (bmalloc::api::isEnabled):
+        (bmalloc::api::setScavengerThreadQOSClass):
+        * bmalloc/mbmalloc.cpp:
+
 2017-08-01  Daewoong Jang  <daewoong.jang@navercorp.com>
 
         Implement __builtin_clzl for MSVC
index 13526d8..0b01f69 100644 (file)
@@ -7,6 +7,14 @@
        objects = {
 
 /* Begin PBXBuildFile section */
+               0F3DA0141F267AB800342C08 /* AllocationKind.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3DA0131F267AB800342C08 /* AllocationKind.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F5BF1471F22A8B10029D91D /* HeapKind.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5BF1461F22A8B10029D91D /* HeapKind.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F5BF1491F22A8D80029D91D /* PerHeapKind.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5BF1481F22A8D80029D91D /* PerHeapKind.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F5BF14D1F22B0C30029D91D /* Gigacage.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5BF14C1F22B0C30029D91D /* Gigacage.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F5BF14F1F22DEAF0029D91D /* Gigacage.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5BF14E1F22DEAF0029D91D /* Gigacage.cpp */; };
+               0F5BF1521F22E1570029D91D /* Scavenger.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F5BF1501F22E1570029D91D /* Scavenger.cpp */; };
+               0F5BF1531F22E1570029D91D /* Scavenger.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5BF1511F22E1570029D91D /* Scavenger.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F5BF1731F23C5710029D91D /* BExport.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F5BF1721F23C5710029D91D /* BExport.h */; settings = {ATTRIBUTES = (Private, ); }; };
                1400274918F89C1300115C97 /* Heap.h in Headers */ = {isa = PBXBuildFile; fileRef = 14DA320C18875B09007269E0 /* Heap.h */; settings = {ATTRIBUTES = (Private, ); }; };
                1400274A18F89C2300115C97 /* VMHeap.h in Headers */ = {isa = PBXBuildFile; fileRef = 144F7BFC18BFC517003537F3 /* VMHeap.h */; settings = {ATTRIBUTES = (Private, ); }; };
                140FA00319CE429C00FFD3C8 /* BumpRange.h in Headers */ = {isa = PBXBuildFile; fileRef = 140FA00219CE429C00FFD3C8 /* BumpRange.h */; settings = {ATTRIBUTES = (Private, ); }; };
 /* End PBXContainerItemProxy section */
 
 /* Begin PBXFileReference section */
+               0F3DA0131F267AB800342C08 /* AllocationKind.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = AllocationKind.h; path = bmalloc/AllocationKind.h; sourceTree = "<group>"; };
+               0F5BF1461F22A8B10029D91D /* HeapKind.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = HeapKind.h; path = bmalloc/HeapKind.h; sourceTree = "<group>"; };
+               0F5BF1481F22A8D80029D91D /* PerHeapKind.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = PerHeapKind.h; path = bmalloc/PerHeapKind.h; sourceTree = "<group>"; };
+               0F5BF14C1F22B0C30029D91D /* Gigacage.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = Gigacage.h; path = bmalloc/Gigacage.h; sourceTree = "<group>"; };
+               0F5BF14E1F22DEAF0029D91D /* Gigacage.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; name = Gigacage.cpp; path = bmalloc/Gigacage.cpp; sourceTree = "<group>"; };
+               0F5BF1501F22E1570029D91D /* Scavenger.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; name = Scavenger.cpp; path = bmalloc/Scavenger.cpp; sourceTree = "<group>"; };
+               0F5BF1511F22E1570029D91D /* Scavenger.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = Scavenger.h; path = bmalloc/Scavenger.h; sourceTree = "<group>"; };
+               0F5BF1721F23C5710029D91D /* BExport.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = BExport.h; path = bmalloc/BExport.h; sourceTree = "<group>"; };
                140FA00219CE429C00FFD3C8 /* BumpRange.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BumpRange.h; path = bmalloc/BumpRange.h; sourceTree = "<group>"; };
                140FA00419CE4B6800FFD3C8 /* LineMetadata.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LineMetadata.h; path = bmalloc/LineMetadata.h; sourceTree = "<group>"; };
                14105E8318E14374003A106E /* ObjectType.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = ObjectType.cpp; path = bmalloc/ObjectType.cpp; sourceTree = "<group>"; };
                14D9DB4E17F2866E00EAAB79 /* heap */ = {
                        isa = PBXGroup;
                        children = (
+                               0F3DA0131F267AB800342C08 /* AllocationKind.h */,
                                140FA00219CE429C00FFD3C8 /* BumpRange.h */,
                                147DC6E21CA5B70B00724E8D /* Chunk.h */,
                                142B44341E2839E7001DA6E9 /* DebugHeap.cpp */,
                                142B44351E2839E7001DA6E9 /* DebugHeap.h */,
                                14895D8F1A3A319C0006235D /* Environment.cpp */,
                                14895D901A3A319C0006235D /* Environment.h */,
+                               0F5BF14E1F22DEAF0029D91D /* Gigacage.cpp */,
+                               0F5BF14C1F22B0C30029D91D /* Gigacage.h */,
                                14DA320E18875D9F007269E0 /* Heap.cpp */,
                                14DA320C18875B09007269E0 /* Heap.h */,
                                140FA00419CE4B6800FFD3C8 /* LineMetadata.h */,
                                144BE11E1CA346520099C8C0 /* Object.h */,
                                14105E8318E14374003A106E /* ObjectType.cpp */,
                                1485656018A43DBA00ED6942 /* ObjectType.h */,
+                               0F5BF1501F22E1570029D91D /* Scavenger.cpp */,
+                               0F5BF1511F22E1570029D91D /* Scavenger.h */,
                                145F6874179DF84100D65598 /* Sizes.h */,
                                144F7BFB18BFC517003537F3 /* VMHeap.cpp */,
                                144F7BFC18BFC517003537F3 /* VMHeap.h */,
                                6599C5CA1EC3F15900A2F7BB /* AvailableMemory.cpp */,
                                6599C5CB1EC3F15900A2F7BB /* AvailableMemory.h */,
                                1413E468189EEDE400546D68 /* BAssert.h */,
+                               0F5BF1721F23C5710029D91D /* BExport.h */,
                                14C919C818FCC59F0028DB43 /* BPlatform.h */,
                                14D9DB4517F2447100EAAB79 /* FixedVector.h */,
+                               0F5BF1461F22A8B10029D91D /* HeapKind.h */,
                                1413E460189DCE1E00546D68 /* Inline.h */,
                                141D9AFF1C8E51C0000ABBA0 /* List.h */,
                                4426E27E1C838EE0008EB042 /* Logging.cpp */,
                                4426E27F1C838EE0008EB042 /* Logging.h */,
                                14C8992A1CC485E70027A057 /* Map.h */,
                                144DCED617A649D90093B2F2 /* Mutex.h */,
+                               0F5BF1481F22A8D80029D91D /* PerHeapKind.h */,
                                14446A0717A61FA400F9EA1D /* PerProcess.h */,
                                144469FD17A61F1F00F9EA1D /* PerThread.h */,
                                145F6878179E3A4400D65598 /* Range.h */,
                        files = (
                                14DD78C518F48D7500950702 /* Algorithm.h in Headers */,
                                14DD789818F48D4A00950702 /* Allocator.h in Headers */,
+                               0F5BF1531F22E1570029D91D /* Scavenger.h in Headers */,
+                               0F5BF1471F22A8B10029D91D /* HeapKind.h in Headers */,
                                14DD78C618F48D7500950702 /* AsyncTask.h in Headers */,
                                6599C5CD1EC3F15900A2F7BB /* AvailableMemory.h in Headers */,
                                14DD78C718F48D7500950702 /* BAssert.h in Headers */,
                                140FA00319CE429C00FFD3C8 /* BumpRange.h in Headers */,
                                14DD789918F48D4A00950702 /* Cache.h in Headers */,
                                147DC6E31CA5B70B00724E8D /* Chunk.h in Headers */,
+                               0F5BF1731F23C5710029D91D /* BExport.h in Headers */,
                                14DD789A18F48D4A00950702 /* Deallocator.h in Headers */,
                                142B44371E2839E7001DA6E9 /* DebugHeap.h in Headers */,
                                14895D921A3A319C0006235D /* Environment.h in Headers */,
                                14DD78C818F48D7500950702 /* FixedVector.h in Headers */,
                                1400274918F89C1300115C97 /* Heap.h in Headers */,
+                               0F5BF1491F22A8D80029D91D /* PerHeapKind.h in Headers */,
                                14DD78C918F48D7500950702 /* Inline.h in Headers */,
                                144C07F51C7B70260051BB6A /* LargeMap.h in Headers */,
                                14C8992D1CC578330027A057 /* LargeRange.h in Headers */,
                                144BE11F1CA346520099C8C0 /* Object.h in Headers */,
                                14DD789318F48D0F00950702 /* ObjectType.h in Headers */,
                                14DD78CB18F48D7500950702 /* PerProcess.h in Headers */,
+                               0F3DA0141F267AB800342C08 /* AllocationKind.h in Headers */,
                                14DD78CC18F48D7500950702 /* PerThread.h in Headers */,
                                14DD78CD18F48D7500950702 /* Range.h in Headers */,
                                148EFAE81D6B953B008E721E /* ScopeExit.h in Headers */,
                                14DD78BD18F48D6B00950702 /* SmallPage.h in Headers */,
                                143CB81D19022BC900B16A45 /* StaticMutex.h in Headers */,
                                14DD78CE18F48D7500950702 /* Syscall.h in Headers */,
+                               0F5BF14D1F22B0C30029D91D /* Gigacage.h in Headers */,
                                14DD78CF18F48D7500950702 /* Vector.h in Headers */,
                                14DD78D018F48D7500950702 /* VMAllocate.h in Headers */,
                                1400274A18F89C2300115C97 /* VMHeap.h in Headers */,
                        isa = PBXSourcesBuildPhase;
                        buildActionMask = 2147483647;
                        files = (
+                               0F5BF1521F22E1570029D91D /* Scavenger.cpp in Sources */,
                                14F271C318EA3978008C152F /* Allocator.cpp in Sources */,
                                6599C5CC1EC3F15900A2F7BB /* AvailableMemory.cpp in Sources */,
                                14F271C418EA397B008C152F /* Cache.cpp in Sources */,
                                142B44361E2839E7001DA6E9 /* DebugHeap.cpp in Sources */,
                                14895D911A3A319C0006235D /* Environment.cpp in Sources */,
                                14F271C718EA3990008C152F /* Heap.cpp in Sources */,
+                               0F5BF14F1F22DEAF0029D91D /* Gigacage.cpp in Sources */,
                                144C07F41C7B70260051BB6A /* LargeMap.cpp in Sources */,
                                4426E2801C838EE0008EB042 /* Logging.cpp in Sources */,
                                14F271C818EA3990008C152F /* ObjectType.cpp in Sources */,
diff --git a/Source/bmalloc/bmalloc/AllocationKind.h b/Source/bmalloc/bmalloc/AllocationKind.h
new file mode 100644 (file)
index 0000000..204c4a2
--- /dev/null
@@ -0,0 +1,36 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+namespace bmalloc {
+
+enum class AllocationKind {
+    Physical,
+    Virtual
+};
+
+} // namespace bmalloc
+
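
AllocationKind distinguishes memory that is committed and touchable (Physical) from
address-space-only reservations (Virtual). A minimal POSIX sketch of that distinction,
assuming plain mmap/mprotect semantics rather than bmalloc's own VMAllocate.h helpers:

    #include <sys/mman.h>
    #include <cstddef>

    // Reserve addresses without committing physical pages (the "Virtual" idea).
    void* reserveVirtual(size_t size)
    {
        void* p = mmap(nullptr, size, PROT_NONE, MAP_PRIVATE | MAP_ANON, -1, 0);
        return p == MAP_FAILED ? nullptr : p;
    }

    // Make the range usable; the OS commits pages on first touch (the "Physical" idea).
    bool commitPhysical(void* p, size_t size)
    {
        return !mprotect(p, size, PROT_READ | PROT_WRITE);
    }
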
index b4dcf62..8215251 100644 (file)
@@ -38,8 +38,9 @@ using namespace std;
 
 namespace bmalloc {
 
-Allocator::Allocator(Heap* heap, Deallocator& deallocator)
-    : m_debugHeap(heap->debugHeap())
+Allocator::Allocator(Heap& heap, Deallocator& deallocator)
+    : m_heap(heap)
+    , m_debugHeap(heap.debugHeap())
     , m_deallocator(deallocator)
 {
     for (size_t sizeClass = 0; sizeClass < sizeClassCount; ++sizeClass)
@@ -59,8 +60,8 @@ void* Allocator::tryAllocate(size_t size)
     if (size <= smallMax)
         return allocate(size);
 
-    std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
-    return PerProcess<Heap>::getFastCase()->tryAllocateLarge(lock, alignment, size);
+    std::lock_guard<StaticMutex> lock(Heap::mutex());
+    return m_heap.tryAllocateLarge(lock, alignment, size);
 }
 
 void* Allocator::allocate(size_t alignment, size_t size)
@@ -88,11 +89,10 @@ void* Allocator::allocateImpl(size_t alignment, size_t size, bool crashOnFailure
     if (size <= smallMax && alignment <= smallMax)
         return allocate(roundUpToMultipleOf(alignment, size));
 
-    std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
-    Heap* heap = PerProcess<Heap>::getFastCase();
+    std::lock_guard<StaticMutex> lock(Heap::mutex());
     if (crashOnFailure)
-        return heap->allocateLarge(lock, alignment, size);
-    return heap->tryAllocateLarge(lock, alignment, size);
+        return m_heap.allocateLarge(lock, alignment, size);
+    return m_heap.tryAllocateLarge(lock, alignment, size);
 }
 
 void* Allocator::reallocate(void* object, size_t newSize)
@@ -101,9 +101,9 @@ void* Allocator::reallocate(void* object, size_t newSize)
         return m_debugHeap->realloc(object, newSize);
 
     size_t oldSize = 0;
-    switch (objectType(object)) {
+    switch (objectType(m_heap.kind(), object)) {
     case ObjectType::Small: {
-        BASSERT(objectType(nullptr) == ObjectType::Small);
+        BASSERT(objectType(m_heap.kind(), nullptr) == ObjectType::Small);
         if (!object)
             break;
 
@@ -112,11 +112,11 @@ void* Allocator::reallocate(void* object, size_t newSize)
         break;
     }
     case ObjectType::Large: {
-        std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
-        oldSize = PerProcess<Heap>::getFastCase()->largeSize(lock, object);
+        std::lock_guard<StaticMutex> lock(Heap::mutex());
+        oldSize = m_heap.largeSize(lock, object);
 
         if (newSize < oldSize && newSize > smallMax) {
-            PerProcess<Heap>::getFastCase()->shrinkLarge(lock, Range(object, oldSize), newSize);
+            m_heap.shrinkLarge(lock, Range(object, oldSize), newSize);
             return object;
         }
         break;
@@ -153,10 +153,9 @@ NO_INLINE void Allocator::refillAllocatorSlowCase(BumpAllocator& allocator, size
 {
     BumpRangeCache& bumpRangeCache = m_bumpRangeCaches[sizeClass];
 
-    std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
+    std::lock_guard<StaticMutex> lock(Heap::mutex());
     m_deallocator.processObjectLog(lock);
-    PerProcess<Heap>::getFastCase()->allocateSmallBumpRanges(
-        lock, sizeClass, allocator, bumpRangeCache, m_deallocator.lineCache(lock));
+    m_heap.allocateSmallBumpRanges(lock, sizeClass, allocator, bumpRangeCache, m_deallocator.lineCache(lock));
 }
 
 INLINE void Allocator::refillAllocator(BumpAllocator& allocator, size_t sizeClass)
@@ -169,8 +168,8 @@ INLINE void Allocator::refillAllocator(BumpAllocator& allocator, size_t sizeClas
 
 NO_INLINE void* Allocator::allocateLarge(size_t size)
 {
-    std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
-    return PerProcess<Heap>::getFastCase()->allocateLarge(lock, alignment, size);
+    std::lock_guard<StaticMutex> lock(Heap::mutex());
+    return m_heap.allocateLarge(lock, alignment, size);
 }
 
 NO_INLINE void* Allocator::allocateLogSizeClass(size_t size)
index 1d5022b..6d35132 100644 (file)
@@ -26,6 +26,7 @@
 #ifndef Allocator_h
 #define Allocator_h
 
+#include "BExport.h"
 #include "BumpAllocator.h"
 #include <array>
 
@@ -39,7 +40,7 @@ class Heap;
 
 class Allocator {
 public:
-    Allocator(Heap*, Deallocator&);
+    Allocator(Heap&, Deallocator&);
     ~Allocator();
 
     void* tryAllocate(size_t);
@@ -54,7 +55,7 @@ private:
     void* allocateImpl(size_t alignment, size_t, bool crashOnFailure);
     
     bool allocateFastCase(size_t, void*&);
-    void* allocateSlowCase(size_t);
+    BEXPORT void* allocateSlowCase(size_t);
     
     void* allocateLogSizeClass(size_t);
     void* allocateLarge(size_t);
@@ -65,6 +66,7 @@ private:
     std::array<BumpAllocator, sizeClassCount> m_bumpAllocators;
     std::array<BumpRangeCache, sizeClassCount> m_bumpRangeCaches;
 
+    Heap& m_heap;
     DebugHeap* m_debugHeap;
     Deallocator& m_deallocator;
 };
diff --git a/Source/bmalloc/bmalloc/BExport.h b/Source/bmalloc/bmalloc/BExport.h
new file mode 100644 (file)
index 0000000..38e2b3b
--- /dev/null
@@ -0,0 +1,29 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#define BEXPORT __attribute__((visibility("default")))
+
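
BEXPORT marks the out-of-line slow paths with default visibility. Since the fast paths
are inline and get compiled into clients such as JavaScriptCore, the slow-path symbols
they reference must stay visible even when bmalloc is built with -fvisibility=hidden.
A toy sketch of the shape (the Counter class is invented for illustration):

    #define BEXPORT __attribute__((visibility("default")))

    struct Counter {
        // Inline fast path: instantiated in the caller's binary, so the
        // slow path it calls must be exported from the library.
        int next() { return m_value < m_limit ? m_value++ : nextSlow(); }
    private:
        BEXPORT int nextSlow();
        int m_value { 0 };
        int m_limit { 16 };
    };

    // In the library's .cpp file:
    int Counter::nextSlow() { m_limit *= 2; return m_value++; }
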
index 2fc7036..c3c5bf9 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 
 namespace bmalloc {
 
-void* Cache::operator new(size_t size)
+void Cache::scavenge(HeapKind heapKind)
 {
-    return vmAllocate(vmSize(size));
-}
-
-void Cache::operator delete(void* p, size_t size)
-{
-    vmDeallocate(p, vmSize(size));
-}
-
-void Cache::scavenge()
-{
-    Cache* cache = PerThread<Cache>::getFastCase();
-    if (!cache)
+    PerHeapKind<Cache>* caches = PerThread<PerHeapKind<Cache>>::getFastCase();
+    if (!caches)
         return;
 
-    cache->allocator().scavenge();
-    cache->deallocator().scavenge();
+    caches->at(heapKind).allocator().scavenge();
+    caches->at(heapKind).deallocator().scavenge();
 }
 
-Cache::Cache()
-    : m_deallocator(PerProcess<Heap>::get())
-    , m_allocator(PerProcess<Heap>::get(), m_deallocator)
+Cache::Cache(HeapKind heapKind)
+    : m_deallocator(PerProcess<PerHeapKind<Heap>>::get()->at(heapKind))
+    , m_allocator(PerProcess<PerHeapKind<Heap>>::get()->at(heapKind), m_deallocator)
 {
 }
 
-NO_INLINE void* Cache::tryAllocateSlowCaseNullCache(size_t size)
+NO_INLINE void* Cache::tryAllocateSlowCaseNullCache(HeapKind heapKind, size_t size)
 {
-    return PerThread<Cache>::getSlowCase()->allocator().tryAllocate(size);
+    return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(heapKind).allocator().tryAllocate(size);
 }
 
-NO_INLINE void* Cache::allocateSlowCaseNullCache(size_t size)
+NO_INLINE void* Cache::allocateSlowCaseNullCache(HeapKind heapKind, size_t size)
 {
-    return PerThread<Cache>::getSlowCase()->allocator().allocate(size);
+    return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(heapKind).allocator().allocate(size);
 }
 
-NO_INLINE void* Cache::allocateSlowCaseNullCache(size_t alignment, size_t size)
+NO_INLINE void* Cache::allocateSlowCaseNullCache(HeapKind heapKind, size_t alignment, size_t size)
 {
-    return PerThread<Cache>::getSlowCase()->allocator().allocate(alignment, size);
+    return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(heapKind).allocator().allocate(alignment, size);
 }
 
-NO_INLINE void Cache::deallocateSlowCaseNullCache(void* object)
+NO_INLINE void Cache::deallocateSlowCaseNullCache(HeapKind heapKind, void* object)
 {
-    PerThread<Cache>::getSlowCase()->deallocator().deallocate(object);
+    PerThread<PerHeapKind<Cache>>::getSlowCase()->at(heapKind).deallocator().deallocate(object);
 }
 
-NO_INLINE void* Cache::reallocateSlowCaseNullCache(void* object, size_t newSize)
+NO_INLINE void* Cache::reallocateSlowCaseNullCache(HeapKind heapKind, void* object, size_t newSize)
 {
-    return PerThread<Cache>::getSlowCase()->allocator().reallocate(object, newSize);
+    return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(heapKind).allocator().reallocate(object, newSize);
 }
 
 } // namespace bmalloc
index 6c28b4a..f27c04d 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014, 2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -27,7 +27,9 @@
 #define Cache_h
 
 #include "Allocator.h"
+#include "BExport.h"
 #include "Deallocator.h"
+#include "HeapKind.h"
 #include "PerThread.h"
 
 namespace bmalloc {
@@ -36,80 +38,77 @@ namespace bmalloc {
 
 class Cache {
 public:
-    void* operator new(size_t);
-    void operator delete(void*, size_t);
+    static void* tryAllocate(HeapKind, size_t);
+    static void* allocate(HeapKind, size_t);
+    static void* tryAllocate(HeapKind, size_t alignment, size_t);
+    static void* allocate(HeapKind, size_t alignment, size_t);
+    static void deallocate(HeapKind, void*);
+    static void* reallocate(HeapKind, void*, size_t);
 
-    static void* tryAllocate(size_t);
-    static void* allocate(size_t);
-    static void* tryAllocate(size_t alignment, size_t);
-    static void* allocate(size_t alignment, size_t);
-    static void deallocate(void*);
-    static void* reallocate(void*, size_t);
+    static void scavenge(HeapKind);
 
-    static void scavenge();
-
-    Cache();
+    Cache(HeapKind);
 
     Allocator& allocator() { return m_allocator; }
     Deallocator& deallocator() { return m_deallocator; }
 
 private:
-    static void* tryAllocateSlowCaseNullCache(size_t);
-    static void* allocateSlowCaseNullCache(size_t);
-    static void* allocateSlowCaseNullCache(size_t alignment, size_t);
-    static void deallocateSlowCaseNullCache(void*);
-    static void* reallocateSlowCaseNullCache(void*, size_t);
+    BEXPORT static void* tryAllocateSlowCaseNullCache(HeapKind, size_t);
+    BEXPORT static void* allocateSlowCaseNullCache(HeapKind, size_t);
+    BEXPORT static void* allocateSlowCaseNullCache(HeapKind, size_t alignment, size_t);
+    BEXPORT static void deallocateSlowCaseNullCache(HeapKind, void*);
+    BEXPORT static void* reallocateSlowCaseNullCache(HeapKind, void*, size_t);
 
     Deallocator m_deallocator;
     Allocator m_allocator;
 };
 
-inline void* Cache::tryAllocate(size_t size)
+inline void* Cache::tryAllocate(HeapKind heapKind, size_t size)
 {
-    Cache* cache = PerThread<Cache>::getFastCase();
-    if (!cache)
-        return tryAllocateSlowCaseNullCache(size);
-    return cache->allocator().tryAllocate(size);
+    PerHeapKind<Cache>* caches = PerThread<PerHeapKind<Cache>>::getFastCase();
+    if (!caches)
+        return tryAllocateSlowCaseNullCache(heapKind, size);
+    return caches->at(heapKind).allocator().tryAllocate(size);
 }
 
-inline void* Cache::allocate(size_t size)
+inline void* Cache::allocate(HeapKind heapKind, size_t size)
 {
-    Cache* cache = PerThread<Cache>::getFastCase();
-    if (!cache)
-        return allocateSlowCaseNullCache(size);
-    return cache->allocator().allocate(size);
+    PerHeapKind<Cache>* caches = PerThread<PerHeapKind<Cache>>::getFastCase();
+    if (!caches)
+        return allocateSlowCaseNullCache(heapKind, size);
+    return caches->at(heapKind).allocator().allocate(size);
 }
 
-inline void* Cache::tryAllocate(size_t alignment, size_t size)
+inline void* Cache::tryAllocate(HeapKind heapKind, size_t alignment, size_t size)
 {
-    Cache* cache = PerThread<Cache>::getFastCase();
-    if (!cache)
-        return allocateSlowCaseNullCache(alignment, size);
-    return cache->allocator().tryAllocate(alignment, size);
+    PerHeapKind<Cache>* caches = PerThread<PerHeapKind<Cache>>::getFastCase();
+    if (!caches)
+        return allocateSlowCaseNullCache(heapKind, alignment, size);
+    return caches->at(heapKind).allocator().tryAllocate(alignment, size);
 }
 
-inline void* Cache::allocate(size_t alignment, size_t size)
+inline void* Cache::allocate(HeapKind heapKind, size_t alignment, size_t size)
 {
-    Cache* cache = PerThread<Cache>::getFastCase();
-    if (!cache)
-        return allocateSlowCaseNullCache(alignment, size);
-    return cache->allocator().allocate(alignment, size);
+    PerHeapKind<Cache>* caches = PerThread<PerHeapKind<Cache>>::getFastCase();
+    if (!caches)
+        return allocateSlowCaseNullCache(heapKind, alignment, size);
+    return caches->at(heapKind).allocator().allocate(alignment, size);
 }
 
-inline void Cache::deallocate(void* object)
+inline void Cache::deallocate(HeapKind heapKind, void* object)
 {
-    Cache* cache = PerThread<Cache>::getFastCase();
-    if (!cache)
-        return deallocateSlowCaseNullCache(object);
-    return cache->deallocator().deallocate(object);
+    PerHeapKind<Cache>* caches = PerThread<PerHeapKind<Cache>>::getFastCase();
+    if (!caches)
+        return deallocateSlowCaseNullCache(heapKind, object);
+    return caches->at(heapKind).deallocator().deallocate(object);
 }
 
-inline void* Cache::reallocate(void* object, size_t newSize)
+inline void* Cache::reallocate(HeapKind heapKind, void* object, size_t newSize)
 {
-    Cache* cache = PerThread<Cache>::getFastCase();
-    if (!cache)
-        return reallocateSlowCaseNullCache(object, newSize);
-    return cache->allocator().reallocate(object, newSize);
+    PerHeapKind<Cache>* caches = PerThread<PerHeapKind<Cache>>::getFastCase();
+    if (!caches)
+        return reallocateSlowCaseNullCache(heapKind, object, newSize);
+    return caches->at(heapKind).allocator().reallocate(object, newSize);
 }
 
 } // namespace bmalloc
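
Every Cache entry point now follows the same shape: look up this thread's
PerHeapKind<Cache> array, take the Cache for the requested kind, and fall back to an
exported slow path when the thread-local storage hasn't been created yet. A simplified,
self-contained sketch of that pattern (ThreadCache and tlsCaches are invented; the real
code uses PerThread<PerHeapKind<Cache>>):

    #include <cstddef>
    #include <cstdlib>

    enum class HeapKind { Primary, Gigacage };
    constexpr unsigned numHeaps = 2;

    struct ThreadCache {
        void* allocate(size_t size) { return std::malloc(size); }
    };

    thread_local ThreadCache* tlsCaches[numHeaps]; // null until first use

    void* allocateSlow(HeapKind kind, size_t size) // out-of-line in the real code
    {
        ThreadCache*& slot = tlsCaches[static_cast<unsigned>(kind)];
        slot = new ThreadCache;
        return slot->allocate(size);
    }

    inline void* allocate(HeapKind kind, size_t size)
    {
        ThreadCache* cache = tlsCaches[static_cast<unsigned>(kind)];
        if (!cache)
            return allocateSlow(kind, size); // first allocation on this thread
        return cache->allocate(size);        // fast path: no locks
    }
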
index b39a8a7..e8b7b29 100644 (file)
@@ -39,8 +39,9 @@ using namespace std;
 
 namespace bmalloc {
 
-Deallocator::Deallocator(Heap* heap)
-    : m_debugHeap(heap->debugHeap())
+Deallocator::Deallocator(Heap& heap)
+    : m_heap(heap)
+    , m_debugHeap(heap.debugHeap())
 {
     if (m_debugHeap) {
         // Fill the object log in order to disable the fast path.
@@ -59,18 +60,16 @@ void Deallocator::scavenge()
     if (m_debugHeap)
         return;
 
-    std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
+    std::lock_guard<StaticMutex> lock(Heap::mutex());
 
     processObjectLog(lock);
-    PerProcess<Heap>::getFastCase()->deallocateLineCache(lock, lineCache(lock));
+    m_heap.deallocateLineCache(lock, lineCache(lock));
 }
 
 void Deallocator::processObjectLog(std::lock_guard<StaticMutex>& lock)
 {
-    Heap* heap = PerProcess<Heap>::getFastCase();
-    
     for (Object object : m_objectLog)
-        heap->derefSmallLine(lock, object, lineCache(lock));
+        m_heap.derefSmallLine(lock, object, lineCache(lock));
     m_objectLog.clear();
 }
 
@@ -82,9 +81,9 @@ void Deallocator::deallocateSlowCase(void* object)
     if (!object)
         return;
 
-    std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
-    if (PerProcess<Heap>::getFastCase()->isLarge(lock, object)) {
-        PerProcess<Heap>::getFastCase()->deallocateLarge(lock, object);
+    std::lock_guard<StaticMutex> lock(Heap::mutex());
+    if (m_heap.isLarge(lock, object)) {
+        m_heap.deallocateLarge(lock, object);
         return;
     }
 
index 9a64d45..383a999 100644 (file)
@@ -26,6 +26,7 @@
 #ifndef Deallocator_h
 #define Deallocator_h
 
+#include "BExport.h"
 #include "FixedVector.h"
 #include "SmallPage.h"
 #include <mutex>
@@ -40,7 +41,7 @@ class StaticMutex;
 
 class Deallocator {
 public:
-    Deallocator(Heap*);
+    Deallocator(Heap&);
     ~Deallocator();
 
     void deallocate(void*);
@@ -52,8 +53,9 @@ public:
 
 private:
     bool deallocateFastCase(void*);
-    void deallocateSlowCase(void*);
+    BEXPORT void deallocateSlowCase(void*);
 
+    Heap& m_heap;
     FixedVector<void*, deallocatorLogCapacity> m_objectLog;
     LineCache m_lineCache; // The Heap removes items from this cache.
     DebugHeap* m_debugHeap;
diff --git a/Source/bmalloc/bmalloc/Gigacage.cpp b/Source/bmalloc/bmalloc/Gigacage.cpp
new file mode 100644 (file)
index 0000000..f8a1335
--- /dev/null
@@ -0,0 +1,127 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "Gigacage.h"
+
+#include "PerProcess.h"
+#include "VMAllocate.h"
+#include "Vector.h"
+#include "bmalloc.h"
+#include <mutex>
+
+// FIXME: Ask dyld to put this in its own page, and mprotect the page after we ensure the gigacage.
+// https://bugs.webkit.org/show_bug.cgi?id=174972
+void* g_gigacageBasePtr;
+
+using namespace bmalloc;
+
+namespace Gigacage {
+
+struct Callback {
+    Callback() { }
+    
+    Callback(void (*function)(void*), void *argument)
+        : function(function)
+        , argument(argument)
+    {
+    }
+    
+    void (*function)(void*) { nullptr };
+    void* argument { nullptr };
+};
+
+struct Callbacks {
+    Callbacks(std::lock_guard<StaticMutex>&) { }
+    
+    Vector<Callback> callbacks;
+};
+
+void ensureGigacage()
+{
+#if GIGACAGE_ENABLED
+    static std::once_flag onceFlag;
+    std::call_once(
+        onceFlag,
+        [] {
+            void* basePtr = tryVMAllocate(GIGACAGE_SIZE, GIGACAGE_SIZE + GIGACAGE_RUNWAY);
+            if (!basePtr)
+                return;
+            
+            vmDeallocatePhysicalPages(basePtr, GIGACAGE_SIZE + GIGACAGE_RUNWAY);
+            
+            g_gigacageBasePtr = basePtr;
+        });
+#endif // GIGACAGE_ENABLED
+}
+
+void disableGigacage()
+{
+    ensureGigacage();
+    if (!g_gigacageBasePtr) {
+        // It was never enabled. That means that we never even saved any callbacks. Or, we had already disabled
+        // it before, and already called the callbacks.
+        return;
+    }
+    
+    Callbacks& callbacks = *PerProcess<Callbacks>::get();
+    std::unique_lock<StaticMutex> lock(PerProcess<Callbacks>::mutex());
+    for (Callback& callback : callbacks.callbacks)
+        callback.function(callback.argument);
+    callbacks.callbacks.shrink(0);
+    g_gigacageBasePtr = nullptr;
+}
+
+void addDisableCallback(void (*function)(void*), void* argument)
+{
+    ensureGigacage();
+    if (!g_gigacageBasePtr) {
+        // It was already disabled or we were never able to enable it.
+        function(argument);
+        return;
+    }
+    
+    Callbacks& callbacks = *PerProcess<Callbacks>::get();
+    std::unique_lock<StaticMutex> lock(PerProcess<Callbacks>::mutex());
+    callbacks.callbacks.push(Callback(function, argument));
+}
+
+void removeDisableCallback(void (*function)(void*), void* argument)
+{
+    Callbacks& callbacks = *PerProcess<Callbacks>::get();
+    std::unique_lock<StaticMutex> lock(PerProcess<Callbacks>::mutex());
+    for (size_t i = 0; i < callbacks.callbacks.size(); ++i) {
+        if (callbacks.callbacks[i].function == function
+            && callbacks.callbacks[i].argument == argument) {
+            callbacks.callbacks[i] = callbacks.callbacks.last();
+            callbacks.callbacks.pop();
+            return;
+        }
+    }
+}
+
+} // namespace Gigacage
+
+
+
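
The disable-callback protocol lets clients unwind cage-dependent state:
addDisableCallback() runs the callback immediately if the cage is already gone, and
disableGigacage() drains and clears the list exactly once. A hypothetical caller
(names invented for illustration):

    static void gigacageWasDisabled(void*)
    {
        // Stop producing caged pointers; fall back to uncaged allocation.
    }

    void installGigacageHook()
    {
        // Runs gigacageWasDisabled() right away if the cage never came up.
        Gigacage::addDisableCallback(gigacageWasDisabled, nullptr);
    }
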
diff --git a/Source/bmalloc/bmalloc/Gigacage.h b/Source/bmalloc/bmalloc/Gigacage.h
new file mode 100644 (file)
index 0000000..0e9acef
--- /dev/null
@@ -0,0 +1,76 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "BExport.h"
+#include "BPlatform.h"
+#include <inttypes.h>
+
+// The Gigacage is 64GB.
+#define GIGACAGE_MASK 0xfffffffffllu
+#define GIGACAGE_SIZE (GIGACAGE_MASK + 1)
+
+// FIXME: Consider making this 32GB, in case unsigned 32-bit indices find their way into indexed accesses.
+// https://bugs.webkit.org/show_bug.cgi?id=175062
+#define GIGACAGE_RUNWAY (16llu * 1024 * 1024 * 1024)
+
+#if BOS(DARWIN) && BCPU(X86_64)
+#define GIGACAGE_ENABLED 1
+#else
+#define GIGACAGE_ENABLED 0
+#endif
+
+extern "C" BEXPORT void* g_gigacageBasePtr;
+
+namespace Gigacage {
+
+BEXPORT void ensureGigacage();
+
+BEXPORT void disableGigacage();
+
+// This will call the disable callback immediately if the Gigacage is currently disabled.
+BEXPORT void addDisableCallback(void (*)(void*), void*);
+BEXPORT void removeDisableCallback(void (*)(void*), void*);
+
+template<typename T>
+T* caged(T* ptr)
+{
+    void* gigacageBasePtr = g_gigacageBasePtr;
+    if (!gigacageBasePtr)
+        return ptr;
+    return reinterpret_cast<T*>(
+        reinterpret_cast<uintptr_t>(gigacageBasePtr) + (
+            reinterpret_cast<uintptr_t>(ptr) & static_cast<uintptr_t>(GIGACAGE_MASK)));
+}
+
+inline bool isCaged(const void* ptr)
+{
+    return caged(ptr) == ptr;
+}
+
+} // namespace Gigacage
+
+
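
caged() keeps only the low 36 bits of a pointer and rebases them on the cage, so any
pointer, valid or forged, lands inside the 64GB region. A worked example of the
arithmetic, with an invented, 64GB-aligned base address:

    #include <cassert>
    #include <cstdint>

    int main()
    {
        const uint64_t mask = 0xfffffffffllu;    // GIGACAGE_MASK (36 bits)
        const uint64_t base = 0x7f0000000000llu; // hypothetical cage base

        uint64_t caged  = base + 0x1234;         // well-formed caged pointer
        uint64_t forged = 0xdeadbeefcafellu;     // attacker-controlled value

        assert(base + (caged & mask) == caged);  // in-cage pointers pass through

        uint64_t clamped = base + (forged & mask);
        assert(clamped >= base && clamped <= base + mask); // forced into the cage
        return 0;
    }
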
index fdbdc7b..cdc26ca 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 #include "AvailableMemory.h"
 #include "BumpAllocator.h"
 #include "Chunk.h"
+#include "Gigacage.h"
 #include "DebugHeap.h"
 #include "PerProcess.h"
+#include "Scavenger.h"
 #include "SmallLine.h"
 #include "SmallPage.h"
+#include "VMHeap.h"
+#include "bmalloc.h"
 #include <thread>
 
 namespace bmalloc {
 
-Heap::Heap(std::lock_guard<StaticMutex>&)
-    : m_vmPageSizePhysical(vmPageSizePhysical())
+Heap::Heap(HeapKind kind, std::lock_guard<StaticMutex>&)
+    : m_kind(kind)
+    , m_vmPageSizePhysical(vmPageSizePhysical())
     , m_scavenger(*this, &Heap::concurrentScavenge)
     , m_debugHeap(nullptr)
 {
@@ -49,17 +54,22 @@ Heap::Heap(std::lock_guard<StaticMutex>&)
     
     if (m_environment.isDebugHeapEnabled())
         m_debugHeap = PerProcess<DebugHeap>::get();
-
-#if BOS(DARWIN)
-    auto queue = dispatch_queue_create("WebKit Malloc Memory Pressure Handler", DISPATCH_QUEUE_SERIAL);
-    m_pressureHandlerDispatchSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_MEMORYPRESSURE, 0, DISPATCH_MEMORYPRESSURE_CRITICAL, queue);
-    dispatch_source_set_event_handler(m_pressureHandlerDispatchSource, ^{
-        std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
-        scavenge(lock);
-    });
-    dispatch_resume(m_pressureHandlerDispatchSource);
-    dispatch_release(queue);
+    else {
+        Gigacage::ensureGigacage();
+#if GIGACAGE_ENABLED
+        if (usingGigacage()) {
+            RELEASE_BASSERT(g_gigacageBasePtr);
+            m_largeFree.add(LargeRange(g_gigacageBasePtr, GIGACAGE_SIZE, 0));
+        }
 #endif
+    }
+    
+    PerProcess<Scavenger>::get();
+}
+
+bool Heap::usingGigacage()
+{
+    return m_kind == HeapKind::Gigacage && g_gigacageBasePtr;
 }
 
 void Heap::initializeLineMetadata()
@@ -120,10 +130,10 @@ void Heap::initializePageMetadata()
 
 void Heap::concurrentScavenge()
 {
-    std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
+    std::lock_guard<StaticMutex> lock(mutex());
 
 #if BOS(DARWIN)
-    pthread_set_qos_class_self_np(m_requestedScavengerThreadQOSClass, 0);
+    pthread_set_qos_class_self_np(PerProcess<Scavenger>::getFastCase()->requestedScavengerThreadQOSClass(), 0);
 #endif
 
     if (m_isGrowing && !isUnderMemoryPressure()) {
@@ -438,7 +448,7 @@ void Heap::allocateSmallBumpRangesByObject(
     }
 }
 
-LargeRange Heap::splitAndAllocate(LargeRange& range, size_t alignment, size_t size)
+LargeRange Heap::splitAndAllocate(LargeRange& range, size_t alignment, size_t size, AllocationKind allocationKind)
 {
     LargeRange prev;
     LargeRange next;
@@ -457,11 +467,20 @@ LargeRange Heap::splitAndAllocate(LargeRange& range, size_t alignment, size_t si
         next = pair.second;
     }
     
-    if (range.physicalSize() < range.size()) {
-        scheduleScavengerIfUnderMemoryPressure(range.size());
+    switch (allocationKind) {
+    case AllocationKind::Virtual:
+        if (range.physicalSize())
+            vmDeallocatePhysicalPagesSloppy(range.begin(), range.size());
+        break;
         
-        vmAllocatePhysicalPagesSloppy(range.begin() + range.physicalSize(), range.size() - range.physicalSize());
-        range.setPhysicalSize(range.size());
+    case AllocationKind::Physical:
+        if (range.physicalSize() < range.size()) {
+            scheduleScavengerIfUnderMemoryPressure(range.size());
+            
+            vmAllocatePhysicalPagesSloppy(range.begin() + range.physicalSize(), range.size() - range.physicalSize());
+            range.setPhysicalSize(range.size());
+        }
+        break;
     }
     
     if (prev)
@@ -476,7 +495,7 @@ LargeRange Heap::splitAndAllocate(LargeRange& range, size_t alignment, size_t si
     return range;
 }
 
-void* Heap::tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t size)
+void* Heap::tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t size, AllocationKind allocationKind)
 {
     BASSERT(isPowerOfTwo(alignment));
 
@@ -494,21 +513,24 @@ void* Heap::tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, si
 
     LargeRange range = m_largeFree.remove(alignment, size);
     if (!range) {
-        range = m_vmHeap.tryAllocateLargeChunk(alignment, size);
-        if (!range)
+        if (usingGigacage())
             return nullptr;
 
+        range = PerProcess<VMHeap>::get()->tryAllocateLargeChunk(alignment, size, allocationKind);
+        if (!range)
+            return nullptr;
+        
         m_largeFree.add(range);
 
         range = m_largeFree.remove(alignment, size);
     }
 
-    return splitAndAllocate(range, alignment, size).begin();
+    return splitAndAllocate(range, alignment, size, allocationKind).begin();
 }
 
-void* Heap::allocateLarge(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size)
+void* Heap::allocateLarge(std::lock_guard<StaticMutex>& lock, size_t alignment, size_t size, AllocationKind allocationKind)
 {
-    void* result = tryAllocateLarge(lock, alignment, size);
+    void* result = tryAllocateLarge(lock, alignment, size, allocationKind);
     RELEASE_BASSERT(result);
     return result;
 }
@@ -529,16 +551,15 @@ void Heap::shrinkLarge(std::lock_guard<StaticMutex>&, const Range& object, size_
 
     size_t size = m_largeAllocated.remove(object.begin());
     LargeRange range = LargeRange(object, size);
-    splitAndAllocate(range, alignment, newSize);
+    splitAndAllocate(range, alignment, newSize, AllocationKind::Physical);
 
     scheduleScavenger(size);
 }
 
-void Heap::deallocateLarge(std::lock_guard<StaticMutex>&, void* object)
+void Heap::deallocateLarge(std::lock_guard<StaticMutex>&, void* object, AllocationKind allocationKind)
 {
     size_t size = m_largeAllocated.remove(object);
-    m_largeFree.add(LargeRange(object, size, size));
-    
+    m_largeFree.add(LargeRange(object, size, allocationKind == AllocationKind::Physical ? size : 0));
     scheduleScavenger(size);
 }
 
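
splitAndAllocate() now branches on AllocationKind: Physical commits any uncommitted
tail of the range, while Virtual does the opposite and returns the backing pages to the
OS, keeping only the address reservation. A POSIX analog of the Virtual case (the real
code uses vmDeallocatePhysicalPagesSloppy from VMAllocate.h):

    #include <sys/mman.h>
    #include <cstddef>

    // Drop the physical pages but keep the mapping; the range re-commits
    // (zero-filled) on the next touch.
    void decommit(void* p, size_t size)
    {
        madvise(p, size, MADV_DONTNEED);
    }
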
index 8a00a23..606adbd 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 #ifndef Heap_h
 #define Heap_h
 
+#include "AllocationKind.h"
 #include "AsyncTask.h"
 #include "BumpRange.h"
+#include "Chunk.h"
 #include "Environment.h"
+#include "HeapKind.h"
 #include "LargeMap.h"
 #include "LineMetadata.h"
 #include "List.h"
 #include "Map.h"
 #include "Mutex.h"
 #include "Object.h"
+#include "PerHeapKind.h"
+#include "PerProcess.h"
 #include "SmallLine.h"
 #include "SmallPage.h"
-#include "VMHeap.h"
 #include "Vector.h"
 #include <array>
 #include <mutex>
 
-#if BOS(DARWIN)
-#include <dispatch/dispatch.h>
-#endif
-
 namespace bmalloc {
 
 class BeginTag;
@@ -55,7 +55,11 @@ class EndTag;
 
 class Heap {
 public:
-    Heap(std::lock_guard<StaticMutex>&);
+    Heap(HeapKind, std::lock_guard<StaticMutex>&);
+    
+    static StaticMutex& mutex() { return PerProcess<PerHeapKind<Heap>>::mutex(); }
+    
+    HeapKind kind() const { return m_kind; }
     
     DebugHeap* debugHeap() { return m_debugHeap; }
 
@@ -64,9 +68,9 @@ public:
     void derefSmallLine(std::lock_guard<StaticMutex>&, Object, LineCache&);
     void deallocateLineCache(std::lock_guard<StaticMutex>&, LineCache&);
 
-    void* allocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t);
-    void* tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t);
-    void deallocateLarge(std::lock_guard<StaticMutex>&, void*);
+    void* allocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t, AllocationKind = AllocationKind::Physical);
+    void* tryAllocateLarge(std::lock_guard<StaticMutex>&, size_t alignment, size_t, AllocationKind = AllocationKind::Physical);
+    void deallocateLarge(std::lock_guard<StaticMutex>&, void*, AllocationKind = AllocationKind::Physical);
 
     bool isLarge(std::lock_guard<StaticMutex>&, void*);
     size_t largeSize(std::lock_guard<StaticMutex>&, void*);
@@ -74,10 +78,6 @@ public:
 
     void scavenge(std::lock_guard<StaticMutex>&);
 
-#if BOS(DARWIN)
-    void setScavengerThreadQOSClass(qos_class_t overrideClass) { m_requestedScavengerThreadQOSClass = overrideClass; }
-#endif
-
 private:
     struct LargeObjectHash {
         static unsigned hash(void* key)
@@ -89,6 +89,8 @@ private:
 
     ~Heap() = delete;
     
+    bool usingGigacage();
+    
     void initializeLineMetadata();
     void initializePageMetadata();
 
@@ -107,13 +109,15 @@ private:
     void mergeLargeLeft(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
     void mergeLargeRight(EndTag*&, BeginTag*&, Range&, bool& inVMHeap);
 
-    LargeRange splitAndAllocate(LargeRange&, size_t alignment, size_t);
+    LargeRange splitAndAllocate(LargeRange&, size_t alignment, size_t, AllocationKind);
 
     void scheduleScavenger(size_t);
     void scheduleScavengerIfUnderMemoryPressure(size_t);
     
     void concurrentScavenge();
     
+    HeapKind m_kind;
+    
     size_t m_vmPageSizePhysical;
     Vector<LineMetadata> m_smallLineMetadata;
     std::array<size_t, sizeClassCount> m_pageClasses;
@@ -134,13 +138,6 @@ private:
 
     Environment m_environment;
     DebugHeap* m_debugHeap;
-
-    VMHeap m_vmHeap;
-
-#if BOS(DARWIN)
-    dispatch_source_t m_pressureHandlerDispatchSource;
-    qos_class_t m_requestedScavengerThreadQOSClass { QOS_CLASS_USER_INITIATED };
-#endif
 };
 
 inline void Heap::allocateSmallBumpRanges(
diff --git a/Source/bmalloc/bmalloc/HeapKind.h b/Source/bmalloc/bmalloc/HeapKind.h
new file mode 100644 (file)
index 0000000..4b3f326
--- /dev/null
@@ -0,0 +1,38 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+namespace bmalloc {
+
+enum class HeapKind {
+    Primary,
+    Gigacage
+};
+
+static constexpr unsigned numHeaps = 2;
+
+} // namespace bmalloc
+
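
HeapKind doubles as a dense array index, which is what makes the fixed-size
PerHeapKind<> storage possible; numHeaps must always cover the last enumerator. A
compile-time check one could add (not part of this patch):

    static_assert(static_cast<unsigned>(bmalloc::HeapKind::Gigacage) + 1 == bmalloc::numHeaps,
        "numHeaps must cover every HeapKind");
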
index a8d3397..8aeab00 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 
 namespace bmalloc {
 
-ObjectType objectType(void* object)
+ObjectType objectType(HeapKind kind, void* object)
 {
     if (mightBeLarge(object)) {
         if (!object)
             return ObjectType::Small;
 
-        std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
-        if (PerProcess<Heap>::getFastCase()->isLarge(lock, object))
+        std::lock_guard<StaticMutex> lock(Heap::mutex());
+        if (PerProcess<PerHeapKind<Heap>>::getFastCase()->at(kind).isLarge(lock, object))
             return ObjectType::Large;
     }
     
index 2cc3ab0..097f987 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 #define ObjectType_h
 
 #include "BAssert.h"
+#include "HeapKind.h"
 #include "Sizes.h"
 
 namespace bmalloc {
 
 enum class ObjectType : unsigned char { Small, Large };
 
-ObjectType objectType(void*);
+ObjectType objectType(HeapKind, void*);
 
 inline bool mightBeLarge(void* object)
 {
diff --git a/Source/bmalloc/bmalloc/PerHeapKind.h b/Source/bmalloc/bmalloc/PerHeapKind.h
new file mode 100644 (file)
index 0000000..03ac1a3
--- /dev/null
@@ -0,0 +1,106 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "HeapKind.h"
+
+namespace bmalloc {
+
+template<typename T>
+class PerHeapKindBase {
+public:
+    PerHeapKindBase(const PerHeapKindBase&) = delete;
+    PerHeapKindBase& operator=(const PerHeapKindBase&) = delete;
+    
+    template<typename... Arguments>
+    PerHeapKindBase(Arguments&&... arguments)
+    {
+        for (unsigned i = numHeaps; i--;)
+            new (&at(i)) T(static_cast<HeapKind>(i), std::forward<Arguments>(arguments)...);
+    }
+    
+    static size_t size() { return numHeaps; }
+    
+    T& at(size_t i)
+    {
+        return *reinterpret_cast<T*>(&m_memory[i]);
+    }
+    
+    const T& at(size_t i) const
+    {
+        return *reinterpret_cast<const T*>(&m_memory[i]);
+    }
+    
+    T& at(HeapKind heapKind)
+    {
+        return at(static_cast<size_t>(heapKind));
+    }
+    
+    const T& at(HeapKind heapKind) const
+    {
+        return at(static_cast<size_t>(heapKind));
+    }
+    
+    T& operator[](size_t i) { return at(i); }
+    const T& operator[](size_t i) const { return at(i); }
+    T& operator[](HeapKind heapKind) { return at(heapKind); }
+    const T& operator[](HeapKind heapKind) const { return at(heapKind); }
+
+private:
+    typedef typename std::array<typename std::aligned_storage<sizeof(T), std::alignment_of<T>::value>::type, numHeaps> Memory;
+    Memory m_memory;
+};
+
+template<typename T>
+class StaticPerHeapKind : public PerHeapKindBase<T> {
+public:
+    template<typename... Arguments>
+    StaticPerHeapKind(Arguments&&... arguments)
+        : PerHeapKindBase<T>(std::forward<Arguments>(arguments)...)
+    {
+    }
+    
+    ~StaticPerHeapKind() = delete;
+};
+
+template<typename T>
+class PerHeapKind : public PerHeapKindBase<T> {
+public:
+    template<typename... Arguments>
+    PerHeapKind(Arguments&&... arguments)
+        : PerHeapKindBase<T>(std::forward<Arguments>(arguments)...)
+    {
+    }
+    
+    ~PerHeapKind()
+    {
+        for (unsigned i = numHeaps; i--;)
+            this->at(i).~T();
+    }
+};
+
+} // namespace bmalloc
+
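
PerHeapKindBase placement-constructs one T per heap kind into raw aligned storage,
prepending the HeapKind as the first constructor argument; PerHeapKind<> destroys its
elements, while StaticPerHeapKind deliberately never does (for process-lifetime
singletons). A usage sketch with an invented element type:

    #include "PerHeapKind.h"
    #include <cstddef>
    #include <cstdio>

    struct Config {
        Config(bmalloc::HeapKind kind) : kind(kind) { } // kind is prepended for us
        bmalloc::HeapKind kind;
    };

    void dumpConfigs()
    {
        bmalloc::PerHeapKind<Config> configs; // constructs one Config per HeapKind
        for (size_t i = 0; i < configs.size(); ++i)
            std::printf("heap %zu\n", static_cast<size_t>(configs[i].kind));
    }
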
index 3fe1568..fd0aca0 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -28,6 +28,8 @@
 
 #include "BPlatform.h"
 #include "Inline.h"
+#include "PerHeapKind.h"
+#include "VMAllocate.h"
 #include <mutex>
 #include <pthread.h>
 
@@ -63,9 +65,9 @@ private:
 class Cache;
 template<typename T> struct PerThreadStorage;
 
-// For now, we only support PerThread<Cache>. We can expand to other types by
+// For now, we only support PerThread<PerHeapKind<Cache>>. We can expand to other types by
 // using more keys.
-template<> struct PerThreadStorage<Cache> {
+template<> struct PerThreadStorage<PerHeapKind<Cache>> {
     static const pthread_key_t key = __PTK_FRAMEWORK_JAVASCRIPTCORE_KEY0;
 
     static void* get()
@@ -131,14 +133,16 @@ template<typename T>
 void PerThread<T>::destructor(void* p)
 {
     T* t = static_cast<T*>(p);
-    delete t;
+    t->~T();
+    vmDeallocate(t, vmSize(sizeof(T)));
 }
 
 template<typename T>
 T* PerThread<T>::getSlowCase()
 {
     BASSERT(!getFastCase());
-    T* t = new T;
+    T* t = static_cast<T*>(vmAllocate(vmSize(sizeof(T))));
+    new (t) T();
     PerThreadStorage<T>::init(t, destructor);
     return t;
 }
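
Because Cache lost its class-specific operator new/delete, PerThread<> now carves its
storage straight out of vmAllocate() and manages object lifetime manually with
placement new and an explicit destructor call. The same lifecycle in miniature, with
std::malloc standing in for vmAllocate:

    #include <cstdlib>
    #include <new>

    template<typename T>
    T* createRaw()
    {
        void* memory = std::malloc(sizeof(T));
        return new (memory) T(); // construct in place; no operator new required
    }

    template<typename T>
    void destroyRaw(T* t)
    {
        t->~T();      // run the destructor explicitly...
        std::free(t); // ...then release the raw storage
    }
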
diff --git a/Source/bmalloc/bmalloc/Scavenger.cpp b/Source/bmalloc/bmalloc/Scavenger.cpp
new file mode 100644 (file)
index 0000000..6b64c27
--- /dev/null
@@ -0,0 +1,54 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "Scavenger.h"
+
+#include "Heap.h"
+#include <thread>
+
+namespace bmalloc {
+
+Scavenger::Scavenger(std::lock_guard<StaticMutex>&)
+{
+#if BOS(DARWIN)
+    auto queue = dispatch_queue_create("WebKit Malloc Memory Pressure Handler", DISPATCH_QUEUE_SERIAL);
+    m_pressureHandlerDispatchSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_MEMORYPRESSURE, 0, DISPATCH_MEMORYPRESSURE_CRITICAL, queue);
+    dispatch_source_set_event_handler(m_pressureHandlerDispatchSource, ^{
+        scavenge();
+    });
+    dispatch_resume(m_pressureHandlerDispatchSource);
+    dispatch_release(queue);
+#endif
+}
+
+void Scavenger::scavenge()
+{
+    std::lock_guard<StaticMutex> lock(Heap::mutex());
+    for (unsigned i = numHeaps; i--;)
+        PerProcess<PerHeapKind<Heap>>::get()->at(i).scavenge(lock);
+}
+
+} // namespace bmalloc
+
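
The Scavenger is a per-process singleton: one memory-pressure source now serves every
heap, and scavenge() walks all of them under the shared heap lock. A portable sketch of
the same idea, using a polling thread instead of the Darwin dispatch source:

    #include <chrono>
    #include <thread>

    void startScavengerLoop(void (*scavengeAllHeaps)())
    {
        std::thread([scavengeAllHeaps] {
            for (;;) {
                std::this_thread::sleep_for(std::chrono::seconds(1));
                scavengeAllHeaps(); // e.g. scavenge each of the numHeaps heaps
            }
        }).detach();
    }
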
diff --git a/Source/bmalloc/bmalloc/Scavenger.h b/Source/bmalloc/bmalloc/Scavenger.h
new file mode 100644 (file)
index 0000000..49731dc
--- /dev/null
@@ -0,0 +1,63 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "BPlatform.h"
+#include "StaticMutex.h"
+#include <mutex>
+
+#if BOS(DARWIN)
+#include <dispatch/dispatch.h>
+#endif
+
+namespace bmalloc {
+
+// FIXME: This class should become a common scavenger mechanism for all heaps.
+// https://bugs.webkit.org/show_bug.cgi?id=174973
+
+class Scavenger {
+public:
+    Scavenger(std::lock_guard<StaticMutex>&);
+    
+    ~Scavenger() = delete;
+    
+    void scavenge();
+    
+#if BOS(DARWIN)
+    void setScavengerThreadQOSClass(qos_class_t overrideClass) { m_requestedScavengerThreadQOSClass = overrideClass; }
+    qos_class_t requestedScavengerThreadQOSClass() const { return m_requestedScavengerThreadQOSClass; }
+#endif
+
+private:
+#if BOS(DARWIN)
+    dispatch_source_t m_pressureHandlerDispatchSource;
+    qos_class_t m_requestedScavengerThreadQOSClass { QOS_CLASS_USER_INITIATED };
+#endif
+};
+
+} // namespace bmalloc
+
+
index 87c36af..0d89d28 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 
 namespace bmalloc {
 
-LargeRange VMHeap::tryAllocateLargeChunk(size_t alignment, size_t size)
+VMHeap::VMHeap(std::lock_guard<StaticMutex>&)
+{
+}
+
+LargeRange VMHeap::tryAllocateLargeChunk(size_t alignment, size_t size, AllocationKind allocationKind)
 {
     // We allocate VM in aligned multiples to increase the chances that
     // the OS will provide contiguous ranges that we can merge.
@@ -46,11 +50,14 @@ LargeRange VMHeap::tryAllocateLargeChunk(size_t alignment, size_t size)
     void* memory = tryVMAllocate(alignment, size);
     if (!memory)
         return LargeRange();
+    
+    if (allocationKind == AllocationKind::Virtual)
+        vmDeallocatePhysicalPagesSloppy(memory, size);
 
     Chunk* chunk = static_cast<Chunk*>(memory);
     
 #if BOS(DARWIN)
-    m_zone.addRange(Range(chunk->bytes(), size));
+    PerProcess<Zone>::get()->addRange(Range(chunk->bytes(), size));
 #endif
 
     return LargeRange(chunk->bytes(), size, 0);
index 6ecdd5b..1361b94 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 #ifndef VMHeap_h
 #define VMHeap_h
 
+#include "AllocationKind.h"
 #include "Chunk.h"
 #include "FixedVector.h"
+#include "HeapKind.h"
 #include "LargeRange.h"
 #include "Map.h"
 #include "Vector.h"
@@ -45,12 +47,9 @@ typedef enum { Sync, Async } ScavengeMode;
 
 class VMHeap {
 public:
-    LargeRange tryAllocateLargeChunk(size_t alignment, size_t);
+    VMHeap(std::lock_guard<StaticMutex>&);
     
-private:
-#if BOS(DARWIN)
-    Zone m_zone;
-#endif
+    LargeRange tryAllocateLargeChunk(size_t alignment, size_t, AllocationKind);
 };
 
 } // namespace bmalloc
index a18e7ce..7b35b0f 100644 (Source/bmalloc/bmalloc/Zone.cpp)
@@ -115,7 +115,7 @@ static const malloc_introspection_t zoneIntrospect = {
     .statistics = bmalloc::statistics
 };
 
-Zone::Zone()
+Zone::Zone(std::lock_guard<StaticMutex>&)
 {
     malloc_zone_t::size = &bmalloc::zoneSize;
     malloc_zone_t::zone_name = "WebKit Malloc";
index 6253418..7a00d17 100644 (Source/bmalloc/bmalloc/Zone.h)
@@ -28,7 +28,9 @@
 
 #include "FixedVector.h"
 #include "Range.h"
+#include "StaticMutex.h"
 #include <malloc/malloc.h>
+#include <mutex>
 
 namespace bmalloc {
 
@@ -39,7 +41,7 @@ public:
     // Enough capacity to track a 64GB heap, so probably enough for anything.
     static const size_t capacity = 2048;
 
-    Zone();
+    Zone(std::lock_guard<StaticMutex>&);
     Zone(task_t, memory_reader_t, vm_address_t);
 
     void addRange(Range);
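 
Zone's constructor, like VMHeap's and Scavenger's above, now takes a std::lock_guard<StaticMutex>&. The guard acts as a capability token: assuming PerProcess<T>::get() constructs its singleton under a global lock and forwards the guard, a constructor that demands this parameter can only run while that lock is held. A minimal sketch of the convention; ExampleSingleton is illustrative:

    #include "PerProcess.h"
    #include "StaticMutex.h"
    #include <mutex>

    class ExampleSingleton {
    public:
        // Taking the guard by reference documents, and enforces at the type
        // level, that construction happens under the PerProcess lock.
        ExampleSingleton(std::lock_guard<bmalloc::StaticMutex>&) { }
    };

    // The first call constructs the instance under the lock; subsequent calls
    // take the fast path:
    //     ExampleSingleton* s = bmalloc::PerProcess<ExampleSingleton>::get();
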
index 36cdf37..5d6dd21 100644 (Source/bmalloc/bmalloc/bmalloc.h)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2014-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 
 #include "AvailableMemory.h"
 #include "Cache.h"
+#include "Gigacage.h"
 #include "Heap.h"
+#include "PerHeapKind.h"
 #include "PerProcess.h"
+#include "Scavenger.h"
 #include "StaticMutex.h"
 
 namespace bmalloc {
 namespace api {
 
 // Returns null on failure.
-inline void* tryMalloc(size_t size)
+inline void* tryMalloc(size_t size, HeapKind kind = HeapKind::Primary)
 {
-    return Cache::tryAllocate(size);
+    return Cache::tryAllocate(kind, size);
 }
 
 // Crashes on failure.
-inline void* malloc(size_t size)
+inline void* malloc(size_t size, HeapKind kind = HeapKind::Primary)
 {
-    return Cache::allocate(size);
+    return Cache::allocate(kind, size);
 }
 
 // Returns null on failure.
-inline void* tryMemalign(size_t alignment, size_t size)
+inline void* tryMemalign(size_t alignment, size_t size, HeapKind kind = HeapKind::Primary)
 {
-    return Cache::tryAllocate(alignment, size);
+    return Cache::tryAllocate(kind, alignment, size);
 }
 
 // Crashes on failure.
-inline void* memalign(size_t alignment, size_t size)
+inline void* memalign(size_t alignment, size_t size, HeapKind kind = HeapKind::Primary)
 {
-    return Cache::allocate(alignment, size);
+    return Cache::allocate(kind, alignment, size);
 }
 
 // Crashes on failure.
-inline void* realloc(void* object, size_t newSize)
+inline void* realloc(void* object, size_t newSize, HeapKind kind = HeapKind::Primary)
 {
-    return Cache::reallocate(object, newSize);
+    return Cache::reallocate(kind, object, newSize);
 }
 
-inline void free(void* object)
+// Returns null on failure.
+inline void* tryLargeMemalignVirtual(size_t alignment, size_t size, HeapKind kind = HeapKind::Primary)
 {
-    Cache::deallocate(object);
+    Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
+    std::lock_guard<StaticMutex> lock(Heap::mutex());
+    return heap.tryAllocateLarge(lock, alignment, size, AllocationKind::Virtual);
+}
+
+inline void free(void* object, HeapKind kind = HeapKind::Primary)
+{
+    Cache::deallocate(kind, object);
+}
+
+inline void freeLargeVirtual(void* object, HeapKind kind = HeapKind::Primary)
+{
+    Heap& heap = PerProcess<PerHeapKind<Heap>>::get()->at(kind);
+    std::lock_guard<StaticMutex> lock(Heap::mutex());
+    heap.deallocateLarge(lock, object, AllocationKind::Virtual);
 }
 
 inline void scavengeThisThread()
 {
-    Cache::scavenge();
+    for (unsigned i = numHeaps; i--;)
+        Cache::scavenge(static_cast<HeapKind>(i));
 }
 
 inline void scavenge()
 {
     scavengeThisThread();
 
-    std::lock_guard<StaticMutex> lock(PerProcess<Heap>::mutex());
-    PerProcess<Heap>::get()->scavenge(lock);
+    PerProcess<Scavenger>::get()->scavenge();
 }
 
-inline bool isEnabled()
+inline bool isEnabled(HeapKind kind = HeapKind::Primary)
 {
-    std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
-    return !PerProcess<Heap>::getFastCase()->debugHeap();
+    std::unique_lock<StaticMutex> lock(Heap::mutex());
+    return !PerProcess<PerHeapKind<Heap>>::getFastCase()->at(kind).debugHeap();
 }
     
 inline size_t availableMemory()
@@ -106,8 +124,8 @@ inline double percentAvailableMemoryInUse()
 #if BOS(DARWIN)
 inline void setScavengerThreadQOSClass(qos_class_t overrideClass)
 {
-    std::unique_lock<StaticMutex> lock(PerProcess<Heap>::mutex());
-    PerProcess<Heap>::getFastCase()->setScavengerThreadQOSClass(overrideClass);
+    std::unique_lock<StaticMutex> lock(Heap::mutex());
+    PerProcess<Scavenger>::get()->setScavengerThreadQOSClass(overrideClass);
 }
 #endif
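 
Taken together, these bmalloc.h changes keep existing call sites working unchanged, since every new HeapKind parameter defaults to HeapKind::Primary, while letting callers opt into another heap per call. A hedged usage sketch; HeapKind::Gigacage is an assumed spelling for the second heap kind, and the sizes are arbitrary:

    #include "bmalloc.h"

    void demo() // Illustrative only.
    {
        using namespace bmalloc;

        // Unchanged behavior: these default to HeapKind::Primary.
        void* p = api::malloc(64);
        api::free(p);

        // Opt a variable-length allocation into the Gigacage heap.
        if (void* caged = api::tryMalloc(1024, HeapKind::Gigacage))
            api::free(caged, HeapKind::Gigacage);

        // Reserve 1MB of 64KB-aligned address space with no physical pages
        // committed up front; returns null on failure.
        if (void* range = api::tryLargeMemalignVirtual(64 * 1024, 1024 * 1024))
            api::freeLargeVirtual(range);
    }
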
 
index f31a80d..4ca3d3e 100644 (mbmalloc.cpp)
 
 #include "bmalloc.h"
 
-#define EXPORT __attribute__((visibility("default")))
+#include "BExport.h"
 
 extern "C" {
 
-EXPORT void* mbmalloc(size_t);
-EXPORT void* mbmemalign(size_t, size_t);
-EXPORT void mbfree(void*, size_t);
-EXPORT void* mbrealloc(void*, size_t, size_t);
-EXPORT void mbscavenge();
+BEXPORT void* mbmalloc(size_t);
+BEXPORT void* mbmemalign(size_t, size_t);
+BEXPORT void mbfree(void*, size_t);
+BEXPORT void* mbrealloc(void*, size_t, size_t);
+BEXPORT void mbscavenge();
     
 void* mbmalloc(size_t size)
 {
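 
BExport.h is added by this patch but its contents fall outside this section. Given the EXPORT macro it replaces above, it presumably reduces to:

    // BExport.h, assumed contents mirroring the macro removed above.
    #pragma once

    #define BEXPORT __attribute__((visibility("default")))
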
index e978b67..c60839d 100755 (Tools/Scripts/run-jsc-stress-tests)
@@ -1213,6 +1213,7 @@ def runWebAssembly
         run("wasm-eager-jettison", "-m", "--forceCodeBlockToJettisonDueToOldAge=true", *FTL_OPTIONS)
         run("wasm-no-call-ic", "-m", "--useCallICsForWebAssemblyToJSCalls=false", *FTL_OPTIONS)
         run("wasm-no-tls-context", "-m", "--useFastTLSForWasmContext=false", *FTL_OPTIONS)
+        run("wasm-slow-memory", "-m", "--useWebAssemblyFastMemory=false", *FTL_OPTIONS)
     end
 end