Primitive auxiliaries and JSValue auxiliaries should have separate gigacages
https://bugs.webkit.org/show_bug.cgi?id=174919

Reviewed by Keith Miller.
Source/bmalloc:
This introduces two kinds of Gigacage, Primitive and JSValue. This translates to two kinds of
HeapKind, PrimitiveGigacage and JSValueGigacage.
The new support functionality required renaming Inline.h to BInline.h, INLINE to BINLINE, and
NO_INLINE to BNO_INLINE.
* bmalloc.xcodeproj/project.pbxproj:
* bmalloc/Allocator.cpp:
(bmalloc::Allocator::refillAllocatorSlowCase):
(bmalloc::Allocator::refillAllocator):
(bmalloc::Allocator::allocateLarge):
(bmalloc::Allocator::allocateLogSizeClass):
* bmalloc/AsyncTask.h:
* bmalloc/BInline.h: Copied from Source/bmalloc/bmalloc/Inline.h.
* bmalloc/Cache.cpp:
(bmalloc::Cache::tryAllocateSlowCaseNullCache):
(bmalloc::Cache::allocateSlowCaseNullCache):
(bmalloc::Cache::deallocateSlowCaseNullCache):
(bmalloc::Cache::reallocateSlowCaseNullCache):
* bmalloc/Deallocator.cpp:
* bmalloc/Gigacage.cpp:
(Gigacage::PrimitiveDisableCallbacks::PrimitiveDisableCallbacks):
(Gigacage::ensureGigacage):
(Gigacage::disablePrimitiveGigacage):
(Gigacage::addPrimitiveDisableCallback):
(Gigacage::removePrimitiveDisableCallback):
(Gigacage::Callbacks::Callbacks): Deleted.
(Gigacage::disableGigacage): Deleted.
(Gigacage::addDisableCallback): Deleted.
(Gigacage::removeDisableCallback): Deleted.
* bmalloc/Gigacage.h:
(Gigacage::name):
(Gigacage::basePtr):
(Gigacage::forEachKind):
(Gigacage::caged):
(Gigacage::isCaged):
* bmalloc/Heap.cpp:
(bmalloc::Heap::Heap):
(bmalloc::Heap::usingGigacage):
(bmalloc::Heap::gigacageBasePtr):
* bmalloc/Heap.h:
* bmalloc/HeapKind.h:
(bmalloc::isGigacage):
(bmalloc::gigacageKind):
(bmalloc::heapKind):
* bmalloc/Inline.h: Removed.
* bmalloc/Map.h:
* bmalloc/PerProcess.h:
(bmalloc::PerProcess<T>::getFastCase):
(bmalloc::PerProcess<T>::get):
(bmalloc::PerProcess<T>::getSlowCase):
* bmalloc/PerThread.h:
(bmalloc::PerThread<T>::getFastCase):
* bmalloc/Vector.h:
(bmalloc::Vector<T>::push):
(bmalloc::Vector<T>::shrinkCapacity):
(bmalloc::Vector<T>::growCapacity):
Source/JavaScriptCore:
This adapts JSC to there being two gigacages.
To make matters simpler, this turns AlignedMemoryAllocators into per-VM instances rather than
singletons. I don't think we were gaining anything by making them singletons.
This makes it easy to teach GigacageAlignedMemoryAllocator that there are multiple kinds of
gigacages. We'll have one of those allocators per cage.
From there, this change teaches everyone who previously knew about cages that there are two cages.
This means having to specify either Gigacage::Primitive or Gigacage::JSValue. In most places, this is
easy: typed arrays are Primitive and butterflies are JSValue. But there are a few places where it's
not so obvious, so this change introduces some helpers to make it easy to define what cage you want
to use in one place and refer to it abstractly. We do this in DirectArguments.h and GenericArguments.h.
A lot of the magic of this change is due to CagedBarrierPtr, which combines AuxiliaryBarrier and
CagedPtr. This removes one layer of "get()" calls from a bunch of places.
* JavaScriptCore.xcodeproj/project.pbxproj:
* bytecode/AccessCase.cpp:
(JSC::AccessCase::generateImpl):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::emitAllocateRawObject):
(JSC::DFG::SpeculativeJIT::compileAllocatePropertyStorage):
(JSC::DFG::SpeculativeJIT::compileReallocatePropertyStorage):
(JSC::DFG::SpeculativeJIT::compileNewTypedArray):
(JSC::DFG::SpeculativeJIT::emitAllocateButterfly):
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::compileGetButterfly):
(JSC::FTL::DFG::LowerDFGToB3::compileGetIndexedPropertyStorage):
(JSC::FTL::DFG::LowerDFGToB3::compileNewTypedArray):
(JSC::FTL::DFG::LowerDFGToB3::compileGetDirectPname):
(JSC::FTL::DFG::LowerDFGToB3::compileMaterializeNewObject):
(JSC::FTL::DFG::LowerDFGToB3::allocatePropertyStorageWithSizeImpl):
(JSC::FTL::DFG::LowerDFGToB3::allocateJSArray):
(JSC::FTL::DFG::LowerDFGToB3::caged):
* heap/FastMallocAlignedMemoryAllocator.cpp:
(JSC::FastMallocAlignedMemoryAllocator::instance): Deleted.
* heap/FastMallocAlignedMemoryAllocator.h:
* heap/GigacageAlignedMemoryAllocator.cpp:
(JSC::GigacageAlignedMemoryAllocator::GigacageAlignedMemoryAllocator):
(JSC::GigacageAlignedMemoryAllocator::tryAllocateAlignedMemory):
(JSC::GigacageAlignedMemoryAllocator::freeAlignedMemory):
(JSC::GigacageAlignedMemoryAllocator::dump const):
(JSC::GigacageAlignedMemoryAllocator::instance): Deleted.
* heap/GigacageAlignedMemoryAllocator.h:
* jsc.cpp:
(primitiveGigacageDisabled):
(jscmain):
(gigacageDisabled): Deleted.
* llint/LowLevelInterpreter64.asm:
* runtime/ArrayBuffer.cpp:
(JSC::ArrayBufferContents::tryAllocate):
(JSC::ArrayBuffer::createAdopted):
(JSC::ArrayBuffer::createFromBytes):
* runtime/AuxiliaryBarrier.h:
* runtime/ButterflyInlines.h:
(JSC::Butterfly::createUninitialized):
(JSC::Butterfly::tryCreate):
(JSC::Butterfly::growArrayRight):
* runtime/CagedBarrierPtr.h: Added.
(JSC::CagedBarrierPtr::CagedBarrierPtr):
(JSC::CagedBarrierPtr::clear):
(JSC::CagedBarrierPtr::set):
(JSC::CagedBarrierPtr::get const):
(JSC::CagedBarrierPtr::getMayBeNull const):
(JSC::CagedBarrierPtr::operator== const):
(JSC::CagedBarrierPtr::operator!= const):
(JSC::CagedBarrierPtr::operator bool const):
(JSC::CagedBarrierPtr::setWithoutBarrier):
(JSC::CagedBarrierPtr::operator* const):
(JSC::CagedBarrierPtr::operator-> const):
(JSC::CagedBarrierPtr::operator[] const):
* runtime/DirectArguments.cpp:
(JSC::DirectArguments::overrideThings):
(JSC::DirectArguments::unmapArgument):
* runtime/DirectArguments.h:
(JSC::DirectArguments::isMappedArgument const):
* runtime/GenericArguments.h:
* runtime/GenericArgumentsInlines.h:
(JSC::GenericArguments<Type>::initModifiedArgumentsDescriptor):
(JSC::GenericArguments<Type>::setModifiedArgumentDescriptor):
(JSC::GenericArguments<Type>::isModifiedArgumentDescriptor):
* runtime/HashMapImpl.cpp:
(JSC::HashMapImpl<HashMapBucket>::visitChildren):
* runtime/HashMapImpl.h:
(JSC::HashMapBuffer::create):
(JSC::HashMapImpl::buffer const):
(JSC::HashMapImpl::rehash):
* runtime/JSArray.cpp:
(JSC::JSArray::tryCreateUninitializedRestricted):
(JSC::JSArray::unshiftCountSlowCase):
(JSC::JSArray::setLength):
(JSC::JSArray::pop):
(JSC::JSArray::push):
(JSC::JSArray::fastSlice):
(JSC::JSArray::shiftCountWithArrayStorage):
(JSC::JSArray::shiftCountWithAnyIndexingType):
(JSC::JSArray::unshiftCountWithAnyIndexingType):
(JSC::JSArray::fillArgList):
(JSC::JSArray::copyToArguments):
* runtime/JSArray.h:
(JSC::JSArray::tryCreate):
* runtime/JSArrayBufferView.cpp:
(JSC::JSArrayBufferView::ConstructionContext::ConstructionContext):
(JSC::JSArrayBufferView::finalize):
* runtime/JSLock.cpp:
(JSC::JSLock::didAcquireLock):
* runtime/JSObject.cpp:
(JSC::JSObject::heapSnapshot):
(JSC::JSObject::getOwnPropertySlotByIndex):
(JSC::JSObject::putByIndex):
(JSC::JSObject::enterDictionaryIndexingMode):
(JSC::JSObject::createInitialIndexedStorage):
(JSC::JSObject::createArrayStorage):
(JSC::JSObject::convertUndecidedToInt32):
(JSC::JSObject::convertUndecidedToDouble):
(JSC::JSObject::convertUndecidedToContiguous):
(JSC::JSObject::constructConvertedArrayStorageWithoutCopyingElements):
(JSC::JSObject::convertUndecidedToArrayStorage):
(JSC::JSObject::convertInt32ToDouble):
(JSC::JSObject::convertInt32ToContiguous):
(JSC::JSObject::convertInt32ToArrayStorage):
(JSC::JSObject::convertDoubleToContiguous):
(JSC::JSObject::convertDoubleToArrayStorage):
(JSC::JSObject::convertContiguousToArrayStorage):
(JSC::JSObject::setIndexQuicklyToUndecided):
(JSC::JSObject::ensureArrayStorageExistsAndEnterDictionaryIndexingMode):
(JSC::JSObject::deletePropertyByIndex):
(JSC::JSObject::getOwnPropertyNames):
(JSC::JSObject::putIndexedDescriptor):
(JSC::JSObject::defineOwnIndexedProperty):
(JSC::JSObject::putByIndexBeyondVectorLengthWithoutAttributes):
(JSC::JSObject::putDirectIndexSlowOrBeyondVectorLength):
(JSC::JSObject::getNewVectorLength):
(JSC::JSObject::ensureLengthSlow):
(JSC::JSObject::reallocateAndShrinkButterfly):
(JSC::JSObject::allocateMoreOutOfLineStorage):
(JSC::JSObject::getEnumerableLength):
* runtime/JSObject.h:
(JSC::JSObject::getArrayLength const):
(JSC::JSObject::getVectorLength):
(JSC::JSObject::putDirectIndex):
(JSC::JSObject::canGetIndexQuickly):
(JSC::JSObject::getIndexQuickly):
(JSC::JSObject::tryGetIndexQuickly const):
(JSC::JSObject::canSetIndexQuickly):
(JSC::JSObject::setIndexQuickly):
(JSC::JSObject::initializeIndex):
(JSC::JSObject::initializeIndexWithoutBarrier):
(JSC::JSObject::hasSparseMap):
(JSC::JSObject::inSparseIndexingMode):
(JSC::JSObject::butterfly const):
(JSC::JSObject::butterfly):
(JSC::JSObject::outOfLineStorage const):
(JSC::JSObject::outOfLineStorage):
(JSC::JSObject::ensureInt32):
(JSC::JSObject::ensureDouble):
(JSC::JSObject::ensureContiguous):
(JSC::JSObject::ensureArrayStorage):
(JSC::JSObject::arrayStorage):
(JSC::JSObject::arrayStorageOrNull):
(JSC::JSObject::ensureLength):
* runtime/RegExpMatchesArray.h:
(JSC::tryCreateUninitializedRegExpMatchesArray):
* runtime/VM.cpp:
(JSC::VM::VM):
(JSC::VM::~VM):
(JSC::VM::primitiveGigacageDisabledCallback):
(JSC::VM::primitiveGigacageDisabled):
(JSC::VM::gigacageDisabledCallback): Deleted.
(JSC::VM::gigacageDisabled): Deleted.
* runtime/VM.h:
(JSC::VM::gigacageAuxiliarySpace):
(JSC::VM::firePrimitiveGigacageEnabledIfNecessary):
(JSC::VM::primitiveGigacageEnabled):
(JSC::VM::fireGigacageEnabledIfNecessary): Deleted.
(JSC::VM::gigacageEnabled): Deleted.
* wasm/WasmMemory.cpp:
(JSC::Wasm::Memory::create):
(JSC::Wasm::Memory::~Memory):
(JSC::Wasm::Memory::grow):
Source/WebCore:
No new tests because no change in behavior.
Adapting to API changes - we now specify the AlignedMemoryAllocator differently and we need to be
specific about which Gigacage we're using.
* bindings/js/WebCoreJSClientData.cpp:
(WebCore::JSVMClientData::JSVMClientData):
* platform/graphics/cocoa/GPUBufferMetal.mm:
(WebCore::GPUBuffer::GPUBuffer):
Source/WebKit:
The disable callback is all about the primitive gigacage.
* WebProcess/WebProcess.cpp:
(WebKit::primitiveGigacageDisabled):
(WebKit::m_webSQLiteDatabaseTracker):
(WebKit::gigacageDisabled): Deleted.
Source/WTF:
This mirrors the changes from bmalloc/Gigacage.h.
It also teaches CagedPtr how to reason about multiple gigacages.
* wtf/CagedPtr.h:
(WTF::CagedPtr::get const):
(WTF::CagedPtr::operator[] const):
* wtf/Gigacage.cpp:
(Gigacage::tryMalloc):
(Gigacage::tryAllocateVirtualPages):
(Gigacage::freeVirtualPages):
(Gigacage::tryAlignedMalloc):
(Gigacage::alignedFree):
(Gigacage::free):
* wtf/Gigacage.h:
(Gigacage::disablePrimitiveGigacage):
(Gigacage::addPrimitiveDisableCallback):
(Gigacage::removePrimitiveDisableCallback):
(Gigacage::name):
(Gigacage::basePtr):
(Gigacage::caged):
(Gigacage::isCaged):
(Gigacage::tryAlignedMalloc):
(Gigacage::alignedFree):
(Gigacage::free):
(Gigacage::disableGigacage): Deleted.
(Gigacage::addDisableCallback): Deleted.
(Gigacage::removeDisableCallback): Deleted.
git-svn-id: https://svn.webkit.org/repository/webkit/trunk@220352 268f45cc-cd09-0410-ab3c-d52691b4dbfc
+2017-08-06 Filip Pizlo <fpizlo@apple.com>
+
+ Primitive auxiliaries and JSValue auxiliaries should have separate gigacages
+ https://bugs.webkit.org/show_bug.cgi?id=174919
+
+ Reviewed by Keith Miller.
+
2017-08-07 Commit Queue <commit-queue@webkit.org>
Unreviewed, rolling out r220144.
0FEC3C571F33A45300F59B6C /* FastMallocAlignedMemoryAllocator.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FEC3C551F33A45300F59B6C /* FastMallocAlignedMemoryAllocator.h */; settings = {ATTRIBUTES = (Private, ); }; };
0FEC3C5A1F33A48900F59B6C /* GigacageAlignedMemoryAllocator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FEC3C581F33A48900F59B6C /* GigacageAlignedMemoryAllocator.cpp */; };
0FEC3C5B1F33A48900F59B6C /* GigacageAlignedMemoryAllocator.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FEC3C591F33A48900F59B6C /* GigacageAlignedMemoryAllocator.h */; };
+ 0FEC3C601F379F5300F59B6C /* CagedBarrierPtr.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FEC3C5F1F379F5300F59B6C /* CagedBarrierPtr.h */; settings = {ATTRIBUTES = (Private, ); }; };
0FEC84FE1BDACDAC0080FF74 /* B3ArgumentRegValue.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FEC84B41BDACDAC0080FF74 /* B3ArgumentRegValue.cpp */; };
0FEC84FF1BDACDAC0080FF74 /* B3ArgumentRegValue.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FEC84B51BDACDAC0080FF74 /* B3ArgumentRegValue.h */; };
0FEC85001BDACDAC0080FF74 /* B3BasicBlock.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FEC84B61BDACDAC0080FF74 /* B3BasicBlock.cpp */; };
0FEC3C551F33A45300F59B6C /* FastMallocAlignedMemoryAllocator.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = FastMallocAlignedMemoryAllocator.h; sourceTree = "<group>"; };
0FEC3C581F33A48900F59B6C /* GigacageAlignedMemoryAllocator.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; path = GigacageAlignedMemoryAllocator.cpp; sourceTree = "<group>"; };
0FEC3C591F33A48900F59B6C /* GigacageAlignedMemoryAllocator.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = GigacageAlignedMemoryAllocator.h; sourceTree = "<group>"; };
+ 0FEC3C5F1F379F5300F59B6C /* CagedBarrierPtr.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = CagedBarrierPtr.h; sourceTree = "<group>"; };
0FEC84B41BDACDAC0080FF74 /* B3ArgumentRegValue.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = B3ArgumentRegValue.cpp; path = b3/B3ArgumentRegValue.cpp; sourceTree = "<group>"; };
0FEC84B51BDACDAC0080FF74 /* B3ArgumentRegValue.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = B3ArgumentRegValue.h; path = b3/B3ArgumentRegValue.h; sourceTree = "<group>"; };
0FEC84B61BDACDAC0080FF74 /* B3BasicBlock.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = B3BasicBlock.cpp; path = b3/B3BasicBlock.cpp; sourceTree = "<group>"; };
9E729409190F0306001A91B5 /* BundlePath.mm */,
0FB7F38B15ED8E3800F167B2 /* Butterfly.h */,
0FB7F38C15ED8E3800F167B2 /* ButterflyInlines.h */,
+ 0FEC3C5F1F379F5300F59B6C /* CagedBarrierPtr.h */,
BCA62DFE0E2826230004F30D /* CallData.cpp */,
145C507F0D9DF63B0088F6B9 /* CallData.h */,
FE80C1981D775FB4008510C0 /* CatchScope.cpp */,
0FB14E2318130955009B6B4D /* DFGInlineCacheWrapperInlines.h in Headers */,
A704D90617A0BAA8006BA554 /* DFGInPlaceAbstractState.h in Headers */,
0F2BDC21151E803B00CD8910 /* DFGInsertionSet.h in Headers */,
+ 0FEC3C601F379F5300F59B6C /* CagedBarrierPtr.h in Headers */,
0F300B7C18AB1B1400A6D72E /* DFGIntegerCheckCombiningPhase.h in Headers */,
0F898F321B27689F0083A33C /* DFGIntegerRangeOptimizationPhase.h in Headers */,
0FC97F3E18202119002C9B26 /* DFGInvalidationPointInjectionPhase.h in Headers */,
size_t newSize = newStructure()->outOfLineCapacity() * sizeof(JSValue);
if (allocatingInline) {
- MarkedAllocator* allocator = vm.auxiliarySpace.allocatorFor(newSize);
+ MarkedAllocator* allocator = vm.jsValueGigacageAuxiliarySpace.allocatorFor(newSize);
if (!allocator) {
// Yuck, this case would suck!
m_jit.move(TrustedImmPtr(0), storageGPR);
if (size) {
- if (MarkedAllocator* allocator = m_jit.vm()->auxiliarySpace.allocatorFor(size)) {
+ if (MarkedAllocator* allocator = m_jit.vm()->jsValueGigacageAuxiliarySpace.allocatorFor(size)) {
m_jit.move(TrustedImmPtr(allocator), scratchGPR);
m_jit.emitAllocate(storageGPR, allocator, scratchGPR, scratch2GPR, slowCases);
size_t size = initialOutOfLineCapacity * sizeof(JSValue);
- MarkedAllocator* allocator = m_jit.vm()->auxiliarySpace.allocatorFor(size);
+ MarkedAllocator* allocator = m_jit.vm()->jsValueGigacageAuxiliarySpace.allocatorFor(size);
if (!allocator || node->transition()->previous->couldHaveIndexingHeader()) {
SpeculateCellOperand base(this, node->child1());
size_t newSize = oldSize * outOfLineGrowthFactor;
ASSERT(newSize == node->transition()->next->outOfLineCapacity() * sizeof(JSValue));
- MarkedAllocator* allocator = m_jit.vm()->auxiliarySpace.allocatorFor(newSize);
+ MarkedAllocator* allocator = m_jit.vm()->jsValueGigacageAuxiliarySpace.allocatorFor(newSize);
if (!allocator || node->transition()->previous->couldHaveIndexingHeader()) {
SpeculateCellOperand base(this, node->child1());
m_jit.and32(TrustedImm32(~7), scratchGPR);
}
m_jit.emitAllocateVariableSized(
- storageGPR, m_jit.vm()->auxiliarySpace, scratchGPR, scratchGPR,
+ storageGPR, m_jit.vm()->primitiveGigacageAuxiliarySpace, scratchGPR, scratchGPR,
scratchGPR2, slowCases);
MacroAssembler::Jump done = m_jit.branchTest32(MacroAssembler::Zero, sizeGPR);
m_jit.lshift32(TrustedImm32(3), scratch1);
m_jit.add32(TrustedImm32(sizeof(IndexingHeader)), scratch1, scratch2);
m_jit.emitAllocateVariableSized(
- storageResultGPR, m_jit.vm()->auxiliarySpace, scratch2, scratch1, scratch3, slowCases);
+ storageResultGPR, m_jit.vm()->jsValueGigacageAuxiliarySpace, scratch2, scratch1, scratch3, slowCases);
m_jit.addPtr(TrustedImm32(sizeof(IndexingHeader)), storageResultGPR);
m_jit.store32(sizeGPR, MacroAssembler::Address(storageResultGPR, Butterfly::offsetOfPublicLength()));
{
LValue butterfly = m_out.loadPtr(lowCell(m_node->child1()), m_heaps.JSObject_butterfly);
if (m_node->op() != GetButterflyWithoutCaging)
- butterfly = caged(butterfly);
+ butterfly = caged(Gigacage::JSValue, butterfly);
setStorage(butterfly);
}
}
DFG_ASSERT(m_graph, m_node, isTypedView(m_node->arrayMode().typedArrayType()));
- setStorage(caged(m_out.loadPtr(cell, m_heaps.JSArrayBufferView_vector)));
+ setStorage(caged(Gigacage::Primitive, m_out.loadPtr(cell, m_heaps.JSArrayBufferView_vector)));
}
void compileCheckArray()
m_out.constIntPtr(~static_cast<intptr_t>(7)));
}
- LValue allocator = allocatorForSize(vm().auxiliarySpace, byteSize, slowCase);
+ LValue allocator = allocatorForSize(vm().primitiveGigacageAuxiliarySpace, byteSize, slowCase);
LValue storage = allocateHeapCell(allocator, slowCase);
splatWords(
m_out.neg(m_out.sub(index, m_out.load32(enumerator, m_heaps.JSPropertyNameEnumerator_cachedInlineCapacity))));
int32_t offsetOfFirstProperty = static_cast<int32_t>(offsetInButterfly(firstOutOfLineOffset)) * sizeof(EncodedJSValue);
ValueFromBlock outOfLineResult = m_out.anchor(
- m_out.load64(m_out.baseIndex(m_heaps.properties.atAnyNumber(), caged(storage), realIndex, ScaleEight, offsetOfFirstProperty)));
+ m_out.load64(m_out.baseIndex(m_heaps.properties.atAnyNumber(), caged(Gigacage::JSValue, storage), realIndex, ScaleEight, offsetOfFirstProperty)));
m_out.jump(continuation);
m_out.appendTo(slowCase, continuation);
ValueFromBlock noButterfly = m_out.anchor(m_out.intPtrZero);
LValue startOfStorage = allocateHeapCell(
- allocatorForSize(vm().auxiliarySpace, butterflySize, slowPath),
+ allocatorForSize(vm().jsValueGigacageAuxiliarySpace, butterflySize, slowPath),
slowPath);
LValue fastButterflyValue = m_out.add(
LBasicBlock lastNext = m_out.insertNewBlocksBefore(slowPath);
size_t sizeInBytes = sizeInValues * sizeof(JSValue);
- MarkedAllocator* allocator = vm().auxiliarySpace.allocatorFor(sizeInBytes);
+ MarkedAllocator* allocator = vm().jsValueGigacageAuxiliarySpace.allocatorFor(sizeInBytes);
LValue startOfStorage = allocateHeapCell(m_out.constIntPtr(allocator), slowPath);
ValueFromBlock fastButterfly = m_out.anchor(
m_out.add(m_out.constIntPtr(sizeInBytes + sizeof(IndexingHeader)), startOfStorage));
LValue butterflySize = m_out.add(
payloadSize, m_out.constIntPtr(sizeof(IndexingHeader)));
- LValue allocator = allocatorForSize(vm().auxiliarySpace, butterflySize, failCase);
+ LValue allocator = allocatorForSize(vm().jsValueGigacageAuxiliarySpace, butterflySize, failCase);
LValue startOfStorage = allocateHeapCell(allocator, failCase);
LValue butterfly = m_out.add(startOfStorage, m_out.constIntPtr(sizeof(IndexingHeader)));
}
}
- LValue caged(LValue ptr)
+ LValue caged(Gigacage::Kind kind, LValue ptr)
{
- if (vm().gigacageEnabled().isStillValid()) {
- m_graph.watchpoints().addLazily(vm().gigacageEnabled());
-
- LValue basePtr = m_out.constIntPtr(g_gigacageBasePtr);
- LValue mask = m_out.constIntPtr(GIGACAGE_MASK);
-
- // We don't have to worry about B3 messing up the bitAnd. Also, we want to get B3's excellent
- // codegen for 2-operand andq on x86-64.
- LValue masked = m_out.bitAnd(ptr, mask);
-
- // But B3 will currently mess up the code generation of this add. Basically, any offset from what we
- // compute here will get reassociated and folded with g_gigacageBasePtr. There's a world in which
- // moveConstants() observes that it needs to reassociate in order to hoist the big constants. But
- // it's much easier to just block B3's badness here. That's what we do for now.
- PatchpointValue* patchpoint = m_out.patchpoint(pointerType());
- patchpoint->appendSomeRegister(basePtr);
- patchpoint->appendSomeRegister(masked);
- patchpoint->setGenerator(
- [] (CCallHelpers& jit, const StackmapGenerationParams& params) {
- jit.addPtr(params[1].gpr(), params[2].gpr(), params[0].gpr());
- });
- patchpoint->effects = Effects::none();
- return patchpoint;
+ if (kind == Gigacage::Primitive) {
+ if (vm().primitiveGigacageEnabled().isStillValid())
+ m_graph.watchpoints().addLazily(vm().primitiveGigacageEnabled());
+ else
+ return ptr;
}
- return ptr;
+ LValue basePtr = m_out.constIntPtr(Gigacage::basePtr(kind));
+ LValue mask = m_out.constIntPtr(GIGACAGE_MASK);
+
+ // We don't have to worry about B3 messing up the bitAnd. Also, we want to get B3's excellent
+ // codegen for 2-operand andq on x86-64.
+ LValue masked = m_out.bitAnd(ptr, mask);
+
+ // But B3 will currently mess up the code generation of this add. Basically, any offset from what we
+ // compute here will get reassociated and folded with Gigacage::basePtr. There's a world in which
+ // moveConstants() observes that it needs to reassociate in order to hoist the big constants. But
+ // it's much easier to just block B3's badness here. That's what we do for now.
+ PatchpointValue* patchpoint = m_out.patchpoint(pointerType());
+ patchpoint->appendSomeRegister(basePtr);
+ patchpoint->appendSomeRegister(masked);
+ patchpoint->setGenerator(
+ [] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+ jit.addPtr(params[1].gpr(), params[2].gpr(), params[0].gpr());
+ });
+ patchpoint->effects = Effects::none();
+ return patchpoint;
}
void buildSwitch(SwitchData* data, LType type, LValue switchValue)
namespace JSC {
-FastMallocAlignedMemoryAllocator& FastMallocAlignedMemoryAllocator::instance()
-{
- static FastMallocAlignedMemoryAllocator* result;
- static std::once_flag onceFlag;
- std::call_once(
- onceFlag,
- [] {
- result = new FastMallocAlignedMemoryAllocator();
- });
- return *result;
-}
-
FastMallocAlignedMemoryAllocator::FastMallocAlignedMemoryAllocator()
{
}
class FastMallocAlignedMemoryAllocator : public AlignedMemoryAllocator {
public:
- JS_EXPORT_PRIVATE static FastMallocAlignedMemoryAllocator& instance();
-
+ FastMallocAlignedMemoryAllocator();
~FastMallocAlignedMemoryAllocator();
void* tryAllocateAlignedMemory(size_t alignment, size_t size) override;
void freeAlignedMemory(void*) override;
void dump(PrintStream&) const override;
-
-private:
- FastMallocAlignedMemoryAllocator();
};
} // namespace JSC
#include "config.h"
#include "GigacageAlignedMemoryAllocator.h"
-#include <mutex>
-#include <wtf/Gigacage.h>
-
namespace JSC {
-GigacageAlignedMemoryAllocator& GigacageAlignedMemoryAllocator::instance()
-{
- static GigacageAlignedMemoryAllocator* result;
- static std::once_flag onceFlag;
- std::call_once(
- onceFlag,
- [] {
- result = new GigacageAlignedMemoryAllocator();
- });
- return *result;
-}
-
-GigacageAlignedMemoryAllocator::GigacageAlignedMemoryAllocator()
+GigacageAlignedMemoryAllocator::GigacageAlignedMemoryAllocator(Gigacage::Kind kind)
+ : m_kind(kind)
{
}
void* GigacageAlignedMemoryAllocator::tryAllocateAlignedMemory(size_t alignment, size_t size)
{
- return Gigacage::tryAlignedMalloc(alignment, size);
+ return Gigacage::tryAlignedMalloc(m_kind, alignment, size);
}
void GigacageAlignedMemoryAllocator::freeAlignedMemory(void* basePtr)
{
- Gigacage::alignedFree(basePtr);
+ Gigacage::alignedFree(m_kind, basePtr);
}
void GigacageAlignedMemoryAllocator::dump(PrintStream& out) const
{
- out.print("Gigacage");
+ out.print(Gigacage::name(m_kind), "Gigacage");
}
} // namespace JSC
#pragma once
#include "AlignedMemoryAllocator.h"
+#include <wtf/Gigacage.h>
namespace JSC {
class GigacageAlignedMemoryAllocator : public AlignedMemoryAllocator {
public:
- // FIXME: This shouldn't be a singleton. There should be different instances for primaries, JSValues,
- // and other things.
- // https://bugs.webkit.org/show_bug.cgi?id=174919
- static GigacageAlignedMemoryAllocator& instance();
-
+ GigacageAlignedMemoryAllocator(Gigacage::Kind);
~GigacageAlignedMemoryAllocator();
void* tryAllocateAlignedMemory(size_t alignment, size_t size) override;
void dump(PrintStream&) const override;
private:
- GigacageAlignedMemoryAllocator();
+ Gigacage::Kind m_kind;
};
} // namespace JSC
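The hunks above replace the process-wide `instance()` singleton with per-kind allocator instances. A minimal stand-alone sketch of that shape (toy types, not the real JSC classes; the `name()` strings are assumptions based on the `dump` hunk above):

```cpp
#include <string>

// Toy model of the refactor: an allocator is constructed per cage kind
// instead of being reached through a call_once singleton.
namespace Gigacage {
enum Kind { Primitive, JSValue };
inline const char* name(Kind kind) { return kind == Primitive ? "Primitive" : "JSValue"; }
}

class GigacageAlignedMemoryAllocatorModel {
public:
    explicit GigacageAlignedMemoryAllocatorModel(Gigacage::Kind kind)
        : m_kind(kind)
    {
    }

    // Mirrors dump()'s out.print(Gigacage::name(m_kind), "Gigacage").
    std::string dumpName() const { return std::string(Gigacage::name(m_kind)) + "Gigacage"; }

private:
    Gigacage::Kind m_kind; // each instance is permanently bound to one kind
};
```

With two instances, each allocation and free is routed to the matching cage, which is what lets the heap keep primitive and JSValue auxiliaries in separate reservations.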
return result;
}
-static void gigacageDisabled(void*)
+static void primitiveGigacageDisabled(void*)
{
- dataLog("Gigacage disabled! Aborting.\n");
+ dataLog("Primitive gigacage disabled! Aborting.\n");
UNREACHABLE_FOR_PLATFORM();
}
JSC::Wasm::enableFastMemory();
#endif
if (Gigacage::shouldBeEnabled())
- Gigacage::addDisableCallback(gigacageDisabled, nullptr);
+ Gigacage::addPrimitiveDisableCallback(primitiveGigacageDisabled, nullptr);
int result;
result = runJSC(
end)
end
-macro loadCaged(source, dest, scratch)
+macro loadCaged(basePtr, source, dest, scratch)
loadp source, dest
if GIGACAGE_ENABLED and not C_LOOP
- loadp _g_gigacageBasePtr, scratch
+ loadp basePtr, scratch
btpz scratch, .done
andp constexpr GIGACAGE_MASK, dest
addp scratch, dest
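The parameterized `loadCaged` macro performs a mask-and-rebase: keep the pointer's low bits (its offset within the cage) and re-add the caller-supplied cage base, skipping the rebase when the base pointer is null (cage disabled). A sketch of that arithmetic in C++; the mask value here is an assumed 4GB example, not the real `GIGACAGE_MASK` constant:

```cpp
#include <cstdint>

// Assumed cage size for illustration only; the real constant comes from
// the Gigacage configuration.
constexpr uintptr_t GIGACAGE_MASK_MODEL = (uintptr_t(1) << 32) - 1;

inline uintptr_t caged(uintptr_t basePtr, uintptr_t ptr)
{
    if (!basePtr)
        return ptr; // cage disabled: the macro's "btpz scratch, .done" path
    // "andp constexpr GIGACAGE_MASK, dest" then "addp scratch, dest".
    return basePtr + (ptr & GIGACAGE_MASK_MODEL);
}
```

Passing the base pointer as a macro argument is what lets the same code path cage butterflies against `_g_jsValueGigacageBasePtr` and typed-array vectors against `_g_primitiveGigacageBasePtr`, as the call sites below do.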
macro loadPropertyAtVariableOffset(propertyOffsetAsInt, objectAndStorage, value)
bilt propertyOffsetAsInt, firstOutOfLineOffset, .isInline
- loadCaged(JSObject::m_butterfly[objectAndStorage], objectAndStorage, value)
+ loadCaged(_g_jsValueGigacageBasePtr, JSObject::m_butterfly[objectAndStorage], objectAndStorage, value)
negi propertyOffsetAsInt
sxi2q propertyOffsetAsInt, propertyOffsetAsInt
jmp .ready
macro storePropertyAtVariableOffset(propertyOffsetAsInt, objectAndStorage, value, scratch)
bilt propertyOffsetAsInt, firstOutOfLineOffset, .isInline
- loadCaged(JSObject::m_butterfly[objectAndStorage], objectAndStorage, scratch)
+ loadCaged(_g_jsValueGigacageBasePtr, JSObject::m_butterfly[objectAndStorage], objectAndStorage, scratch)
negi propertyOffsetAsInt
sxi2q propertyOffsetAsInt, propertyOffsetAsInt
jmp .ready
btiz t2, IsArray, .opGetArrayLengthSlow
btiz t2, IndexingShapeMask, .opGetArrayLengthSlow
loadisFromInstruction(1, t1)
- loadCaged(JSObject::m_butterfly[t3], t0, t2)
+ loadCaged(_g_jsValueGigacageBasePtr, JSObject::m_butterfly[t3], t0, t2)
loadi -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0], t0
bilt t0, 0, .opGetArrayLengthSlow
orq tagTypeNumber, t0
loadisFromInstruction(3, t3)
loadConstantOrVariableInt32(t3, t1, .opGetByValSlow)
sxi2q t1, t1
- loadCaged(JSObject::m_butterfly[t0], t3, t5)
+ loadCaged(_g_jsValueGigacageBasePtr, JSObject::m_butterfly[t0], t3, t5)
andi IndexingShapeMask, t2
bieq t2, Int32Shape, .opGetByValIsContiguous
bineq t2, ContiguousShape, .opGetByValNotContiguous
bia t2, LastArrayType - FirstArrayType, .opGetByValSlow
# Sweet, now we know that we have a typed array. Do some basic things now.
- loadCaged(JSArrayBufferView::m_vector[t0], t3, t5)
+ loadCaged(_g_primitiveGigacageBasePtr, JSArrayBufferView::m_vector[t0], t3, t5)
biaeq t1, JSArrayBufferView::m_length[t0], .opGetByValSlow
# Now bisect through the various types. Note that we can treat Uint8ArrayType and
loadisFromInstruction(2, t0)
loadConstantOrVariableInt32(t0, t3, .opPutByValSlow)
sxi2q t3, t3
- loadCaged(JSObject::m_butterfly[t1], t0, t5)
+ loadCaged(_g_jsValueGigacageBasePtr, JSObject::m_butterfly[t1], t0, t5)
andi IndexingShapeMask, t2
bineq t2, Int32Shape, .opPutByValNotInt32
contiguousPutByVal(
size_t size = static_cast<size_t>(numElements) * static_cast<size_t>(elementByteSize);
if (!size)
size = 1; // Make sure malloc actually allocates something, but not too much. We use null to mean that the buffer is neutered.
- m_data = Gigacage::tryMalloc(size);
+ m_data = Gigacage::tryMalloc(Gigacage::Primitive, size);
if (!m_data) {
reset();
return;
memset(m_data, 0, size);
m_sizeInBytes = numElements * elementByteSize;
- m_destructor = [] (void* p) { Gigacage::free(p); };
+ m_destructor = [] (void* p) { Gigacage::free(Gigacage::Primitive, p); };
}
void ArrayBufferContents::makeShared()
// from the cage.
Ref<ArrayBuffer> ArrayBuffer::createAdopted(const void* data, unsigned byteLength)
{
- return createFromBytes(data, byteLength, [] (void* p) { Gigacage::free(p); });
+ return createFromBytes(data, byteLength, [] (void* p) { Gigacage::free(Gigacage::Primitive, p); });
}
// FIXME: We cannot use this except if the memory comes from the cage.
// - WebAssembly. Wasm should allocate from the cage.
Ref<ArrayBuffer> ArrayBuffer::createFromBytes(const void* data, unsigned byteLength, ArrayBufferDestructorFunction&& destructor)
{
- if (data && byteLength && !Gigacage::isCaged(data))
- Gigacage::disableGigacage();
+ if (data && byteLength && !Gigacage::isCaged(Gigacage::Primitive, data))
+ Gigacage::disablePrimitiveGigacage();
ArrayBufferContents contents(const_cast<void*>(data), byteLength, WTFMove(destructor));
return create(WTFMove(contents));
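The `createFromBytes` hunk keeps the existing escape hatch: if caller-supplied memory lies outside the primitive cage, the cage is disabled rather than rejecting the buffer. A toy model of that guard (assumed base and size; the real check is `Gigacage::isCaged`):

```cpp
#include <cstddef>
#include <cstdint>

// Toy model of the primitive cage and the createFromBytes guard. Base and
// size are illustrative assumptions.
struct PrimitiveCageModel {
    uintptr_t base { 0x100000000 };
    uintptr_t size { uintptr_t(1) << 32 };
    bool enabled { true };

    // Unsigned subtraction also rejects pointers below base (it wraps huge).
    bool isCaged(uintptr_t p) const { return p - base < size; }

    void adoptForeignBytes(uintptr_t data, size_t byteLength)
    {
        if (data && byteLength && !isCaged(data))
            enabled = false; // models Gigacage::disablePrimitiveGigacage()
    }
};

inline bool staysEnabledForCagedPtr()
{
    PrimitiveCageModel cage;
    cage.adoptForeignBytes(cage.base + 0x10, 16);
    return cage.enabled;
}

inline bool disablesForForeignPtr()
{
    PrimitiveCageModel cage;
    cage.adoptForeignBytes(0x10, 16);
    return !cage.enabled;
}
```

Only the primitive cage needs disabling here because `createFromBytes` produces array-buffer (primitive) memory; the JSValue cage is unaffected.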
template<typename T>
class AuxiliaryBarrier {
public:
+ typedef T Type;
+
AuxiliaryBarrier(): m_value() { }
template<typename U>
inline Butterfly* Butterfly::createUninitialized(VM& vm, JSCell*, size_t preCapacity, size_t propertyCapacity, bool hasIndexingHeader, size_t indexingPayloadSizeInBytes)
{
size_t size = totalSize(preCapacity, propertyCapacity, hasIndexingHeader, indexingPayloadSizeInBytes);
- void* base = vm.auxiliarySpace.allocate(size);
+ void* base = vm.jsValueGigacageAuxiliarySpace.allocate(size);
Butterfly* result = fromBase(base, preCapacity, propertyCapacity);
return result;
}
inline Butterfly* Butterfly::tryCreate(VM& vm, JSCell*, size_t preCapacity, size_t propertyCapacity, bool hasIndexingHeader, const IndexingHeader& indexingHeader, size_t indexingPayloadSizeInBytes)
{
size_t size = totalSize(preCapacity, propertyCapacity, hasIndexingHeader, indexingPayloadSizeInBytes);
- void* base = vm.auxiliarySpace.tryAllocate(size);
+ void* base = vm.jsValueGigacageAuxiliarySpace.tryAllocate(size);
if (!base)
return nullptr;
Butterfly* result = fromBase(base, preCapacity, propertyCapacity);
void* theBase = base(0, propertyCapacity);
size_t oldSize = totalSize(0, propertyCapacity, hadIndexingHeader, oldIndexingPayloadSizeInBytes);
size_t newSize = totalSize(0, propertyCapacity, true, newIndexingPayloadSizeInBytes);
- void* newBase = vm.auxiliarySpace.tryAllocate(newSize);
+ void* newBase = vm.jsValueGigacageAuxiliarySpace.tryAllocate(newSize);
if (!newBase)
return nullptr;
// FIXME: This probably shouldn't be a memcpy.
--- /dev/null
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "AuxiliaryBarrier.h"
+#include <wtf/CagedPtr.h>
+
+namespace JSC {
+
+class JSCell;
+class VM;
+
+// This is a convenient combo of AuxiliaryBarrier and CagedPtr.
+
+template<Gigacage::Kind passedKind, typename T>
+class CagedBarrierPtr {
+public:
+ static constexpr Gigacage::Kind kind = passedKind;
+ typedef T Type;
+
+ CagedBarrierPtr() { }
+
+ template<typename U>
+ CagedBarrierPtr(VM& vm, JSCell* cell, U&& value)
+ {
+ m_barrier.set(vm, cell, std::forward<U>(value));
+ }
+
+ void clear() { m_barrier.clear(); }
+
+ template<typename U>
+ void set(VM& vm, JSCell* cell, U&& value)
+ {
+ m_barrier.set(vm, cell, std::forward<U>(value));
+ }
+
+ T* get() const { return m_barrier.get().get(); }
+ T* getMayBeNull() const { return m_barrier.get().getMayBeNull(); }
+
+ bool operator==(const CagedBarrierPtr& other) const
+ {
+ return getMayBeNull() == other.getMayBeNull();
+ }
+
+ bool operator!=(const CagedBarrierPtr& other) const
+ {
+ return !(*this == other);
+ }
+
+ explicit operator bool() const
+ {
+ return *this != CagedBarrierPtr();
+ }
+
+ template<typename U>
+ void setWithoutBarrier(U&& value) { m_barrier.setWithoutBarrier(std::forward<U>(value)); }
+
+ T& operator*() const { return *get(); }
+ T* operator->() const { return get(); }
+
+ template<typename IndexType>
+ T& operator[](IndexType index) const { return get()[index]; }
+
+private:
+ AuxiliaryBarrier<CagedPtr<kind, T>> m_barrier;
+};
+
+} // namespace JSC
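The point of the combined wrapper is visible in the call-site hunks below: `m_mappedArguments.get()[index]` becomes `m_mappedArguments[index]` because `CagedBarrierPtr` forwards indexing through the inner `CagedPtr`. A toy model of that composition (stand-in types, not JSC's real `AuxiliaryBarrier`/`CagedPtr`):

```cpp
#include <cstddef>

// Stand-in for CagedPtr: holds a raw pointer (real version applies caging).
template<typename T>
struct CagedPtrModel {
    T* m_ptr { nullptr };
    T* get() const { return m_ptr; }
};

// Stand-in for CagedBarrierPtr: wraps the caged pointer and forwards
// indexing/deref, so call sites drop the explicit .get() chain.
template<typename T>
struct CagedBarrierPtrModel {
    CagedPtrModel<T> m_barrier;
    void setWithoutBarrier(T* p) { m_barrier.m_ptr = p; }
    T& operator[](size_t i) const { return m_barrier.get()[i]; }
    explicit operator bool() const { return m_barrier.get(); }
};

inline bool roundTrips()
{
    bool flags[2] = { false, false };
    CagedBarrierPtrModel<bool> p;
    p.setWithoutBarrier(flags);
    p[1] = true; // writes through to the underlying storage
    return p[1] && flags[1] && static_cast<bool>(p);
}
```

The real class additionally records its `Gigacage::Kind` as a constant, which call sites such as `DirectArguments` use to pick the matching auxiliary space (`vm.gigacageAuxiliarySpace(m_mappedArguments.kind)`).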
putDirect(vm, vm.propertyNames->callee, m_callee.get(), DontEnum);
putDirect(vm, vm.propertyNames->iteratorSymbol, globalObject()->arrayProtoValuesFunction(), DontEnum);
- void* backingStore = vm.auxiliarySpace.tryAllocate(mappedArgumentsSize());
+ void* backingStore = vm.gigacageAuxiliarySpace(m_mappedArguments.kind).tryAllocate(mappedArgumentsSize());
RELEASE_ASSERT(backingStore);
bool* overrides = static_cast<bool*>(backingStore);
m_mappedArguments.set(vm, this, overrides);
void DirectArguments::unmapArgument(VM& vm, unsigned index)
{
overrideThingsIfNecessary(vm);
- m_mappedArguments.get()[index] = true;
+ m_mappedArguments[index] = true;
}
void DirectArguments::copyToArguments(ExecState* exec, VirtualRegister firstElementDest, unsigned offset, unsigned length)
/*
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
#pragma once
-#include "AuxiliaryBarrier.h"
+#include "CagedBarrierPtr.h"
#include "DirectArgumentsOffset.h"
#include "GenericArguments.h"
+#include <wtf/CagedPtr.h>
namespace JSC {
bool isMappedArgument(uint32_t i) const
{
- return i < m_length && (!m_mappedArguments || !m_mappedArguments.get()[i]);
+ return i < m_length && (!m_mappedArguments || !m_mappedArguments[i]);
}
bool isMappedArgumentInDFG(uint32_t i) const
WriteBarrier<JSFunction> m_callee;
uint32_t m_length; // Always the actual length of captured arguments and never what was stored into the length property.
uint32_t m_minCapacity; // The max of this and length determines the capacity of this object. It may be the actual capacity, or maybe something smaller. We arrange it this way to be kind to the JITs.
- AuxiliaryBarrier<bool*> m_mappedArguments; // If non-null, it means that length, callee, and caller are fully materialized properties.
+ CagedBarrierPtr<Gigacage::Primitive, bool> m_mappedArguments; // If non-null, it means that length, callee, and caller are fully materialized properties.
};
} // namespace JSC
/*
- * Copyright (C) 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
#pragma once
+#include "CagedBarrierPtr.h"
#include "JSObject.h"
namespace JSC {
void copyToArguments(ExecState*, VirtualRegister firstElementDest, unsigned offset, unsigned length);
- AuxiliaryBarrier<bool*> m_modifiedArgumentsDescriptor;
+ CagedBarrierPtr<Gigacage::Primitive, bool> m_modifiedArgumentsDescriptor;
};
} // namespace JSC
RELEASE_ASSERT(!m_modifiedArgumentsDescriptor);
if (argsLength) {
- void* backingStore = vm.auxiliarySpace.tryAllocate(WTF::roundUpToMultipleOf<8>(argsLength));
+ void* backingStore = vm.gigacageAuxiliarySpace(m_modifiedArgumentsDescriptor.kind).tryAllocate(WTF::roundUpToMultipleOf<8>(argsLength));
RELEASE_ASSERT(backingStore);
bool* modifiedArguments = static_cast<bool*>(backingStore);
m_modifiedArgumentsDescriptor.set(vm, this, modifiedArguments);
{
initModifiedArgumentsDescriptorIfNecessary(vm, length);
if (index < length)
- m_modifiedArgumentsDescriptor.get()[index] = true;
+ m_modifiedArgumentsDescriptor[index] = true;
}
template<typename Type>
if (!m_modifiedArgumentsDescriptor)
return false;
if (index < length)
- return m_modifiedArgumentsDescriptor.get()[index];
+ return m_modifiedArgumentsDescriptor[index];
return false;
}
visitor.append(thisObject->m_head);
visitor.append(thisObject->m_tail);
- if (HashMapBufferType* buffer = thisObject->m_buffer.get())
+ if (HashMapBufferType* buffer = thisObject->m_buffer.getMayBeNull())
visitor.markAuxiliary(buffer);
}
{
auto scope = DECLARE_THROW_SCOPE(vm);
size_t allocationSize = HashMapBuffer::allocationSize(capacity);
- void* data = vm.auxiliarySpace.tryAllocate(allocationSize);
+ void* data = vm.jsValueGigacageAuxiliarySpace.tryAllocate(allocationSize);
if (!data) {
throwOutOfMemoryError(exec, scope);
return nullptr;
ALWAYS_INLINE HashMapBucketType** buffer() const
{
- return m_buffer.get()->buffer();
+ return m_buffer->buffer();
}
void finishCreation(ExecState* exec, VM& vm)
makeAndSetNewBuffer(exec, vm);
RETURN_IF_EXCEPTION(scope, void());
} else {
- m_buffer.get()->reset(m_capacity);
+ m_buffer->reset(m_capacity);
assertBufferIsEmpty();
}
WriteBarrier<HashMapBucketType> m_head;
WriteBarrier<HashMapBucketType> m_tail;
- AuxiliaryBarrier<HashMapBufferType*> m_buffer;
+ CagedBarrierPtr<Gigacage::JSValue, HashMapBufferType> m_buffer;
uint32_t m_keyCount;
uint32_t m_deleteCount;
uint32_t m_capacity;
|| hasContiguous(indexingType));
unsigned vectorLength = Butterfly::optimalContiguousVectorLength(structure, initialLength);
- void* temp = vm.auxiliarySpace.tryAllocate(deferralContext, Butterfly::totalSize(0, outOfLineStorage, true, vectorLength * sizeof(EncodedJSValue)));
+ void* temp = vm.jsValueGigacageAuxiliarySpace.tryAllocate(deferralContext, Butterfly::totalSize(0, outOfLineStorage, true, vectorLength * sizeof(EncodedJSValue)));
if (UNLIKELY(!temp))
return nullptr;
butterfly = Butterfly::fromBase(temp, 0, outOfLineStorage);
} else {
static const unsigned indexBias = 0;
unsigned vectorLength = ArrayStorage::optimalVectorLength(indexBias, structure, initialLength);
- void* temp = vm.auxiliarySpace.tryAllocate(deferralContext, Butterfly::totalSize(indexBias, outOfLineStorage, true, ArrayStorage::sizeFor(vectorLength)));
+ void* temp = vm.jsValueGigacageAuxiliarySpace.tryAllocate(deferralContext, Butterfly::totalSize(indexBias, outOfLineStorage, true, ArrayStorage::sizeFor(vectorLength)));
if (UNLIKELY(!temp))
return nullptr;
butterfly = Butterfly::fromBase(temp, indexBias, outOfLineStorage);
allocatedNewStorage = false;
} else {
size_t newSize = Butterfly::totalSize(0, propertyCapacity, true, ArrayStorage::sizeFor(desiredCapacity));
- newAllocBase = vm.auxiliarySpace.tryAllocate(newSize);
+ newAllocBase = vm.jsValueGigacageAuxiliarySpace.tryAllocate(newSize);
if (!newAllocBase)
return false;
newStorageCapacity = desiredCapacity;
VM& vm = exec->vm();
auto scope = DECLARE_THROW_SCOPE(vm);
- Butterfly* butterfly = m_butterfly.get().getMayBeNull();
+ Butterfly* butterfly = m_butterfly.getMayBeNull();
switch (indexingType()) {
case ArrayClass:
if (!newLength)
VM& vm = exec->vm();
auto scope = DECLARE_THROW_SCOPE(vm);
- Butterfly* butterfly = m_butterfly.get().getMayBeNull();
+ Butterfly* butterfly = m_butterfly.getMayBeNull();
switch (indexingType()) {
case ArrayClass:
VM& vm = exec->vm();
auto scope = DECLARE_THROW_SCOPE(vm);
- Butterfly* butterfly = m_butterfly.get().getMayBeNull();
+ Butterfly* butterfly = m_butterfly.getMayBeNull();
switch (indexingType()) {
case ArrayClass: {
auto& resultButterfly = *resultArray->butterfly();
if (arrayType == ArrayWithDouble)
- memcpy(resultButterfly.contiguousDouble().data(), m_butterfly.get()->contiguousDouble().data() + startIndex, sizeof(JSValue) * count);
+ memcpy(resultButterfly.contiguousDouble().data(), m_butterfly->contiguousDouble().data() + startIndex, sizeof(JSValue) * count);
else
- memcpy(resultButterfly.contiguous().data(), m_butterfly.get()->contiguous().data() + startIndex, sizeof(JSValue) * count);
+ memcpy(resultButterfly.contiguous().data(), m_butterfly->contiguous().data() + startIndex, sizeof(JSValue) * count);
resultButterfly.setPublicLength(count);
return resultArray;
// Adjust the Butterfly and the index bias. We only need to do this here because we're changing
// the start of the Butterfly, which needs to point at the first indexed property in the used
// portion of the vector.
- Butterfly* butterfly = m_butterfly.get()->shift(structure(), count);
+ Butterfly* butterfly = m_butterfly->shift(structure(), count);
setButterfly(vm, butterfly);
storage = butterfly->arrayStorage();
storage->m_indexBias += count;
VM& vm = exec->vm();
RELEASE_ASSERT(count > 0);
- Butterfly* butterfly = m_butterfly.get().getMayBeNull();
+ Butterfly* butterfly = m_butterfly.getMayBeNull();
switch (indexingType()) {
case ArrayClass:
VM& vm = exec->vm();
auto scope = DECLARE_THROW_SCOPE(vm);
- Butterfly* butterfly = m_butterfly.get().getMayBeNull();
+ Butterfly* butterfly = m_butterfly.getMayBeNull();
switch (indexingType()) {
case ArrayClass:
throwOutOfMemoryError(exec, scope);
return false;
}
- butterfly = m_butterfly.get().getMayBeNull();
+ butterfly = m_butterfly.getMayBeNull();
// We have to check for holes before we start moving things around so that we don't get halfway
// through shifting and then realize we should have been in ArrayStorage mode.
throwOutOfMemoryError(exec, scope);
return false;
}
- butterfly = m_butterfly.get().getMayBeNull();
+ butterfly = m_butterfly.getMayBeNull();
// We have to check for holes before we start moving things around so that we don't get halfway
// through shifting and then realize we should have been in ArrayStorage mode.
unsigned vectorEnd;
WriteBarrier<Unknown>* vector;
- Butterfly* butterfly = m_butterfly.get().getMayBeNull();
+ Butterfly* butterfly = m_butterfly.getMayBeNull();
switch (indexingType()) {
case ArrayClass:
// FIXME: What prevents this from being called with a RuntimeArray? The length function will always return 0 in that case.
ASSERT(length == this->length());
- Butterfly* butterfly = m_butterfly.get().getMayBeNull();
+ Butterfly* butterfly = m_butterfly.getMayBeNull();
switch (indexingType()) {
case ArrayClass:
return;
return nullptr;
unsigned vectorLength = Butterfly::optimalContiguousVectorLength(structure, initialLength);
- void* temp = vm.auxiliarySpace.tryAllocate(nullptr, Butterfly::totalSize(0, outOfLineStorage, true, vectorLength * sizeof(EncodedJSValue)));
+ void* temp = vm.jsValueGigacageAuxiliarySpace.tryAllocate(nullptr, Butterfly::totalSize(0, outOfLineStorage, true, vectorLength * sizeof(EncodedJSValue)));
if (!temp)
return nullptr;
butterfly = Butterfly::fromBase(temp, 0, outOfLineStorage);
void* temp;
size_t size = sizeOf(length, elementSize);
if (size) {
- temp = vm.auxiliarySpace.tryAllocate(nullptr, size);
+ temp = vm.primitiveGigacageAuxiliarySpace.tryAllocate(nullptr, size);
if (!temp)
return;
} else
return;
size_t size = static_cast<size_t>(length) * static_cast<size_t>(elementSize);
- m_vector = Gigacage::tryMalloc(size);
+ m_vector = Gigacage::tryMalloc(Gigacage::Primitive, size);
if (!m_vector)
return;
if (mode == ZeroFill)
JSArrayBufferView* thisObject = static_cast<JSArrayBufferView*>(cell);
ASSERT(thisObject->m_mode == OversizeTypedArray || thisObject->m_mode == WastefulTypedArray);
if (thisObject->m_mode == OversizeTypedArray)
- Gigacage::free(thisObject->m_vector.get());
+ Gigacage::free(Gigacage::Primitive, thisObject->m_vector.get());
}
JSArrayBuffer* JSArrayBufferView::unsharedJSBuffer(ExecState* exec)
// Note: everything below must come after addCurrentThread().
m_vm->traps().notifyGrabAllLocks();
- m_vm->fireGigacageEnabledIfNecessary();
+ m_vm->firePrimitiveGigacageEnabledIfNecessary();
#if ENABLE(SAMPLING_PROFILER)
if (SamplingProfiler* samplingProfiler = m_vm->samplingProfiler())
builder.appendPropertyNameEdge(thisObject, toValue.asCell(), entry.key);
}
- Butterfly* butterfly = thisObject->m_butterfly.get().getMayBeNull();
+ Butterfly* butterfly = thisObject->m_butterfly.getMayBeNull();
if (butterfly) {
WriteBarrier<Unknown>* data = nullptr;
uint32_t count = 0;
}
case ALL_ARRAY_STORAGE_INDEXING_TYPES: {
- ArrayStorage* storage = thisObject->m_butterfly.get()->arrayStorage();
+ ArrayStorage* storage = thisObject->m_butterfly->arrayStorage();
if (i >= storage->length())
return false;
case NonArrayWithArrayStorage:
case ArrayWithArrayStorage: {
- ArrayStorage* storage = thisObject->m_butterfly.get()->arrayStorage();
+ ArrayStorage* storage = thisObject->m_butterfly->arrayStorage();
if (propertyName >= storage->vectorLength())
break;
case NonArrayWithSlowPutArrayStorage:
case ArrayWithSlowPutArrayStorage: {
- ArrayStorage* storage = thisObject->m_butterfly.get()->arrayStorage();
+ ArrayStorage* storage = thisObject->m_butterfly->arrayStorage();
if (propertyName >= storage->vectorLength())
break;
enterDictionaryIndexingModeWhenArrayStorageAlreadyExists(vm, storage);
break;
case ALL_ARRAY_STORAGE_INDEXING_TYPES:
- enterDictionaryIndexingModeWhenArrayStorageAlreadyExists(vm, m_butterfly.get()->arrayStorage());
+ enterDictionaryIndexingModeWhenArrayStorageAlreadyExists(vm, m_butterfly->arrayStorage());
break;
default:
unsigned propertyCapacity = structure->outOfLineCapacity();
unsigned vectorLength = Butterfly::optimalContiguousVectorLength(propertyCapacity, length);
Butterfly* newButterfly = Butterfly::createOrGrowArrayRight(
- m_butterfly.get().getMayBeNull(), vm, this, structure, propertyCapacity, false, 0,
+ m_butterfly.getMayBeNull(), vm, this, structure, propertyCapacity, false, 0,
sizeof(EncodedJSValue) * vectorLength);
newButterfly->setPublicLength(length);
newButterfly->setVectorLength(vectorLength);
IndexingType oldType = indexingType();
ASSERT_UNUSED(oldType, !hasIndexedProperties(oldType));
- Butterfly* newButterfly = createArrayStorageButterfly(vm, this, oldStructure, length, vectorLength, m_butterfly.get().getMayBeNull());
+ Butterfly* newButterfly = createArrayStorageButterfly(vm, this, oldStructure, length, vectorLength, m_butterfly.getMayBeNull());
ArrayStorage* result = newButterfly->arrayStorage();
Structure* newStructure = Structure::nonPropertyTransition(vm, oldStructure, oldStructure->suggestedArrayStorageTransition());
nukeStructureAndSetButterfly(vm, oldStructureID, newButterfly);
{
ASSERT(hasUndecided(indexingType()));
- Butterfly* butterfly = m_butterfly.get().getMayBeNull();
+ Butterfly* butterfly = m_butterfly.getMayBeNull();
for (unsigned i = butterfly->vectorLength(); i--;)
butterfly->contiguousInt32()[i].setWithoutWriteBarrier(JSValue());
setStructure(vm, Structure::nonPropertyTransition(vm, structure(vm), NonPropertyTransition::AllocateInt32));
- return m_butterfly.get()->contiguousInt32();
+ return m_butterfly->contiguousInt32();
}
ContiguousDoubles JSObject::convertUndecidedToDouble(VM& vm)
{
ASSERT(hasUndecided(indexingType()));
- Butterfly* butterfly = m_butterfly.get().get();
+ Butterfly* butterfly = m_butterfly.get();
for (unsigned i = butterfly->vectorLength(); i--;)
butterfly->contiguousDouble()[i] = PNaN;
setStructure(vm, Structure::nonPropertyTransition(vm, structure(vm), NonPropertyTransition::AllocateDouble));
- return m_butterfly.get()->contiguousDouble();
+ return m_butterfly->contiguousDouble();
}
ContiguousJSValues JSObject::convertUndecidedToContiguous(VM& vm)
{
ASSERT(hasUndecided(indexingType()));
- Butterfly* butterfly = m_butterfly.get().get();
+ Butterfly* butterfly = m_butterfly.get();
for (unsigned i = butterfly->vectorLength(); i--;)
butterfly->contiguous()[i].setWithoutWriteBarrier(JSValue());
WTF::storeStoreFence();
setStructure(vm, Structure::nonPropertyTransition(vm, structure(vm), NonPropertyTransition::AllocateContiguous));
- return m_butterfly.get()->contiguous();
+ return m_butterfly->contiguous();
}
ArrayStorage* JSObject::constructConvertedArrayStorageWithoutCopyingElements(VM& vm, unsigned neededLength)
{
Structure* structure = this->structure(vm);
- unsigned publicLength = m_butterfly.get()->publicLength();
+ unsigned publicLength = m_butterfly->publicLength();
unsigned propertyCapacity = structure->outOfLineCapacity();
unsigned propertySize = structure->outOfLineSize();
memcpy(
newButterfly->propertyStorage() - propertySize,
- m_butterfly.get()->propertyStorage() - propertySize,
+ m_butterfly->propertyStorage() - propertySize,
propertySize * sizeof(EncodedJSValue));
ArrayStorage* newStorage = newButterfly->arrayStorage();
DeferGC deferGC(vm.heap);
ASSERT(hasUndecided(indexingType()));
- unsigned vectorLength = m_butterfly.get()->vectorLength();
+ unsigned vectorLength = m_butterfly->vectorLength();
ArrayStorage* storage = constructConvertedArrayStorageWithoutCopyingElements(vm, vectorLength);
for (unsigned i = vectorLength; i--;)
{
ASSERT(hasInt32(indexingType()));
- Butterfly* butterfly = m_butterfly.get().get();
+ Butterfly* butterfly = m_butterfly.get();
for (unsigned i = butterfly->vectorLength(); i--;) {
WriteBarrier<Unknown>* current = &butterfly->contiguousInt32()[i];
double* currentAsDouble = bitwise_cast<double*>(current);
}
setStructure(vm, Structure::nonPropertyTransition(vm, structure(vm), NonPropertyTransition::AllocateDouble));
- return m_butterfly.get()->contiguousDouble();
+ return m_butterfly->contiguousDouble();
}
ContiguousJSValues JSObject::convertInt32ToContiguous(VM& vm)
ASSERT(hasInt32(indexingType()));
setStructure(vm, Structure::nonPropertyTransition(vm, structure(vm), NonPropertyTransition::AllocateContiguous));
- return m_butterfly.get()->contiguous();
+ return m_butterfly->contiguous();
}
ArrayStorage* JSObject::convertInt32ToArrayStorage(VM& vm, NonPropertyTransition transition)
DeferGC deferGC(vm.heap);
ASSERT(hasInt32(indexingType()));
- unsigned vectorLength = m_butterfly.get()->vectorLength();
+ unsigned vectorLength = m_butterfly->vectorLength();
ArrayStorage* newStorage = constructConvertedArrayStorageWithoutCopyingElements(vm, vectorLength);
- Butterfly* butterfly = m_butterfly.get().get();
+ Butterfly* butterfly = m_butterfly.get();
for (unsigned i = 0; i < vectorLength; i++) {
JSValue v = butterfly->contiguous()[i].get();
newStorage->m_vector[i].setWithoutWriteBarrier(v);
{
ASSERT(hasDouble(indexingType()));
- Butterfly* butterfly = m_butterfly.get().get();
+ Butterfly* butterfly = m_butterfly.get();
for (unsigned i = butterfly->vectorLength(); i--;) {
double* current = &butterfly->contiguousDouble()[i];
WriteBarrier<Unknown>* currentAsValue = bitwise_cast<WriteBarrier<Unknown>*>(current);
WTF::storeStoreFence();
setStructure(vm, Structure::nonPropertyTransition(vm, structure(vm), NonPropertyTransition::AllocateContiguous));
- return m_butterfly.get()->contiguous();
+ return m_butterfly->contiguous();
}
ArrayStorage* JSObject::convertDoubleToArrayStorage(VM& vm, NonPropertyTransition transition)
DeferGC deferGC(vm.heap);
ASSERT(hasDouble(indexingType()));
- unsigned vectorLength = m_butterfly.get()->vectorLength();
+ unsigned vectorLength = m_butterfly->vectorLength();
ArrayStorage* newStorage = constructConvertedArrayStorageWithoutCopyingElements(vm, vectorLength);
- Butterfly* butterfly = m_butterfly.get().get();
+ Butterfly* butterfly = m_butterfly.get();
for (unsigned i = 0; i < vectorLength; i++) {
double value = butterfly->contiguousDouble()[i];
if (value != value) {
DeferGC deferGC(vm.heap);
ASSERT(hasContiguous(indexingType()));
- unsigned vectorLength = m_butterfly.get()->vectorLength();
+ unsigned vectorLength = m_butterfly->vectorLength();
ArrayStorage* newStorage = constructConvertedArrayStorageWithoutCopyingElements(vm, vectorLength);
- Butterfly* butterfly = m_butterfly.get().get();
+ Butterfly* butterfly = m_butterfly.get();
for (unsigned i = 0; i < vectorLength; i++) {
JSValue v = butterfly->contiguous()[i].get();
newStorage->m_vector[i].setWithoutWriteBarrier(v);
void JSObject::setIndexQuicklyToUndecided(VM& vm, unsigned index, JSValue value)
{
- ASSERT(index < m_butterfly.get()->publicLength());
- ASSERT(index < m_butterfly.get()->vectorLength());
+ ASSERT(index < m_butterfly->publicLength());
+ ASSERT(index < m_butterfly->vectorLength());
convertUndecidedForValue(vm, value);
setIndexQuickly(vm, index, value);
}
return enterDictionaryIndexingModeWhenArrayStorageAlreadyExists(vm, convertContiguousToArrayStorage(vm));
case ALL_ARRAY_STORAGE_INDEXING_TYPES:
- return enterDictionaryIndexingModeWhenArrayStorageAlreadyExists(vm, m_butterfly.get()->arrayStorage());
+ return enterDictionaryIndexingModeWhenArrayStorageAlreadyExists(vm, m_butterfly->arrayStorage());
default:
CRASH();
}
case ALL_ARRAY_STORAGE_INDEXING_TYPES: {
- ArrayStorage* storage = thisObject->m_butterfly.get()->arrayStorage();
+ ArrayStorage* storage = thisObject->m_butterfly->arrayStorage();
if (i < storage->vectorLength()) {
WriteBarrier<Unknown>& valueSlot = storage->m_vector[i];
}
case ALL_ARRAY_STORAGE_INDEXING_TYPES: {
- ArrayStorage* storage = object->m_butterfly.get()->arrayStorage();
+ ArrayStorage* storage = object->m_butterfly->arrayStorage();
unsigned usedVectorLength = std::min(storage->length(), storage->vectorLength());
for (unsigned i = 0; i < usedVectorLength; ++i) {
bool JSObject::putIndexedDescriptor(ExecState* exec, SparseArrayEntry* entryInMap, const PropertyDescriptor& descriptor, PropertyDescriptor& oldDescriptor)
{
VM& vm = exec->vm();
- auto map = m_butterfly.get()->arrayStorage()->m_sparseMap.get();
+ auto map = m_butterfly->arrayStorage()->m_sparseMap.get();
if (descriptor.isDataDescriptor()) {
if (descriptor.value())
if (descriptor.attributes() & (ReadOnly | Accessor))
notifyPresenceOfIndexedAccessors(vm);
- SparseArrayValueMap* map = m_butterfly.get()->arrayStorage()->m_sparseMap.get();
+ SparseArrayValueMap* map = m_butterfly->arrayStorage()->m_sparseMap.get();
RELEASE_ASSERT(map);
// 1. Let current be the result of calling the [[GetOwnProperty]] internal method of O with property name P.
entryInMap->get(defaults);
putIndexedDescriptor(exec, entryInMap, descriptor, defaults);
- Butterfly* butterfly = m_butterfly.get().get();
+ Butterfly* butterfly = m_butterfly.get();
if (index >= butterfly->arrayStorage()->length())
butterfly->arrayStorage()->setLength(index + 1);
return true;
ASSERT((indexingType() & IndexingShapeMask) == indexingShape);
ASSERT(!indexingShouldBeSparse());
- Butterfly* butterfly = m_butterfly.get().get();
+ Butterfly* butterfly = m_butterfly.get();
// For us to get here, the index is either greater than the public length, or greater than
// or equal to the vector length.
throwOutOfMemoryError(exec, scope);
return false;
}
- butterfly = m_butterfly.get().get();
+ butterfly = m_butterfly.get();
RELEASE_ASSERT(i < butterfly->vectorLength());
switch (indexingShape) {
case ALL_INT32_INDEXING_TYPES: {
if (attributes) {
- if (i < m_butterfly.get()->vectorLength())
+ if (i < m_butterfly->vectorLength())
return putDirectIndexBeyondVectorLengthWithArrayStorage(exec, i, value, attributes, mode, ensureArrayStorageExistsAndEnterDictionaryIndexingMode(vm));
return putDirectIndexBeyondVectorLengthWithArrayStorage(exec, i, value, attributes, mode, convertInt32ToArrayStorage(vm));
}
case ALL_DOUBLE_INDEXING_TYPES: {
if (attributes) {
- if (i < m_butterfly.get()->vectorLength())
+ if (i < m_butterfly->vectorLength())
return putDirectIndexBeyondVectorLengthWithArrayStorage(exec, i, value, attributes, mode, ensureArrayStorageExistsAndEnterDictionaryIndexingMode(vm));
return putDirectIndexBeyondVectorLengthWithArrayStorage(exec, i, value, attributes, mode, convertDoubleToArrayStorage(vm));
}
case ALL_CONTIGUOUS_INDEXING_TYPES: {
if (attributes) {
- if (i < m_butterfly.get()->vectorLength())
+ if (i < m_butterfly->vectorLength())
return putDirectIndexBeyondVectorLengthWithArrayStorage(exec, i, value, attributes, mode, ensureArrayStorageExistsAndEnterDictionaryIndexingMode(vm));
return putDirectIndexBeyondVectorLengthWithArrayStorage(exec, i, value, attributes, mode, convertContiguousToArrayStorage(vm));
}
case ALL_ARRAY_STORAGE_INDEXING_TYPES:
if (attributes) {
- if (i < m_butterfly.get()->vectorLength())
+ if (i < m_butterfly->vectorLength())
return putDirectIndexBeyondVectorLengthWithArrayStorage(exec, i, value, attributes, mode, ensureArrayStorageExistsAndEnterDictionaryIndexingMode(vm));
}
return putDirectIndexBeyondVectorLengthWithArrayStorage(exec, i, value, attributes, mode, arrayStorage());
if (hasIndexedProperties(indexingType())) {
if (ArrayStorage* storage = arrayStorageOrNull())
indexBias = storage->m_indexBias;
- vectorLength = m_butterfly.get()->vectorLength();
- length = m_butterfly.get()->publicLength();
+ vectorLength = m_butterfly->vectorLength();
+ length = m_butterfly->publicLength();
}
return getNewVectorLength(indexBias, vectorLength, length, desiredLength);
bool JSObject::ensureLengthSlow(VM& vm, unsigned length)
{
- Butterfly* butterfly = m_butterfly.get().get();
+ Butterfly* butterfly = m_butterfly.get();
ASSERT(length <= MAX_STORAGE_VECTOR_LENGTH);
ASSERT(hasContiguous(indexingType()) || hasInt32(indexingType()) || hasDouble(indexingType()) || hasUndecided(indexingType()));
{
ASSERT(length <= MAX_STORAGE_VECTOR_LENGTH);
ASSERT(hasContiguous(indexingType()) || hasInt32(indexingType()) || hasDouble(indexingType()) || hasUndecided(indexingType()));
- ASSERT(m_butterfly.get()->vectorLength() > length);
- ASSERT(!m_butterfly.get()->indexingHeader()->preCapacity(structure()));
+ ASSERT(m_butterfly->vectorLength() > length);
+ ASSERT(!m_butterfly->indexingHeader()->preCapacity(structure()));
DeferGC deferGC(vm.heap);
- Butterfly* newButterfly = m_butterfly.get()->resizeArray(vm, this, structure(), 0, ArrayStorage::sizeFor(length));
+ Butterfly* newButterfly = m_butterfly->resizeArray(vm, this, structure(), 0, ArrayStorage::sizeFor(length));
newButterfly->setVectorLength(length);
newButterfly->setPublicLength(length);
WTF::storeStoreFence();
// It's important that this function not rely on structure(), for the property
// capacity, since we might have already mutated the structure in-place.
- return Butterfly::createOrGrowPropertyStorage(m_butterfly.get().getMayBeNull(), vm, this, structure(vm), oldSize, newSize);
+ return Butterfly::createOrGrowPropertyStorage(m_butterfly.getMayBeNull(), vm, this, structure(vm), oldSize, newSize);
}
static JSCustomGetterSetterFunction* getCustomGetterSetterFunctionForGetterSetter(ExecState* exec, PropertyName propertyName, CustomGetterSetter* getterSetter, JSCustomGetterSetterFunction::Type type)
}
case ALL_ARRAY_STORAGE_INDEXING_TYPES: {
- ArrayStorage* storage = object->m_butterfly.get()->arrayStorage();
+ ArrayStorage* storage = object->m_butterfly->arrayStorage();
if (storage->m_sparseMap.get())
return 0;
#include "ArrayConventions.h"
#include "ArrayStorage.h"
-#include "AuxiliaryBarrier.h"
#include "Butterfly.h"
#include "CPU.h"
+#include "CagedBarrierPtr.h"
#include "CallFrame.h"
#include "ClassInfo.h"
#include "CustomGetterSetter.h"
#include "VM.h"
#include "JSString.h"
#include "SparseArrayValueMap.h"
-#include <wtf/CagedPtr.h>
#include <wtf/StdLibExtras.h>
namespace JSC {
{
if (!hasIndexedProperties(indexingType()))
return 0;
- return m_butterfly.get()->publicLength();
+ return m_butterfly->publicLength();
}
unsigned getVectorLength()
{
if (!hasIndexedProperties(indexingType()))
return 0;
- return m_butterfly.get()->vectorLength();
+ return m_butterfly->vectorLength();
}
static bool putInlineForJSObject(JSCell*, ExecState*, PropertyName, JSValue, PutPropertySlot&);
case ALL_DOUBLE_INDEXING_TYPES:
case ALL_CONTIGUOUS_INDEXING_TYPES:
case ALL_ARRAY_STORAGE_INDEXING_TYPES:
- return propertyName < m_butterfly.get()->vectorLength();
+ return propertyName < m_butterfly->vectorLength();
default:
RELEASE_ASSERT_NOT_REACHED();
return false;
bool canGetIndexQuickly(unsigned i)
{
- Butterfly* butterfly = m_butterfly.get().getMayBeNull();
+ Butterfly* butterfly = m_butterfly.getMayBeNull();
switch (indexingType()) {
case ALL_BLANK_INDEXING_TYPES:
case ALL_UNDECIDED_INDEXING_TYPES:
JSValue getIndexQuickly(unsigned i)
{
- Butterfly* butterfly = m_butterfly.get().get();
+ Butterfly* butterfly = m_butterfly.get();
switch (indexingType()) {
case ALL_INT32_INDEXING_TYPES:
return jsNumber(butterfly->contiguous()[i].get().asInt32());
JSValue tryGetIndexQuickly(unsigned i) const
{
- Butterfly* butterfly = m_butterfly.get().getMayBeNull();
+ Butterfly* butterfly = m_butterfly.getMayBeNull();
switch (indexingType()) {
case ALL_BLANK_INDEXING_TYPES:
case ALL_UNDECIDED_INDEXING_TYPES:
bool canSetIndexQuickly(unsigned i)
{
- Butterfly* butterfly = m_butterfly.get().getMayBeNull();
+ Butterfly* butterfly = m_butterfly.getMayBeNull();
switch (indexingType()) {
case ALL_BLANK_INDEXING_TYPES:
case ALL_UNDECIDED_INDEXING_TYPES:
void setIndexQuickly(VM& vm, unsigned i, JSValue v)
{
- Butterfly* butterfly = m_butterfly.get().get();
+ Butterfly* butterfly = m_butterfly.get();
switch (indexingType()) {
case ALL_INT32_INDEXING_TYPES: {
ASSERT(i < butterfly->vectorLength());
ALWAYS_INLINE void initializeIndex(ObjectInitializationScope& scope, unsigned i, JSValue v, IndexingType indexingType)
{
VM& vm = scope.vm();
- Butterfly* butterfly = m_butterfly.get().get();
+ Butterfly* butterfly = m_butterfly.get();
switch (indexingType) {
case ALL_UNDECIDED_INDEXING_TYPES: {
setIndexQuicklyToUndecided(vm, i, v);
// barriers. This implies not having any data format conversions.
ALWAYS_INLINE void initializeIndexWithoutBarrier(ObjectInitializationScope&, unsigned i, JSValue v, IndexingType indexingType)
{
- Butterfly* butterfly = m_butterfly.get().get();
+ Butterfly* butterfly = m_butterfly.get();
switch (indexingType) {
case ALL_UNDECIDED_INDEXING_TYPES: {
RELEASE_ASSERT_NOT_REACHED();
case ALL_CONTIGUOUS_INDEXING_TYPES:
return false;
case ALL_ARRAY_STORAGE_INDEXING_TYPES:
- return !!m_butterfly.get()->arrayStorage()->m_sparseMap;
+ return !!m_butterfly->arrayStorage()->m_sparseMap;
default:
RELEASE_ASSERT_NOT_REACHED();
return false;
case ALL_CONTIGUOUS_INDEXING_TYPES:
return false;
case ALL_ARRAY_STORAGE_INDEXING_TYPES:
- return m_butterfly.get()->arrayStorage()->inSparseMode();
+ return m_butterfly->arrayStorage()->inSparseMode();
default:
RELEASE_ASSERT_NOT_REACHED();
return false;
return inlineStorageUnsafe();
}
- const Butterfly* butterfly() const { return m_butterfly.get().getMayBeNull(); }
- Butterfly* butterfly() { return m_butterfly.get().getMayBeNull(); }
+ const Butterfly* butterfly() const { return m_butterfly.getMayBeNull(); }
+ Butterfly* butterfly() { return m_butterfly.getMayBeNull(); }
- ConstPropertyStorage outOfLineStorage() const { return m_butterfly.get()->propertyStorage(); }
- PropertyStorage outOfLineStorage() { return m_butterfly.get()->propertyStorage(); }
+ ConstPropertyStorage outOfLineStorage() const { return m_butterfly->propertyStorage(); }
+ PropertyStorage outOfLineStorage() { return m_butterfly->propertyStorage(); }
const WriteBarrierBase<Unknown>* locationForOffset(PropertyOffset offset) const
{
ContiguousJSValues ensureInt32(VM& vm)
{
if (LIKELY(hasInt32(indexingType())))
- return m_butterfly.get()->contiguousInt32();
+ return m_butterfly->contiguousInt32();
return ensureInt32Slow(vm);
}
ContiguousDoubles ensureDouble(VM& vm)
{
if (LIKELY(hasDouble(indexingType())))
- return m_butterfly.get()->contiguousDouble();
+ return m_butterfly->contiguousDouble();
return ensureDoubleSlow(vm);
}
ContiguousJSValues ensureContiguous(VM& vm)
{
if (LIKELY(hasContiguous(indexingType())))
- return m_butterfly.get()->contiguous();
+ return m_butterfly->contiguous();
return ensureContiguousSlow(vm);
}
ArrayStorage* ensureArrayStorage(VM& vm)
{
if (LIKELY(hasAnyArrayStorage(indexingType())))
- return m_butterfly.get()->arrayStorage();
+ return m_butterfly->arrayStorage();
return ensureArrayStorageSlow(vm);
}
ArrayStorage* arrayStorage()
{
ASSERT(hasAnyArrayStorage(indexingType()));
- return m_butterfly.get()->arrayStorage();
+ return m_butterfly->arrayStorage();
}
// Call this if you want to predicate some actions on whether or not the
{
switch (indexingType()) {
case ALL_ARRAY_STORAGE_INDEXING_TYPES:
- return m_butterfly.get()->arrayStorage();
+ return m_butterfly->arrayStorage();
default:
return 0;
ASSERT(length <= MAX_STORAGE_VECTOR_LENGTH);
ASSERT(hasContiguous(indexingType()) || hasInt32(indexingType()) || hasDouble(indexingType()) || hasUndecided(indexingType()));
- if (m_butterfly.get()->vectorLength() < length) {
+ if (m_butterfly->vectorLength() < length) {
if (!ensureLengthSlow(vm, length))
return false;
}
- if (m_butterfly.get()->publicLength() < length)
- m_butterfly.get()->setPublicLength(length);
+ if (m_butterfly->publicLength() < length)
+ m_butterfly->setPublicLength(length);
return true;
}
PropertyOffset prepareToPutDirectWithoutTransition(VM&, PropertyName, unsigned attributes, StructureID, Structure*);
protected:
- AuxiliaryBarrier<CagedPtr<Butterfly>> m_butterfly;
+ CagedBarrierPtr<Gigacage::JSValue, Butterfly> m_butterfly;
#if USE(JSVALUE32_64)
private:
uint32_t m_padding;
JSGlobalObject* globalObject = structure->globalObject();
bool createUninitialized = globalObject->isOriginalArrayStructure(structure);
- void* temp = vm.auxiliarySpace.tryAllocate(deferralContext, Butterfly::totalSize(0, structure->outOfLineCapacity(), true, vectorLength * sizeof(EncodedJSValue)));
+ void* temp = vm.jsValueGigacageAuxiliarySpace.tryAllocate(deferralContext, Butterfly::totalSize(0, structure->outOfLineCapacity(), true, vectorLength * sizeof(EncodedJSValue)));
if (UNLIKELY(!temp))
return nullptr;
Butterfly* butterfly = Butterfly::fromBase(temp, 0, structure->outOfLineCapacity());
, m_runLoop(CFRunLoopGetCurrent())
#endif // USE(CF)
, heap(this, heapType)
- , auxiliarySpace("Auxiliary", heap, AllocatorAttributes(DoesNotNeedDestruction, HeapCell::Auxiliary), &GigacageAlignedMemoryAllocator::instance())
- , cellSpace("JSCell", heap, AllocatorAttributes(DoesNotNeedDestruction, HeapCell::JSCell), &FastMallocAlignedMemoryAllocator::instance())
- , destructibleCellSpace("Destructible JSCell", heap, AllocatorAttributes(NeedsDestruction, HeapCell::JSCell), &FastMallocAlignedMemoryAllocator::instance())
- , stringSpace("JSString", heap, &FastMallocAlignedMemoryAllocator::instance())
- , destructibleObjectSpace("JSDestructibleObject", heap, &FastMallocAlignedMemoryAllocator::instance())
- , eagerlySweptDestructibleObjectSpace("Eagerly Swept JSDestructibleObject", heap, &FastMallocAlignedMemoryAllocator::instance())
- , segmentedVariableObjectSpace("JSSegmentedVariableObjectSpace", heap, &FastMallocAlignedMemoryAllocator::instance())
+ , fastMallocAllocator(std::make_unique<FastMallocAlignedMemoryAllocator>())
+ , primitiveGigacageAllocator(std::make_unique<GigacageAlignedMemoryAllocator>(Gigacage::Primitive))
+ , jsValueGigacageAllocator(std::make_unique<GigacageAlignedMemoryAllocator>(Gigacage::JSValue))
+ , primitiveGigacageAuxiliarySpace("Primitive Gigacage Auxiliary", heap, AllocatorAttributes(DoesNotNeedDestruction, HeapCell::Auxiliary), primitiveGigacageAllocator.get())
+ , jsValueGigacageAuxiliarySpace("JSValue Gigacage Auxiliary", heap, AllocatorAttributes(DoesNotNeedDestruction, HeapCell::Auxiliary), jsValueGigacageAllocator.get())
+ , cellSpace("JSCell", heap, AllocatorAttributes(DoesNotNeedDestruction, HeapCell::JSCell), fastMallocAllocator.get())
+ , destructibleCellSpace("Destructible JSCell", heap, AllocatorAttributes(NeedsDestruction, HeapCell::JSCell), fastMallocAllocator.get())
+ , stringSpace("JSString", heap, fastMallocAllocator.get())
+ , destructibleObjectSpace("JSDestructibleObject", heap, fastMallocAllocator.get())
+ , eagerlySweptDestructibleObjectSpace("Eagerly Swept JSDestructibleObject", heap, fastMallocAllocator.get())
+ , segmentedVariableObjectSpace("JSSegmentedVariableObjectSpace", heap, fastMallocAllocator.get())
#if ENABLE(WEBASSEMBLY)
- , webAssemblyCodeBlockSpace("JSWebAssemblyCodeBlockSpace", heap, &FastMallocAlignedMemoryAllocator::instance())
+ , webAssemblyCodeBlockSpace("JSWebAssemblyCodeBlockSpace", heap, fastMallocAllocator.get())
#endif
, vmType(vmType)
, clientData(0)
, m_codeCache(std::make_unique<CodeCache>())
, m_builtinExecutables(std::make_unique<BuiltinExecutables>(*this))
, m_typeProfilerEnabledCount(0)
- , m_gigacageEnabled(IsWatched)
+ , m_primitiveGigacageEnabled(IsWatched)
, m_controlFlowProfilerEnabledCount(0)
, m_shadowChicken(std::make_unique<ShadowChicken>())
{
initializeHostCallReturnValue(); // This is needed to convince the linker not to drop host call return support.
#endif
- Gigacage::addDisableCallback(gigacageDisabledCallback, this);
+ Gigacage::addPrimitiveDisableCallback(primitiveGigacageDisabledCallback, this);
heap.notifyIsSafeToCollect();
{
auto destructionLocker = holdLock(s_destructionLock.read());
- Gigacage::removeDisableCallback(gigacageDisabledCallback, this);
+ Gigacage::removePrimitiveDisableCallback(primitiveGigacageDisabledCallback, this);
promiseDeferredTimer->stopRunningTasks();
#if ENABLE(WEBASSEMBLY)
if (Wasm::existingWorklistOrNull())
#endif
}
-void VM::gigacageDisabledCallback(void* argument)
+void VM::primitiveGigacageDisabledCallback(void* argument)
{
- static_cast<VM*>(argument)->gigacageDisabled();
+ static_cast<VM*>(argument)->primitiveGigacageDisabled();
}
-void VM::gigacageDisabled()
+void VM::primitiveGigacageDisabled()
{
if (m_apiLock->currentThreadIsHoldingLock()) {
- m_gigacageEnabled.fireAll(*this, "Gigacage disabled");
+ m_primitiveGigacageEnabled.fireAll(*this, "Primitive gigacage disabled");
return;
}
// This is totally racy, and that's OK. The point is, it's up to the user to ensure that they pass the
// uncaged buffer in a nicely synchronized manner.
- m_needToFireGigacageEnabled = true;
+ m_needToFirePrimitiveGigacageEnabled = true;
}
void VM::setLastStackTop(void* lastStackTop)
#include <wtf/Deque.h>
#include <wtf/DoublyLinkedList.h>
#include <wtf/Forward.h>
+#include <wtf/Gigacage.h>
#include <wtf/HashMap.h>
#include <wtf/HashSet.h>
#include <wtf/StackBounds.h>
class ExecState;
class Exception;
class ExceptionScope;
+class FastMallocAlignedMemoryAllocator;
+class GigacageAlignedMemoryAllocator;
class HandleStack;
class TypeProfiler;
class TypeProfilerLog;
public:
Heap heap;
- Subspace auxiliarySpace;
+ std::unique_ptr<FastMallocAlignedMemoryAllocator> fastMallocAllocator;
+ std::unique_ptr<GigacageAlignedMemoryAllocator> primitiveGigacageAllocator;
+ std::unique_ptr<GigacageAlignedMemoryAllocator> jsValueGigacageAllocator;
+
+ Subspace primitiveGigacageAuxiliarySpace; // Typed arrays, strings, bitvectors, etc go here.
+ Subspace jsValueGigacageAuxiliarySpace; // Butterflies, arrays of JSValues, etc go here.
+
+ // We make cross-cutting assumptions about typed arrays being in the primitive Gigacage and butterflies
+ // being in the JSValue gigacage. For some types, it's super obvious where they should go, and so we
+ // can hardcode that fact. But sometimes it's not clear, so we abstract it by having a Gigacage::Kind
+ // constant somewhere.
+ // FIXME: Maybe it would be better if everyone abstracted this?
+ // https://bugs.webkit.org/show_bug.cgi?id=175248
+ ALWAYS_INLINE Subspace& gigacageAuxiliarySpace(Gigacage::Kind kind)
+ {
+ switch (kind) {
+ case Gigacage::Primitive:
+ return primitiveGigacageAuxiliarySpace;
+ case Gigacage::JSValue:
+ return jsValueGigacageAuxiliarySpace;
+ }
+ RELEASE_ASSERT_NOT_REACHED();
+ return primitiveGigacageAuxiliarySpace;
+ }
// Whenever possible, use subspaceFor<CellType>(vm) to get one of these subspaces.
Subspace cellSpace;
void* lastStackTop() { return m_lastStackTop; }
void setLastStackTop(void*);
- void fireGigacageEnabledIfNecessary()
+ void firePrimitiveGigacageEnabledIfNecessary()
{
- if (m_needToFireGigacageEnabled) {
- m_needToFireGigacageEnabled = false;
- m_gigacageEnabled.fireAll(*this, "Gigacage disabled asynchronously");
+ if (m_needToFirePrimitiveGigacageEnabled) {
+ m_needToFirePrimitiveGigacageEnabled = false;
+ m_primitiveGigacageEnabled.fireAll(*this, "Primitive gigacage disabled asynchronously");
}
}
// FIXME: Use AtomicString once it got merged with Identifier.
JS_EXPORT_PRIVATE void addImpureProperty(const String&);
- InlineWatchpointSet& gigacageEnabled() { return m_gigacageEnabled; }
+ InlineWatchpointSet& primitiveGigacageEnabled() { return m_primitiveGigacageEnabled; }
BuiltinExecutables* builtinExecutables() { return m_builtinExecutables.get(); }
void verifyExceptionCheckNeedIsSatisfied(unsigned depth, ExceptionEventLocation&);
#endif
- static void gigacageDisabledCallback(void*);
- void gigacageDisabled();
+ static void primitiveGigacageDisabledCallback(void*);
+ void primitiveGigacageDisabled();
#if ENABLE(ASSEMBLER)
bool m_canUseAssembler;
std::unique_ptr<TypeProfiler> m_typeProfiler;
std::unique_ptr<TypeProfilerLog> m_typeProfilerLog;
unsigned m_typeProfilerEnabledCount;
- bool m_needToFireGigacageEnabled { false };
- InlineWatchpointSet m_gigacageEnabled;
+ bool m_needToFirePrimitiveGigacageEnabled { false };
+ InlineWatchpointSet m_primitiveGigacageEnabled;
FunctionHasExecutedCache m_functionHasExecutedCache;
std::unique_ptr<ControlFlowProfiler> m_controlFlowProfiler;
unsigned m_controlFlowProfilerEnabledCount;
if (m_memories.size() >= m_maxCount)
return MemoryResult(nullptr, MemoryResult::SyncGCAndRetry);
- void* result = Gigacage::tryAllocateVirtualPages(Memory::fastMappedBytes());
+ void* result = Gigacage::tryAllocateVirtualPages(Gigacage::Primitive, Memory::fastMappedBytes());
if (!result)
return MemoryResult(nullptr, MemoryResult::SyncGCAndRetry);
{
{
auto holder = holdLock(m_lock);
- Gigacage::freeVirtualPages(basePtr, Memory::fastMappedBytes());
+ Gigacage::freeVirtualPages(Gigacage::Primitive, basePtr, Memory::fastMappedBytes());
m_memories.removeFirst(basePtr);
}
if (!initialBytes)
return adoptRef(new Memory(initial, maximum));
- void* slowMemory = Gigacage::tryAlignedMalloc(WTF::pageSize(), initialBytes);
+ void* slowMemory = Gigacage::tryAlignedMalloc(Gigacage::Primitive, WTF::pageSize(), initialBytes);
if (!slowMemory) {
memoryManager().freePhysicalBytes(initialBytes);
return nullptr;
memoryManager().freeVirtualPages(m_memory);
break;
case MemoryMode::BoundsChecking:
- Gigacage::alignedFree(m_memory);
+ Gigacage::alignedFree(Gigacage::Primitive, m_memory);
break;
}
}
case MemoryMode::BoundsChecking: {
RELEASE_ASSERT(maximum().bytes() != 0);
- void* newMemory = Gigacage::tryAlignedMalloc(WTF::pageSize(), desiredSize);
+ void* newMemory = Gigacage::tryAlignedMalloc(Gigacage::Primitive, WTF::pageSize(), desiredSize);
if (!newMemory)
return false;
memcpy(newMemory, m_memory, m_size);
memset(static_cast<char*>(newMemory) + m_size, 0, desiredSize - m_size);
if (m_memory)
- Gigacage::alignedFree(m_memory);
+ Gigacage::alignedFree(Gigacage::Primitive, m_memory);
m_memory = newMemory;
m_mappedCapacity = desiredSize;
m_size = desiredSize;
+2017-08-06 Filip Pizlo <fpizlo@apple.com>
+
+ Primitive auxiliaries and JSValue auxiliaries should have separate gigacages
+ https://bugs.webkit.org/show_bug.cgi?id=174919
+
+ Reviewed by Keith Miller.
+
+ This mirrors the changes from bmalloc/Gigacage.h.
+
+ Also it teaches CagedPtr how to reason about multiple gigacages.
+
+ * wtf/CagedPtr.h:
+ (WTF::CagedPtr::get const):
+ (WTF::CagedPtr::operator[] const):
+ * wtf/Gigacage.cpp:
+ (Gigacage::tryMalloc):
+ (Gigacage::tryAllocateVirtualPages):
+ (Gigacage::freeVirtualPages):
+ (Gigacage::tryAlignedMalloc):
+ (Gigacage::alignedFree):
+ (Gigacage::free):
+ * wtf/Gigacage.h:
+ (Gigacage::disablePrimitiveGigacage):
+ (Gigacage::addPrimitiveDisableCallback):
+ (Gigacage::removePrimitiveDisableCallback):
+ (Gigacage::name):
+ (Gigacage::basePtr):
+ (Gigacage::caged):
+ (Gigacage::isCaged):
+ (Gigacage::tryAlignedMalloc):
+ (Gigacage::alignedFree):
+ (Gigacage::free):
+ (Gigacage::disableGigacage): Deleted.
+ (Gigacage::addDisableCallback): Deleted.
+ (Gigacage::removeDisableCallback): Deleted.
+
2017-08-07 Brian Burg <bburg@apple.com>
Remove CANVAS_PATH compilation guard
namespace WTF {
-template<typename T>
+template<Gigacage::Kind passedKind, typename T>
class CagedPtr {
public:
+ static constexpr Gigacage::Kind kind = passedKind;
+
CagedPtr(T* ptr = nullptr)
: m_ptr(ptr)
{
T* get() const
{
ASSERT(m_ptr);
- return Gigacage::caged(m_ptr);
+ return Gigacage::caged(kind, m_ptr);
}
T* getMayBeNull() const
T& operator*() const { return *get(); }
T* operator->() const { return get(); }
+
+ template<typename IndexType>
+ T& operator[](IndexType index) const { return get()[index]; }
private:
T* m_ptr;
#if defined(USE_SYSTEM_MALLOC) && USE_SYSTEM_MALLOC
extern "C" {
-const void* g_gigacageBasePtr;
+void* const g_gigacageBasePtr;
}
namespace Gigacage {
-void* tryMalloc(size_t size)
+void* tryMalloc(Kind, size_t size)
{
auto result = tryFastMalloc(size);
void* realResult;
return nullptr;
}
-void* tryAllocateVirtualPages(size_t size)
+void* tryAllocateVirtualPages(Kind, size_t size)
{
return OSAllocator::reserveUncommitted(size);
}
-void freeVirtualPages(void* basePtr, size_t size)
+void freeVirtualPages(Kind, void* basePtr, size_t size)
{
OSAllocator::releaseDecommitted(basePtr, size);
}
// and stay scrambled except just before use.
// https://bugs.webkit.org/show_bug.cgi?id=175035
-void* tryAlignedMalloc(size_t alignment, size_t size)
+void* tryAlignedMalloc(Kind kind, size_t alignment, size_t size)
{
- void* result = bmalloc::api::tryMemalign(alignment, size, bmalloc::HeapKind::Gigacage);
+ void* result = bmalloc::api::tryMemalign(alignment, size, bmalloc::heapKind(kind));
WTF::compilerFence();
return result;
}
-void alignedFree(void* p)
+void alignedFree(Kind kind, void* p)
{
- bmalloc::api::free(p, bmalloc::HeapKind::Gigacage);
+ if (!p)
+ return;
+ RELEASE_ASSERT(isCaged(kind, p));
+ bmalloc::api::free(p, bmalloc::heapKind(kind));
WTF::compilerFence();
}
-void* tryMalloc(size_t size)
+void* tryMalloc(Kind kind, size_t size)
{
- void* result = bmalloc::api::tryMalloc(size, bmalloc::HeapKind::Gigacage);
+ void* result = bmalloc::api::tryMalloc(size, bmalloc::heapKind(kind));
WTF::compilerFence();
return result;
}
-void free(void* p)
+void free(Kind kind, void* p)
{
- bmalloc::api::free(p, bmalloc::HeapKind::Gigacage);
+ if (!p)
+ return;
+ RELEASE_ASSERT(isCaged(kind, p));
+ bmalloc::api::free(p, bmalloc::heapKind(kind));
WTF::compilerFence();
}
-void* tryAllocateVirtualPages(size_t size)
+void* tryAllocateVirtualPages(Kind kind, size_t size)
{
- void* result = bmalloc::api::tryLargeMemalignVirtual(WTF::pageSize(), size, bmalloc::HeapKind::Gigacage);
+ void* result = bmalloc::api::tryLargeMemalignVirtual(WTF::pageSize(), size, bmalloc::heapKind(kind));
WTF::compilerFence();
return result;
}
-void freeVirtualPages(void* basePtr, size_t)
+void freeVirtualPages(Kind kind, void* basePtr, size_t)
{
- bmalloc::api::freeLargeVirtual(basePtr, bmalloc::HeapKind::Gigacage);
+ if (!basePtr)
+ return;
+ RELEASE_ASSERT(isCaged(kind, basePtr));
+ bmalloc::api::freeLargeVirtual(basePtr, bmalloc::heapKind(kind));
WTF::compilerFence();
}
#define GIGACAGE_ENABLED 0
extern "C" {
-extern WTF_EXPORTDATA const void* g_gigacageBasePtr;
+extern WTF_EXPORTDATA void* const g_gigacageBasePtr;
}
namespace Gigacage {
+enum Kind {
+ Primitive,
+ JSValue
+};
+
inline void ensureGigacage() { }
-inline void disableGigacage() { }
+inline void disablePrimitiveGigacage() { }
inline bool shouldBeEnabled() { return false; }
-inline void addDisableCallback(void (*)(void*), void*) { }
-inline void removeDisableCallback(void (*)(void*), void*) { }
+inline void addPrimitiveDisableCallback(void (*)(void*), void*) { }
+inline void removePrimitiveDisableCallback(void (*)(void*), void*) { }
+
+ALWAYS_INLINE const char* name(Kind kind)
+{
+ switch (kind) {
+ case Primitive:
+ return "Primitive";
+ case JSValue:
+ return "JSValue";
+ }
+ RELEASE_ASSERT_NOT_REACHED();
+ return nullptr;
+}
+
+ALWAYS_INLINE void* basePtr(Kind)
+{
+ return g_gigacageBasePtr;
+}
template<typename T>
-inline T* caged(T* ptr) { return ptr; }
+inline T* caged(Kind, T* ptr) { return ptr; }
-inline bool isCaged(const void*) { return false; }
+inline bool isCaged(Kind, const void*) { return false; }
-inline void* tryAlignedMalloc(size_t alignment, size_t size) { return tryFastAlignedMalloc(alignment, size); }
-inline void alignedFree(void* p) { fastAlignedFree(p); }
-WTF_EXPORT_PRIVATE void* tryMalloc(size_t size);
-inline void free(void* p) { fastFree(p); }
+inline void* tryAlignedMalloc(Kind, size_t alignment, size_t size) { return tryFastAlignedMalloc(alignment, size); }
+inline void alignedFree(Kind, void* p) { fastAlignedFree(p); }
+WTF_EXPORT_PRIVATE void* tryMalloc(Kind, size_t size);
+inline void free(Kind, void* p) { fastFree(p); }
-WTF_EXPORT_PRIVATE void* tryAllocateVirtualPages(size_t size);
-WTF_EXPORT_PRIVATE void freeVirtualPages(void* basePtr, size_t size);
+WTF_EXPORT_PRIVATE void* tryAllocateVirtualPages(Kind, size_t size);
+WTF_EXPORT_PRIVATE void freeVirtualPages(Kind, void* basePtr, size_t size);
} // namespace Gigacage
#else
namespace Gigacage {
-WTF_EXPORT_PRIVATE void* tryAlignedMalloc(size_t alignment, size_t size);
-WTF_EXPORT_PRIVATE void alignedFree(void*);
-WTF_EXPORT_PRIVATE void* tryMalloc(size_t);
-WTF_EXPORT_PRIVATE void free(void*);
+WTF_EXPORT_PRIVATE void* tryAlignedMalloc(Kind, size_t alignment, size_t size);
+WTF_EXPORT_PRIVATE void alignedFree(Kind, void*);
+WTF_EXPORT_PRIVATE void* tryMalloc(Kind, size_t);
+WTF_EXPORT_PRIVATE void free(Kind, void*);
-WTF_EXPORT_PRIVATE void* tryAllocateVirtualPages(size_t size);
-WTF_EXPORT_PRIVATE void freeVirtualPages(void* basePtr, size_t size);
+WTF_EXPORT_PRIVATE void* tryAllocateVirtualPages(Kind, size_t size);
+WTF_EXPORT_PRIVATE void freeVirtualPages(Kind, void* basePtr, size_t size);
} // namespace Gigacage
#endif
+2017-08-06 Filip Pizlo <fpizlo@apple.com>
+
+ Primitive auxiliaries and JSValue auxiliaries should have separate gigacages
+ https://bugs.webkit.org/show_bug.cgi?id=174919
+
+ Reviewed by Keith Miller.
+
+ No new tests because no change in behavior.
+
+ Adapting to API changes - we now specify the AlignedMemoryAllocator differently and we need to be
+ specific about which Gigacage we're using.
+
+ * bindings/js/WebCoreJSClientData.cpp:
+ (WebCore::JSVMClientData::JSVMClientData):
+ * platform/graphics/cocoa/GPUBufferMetal.mm:
+ (WebCore::GPUBuffer::GPUBuffer):
+
2017-08-07 Basuke Suzuki <Basuke.Suzuki@sony.com>
[Curl] Add abstraction layer of cookie jar implementation for Curl port
JSVMClientData::JSVMClientData(VM& vm)
: m_builtinFunctions(vm)
, m_builtinNames(&vm)
- , m_outputConstraintSpace("WebCore Wrapper w/ Output Constraint", vm.heap, &FastMallocAlignedMemoryAllocator::instance())
- , m_globalObjectOutputConstraintSpace("WebCore Global Object w/ Output Constraint", vm.heap, &FastMallocAlignedMemoryAllocator::instance())
+ , m_outputConstraintSpace("WebCore Wrapper w/ Output Constraint", vm.heap, vm.fastMallocAllocator.get())
+ , m_globalObjectOutputConstraintSpace("WebCore Global Object w/ Output Constraint", vm.heap, vm.fastMallocAllocator.get())
{
}
size_t pageSize = WTF::pageSize();
size_t pageAlignedSize = roundUpToMultipleOf(pageSize, data->byteLength());
- void* pageAlignedCopy = Gigacage::tryAlignedMalloc(pageSize, pageAlignedSize);
+ void* pageAlignedCopy = Gigacage::tryAlignedMalloc(Gigacage::Primitive, pageSize, pageAlignedSize);
if (!pageAlignedCopy)
return;
memcpy(pageAlignedCopy, data->baseAddress(), data->byteLength());
- m_contents = ArrayBuffer::createFromBytes(pageAlignedCopy, data->byteLength(), [] (void* ptr) { Gigacage::alignedFree(ptr); });
+ m_contents = ArrayBuffer::createFromBytes(pageAlignedCopy, data->byteLength(), [] (void* ptr) { Gigacage::alignedFree(Gigacage::Primitive, ptr); });
m_contents->ref();
ArrayBuffer* capturedContents = m_contents.get();
m_buffer = adoptNS((MTLBuffer *)[device->platformDevice() newBufferWithBytesNoCopy:m_contents->data() length:pageAlignedSize options:MTLResourceOptionCPUCacheModeDefault deallocator:^(void*, NSUInteger) { capturedContents->deref(); }]);
+2017-08-06 Filip Pizlo <fpizlo@apple.com>
+
+ Primitive auxiliaries and JSValue auxiliaries should have separate gigacages
+ https://bugs.webkit.org/show_bug.cgi?id=174919
+
+ Reviewed by Keith Miller.
+
+ The disable callback is all about the primitive gigacage.
+
+ * WebProcess/WebProcess.cpp:
+ (WebKit::primitiveGigacageDisabled):
+ (WebKit::m_webSQLiteDatabaseTracker):
+ (WebKit::gigacageDisabled): Deleted.
+
2017-08-07 Brian Burg <bburg@apple.com>
Remove CANVAS_PATH compilation guard
namespace WebKit {
-static void gigacageDisabled(void*)
+static void primitiveGigacageDisabled(void*)
{
UNREACHABLE_FOR_PLATFORM();
}
});
if (Gigacage::shouldBeEnabled())
- Gigacage::addDisableCallback(gigacageDisabled, nullptr);
+ Gigacage::addPrimitiveDisableCallback(primitiveGigacageDisabled, nullptr);
}
WebProcess::~WebProcess()
+2017-08-06 Filip Pizlo <fpizlo@apple.com>
+
+ Primitive auxiliaries and JSValue auxiliaries should have separate gigacages
+ https://bugs.webkit.org/show_bug.cgi?id=174919
+
+ Reviewed by Keith Miller.
+
+ This introduces two kinds of Gigacage, Primitive and JSValue. This translates to two kinds of
+ HeapKind, PrimitiveGigacage and JSValueGigacage.
+
+ The new support functionality required turning Inline.h into BInline.h, and INLINE into BINLINE, and
+ NO_INLINE into BNO_INLINE.
+
+ * bmalloc.xcodeproj/project.pbxproj:
+ * bmalloc/Allocator.cpp:
+ (bmalloc::Allocator::refillAllocatorSlowCase):
+ (bmalloc::Allocator::refillAllocator):
+ (bmalloc::Allocator::allocateLarge):
+ (bmalloc::Allocator::allocateLogSizeClass):
+ * bmalloc/AsyncTask.h:
+ * bmalloc/BInline.h: Copied from Source/bmalloc/bmalloc/Inline.h.
+ * bmalloc/Cache.cpp:
+ (bmalloc::Cache::tryAllocateSlowCaseNullCache):
+ (bmalloc::Cache::allocateSlowCaseNullCache):
+ (bmalloc::Cache::deallocateSlowCaseNullCache):
+ (bmalloc::Cache::reallocateSlowCaseNullCache):
+ * bmalloc/Deallocator.cpp:
+ * bmalloc/Gigacage.cpp:
+ (Gigacage::PrimitiveDisableCallbacks::PrimitiveDisableCallbacks):
+ (Gigacage::ensureGigacage):
+ (Gigacage::disablePrimitiveGigacage):
+ (Gigacage::addPrimitiveDisableCallback):
+ (Gigacage::removePrimitiveDisableCallback):
+ (Gigacage::Callbacks::Callbacks): Deleted.
+ (Gigacage::disableGigacage): Deleted.
+ (Gigacage::addDisableCallback): Deleted.
+ (Gigacage::removeDisableCallback): Deleted.
+ * bmalloc/Gigacage.h:
+ (Gigacage::name):
+ (Gigacage::basePtr):
+ (Gigacage::forEachKind):
+ (Gigacage::caged):
+ (Gigacage::isCaged):
+ * bmalloc/Heap.cpp:
+ (bmalloc::Heap::Heap):
+ (bmalloc::Heap::usingGigacage):
+ (bmalloc::Heap::gigacageBasePtr):
+ * bmalloc/Heap.h:
+ * bmalloc/HeapKind.h:
+ (bmalloc::isGigacage):
+ (bmalloc::gigacageKind):
+ (bmalloc::heapKind):
+ * bmalloc/Inline.h: Removed.
+ * bmalloc/Map.h:
+ * bmalloc/PerProcess.h:
+ (bmalloc::PerProcess<T>::getFastCase):
+ (bmalloc::PerProcess<T>::get):
+ (bmalloc::PerProcess<T>::getSlowCase):
+ * bmalloc/PerThread.h:
+ (bmalloc::PerThread<T>::getFastCase):
+ * bmalloc/Vector.h:
+ (bmalloc::Vector<T>::push):
+ (bmalloc::Vector<T>::shrinkCapacity):
+ (bmalloc::Vector<T>::growCapacity):
+
2017-08-02 Filip Pizlo <fpizlo@apple.com>
If Gigacage is disabled, bmalloc should service large aligned memory allocation requests through vmAllocate
14DD78C618F48D7500950702 /* AsyncTask.h in Headers */ = {isa = PBXBuildFile; fileRef = 1417F65218BA88A00076FA3F /* AsyncTask.h */; settings = {ATTRIBUTES = (Private, ); }; };
14DD78C718F48D7500950702 /* BAssert.h in Headers */ = {isa = PBXBuildFile; fileRef = 1413E468189EEDE400546D68 /* BAssert.h */; settings = {ATTRIBUTES = (Private, ); }; };
14DD78C818F48D7500950702 /* FixedVector.h in Headers */ = {isa = PBXBuildFile; fileRef = 14D9DB4517F2447100EAAB79 /* FixedVector.h */; settings = {ATTRIBUTES = (Private, ); }; };
- 14DD78C918F48D7500950702 /* Inline.h in Headers */ = {isa = PBXBuildFile; fileRef = 1413E460189DCE1E00546D68 /* Inline.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ 14DD78C918F48D7500950702 /* BInline.h in Headers */ = {isa = PBXBuildFile; fileRef = 1413E460189DCE1E00546D68 /* BInline.h */; settings = {ATTRIBUTES = (Private, ); }; };
14DD78CA18F48D7500950702 /* Mutex.h in Headers */ = {isa = PBXBuildFile; fileRef = 144DCED617A649D90093B2F2 /* Mutex.h */; settings = {ATTRIBUTES = (Private, ); }; };
14DD78CB18F48D7500950702 /* PerProcess.h in Headers */ = {isa = PBXBuildFile; fileRef = 14446A0717A61FA400F9EA1D /* PerProcess.h */; settings = {ATTRIBUTES = (Private, ); }; };
14DD78CC18F48D7500950702 /* PerThread.h in Headers */ = {isa = PBXBuildFile; fileRef = 144469FD17A61F1F00F9EA1D /* PerThread.h */; settings = {ATTRIBUTES = (Private, ); }; };
140FA00219CE429C00FFD3C8 /* BumpRange.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BumpRange.h; path = bmalloc/BumpRange.h; sourceTree = "<group>"; };
140FA00419CE4B6800FFD3C8 /* LineMetadata.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LineMetadata.h; path = bmalloc/LineMetadata.h; sourceTree = "<group>"; };
14105E8318E14374003A106E /* ObjectType.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = ObjectType.cpp; path = bmalloc/ObjectType.cpp; sourceTree = "<group>"; };
- 1413E460189DCE1E00546D68 /* Inline.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Inline.h; path = bmalloc/Inline.h; sourceTree = "<group>"; };
+ 1413E460189DCE1E00546D68 /* BInline.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BInline.h; path = bmalloc/BInline.h; sourceTree = "<group>"; };
1413E462189DE1CD00546D68 /* BumpAllocator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; lineEnding = 0; name = BumpAllocator.h; path = bmalloc/BumpAllocator.h; sourceTree = "<group>"; };
1413E468189EEDE400546D68 /* BAssert.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = BAssert.h; path = bmalloc/BAssert.h; sourceTree = "<group>"; };
1417F64F18B7280C0076FA3F /* Syscall.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = Syscall.h; path = bmalloc/Syscall.h; sourceTree = "<group>"; };
6599C5CB1EC3F15900A2F7BB /* AvailableMemory.h */,
1413E468189EEDE400546D68 /* BAssert.h */,
0F5BF1721F23C5710029D91D /* BExport.h */,
+ 1413E460189DCE1E00546D68 /* BInline.h */,
14C919C818FCC59F0028DB43 /* BPlatform.h */,
14D9DB4517F2447100EAAB79 /* FixedVector.h */,
0F5BF1461F22A8B10029D91D /* HeapKind.h */,
- 1413E460189DCE1E00546D68 /* Inline.h */,
141D9AFF1C8E51C0000ABBA0 /* List.h */,
4426E27E1C838EE0008EB042 /* Logging.cpp */,
4426E27F1C838EE0008EB042 /* Logging.h */,
14DD78C818F48D7500950702 /* FixedVector.h in Headers */,
1400274918F89C1300115C97 /* Heap.h in Headers */,
0F5BF1491F22A8D80029D91D /* PerHeapKind.h in Headers */,
- 14DD78C918F48D7500950702 /* Inline.h in Headers */,
+ 14DD78C918F48D7500950702 /* BInline.h in Headers */,
144C07F51C7B70260051BB6A /* LargeMap.h in Headers */,
14C8992D1CC578330027A057 /* LargeRange.h in Headers */,
140FA00519CE4B6800FFD3C8 /* LineMetadata.h in Headers */,
}
}
-NO_INLINE void Allocator::refillAllocatorSlowCase(BumpAllocator& allocator, size_t sizeClass)
+BNO_INLINE void Allocator::refillAllocatorSlowCase(BumpAllocator& allocator, size_t sizeClass)
{
BumpRangeCache& bumpRangeCache = m_bumpRangeCaches[sizeClass];
m_heap.allocateSmallBumpRanges(lock, sizeClass, allocator, bumpRangeCache, m_deallocator.lineCache(lock));
}
-INLINE void Allocator::refillAllocator(BumpAllocator& allocator, size_t sizeClass)
+BINLINE void Allocator::refillAllocator(BumpAllocator& allocator, size_t sizeClass)
{
BumpRangeCache& bumpRangeCache = m_bumpRangeCaches[sizeClass];
if (!bumpRangeCache.size())
    return refillAllocatorSlowCase(allocator, sizeClass);
return allocator.refill(bumpRangeCache.pop());
}
-NO_INLINE void* Allocator::allocateLarge(size_t size)
+BNO_INLINE void* Allocator::allocateLarge(size_t size)
{
std::lock_guard<StaticMutex> lock(Heap::mutex());
return m_heap.allocateLarge(lock, alignment, size);
}
-NO_INLINE void* Allocator::allocateLogSizeClass(size_t size)
+BNO_INLINE void* Allocator::allocateLogSizeClass(size_t size)
{
size_t sizeClass = bmalloc::sizeClass(size);
BumpAllocator& allocator = m_bumpAllocators[sizeClass];
#define AsyncTask_h
#include "BAssert.h"
-#include "Inline.h"
+#include "BInline.h"
#include "Mutex.h"
#include "Sizes.h"
#include <atomic>
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
-#ifndef Inline_h
-#define Inline_h
+#ifndef BInline_h
+#define BInline_h
-#define INLINE __attribute__((always_inline)) inline
+#define BINLINE __attribute__((always_inline)) inline
-#define NO_INLINE __attribute__((noinline))
+#define BNO_INLINE __attribute__((noinline))
-#endif // Inline_h
+#endif // BInline_h
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
+#include "BInline.h"
#include "Cache.h"
#include "Heap.h"
-#include "Inline.h"
#include "PerProcess.h"
namespace bmalloc {
{
}
-NO_INLINE void* Cache::tryAllocateSlowCaseNullCache(HeapKind heapKind, size_t size)
+BNO_INLINE void* Cache::tryAllocateSlowCaseNullCache(HeapKind heapKind, size_t size)
{
return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(heapKind).allocator().tryAllocate(size);
}
-NO_INLINE void* Cache::allocateSlowCaseNullCache(HeapKind heapKind, size_t size)
+BNO_INLINE void* Cache::allocateSlowCaseNullCache(HeapKind heapKind, size_t size)
{
return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(heapKind).allocator().allocate(size);
}
-NO_INLINE void* Cache::allocateSlowCaseNullCache(HeapKind heapKind, size_t alignment, size_t size)
+BNO_INLINE void* Cache::allocateSlowCaseNullCache(HeapKind heapKind, size_t alignment, size_t size)
{
return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(heapKind).allocator().allocate(alignment, size);
}
-NO_INLINE void Cache::deallocateSlowCaseNullCache(HeapKind heapKind, void* object)
+BNO_INLINE void Cache::deallocateSlowCaseNullCache(HeapKind heapKind, void* object)
{
PerThread<PerHeapKind<Cache>>::getSlowCase()->at(heapKind).deallocator().deallocate(object);
}
-NO_INLINE void* Cache::reallocateSlowCaseNullCache(HeapKind heapKind, void* object, size_t newSize)
+BNO_INLINE void* Cache::reallocateSlowCaseNullCache(HeapKind heapKind, void* object, size_t newSize)
{
return PerThread<PerHeapKind<Cache>>::getSlowCase()->at(heapKind).allocator().reallocate(object, newSize);
}
*/
#include "BAssert.h"
+#include "BInline.h"
#include "Chunk.h"
#include "Deallocator.h"
#include "DebugHeap.h"
#include "Heap.h"
-#include "Inline.h"
#include "Object.h"
#include "PerProcess.h"
#include <algorithm>
// FIXME: Ask dyld to put this in its own page, and mprotect the page after we ensure the gigacage.
// https://bugs.webkit.org/show_bug.cgi?id=174972
-void* g_gigacageBasePtr;
+void* g_primitiveGigacageBasePtr;
+void* g_jsValueGigacageBasePtr;
using namespace bmalloc;
void* argument { nullptr };
};
-struct Callbacks {
- Callbacks(std::lock_guard<StaticMutex>&) { }
+struct PrimitiveDisableCallbacks {
+ PrimitiveDisableCallbacks(std::lock_guard<StaticMutex>&) { }
Vector<Callback> callbacks;
};
if (!shouldBeEnabled())
return;
- void* basePtr = tryVMAllocate(GIGACAGE_SIZE, GIGACAGE_SIZE + GIGACAGE_RUNWAY);
- if (!basePtr)
- return;
-
- vmDeallocatePhysicalPages(basePtr, GIGACAGE_SIZE + GIGACAGE_RUNWAY);
-
- g_gigacageBasePtr = basePtr;
+ forEachKind(
+ [&] (Kind kind) {
+ // FIXME: Randomize where this goes.
+ // https://bugs.webkit.org/show_bug.cgi?id=175245
+ basePtr(kind) = tryVMAllocate(GIGACAGE_SIZE, GIGACAGE_SIZE + GIGACAGE_RUNWAY);
+ if (!basePtr(kind)) {
+ fprintf(stderr, "FATAL: Could not allocate %s gigacage.\n", name(kind));
+ BCRASH();
+ }
+
+ vmDeallocatePhysicalPages(basePtr(kind), GIGACAGE_SIZE + GIGACAGE_RUNWAY);
+ });
});
#endif // GIGACAGE_ENABLED
}
-void disableGigacage()
+void disablePrimitiveGigacage()
{
ensureGigacage();
- if (!g_gigacageBasePtr) {
+ if (!g_primitiveGigacageBasePtr) {
// It was never enabled. That means that we never even saved any callbacks. Or, we had already disabled
// it before, and already called the callbacks.
return;
}
- Callbacks& callbacks = *PerProcess<Callbacks>::get();
- std::unique_lock<StaticMutex> lock(PerProcess<Callbacks>::mutex());
+ PrimitiveDisableCallbacks& callbacks = *PerProcess<PrimitiveDisableCallbacks>::get();
+ std::unique_lock<StaticMutex> lock(PerProcess<PrimitiveDisableCallbacks>::mutex());
for (Callback& callback : callbacks.callbacks)
callback.function(callback.argument);
callbacks.callbacks.shrink(0);
- g_gigacageBasePtr = nullptr;
+ g_primitiveGigacageBasePtr = nullptr;
}
-void addDisableCallback(void (*function)(void*), void* argument)
+void addPrimitiveDisableCallback(void (*function)(void*), void* argument)
{
ensureGigacage();
- if (!g_gigacageBasePtr) {
+ if (!g_primitiveGigacageBasePtr) {
// It was already disabled or we were never able to enable it.
function(argument);
return;
}
- Callbacks& callbacks = *PerProcess<Callbacks>::get();
- std::unique_lock<StaticMutex> lock(PerProcess<Callbacks>::mutex());
+ PrimitiveDisableCallbacks& callbacks = *PerProcess<PrimitiveDisableCallbacks>::get();
+ std::unique_lock<StaticMutex> lock(PerProcess<PrimitiveDisableCallbacks>::mutex());
callbacks.callbacks.push(Callback(function, argument));
}
-void removeDisableCallback(void (*function)(void*), void* argument)
+void removePrimitiveDisableCallback(void (*function)(void*), void* argument)
{
- Callbacks& callbacks = *PerProcess<Callbacks>::get();
- std::unique_lock<StaticMutex> lock(PerProcess<Callbacks>::mutex());
+ PrimitiveDisableCallbacks& callbacks = *PerProcess<PrimitiveDisableCallbacks>::get();
+ std::unique_lock<StaticMutex> lock(PerProcess<PrimitiveDisableCallbacks>::mutex());
for (size_t i = 0; i < callbacks.callbacks.size(); ++i) {
if (callbacks.callbacks[i].function == function
&& callbacks.callbacks[i].argument == argument) {
#include "BAssert.h"
#include "BExport.h"
+#include "BInline.h"
#include "BPlatform.h"
#include <inttypes.h>
#define GIGACAGE_ENABLED 0
#endif
-extern "C" BEXPORT void* g_gigacageBasePtr;
+extern "C" BEXPORT void* g_primitiveGigacageBasePtr;
+extern "C" BEXPORT void* g_jsValueGigacageBasePtr;
namespace Gigacage {
+enum Kind {
+ Primitive,
+ JSValue
+};
+
BEXPORT void ensureGigacage();
-BEXPORT void disableGigacage();
+BEXPORT void disablePrimitiveGigacage();
+
+// This will call the disable callback immediately if the Primitive Gigacage is currently disabled.
+BEXPORT void addPrimitiveDisableCallback(void (*)(void*), void*);
+BEXPORT void removePrimitiveDisableCallback(void (*)(void*), void*);
+
+BINLINE const char* name(Kind kind)
+{
+ switch (kind) {
+ case Primitive:
+ return "Primitive";
+ case JSValue:
+ return "JSValue";
+ }
+ BCRASH();
+ return nullptr;
+}
+
+BINLINE void*& basePtr(Kind kind)
+{
+ switch (kind) {
+ case Primitive:
+ return g_primitiveGigacageBasePtr;
+ case JSValue:
+ return g_jsValueGigacageBasePtr;
+ }
+ BCRASH();
+ return g_primitiveGigacageBasePtr;
+}
-// This will call the disable callback immediately if the Gigacage is currently disabled.
-BEXPORT void addDisableCallback(void (*)(void*), void*);
-BEXPORT void removeDisableCallback(void (*)(void*), void*);
+template<typename Func>
+void forEachKind(const Func& func)
+{
+ func(Primitive);
+ func(JSValue);
+}
template<typename T>
-T* caged(T* ptr)
+BINLINE T* caged(Kind kind, T* ptr)
{
BASSERT(ptr);
- void* gigacageBasePtr = g_gigacageBasePtr;
+ void* gigacageBasePtr = basePtr(kind);
if (!gigacageBasePtr)
return ptr;
return reinterpret_cast<T*>(
    reinterpret_cast<uintptr_t>(gigacageBasePtr) + (
        reinterpret_cast<uintptr_t>(ptr) & static_cast<uintptr_t>(GIGACAGE_MASK)));
}
-inline bool isCaged(const void* ptr)
+BINLINE bool isCaged(Kind kind, const void* ptr)
{
- return caged(ptr) == ptr;
+ return caged(kind, ptr) == ptr;
}
BEXPORT bool shouldBeEnabled();
Gigacage::ensureGigacage();
#if GIGACAGE_ENABLED
if (usingGigacage()) {
- RELEASE_BASSERT(g_gigacageBasePtr);
- m_largeFree.add(LargeRange(g_gigacageBasePtr, GIGACAGE_SIZE, 0));
+ RELEASE_BASSERT(gigacageBasePtr());
+ m_largeFree.add(LargeRange(gigacageBasePtr(), GIGACAGE_SIZE, 0));
}
#endif
}
bool Heap::usingGigacage()
{
- return m_kind == HeapKind::Gigacage && g_gigacageBasePtr;
+ return isGigacage(m_kind) && gigacageBasePtr();
+}
+
+void* Heap::gigacageBasePtr()
+{
+ return Gigacage::basePtr(gigacageKind(m_kind));
}
void Heap::initializeLineMetadata()
~Heap() = delete;
bool usingGigacage();
+ void* gigacageBasePtr(); // May crash if !usingGigacage().
void initializeLineMetadata();
void initializePageMetadata();
#pragma once
+#include "BAssert.h"
+#include "BInline.h"
+#include "Gigacage.h"
+
namespace bmalloc {
enum class HeapKind {
Primary,
- Gigacage
+ PrimitiveGigacage,
+ JSValueGigacage
};
-static constexpr unsigned numHeaps = 2;
+static constexpr unsigned numHeaps = 3;
+
+BINLINE bool isGigacage(HeapKind heapKind)
+{
+ switch (heapKind) {
+ case HeapKind::Primary:
+ return false;
+ case HeapKind::PrimitiveGigacage:
+ case HeapKind::JSValueGigacage:
+ return true;
+ }
+ BCRASH();
+ return false;
+}
+
+BINLINE Gigacage::Kind gigacageKind(HeapKind kind)
+{
+ switch (kind) {
+ case HeapKind::Primary:
+ BCRASH();
+ return Gigacage::Primitive;
+ case HeapKind::PrimitiveGigacage:
+ return Gigacage::Primitive;
+ case HeapKind::JSValueGigacage:
+ return Gigacage::JSValue;
+ }
+ BCRASH();
+ return Gigacage::Primitive;
+}
+
+BINLINE HeapKind heapKind(Gigacage::Kind kind)
+{
+ switch (kind) {
+ case Gigacage::Primitive:
+ return HeapKind::PrimitiveGigacage;
+ case Gigacage::JSValue:
+ return HeapKind::JSValueGigacage;
+ }
+ BCRASH();
+ return HeapKind::Primary;
+}
} // namespace bmalloc
#ifndef Map_h
#define Map_h
-#include "Inline.h"
+#include "BInline.h"
#include "Sizes.h"
#include "Vector.h"
#ifndef PerProcess_h
#define PerProcess_h
-#include "Inline.h"
+#include "BInline.h"
#include "Sizes.h"
#include "StaticMutex.h"
#include <mutex>
};
template<typename T>
-INLINE T* PerProcess<T>::getFastCase()
+BINLINE T* PerProcess<T>::getFastCase()
{
return s_object.load(std::memory_order_consume);
}
template<typename T>
-INLINE T* PerProcess<T>::get()
+BINLINE T* PerProcess<T>::get()
{
T* object = getFastCase();
if (!object)
}
template<typename T>
-NO_INLINE T* PerProcess<T>::getSlowCase()
+BNO_INLINE T* PerProcess<T>::getSlowCase()
{
std::lock_guard<StaticMutex> lock(s_mutex);
if (!s_object.load(std::memory_order_consume)) {
#ifndef PerThread_h
#define PerThread_h
+#include "BInline.h"
#include "BPlatform.h"
-#include "Inline.h"
#include "PerHeapKind.h"
#include "VMAllocate.h"
#include <mutex>
#endif
template<typename T>
-INLINE T* PerThread<T>::getFastCase()
+BINLINE T* PerThread<T>::getFastCase()
{
return static_cast<T*>(PerThreadStorage<T>::get());
}
#ifndef Vector_h
#define Vector_h
-#include "Inline.h"
+#include "BInline.h"
#include "VMAllocate.h"
#include <cstddef>
#include <cstring>
}
template<typename T>
-INLINE void Vector<T>::push(const T& value)
+BINLINE void Vector<T>::push(const T& value)
{
if (m_size == m_capacity)
growCapacity();
}
template<typename T>
-NO_INLINE void Vector<T>::shrinkCapacity()
+BNO_INLINE void Vector<T>::shrinkCapacity()
{
size_t newCapacity = max(initialCapacity(), m_capacity / shrinkFactor);
reallocateBuffer(newCapacity);
}
template<typename T>
-NO_INLINE void Vector<T>::growCapacity()
+BNO_INLINE void Vector<T>::growCapacity()
{
size_t newCapacity = max(initialCapacity(), m_size * growFactor);
reallocateBuffer(newCapacity);