[JSC] Shrink Structure
author    ysuzuki@apple.com <ysuzuki@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Mon, 24 Feb 2020 08:38:50 +0000 (08:38 +0000)
committer ysuzuki@apple.com <ysuzuki@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Mon, 24 Feb 2020 08:38:50 +0000 (08:38 +0000)
https://bugs.webkit.org/show_bug.cgi?id=207827

Reviewed by Saam Barati.

Source/JavaScriptCore:

This patch shrinks sizeof(Structure) from 112 bytes to 96 bytes (a 16-byte saving) on architectures using 64-bit pointers.
Structure is one of the most frequently allocated JSCells in JSC, so it is worth applying all sorts of
bit hacks to make it as compact as possible.

    1. Put outOfLineTypeFlags, maxOffset, and transitionOffset into the highest bits of m_propertyTableUnsafe,
       m_cachedPrototypeChain, m_classInfo, and m_transitionPropertyName. Do not use PackedPtr here since
       some of these fields are accessed concurrently by the GC.
    2. Put m_inlineCapacity into the lower 8 bits of m_propertyHash.
    3. Remove m_lock and use Structure::cellLock() instead.
    4. Remove the m_cachedPrototypeChain clearing from the concurrent collector: it is dead code left over
       from an older design. We set m_cachedPrototypeChain only when the Structure is for a JSObject, while
       the clearing happened only when it was not a Structure for a JSObject.
    5. Hold the previous Structure as a StructureID in m_previous, and rename m_previousOrRareData to
       m_cachedPrototypeChainOrRareData.

Many of these pointer/data pairs use CompactPointerTuple to keep the code clean.
Combining all of the above techniques saves 16 bytes.
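To illustrate technique 2 above, a small integer like m_inlineCapacity can share a 32-bit word with a hash by reserving the low 8 bits. The following is a minimal standalone sketch of that idea under the assumption of a 24-bit hash payload; the class and field names are illustrative and are not JSC's actual code.

```cpp
#include <cassert>
#include <cstdint>

// Sketch: pack an 8-bit inline capacity into the low 8 bits of a 32-bit
// word whose upper 24 bits hold a (truncated) property hash. Callers must
// pass a hash that fits in 24 bits.
class HashAndCapacity {
public:
    static constexpr uint32_t capacityBits = 8;
    static constexpr uint32_t capacityMask = (1u << capacityBits) - 1;

    HashAndCapacity(uint32_t hash, uint8_t inlineCapacity)
        : m_bits((hash << capacityBits) | inlineCapacity)
    {
        assert(hash <= 0x00FFFFFFu); // Hash must fit in the upper 24 bits.
    }

    uint8_t inlineCapacity() const { return static_cast<uint8_t>(m_bits & capacityMask); }
    uint32_t hash() const { return m_bits >> capacityBits; }

private:
    uint32_t m_bits; // Layout: [ 24-bit hash | 8-bit inline capacity ]
};
```

Both fields round-trip independently, which is the property the real packed fields rely on.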

* bytecode/AccessCase.cpp:
(JSC::AccessCase::create):
(JSC::AccessCase::propagateTransitions const):
* bytecode/AccessCase.h:
(JSC::AccessCase::structure const):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::compileCheckSubClass):
(JSC::DFG::SpeculativeJIT::compileObjectKeys):
(JSC::DFG::SpeculativeJIT::compileCreateThis):
(JSC::DFG::SpeculativeJIT::compileCreatePromise):
(JSC::DFG::SpeculativeJIT::compileCreateInternalFieldObject):
* ftl/FTLAbstractHeapRepository.h:
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::compileObjectKeys):
(JSC::FTL::DFG::LowerDFGToB3::compileCreatePromise):
(JSC::FTL::DFG::LowerDFGToB3::compileCreateInternalFieldObject):
(JSC::FTL::DFG::LowerDFGToB3::compileCheckSubClass):
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::emitLoadClassInfoFromStructure):
* jit/JITOpcodes.cpp:
(JSC::JIT::emit_op_create_this):
* jit/JITOpcodes32_64.cpp:
(JSC::JIT::emit_op_create_this):
* jit/Repatch.cpp:
(JSC::tryCachePutByID):
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::LLINT_SLOW_PATH_DECL):
* runtime/ClonedArguments.cpp:
(JSC::ClonedArguments::createStructure):
* runtime/ConcurrentJSLock.h:
(JSC::ConcurrentJSLockerBase::ConcurrentJSLockerBase):
(JSC::GCSafeConcurrentJSLockerImpl::GCSafeConcurrentJSLockerImpl):
(JSC::GCSafeConcurrentJSLockerImpl::~GCSafeConcurrentJSLockerImpl):
(JSC::ConcurrentJSLockerImpl::ConcurrentJSLockerImpl):
(JSC::GCSafeConcurrentJSLocker::GCSafeConcurrentJSLocker): Deleted.
(JSC::GCSafeConcurrentJSLocker::~GCSafeConcurrentJSLocker): Deleted.
(JSC::ConcurrentJSLocker::ConcurrentJSLocker): Deleted.
* runtime/JSCell.h:
* runtime/JSObject.cpp:
(JSC::JSObject::deleteProperty):
(JSC::JSObject::shiftButterflyAfterFlattening):
* runtime/JSObject.h:
(JSC::JSObject::getDirectConcurrently const):
* runtime/JSObjectInlines.h:
(JSC::JSObject::prepareToPutDirectWithoutTransition):
* runtime/JSType.cpp:
(WTF::printInternal):
* runtime/JSType.h:
* runtime/Structure.cpp:
(JSC::StructureTransitionTable::contains const):
(JSC::StructureTransitionTable::get const):
(JSC::StructureTransitionTable::add):
(JSC::Structure::dumpStatistics):
(JSC::Structure::Structure):
(JSC::Structure::create):
(JSC::Structure::findStructuresAndMapForMaterialization):
(JSC::Structure::materializePropertyTable):
(JSC::Structure::addPropertyTransitionToExistingStructureImpl):
(JSC::Structure::addPropertyTransitionToExistingStructureConcurrently):
(JSC::Structure::addNewPropertyTransition):
(JSC::Structure::removeNewPropertyTransition):
(JSC::Structure::changePrototypeTransition):
(JSC::Structure::attributeChangeTransition):
(JSC::Structure::toDictionaryTransition):
(JSC::Structure::takePropertyTableOrCloneIfPinned):
(JSC::Structure::nonPropertyTransitionSlow):
(JSC::Structure::flattenDictionaryStructure):
(JSC::Structure::pin):
(JSC::Structure::pinForCaching):
(JSC::Structure::allocateRareData):
(JSC::Structure::ensurePropertyReplacementWatchpointSet):
(JSC::Structure::copyPropertyTableForPinning):
(JSC::Structure::add):
(JSC::Structure::remove):
(JSC::Structure::visitChildren):
(JSC::Structure::canCachePropertyNameEnumerator const):
* runtime/Structure.h:
* runtime/StructureInlines.h:
(JSC::Structure::get):
(JSC::Structure::ruleOutUnseenProperty const):
(JSC::Structure::seenProperties const):
(JSC::Structure::addPropertyHashAndSeenProperty):
(JSC::Structure::forEachPropertyConcurrently):
(JSC::Structure::transitivelyTransitionedFrom):
(JSC::Structure::cachedPrototypeChain const):
(JSC::Structure::setCachedPrototypeChain):
(JSC::Structure::prototypeChain const):
(JSC::Structure::propertyReplacementWatchpointSet):
(JSC::Structure::checkOffsetConsistency const):
(JSC::Structure::add):
(JSC::Structure::remove):
(JSC::Structure::removePropertyWithoutTransition):
(JSC::Structure::setPropertyTable):
(JSC::Structure::clearPropertyTable):
(JSC::Structure::setOutOfLineTypeFlags):
(JSC::Structure::setInlineCapacity):
(JSC::Structure::setClassInfo):
(JSC::Structure::setPreviousID):
(JSC::Structure::clearPreviousID):
* runtime/StructureRareData.cpp:
(JSC::StructureRareData::createStructure):
(JSC::StructureRareData::create):
(JSC::StructureRareData::StructureRareData):
(JSC::StructureRareData::visitChildren):
* runtime/StructureRareData.h:
* runtime/StructureRareDataInlines.h:
(JSC::StructureRareData::setCachedPrototypeChain):
(JSC::StructureRareData::setPreviousID): Deleted.
(JSC::StructureRareData::clearPreviousID): Deleted.
* tools/JSDollarVM.cpp:
(JSC::JSDollarVMHelper::functionGetStructureTransitionList):
* wasm/js/WebAssemblyFunction.cpp:
(JSC::WebAssemblyFunction::jsCallEntrypointSlow):

Source/WTF:

Make CompactPointerTuple usable for storing 16 bits of data.
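The general shape of such a tuple can be sketched as follows: on common 64-bit platforms, user-space pointers use at most 48 significant bits, leaving the top 16 bits free for a payload. This is an illustration of the idea under that assumption, not WTF's actual CompactPointerTuple implementation.

```cpp
#include <cassert>
#include <cstdint>

// Sketch: store a 16-bit payload in the top bits of a 64-bit word whose low
// 48 bits hold the pointer. Assumes pointers fit in 48 bits (true for
// user-space addresses on typical x86-64 and ARM64 systems).
template<typename T>
class PointerAndUint16 {
public:
    static constexpr unsigned payloadShift = 48;
    static constexpr uintptr_t pointerMask = (uintptr_t(1) << payloadShift) - 1;

    PointerAndUint16(T* pointer, uint16_t payload)
        : m_bits((uintptr_t(payload) << payloadShift)
              | (reinterpret_cast<uintptr_t>(pointer) & pointerMask))
    {
    }

    T* pointer() const { return reinterpret_cast<T*>(m_bits & pointerMask); }
    uint16_t payload() const { return static_cast<uint16_t>(m_bits >> payloadShift); }

private:
    uintptr_t m_bits; // Layout: [ 16-bit payload | 48-bit pointer ]
};
```

The whole pair still occupies a single word, which is why packing fields this way shrinks the object.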

* WTF.xcodeproj/project.pbxproj:
* wtf/CMakeLists.txt:
* wtf/CompactPointerTuple.h:
* wtf/CompactRefPtrTuple.h: Added.
* wtf/text/StringImpl.h:
* wtf/text/SymbolImpl.h:
(WTF::SymbolImpl::hashForSymbol const):
(WTF::SymbolImpl::SymbolImpl):

LayoutTests:

This test is half-broken since it implicitly relies on HashMap's iteration order.
We changed SymbolImpl's hash code, which makes the result different.
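The fragility described here is generic to hash tables: iteration order depends on the hash function, so any test that compares the raw iteration sequence breaks when the hash changes. A robust test normalizes the order first. This standalone sketch uses std::unordered_map in place of WTF's HashMap to show the difference.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <unordered_map>
#include <vector>

// Collect keys in raw iteration order. This order depends on the hash
// function and table implementation, so asserting on it verbatim breaks
// as soon as the hash changes.
std::vector<std::string> keysInIterationOrder(const std::unordered_map<std::string, int>& map)
{
    std::vector<std::string> keys;
    for (const auto& entry : map)
        keys.push_back(entry.first);
    return keys;
}

// A hash-independent expectation: sort the keys before comparing.
std::vector<std::string> keysSorted(const std::unordered_map<std::string, int>& map)
{
    std::vector<std::string> keys = keysInIterationOrder(map);
    std::sort(keys.begin(), keys.end());
    return keys;
}
```

A test asserting on keysSorted() survives a hash-function change; one asserting on keysInIterationOrder() is the kind of "half-broken" test this rebaseline fixes.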

* inspector/debugger/tail-deleted-frames/tail-deleted-frames-this-value-expected.txt:

git-svn-id: http://svn.webkit.org/repository/webkit/trunk@257201 268f45cc-cd09-0410-ab3c-d52691b4dbfc

37 files changed:
LayoutTests/ChangeLog
LayoutTests/inspector/debugger/tail-deleted-frames/tail-deleted-frames-this-value-expected.txt
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/bytecode/AccessCase.cpp
Source/JavaScriptCore/bytecode/AccessCase.h
Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.h
Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
Source/JavaScriptCore/jit/AssemblyHelpers.h
Source/JavaScriptCore/jit/JITOpcodes.cpp
Source/JavaScriptCore/jit/JITOpcodes32_64.cpp
Source/JavaScriptCore/jit/Repatch.cpp
Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
Source/JavaScriptCore/runtime/ClonedArguments.cpp
Source/JavaScriptCore/runtime/ConcurrentJSLock.h
Source/JavaScriptCore/runtime/JSCell.h
Source/JavaScriptCore/runtime/JSObject.cpp
Source/JavaScriptCore/runtime/JSObject.h
Source/JavaScriptCore/runtime/JSObjectInlines.h
Source/JavaScriptCore/runtime/JSType.cpp
Source/JavaScriptCore/runtime/JSType.h
Source/JavaScriptCore/runtime/Structure.cpp
Source/JavaScriptCore/runtime/Structure.h
Source/JavaScriptCore/runtime/StructureInlines.h
Source/JavaScriptCore/runtime/StructureRareData.cpp
Source/JavaScriptCore/runtime/StructureRareData.h
Source/JavaScriptCore/runtime/StructureRareDataInlines.h
Source/JavaScriptCore/runtime/StructureTransitionTable.h
Source/JavaScriptCore/tools/JSDollarVM.cpp
Source/JavaScriptCore/wasm/js/WebAssemblyFunction.cpp
Source/WTF/ChangeLog
Source/WTF/WTF.xcodeproj/project.pbxproj
Source/WTF/wtf/CMakeLists.txt
Source/WTF/wtf/CompactPointerTuple.h
Source/WTF/wtf/CompactRefPtrTuple.h [new file with mode: 0644]
Source/WTF/wtf/text/StringImpl.h
Source/WTF/wtf/text/SymbolImpl.h

index 26e6d3f..73b37fe 100644 (file)
@@ -1,3 +1,15 @@
+2020-02-23  Yusuke Suzuki  <ysuzuki@apple.com>
+
+        [JSC] Shrink Structure
+        https://bugs.webkit.org/show_bug.cgi?id=207827
+
+        Reviewed by Saam Barati.
+
+        This test is half-broken since it implicitly relies on HashMap's iteration order.
+        We changed SymbolImpl's hash code, which makes the result different.
+
+        * inspector/debugger/tail-deleted-frames/tail-deleted-frames-this-value-expected.txt:
+
 2020-02-23  Diego Pino Garcia  <dpino@igalia.com>
 
         [GTK] Mark several async loading tests as failure
index b42740f..5a29360 100644 (file)
@@ -1,3 +1,141 @@
+2020-02-23  Yusuke Suzuki  <ysuzuki@apple.com>
+
+        [JSC] Shrink Structure
+        https://bugs.webkit.org/show_bug.cgi?id=207827
+
+        Reviewed by Saam Barati.
+
+        This patch shrinks sizeof(Structure) from 112 bytes to 96 bytes (a 16-byte saving) on architectures using 64-bit pointers.
+        Structure is one of the most frequently allocated JSCells in JSC, so it is worth applying all sorts of
+        bit hacks to make it as compact as possible.
+
+            1. Put outOfLineTypeFlags, maxOffset, and transitionOffset into the highest bits of m_propertyTableUnsafe,
+               m_cachedPrototypeChain, m_classInfo, and m_transitionPropertyName. Do not use PackedPtr here since
+               some of these fields are accessed concurrently by the GC.
+            2. Put m_inlineCapacity into the lower 8 bits of m_propertyHash.
+            3. Remove m_lock and use Structure::cellLock() instead.
+            4. Remove the m_cachedPrototypeChain clearing from the concurrent collector: it is dead code left over
+               from an older design. We set m_cachedPrototypeChain only when the Structure is for a JSObject, while
+               the clearing happened only when it was not a Structure for a JSObject.
+            5. Hold the previous Structure as a StructureID in m_previous, and rename m_previousOrRareData to
+               m_cachedPrototypeChainOrRareData.
+
+        Many of these pointer/data pairs use CompactPointerTuple to keep the code clean.
+        Combining all of the above techniques saves 16 bytes.
+
+        * bytecode/AccessCase.cpp:
+        (JSC::AccessCase::create):
+        (JSC::AccessCase::propagateTransitions const):
+        * bytecode/AccessCase.h:
+        (JSC::AccessCase::structure const):
+        * dfg/DFGSpeculativeJIT.cpp:
+        (JSC::DFG::SpeculativeJIT::compileCheckSubClass):
+        (JSC::DFG::SpeculativeJIT::compileObjectKeys):
+        (JSC::DFG::SpeculativeJIT::compileCreateThis):
+        (JSC::DFG::SpeculativeJIT::compileCreatePromise):
+        (JSC::DFG::SpeculativeJIT::compileCreateInternalFieldObject):
+        * ftl/FTLAbstractHeapRepository.h:
+        * ftl/FTLLowerDFGToB3.cpp:
+        (JSC::FTL::DFG::LowerDFGToB3::compileObjectKeys):
+        (JSC::FTL::DFG::LowerDFGToB3::compileCreatePromise):
+        (JSC::FTL::DFG::LowerDFGToB3::compileCreateInternalFieldObject):
+        (JSC::FTL::DFG::LowerDFGToB3::compileCheckSubClass):
+        * jit/AssemblyHelpers.h:
+        (JSC::AssemblyHelpers::emitLoadClassInfoFromStructure):
+        * jit/JITOpcodes.cpp:
+        (JSC::JIT::emit_op_create_this):
+        * jit/JITOpcodes32_64.cpp:
+        (JSC::JIT::emit_op_create_this):
+        * jit/Repatch.cpp:
+        (JSC::tryCachePutByID):
+        * llint/LLIntSlowPaths.cpp:
+        (JSC::LLInt::LLINT_SLOW_PATH_DECL):
+        * runtime/ClonedArguments.cpp:
+        (JSC::ClonedArguments::createStructure):
+        * runtime/ConcurrentJSLock.h:
+        (JSC::ConcurrentJSLockerBase::ConcurrentJSLockerBase):
+        (JSC::GCSafeConcurrentJSLockerImpl::GCSafeConcurrentJSLockerImpl):
+        (JSC::GCSafeConcurrentJSLockerImpl::~GCSafeConcurrentJSLockerImpl):
+        (JSC::ConcurrentJSLockerImpl::ConcurrentJSLockerImpl):
+        (JSC::GCSafeConcurrentJSLocker::GCSafeConcurrentJSLocker): Deleted.
+        (JSC::GCSafeConcurrentJSLocker::~GCSafeConcurrentJSLocker): Deleted.
+        (JSC::ConcurrentJSLocker::ConcurrentJSLocker): Deleted.
+        * runtime/JSCell.h:
+        * runtime/JSObject.cpp:
+        (JSC::JSObject::deleteProperty):
+        (JSC::JSObject::shiftButterflyAfterFlattening):
+        * runtime/JSObject.h:
+        (JSC::JSObject::getDirectConcurrently const):
+        * runtime/JSObjectInlines.h:
+        (JSC::JSObject::prepareToPutDirectWithoutTransition):
+        * runtime/JSType.cpp:
+        (WTF::printInternal):
+        * runtime/JSType.h:
+        * runtime/Structure.cpp:
+        (JSC::StructureTransitionTable::contains const):
+        (JSC::StructureTransitionTable::get const):
+        (JSC::StructureTransitionTable::add):
+        (JSC::Structure::dumpStatistics):
+        (JSC::Structure::Structure):
+        (JSC::Structure::create):
+        (JSC::Structure::findStructuresAndMapForMaterialization):
+        (JSC::Structure::materializePropertyTable):
+        (JSC::Structure::addPropertyTransitionToExistingStructureImpl):
+        (JSC::Structure::addPropertyTransitionToExistingStructureConcurrently):
+        (JSC::Structure::addNewPropertyTransition):
+        (JSC::Structure::removeNewPropertyTransition):
+        (JSC::Structure::changePrototypeTransition):
+        (JSC::Structure::attributeChangeTransition):
+        (JSC::Structure::toDictionaryTransition):
+        (JSC::Structure::takePropertyTableOrCloneIfPinned):
+        (JSC::Structure::nonPropertyTransitionSlow):
+        (JSC::Structure::flattenDictionaryStructure):
+        (JSC::Structure::pin):
+        (JSC::Structure::pinForCaching):
+        (JSC::Structure::allocateRareData):
+        (JSC::Structure::ensurePropertyReplacementWatchpointSet):
+        (JSC::Structure::copyPropertyTableForPinning):
+        (JSC::Structure::add):
+        (JSC::Structure::remove):
+        (JSC::Structure::visitChildren):
+        (JSC::Structure::canCachePropertyNameEnumerator const):
+        * runtime/Structure.h:
+        * runtime/StructureInlines.h:
+        (JSC::Structure::get):
+        (JSC::Structure::ruleOutUnseenProperty const):
+        (JSC::Structure::seenProperties const):
+        (JSC::Structure::addPropertyHashAndSeenProperty):
+        (JSC::Structure::forEachPropertyConcurrently):
+        (JSC::Structure::transitivelyTransitionedFrom):
+        (JSC::Structure::cachedPrototypeChain const):
+        (JSC::Structure::setCachedPrototypeChain):
+        (JSC::Structure::prototypeChain const):
+        (JSC::Structure::propertyReplacementWatchpointSet):
+        (JSC::Structure::checkOffsetConsistency const):
+        (JSC::Structure::add):
+        (JSC::Structure::remove):
+        (JSC::Structure::removePropertyWithoutTransition):
+        (JSC::Structure::setPropertyTable):
+        (JSC::Structure::clearPropertyTable):
+        (JSC::Structure::setOutOfLineTypeFlags):
+        (JSC::Structure::setInlineCapacity):
+        (JSC::Structure::setClassInfo):
+        (JSC::Structure::setPreviousID):
+        (JSC::Structure::clearPreviousID):
+        * runtime/StructureRareData.cpp:
+        (JSC::StructureRareData::createStructure):
+        (JSC::StructureRareData::create):
+        (JSC::StructureRareData::StructureRareData):
+        (JSC::StructureRareData::visitChildren):
+        * runtime/StructureRareData.h:
+        * runtime/StructureRareDataInlines.h:
+        (JSC::StructureRareData::setCachedPrototypeChain):
+        (JSC::StructureRareData::setPreviousID): Deleted.
+        (JSC::StructureRareData::clearPreviousID): Deleted.
+        * tools/JSDollarVM.cpp:
+        (JSC::JSDollarVMHelper::functionGetStructureTransitionList):
+        * wasm/js/WebAssemblyFunction.cpp:
+        (JSC::WebAssemblyFunction::jsCallEntrypointSlow):
+
 2020-02-20  Mark Lam  <mark.lam@apple.com>
 
         Make support for bytecode caching more robust against file corruption.
index 2af480a..def6752 100644 (file)
@@ -110,7 +110,7 @@ std::unique_ptr<AccessCase> AccessCase::create(
     VM& vm, JSCell* owner, CacheableIdentifier identifier, PropertyOffset offset, Structure* oldStructure, Structure* newStructure,
     const ObjectPropertyConditionSet& conditionSet, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
 {
-    RELEASE_ASSERT(oldStructure == newStructure->previousID());
+    RELEASE_ASSERT(oldStructure == newStructure->previousID(vm));
 
     // Skip optimizing the case where we need a realloc, if we don't have
     // enough registers to make it happen.
@@ -705,7 +705,7 @@ bool AccessCase::propagateTransitions(SlotVisitor& visitor) const
 
     switch (m_type) {
     case Transition:
-        if (visitor.vm().heap.isMarked(m_structure->previousID()))
+        if (visitor.vm().heap.isMarked(m_structure->previousID(visitor.vm())))
             visitor.appendUnbarriered(m_structure.get());
         else
             result = false;
index 79d3101..a767767 100644 (file)
@@ -158,7 +158,7 @@ public:
     Structure* structure() const
     {
         if (m_type == Transition)
-            return m_structure->previousID();
+            return m_structure->previousID(m_structure->vm());
         return m_structure.get();
     }
     bool guardedByStructureCheck(const StructureStubInfo&) const;
index 642b163..17ba2b2 100644 (file)
@@ -9605,7 +9605,7 @@ void SpeculativeJIT::compileCheckSubClass(Node* node)
         GPRReg specifiedGPR = specified.gpr();
 
         m_jit.emitLoadStructure(vm(), baseGPR, otherGPR, specifiedGPR);
-        m_jit.loadPtr(CCallHelpers::Address(otherGPR, Structure::classInfoOffset()), otherGPR);
+        m_jit.emitLoadClassInfoFromStructure(otherGPR, otherGPR);
         m_jit.move(CCallHelpers::TrustedImmPtr(node->classInfo()), specifiedGPR);
 
         CCallHelpers::Label loop = m_jit.label();
@@ -12698,10 +12698,13 @@ void SpeculativeJIT::compileObjectKeys(Node* node)
 
             CCallHelpers::JumpList slowCases;
             m_jit.emitLoadStructure(vm(), objectGPR, structureGPR, scratchGPR);
-            m_jit.loadPtr(CCallHelpers::Address(structureGPR, Structure::previousOrRareDataOffset()), scratchGPR);
+            m_jit.loadPtr(CCallHelpers::Address(structureGPR, Structure::offsetOfCachedPrototypeChainOrRareData()), scratchGPR);
+#if CPU(ADDRESS64)
+            m_jit.andPtr(CCallHelpers::TrustedImmPtr(Structure::cachedPrototypeChainOrRareDataMask), scratchGPR);
+#endif
 
             slowCases.append(m_jit.branchTestPtr(CCallHelpers::Zero, scratchGPR));
-            slowCases.append(m_jit.branch32(CCallHelpers::Equal, CCallHelpers::Address(scratchGPR, JSCell::structureIDOffset()), TrustedImm32(bitwise_cast<int32_t>(vm().structureStructure->structureID()))));
+            slowCases.append(m_jit.branch32(CCallHelpers::NotEqual, CCallHelpers::Address(scratchGPR, JSCell::structureIDOffset()), TrustedImm32(bitwise_cast<int32_t>(vm().structureRareDataStructure->structureID()))));
 
             m_jit.loadPtr(CCallHelpers::Address(scratchGPR, StructureRareData::offsetOfCachedOwnKeys()), scratchGPR);
 
@@ -12839,7 +12842,7 @@ void SpeculativeJIT::compileCreateThis(Node* node)
     auto butterfly = TrustedImmPtr(nullptr);
     emitAllocateJSObject(resultGPR, JITAllocator::variable(), allocatorGPR, structureGPR, butterfly, scratchGPR, slowPath);
 
-    m_jit.load8(JITCompiler::Address(structureGPR, Structure::inlineCapacityOffset()), inlineCapacityGPR);
+    m_jit.load8(JITCompiler::Address(structureGPR, Structure::offsetOfInlineCapacity()), inlineCapacityGPR);
     m_jit.emitInitializeInlineStorage(resultGPR, inlineCapacityGPR);
     m_jit.mutatorFence(vm());
 
@@ -12877,8 +12880,8 @@ void SpeculativeJIT::compileCreatePromise(Node* node)
     slowCases.append(m_jit.branchTestPtr(MacroAssembler::Zero, rareDataGPR, CCallHelpers::TrustedImm32(JSFunction::rareDataTag)));
     m_jit.loadPtr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfInternalFunctionAllocationProfile() + InternalFunctionAllocationProfile::offsetOfStructure() - JSFunction::rareDataTag), structureGPR);
     slowCases.append(m_jit.branchTestPtr(CCallHelpers::Zero, structureGPR));
-    m_jit.move(TrustedImmPtr(node->isInternalPromise() ? JSInternalPromise::info() : JSPromise::info()), scratch1GPR);
-    slowCases.append(m_jit.branchPtr(CCallHelpers::NotEqual, scratch1GPR, CCallHelpers::Address(structureGPR, Structure::classInfoOffset())));
+    m_jit.emitLoadClassInfoFromStructure(structureGPR, scratch1GPR);
+    slowCases.append(m_jit.branchPtr(CCallHelpers::NotEqual, scratch1GPR, TrustedImmPtr(node->isInternalPromise() ? JSInternalPromise::info() : JSPromise::info())));
     m_jit.move(TrustedImmPtr::weakPointer(m_jit.graph(), globalObject), scratch1GPR);
     slowCases.append(m_jit.branchPtr(CCallHelpers::NotEqual, scratch1GPR, CCallHelpers::Address(structureGPR, Structure::globalObjectOffset())));
 
@@ -12925,8 +12928,8 @@ void SpeculativeJIT::compileCreateInternalFieldObject(Node* node, Operation oper
     slowCases.append(m_jit.branchTestPtr(MacroAssembler::Zero, rareDataGPR, CCallHelpers::TrustedImm32(JSFunction::rareDataTag)));
     m_jit.loadPtr(JITCompiler::Address(rareDataGPR, FunctionRareData::offsetOfInternalFunctionAllocationProfile() + InternalFunctionAllocationProfile::offsetOfStructure() - JSFunction::rareDataTag), structureGPR);
     slowCases.append(m_jit.branchTestPtr(CCallHelpers::Zero, structureGPR));
-    m_jit.move(TrustedImmPtr(JSClass::info()), scratch1GPR);
-    slowCases.append(m_jit.branchPtr(CCallHelpers::NotEqual, scratch1GPR, CCallHelpers::Address(structureGPR, Structure::classInfoOffset())));
+    m_jit.emitLoadClassInfoFromStructure(structureGPR, scratch1GPR);
+    slowCases.append(m_jit.branchPtr(CCallHelpers::NotEqual, scratch1GPR, TrustedImmPtr(JSClass::info())));
     m_jit.move(TrustedImmPtr::weakPointer(m_jit.graph(), globalObject), scratch1GPR);
     slowCases.append(m_jit.branchPtr(CCallHelpers::NotEqual, scratch1GPR, CCallHelpers::Address(structureGPR, Structure::globalObjectOffset())));
 
index b108abf..219a4f4 100644 (file)
@@ -139,11 +139,11 @@ namespace JSC { namespace FTL {
     macro(StringImpl_data, StringImpl::dataOffset()) \
     macro(StringImpl_hashAndFlags, StringImpl::flagsOffset()) \
     macro(StringImpl_length, StringImpl::lengthMemoryOffset()) \
-    macro(Structure_classInfo, Structure::classInfoOffset()) \
+    macro(Structure_classInfo, Structure::offsetOfClassInfo()) \
     macro(Structure_globalObject, Structure::globalObjectOffset()) \
     macro(Structure_indexingModeIncludingHistory, Structure::indexingModeIncludingHistoryOffset()) \
-    macro(Structure_inlineCapacity, Structure::inlineCapacityOffset()) \
-    macro(Structure_previousOrRareData, Structure::previousOrRareDataOffset()) \
+    macro(Structure_inlineCapacity, Structure::offsetOfInlineCapacity()) \
+    macro(Structure_cachedPrototypeChainOrRareData, Structure::offsetOfCachedPrototypeChainOrRareData()) \
     macro(Structure_prototype, Structure::prototypeOffset()) \
     macro(Structure_structureID, Structure::structureIDOffset()) \
     macro(StructureRareData_cachedOwnKeys, StructureRareData::offsetOfCachedOwnKeys()) \
index 4d3427b..2dd5484 100644 (file)
@@ -6091,17 +6091,17 @@ private:
 
                 LValue object = lowObject(m_node->child1());
                 LValue structure = loadStructure(object);
-                LValue previousOrRareData = m_out.loadPtr(structure, m_heaps.Structure_previousOrRareData);
-                m_out.branch(m_out.notNull(previousOrRareData), unsure(notNullCase), unsure(slowCase));
+                LValue cachedPrototypeChainOrRareData = m_out.bitAnd(m_out.constIntPtr(Structure::cachedPrototypeChainOrRareDataMask), m_out.loadPtr(structure, m_heaps.Structure_cachedPrototypeChainOrRareData));
+                m_out.branch(m_out.notNull(cachedPrototypeChainOrRareData), unsure(notNullCase), unsure(slowCase));
 
                 LBasicBlock lastNext = m_out.appendTo(notNullCase, rareDataCase);
                 m_out.branch(
-                    m_out.notEqual(m_out.load32(previousOrRareData, m_heaps.JSCell_structureID), m_out.constInt32(m_graph.m_vm.structureStructure->structureID())),
+                    m_out.equal(m_out.load32(cachedPrototypeChainOrRareData, m_heaps.JSCell_structureID), m_out.constInt32(m_graph.m_vm.structureRareDataStructure->structureID())),
                     unsure(rareDataCase), unsure(slowCase));
 
                 m_out.appendTo(rareDataCase, useCacheCase);
                 ASSERT(bitwise_cast<uintptr_t>(StructureRareData::cachedOwnKeysSentinel()) == 1);
-                LValue cachedOwnKeys = m_out.loadPtr(previousOrRareData, m_heaps.StructureRareData_cachedOwnKeys);
+                LValue cachedOwnKeys = m_out.loadPtr(cachedPrototypeChainOrRareData, m_heaps.StructureRareData_cachedOwnKeys);
                 m_out.branch(m_out.belowOrEqual(cachedOwnKeys, m_out.constIntPtr(bitwise_cast<void*>(StructureRareData::cachedOwnKeysSentinel()))), unsure(slowCase), unsure(useCacheCase));
 
                 m_out.appendTo(useCacheCase, slowButArrayBufferCase);
@@ -6675,7 +6675,8 @@ private:
         m_out.branch(m_out.isZero64(structure), rarely(slowCase), usually(hasStructure));
 
         m_out.appendTo(hasStructure, checkGlobalObjectCase);
-        m_out.branch(m_out.equal(m_out.loadPtr(structure, m_heaps.Structure_classInfo), m_out.constIntPtr(m_node->isInternalPromise() ? JSInternalPromise::info() : JSPromise::info())), usually(checkGlobalObjectCase), rarely(slowCase));
+        LValue classInfo = m_out.bitAnd(m_out.loadPtr(structure, m_heaps.Structure_classInfo), m_out.constIntPtr(Structure::classInfoMask));
+        m_out.branch(m_out.equal(classInfo, m_out.constIntPtr(m_node->isInternalPromise() ? JSInternalPromise::info() : JSPromise::info())), usually(checkGlobalObjectCase), rarely(slowCase));
 
         m_out.appendTo(checkGlobalObjectCase, fastAllocationCase);
         ValueFromBlock derivedStructure = m_out.anchor(structure);
@@ -6730,7 +6731,8 @@ private:
         m_out.branch(m_out.isZero64(structure), rarely(slowCase), usually(hasStructure));
 
         m_out.appendTo(hasStructure, checkGlobalObjectCase);
-        m_out.branch(m_out.equal(m_out.loadPtr(structure, m_heaps.Structure_classInfo), m_out.constIntPtr(JSClass::info())), usually(checkGlobalObjectCase), rarely(slowCase));
+        LValue classInfo = m_out.bitAnd(m_out.loadPtr(structure, m_heaps.Structure_classInfo), m_out.constIntPtr(Structure::classInfoMask));
+        m_out.branch(m_out.equal(classInfo, m_out.constIntPtr(JSClass::info())), usually(checkGlobalObjectCase), rarely(slowCase));
 
         m_out.appendTo(checkGlobalObjectCase, fastAllocationCase);
         m_out.branch(m_out.equal(m_out.loadPtr(structure, m_heaps.Structure_globalObject), weakPointer(globalObject)), usually(fastAllocationCase), rarely(slowCase));
@@ -13208,7 +13210,7 @@ private:
             LBasicBlock continuation = m_out.newBlock();
 
             LValue structure = loadStructure(cell);
-            LValue classInfo = m_out.loadPtr(structure, m_heaps.Structure_classInfo);
+            LValue classInfo = m_out.bitAnd(m_out.loadPtr(structure, m_heaps.Structure_classInfo), m_out.constIntPtr(Structure::classInfoMask));
             ValueFromBlock otherAtStart = m_out.anchor(classInfo);
             m_out.jump(loop);
 
index 729f584..668f0e5 100644 (file)
@@ -1517,6 +1517,14 @@ public:
     
     void emitLoadStructure(VM&, RegisterID source, RegisterID dest, RegisterID scratch);
 
+    void emitLoadClassInfoFromStructure(RegisterID structure, RegisterID dst)
+    {
+        loadPtr(Address(structure, Structure::offsetOfClassInfo()), dst);
+#if CPU(ADDRESS64)
+        andPtr(TrustedImmPtr(bitwise_cast<void*>(Structure::classInfoMask)), dst);
+#endif
+    }
+
     void emitStoreStructureWithTypeInfo(TrustedImmPtr structure, RegisterID dest, RegisterID)
     {
         emitStoreStructureWithTypeInfo(*this, structure, dest);
index 7f358c1..fde00bc 100644 (file)
@@ -979,7 +979,7 @@ void JIT::emit_op_create_this(const Instruction* currentInstruction)
     JumpList slowCases;
     auto butterfly = TrustedImmPtr(nullptr);
     emitAllocateJSObject(resultReg, JITAllocator::variable(), allocatorReg, structureReg, butterfly, scratchReg, slowCases);
-    load8(Address(structureReg, Structure::inlineCapacityOffset()), scratchReg);
+    load8(Address(structureReg, Structure::offsetOfInlineCapacity()), scratchReg);
     emitInitializeInlineStorage(resultReg, scratchReg);
     addSlowCase(slowCases);
     emitPutVirtualRegister(bytecode.m_dst);
index cd667b2..b11f655 100644 (file)
@@ -1093,7 +1093,7 @@ void JIT::emit_op_create_this(const Instruction* currentInstruction)
     JumpList slowCases;
     auto butterfly = TrustedImmPtr(nullptr);
     emitAllocateJSObject(resultReg, JITAllocator::variable(), allocatorReg, structureReg, butterfly, scratchReg, slowCases);
-    load8(Address(structureReg, Structure::inlineCapacityOffset()), scratchReg);
+    load8(Address(structureReg, Structure::offsetOfInlineCapacity()), scratchReg);
     emitInitializeInlineStorage(resultReg, scratchReg);
     addSlowCase(slowCases);
     emitStoreCell(bytecode.m_dst, resultReg);
index 9182041..08696b1 100644 (file)
@@ -617,7 +617,7 @@ static InlineCacheAction tryCachePutByID(JSGlobalObject* globalObject, CodeBlock
                 if (baseValue.asCell()->structure(vm) != newStructure)
                     return GiveUpOnCache;
 
-                ASSERT(newStructure->previousID() == oldStructure);
+                ASSERT(newStructure->previousID(vm) == oldStructure);
                 ASSERT(!newStructure->isDictionary());
                 ASSERT(newStructure->isObject());
                 
index 2254fcc..b911ee2 100644 (file)
@@ -855,7 +855,7 @@ LLINT_SLOW_PATH_DECL(slow_path_put_by_id)
                 Structure* a = vm.heap.structureIDTable().get(oldStructureID);
                 Structure* b = baseValue.asCell()->structure(vm);
                 if (slot.type() == PutPropertySlot::NewProperty)
-                    b = b->previousID();
+                    b = b->previousID(vm);
 
                 if (Structure::shouldConvertToPolyProto(a, b)) {
                     a->rareData()->sharedPolyProtoWatchpoint()->invalidate(vm, StringFireDetail("Detected poly proto opportunity."));
@@ -876,9 +876,9 @@ LLINT_SLOW_PATH_DECL(slow_path_put_by_id)
         if (newStructure->propertyAccessesAreCacheable() && baseCell == slot.base()) {
             if (slot.type() == PutPropertySlot::NewProperty) {
                 GCSafeConcurrentJSLocker locker(codeBlock->m_lock, vm.heap);
-                if (!newStructure->isDictionary() && newStructure->previousID()->outOfLineCapacity() == newStructure->outOfLineCapacity()) {
-                    ASSERT(oldStructure == newStructure->previousID());
-                    if (oldStructure == newStructure->previousID()) {
+                if (!newStructure->isDictionary() && newStructure->previousID(vm)->outOfLineCapacity() == newStructure->outOfLineCapacity()) {
+                    ASSERT(oldStructure == newStructure->previousID(vm));
+                    if (oldStructure == newStructure->previousID(vm)) {
                         ASSERT(oldStructure->transitionWatchpointSetHasBeenInvalidated());
 
                         bool sawPolyProto = false;
index 96a3635..c4f74c0 100644 (file)
@@ -153,7 +153,7 @@ Structure* ClonedArguments::createStructure(VM& vm, JSGlobalObject* globalObject
     Structure* structure = Structure::create(vm, globalObject, prototype, TypeInfo(ClonedArgumentsType, StructureFlags), info(), indexingType);
     structure->addPropertyWithoutTransition(
         vm, vm.propertyNames->length, static_cast<unsigned>(PropertyAttribute::DontEnum),
-        [&] (const GCSafeConcurrentJSLocker&, PropertyOffset offset, PropertyOffset newMaxOffset) {
+        [&](const GCSafeConcurrentJSCellLocker&, PropertyOffset offset, PropertyOffset newMaxOffset) {
             RELEASE_ASSERT(offset == clonedArgumentsLengthPropertyOffset);
             structure->setMaxOffset(vm, newMaxOffset);
         });
index aa273ba..1d40807 100644
 namespace JSC {
 
 using ConcurrentJSLock = Lock;
-using ConcurrentJSLockerImpl = LockHolder;
 
-static_assert(sizeof(ConcurrentJSLock) == 1, "Regardless of status of concurrent JS flag, size of ConurrentJSLock is always one byte.");
+static_assert(sizeof(ConcurrentJSLock) == 1, "Regardless of status of concurrent JS flag, size of ConcurrentJSLock is always one byte.");
 
+template<typename Lock>
 class ConcurrentJSLockerBase : public AbstractLocker {
     WTF_MAKE_NONCOPYABLE(ConcurrentJSLockerBase);
 public:
-    explicit ConcurrentJSLockerBase(ConcurrentJSLock& lockable)
+    explicit ConcurrentJSLockerBase(Lock& lockable)
         : m_locker(&lockable)
     {
     }
-    explicit ConcurrentJSLockerBase(ConcurrentJSLock* lockable)
+    explicit ConcurrentJSLockerBase(Lock* lockable)
         : m_locker(lockable)
     {
     }
@@ -64,63 +64,65 @@ public:
     }
 
 private:
-    ConcurrentJSLockerImpl m_locker;
+    Locker<Lock> m_locker;
 };
 
-class GCSafeConcurrentJSLocker : public ConcurrentJSLockerBase {
+template<typename Lock>
+class GCSafeConcurrentJSLockerImpl : public ConcurrentJSLockerBase<Lock> {
 public:
-    GCSafeConcurrentJSLocker(ConcurrentJSLock& lockable, Heap& heap)
-        : ConcurrentJSLockerBase(lockable)
+    GCSafeConcurrentJSLockerImpl(Lock& lockable, Heap& heap)
+        : ConcurrentJSLockerBase<Lock>(lockable)
         , m_deferGC(heap)
     {
     }
 
-    GCSafeConcurrentJSLocker(ConcurrentJSLock* lockable, Heap& heap)
-        : ConcurrentJSLockerBase(lockable)
+    GCSafeConcurrentJSLockerImpl(Lock* lockable, Heap& heap)
+        : ConcurrentJSLockerBase<Lock>(lockable)
         , m_deferGC(heap)
     {
     }
 
-    ~GCSafeConcurrentJSLocker()
+    ~GCSafeConcurrentJSLockerImpl()
     {
         // We have to unlock early due to the destruction order of base
         // vs. derived classes. If we didn't, then we would destroy the 
         // DeferGC object before unlocking the lock which could cause a GC
         // and resulting deadlock.
-        unlockEarly();
+        ConcurrentJSLockerBase<Lock>::unlockEarly();
     }
 
 private:
     DeferGC m_deferGC;
 };
 
-class ConcurrentJSLocker : public ConcurrentJSLockerBase {
+template<typename Lock>
+class ConcurrentJSLockerImpl : public ConcurrentJSLockerBase<Lock> {
 public:
-    ConcurrentJSLocker(ConcurrentJSLock& lockable)
-        : ConcurrentJSLockerBase(lockable)
+    ConcurrentJSLockerImpl(Lock& lockable)
+        : ConcurrentJSLockerBase<Lock>(lockable)
 #if !defined(NDEBUG)
         , m_disallowGC(std::in_place)
 #endif
     {
     }
 
-    ConcurrentJSLocker(ConcurrentJSLock* lockable)
-        : ConcurrentJSLockerBase(lockable)
+    ConcurrentJSLockerImpl(Lock* lockable)
+        : ConcurrentJSLockerBase<Lock>(lockable)
 #if !defined(NDEBUG)
         , m_disallowGC(std::in_place)
 #endif
     {
     }
 
-    ConcurrentJSLocker(NoLockingNecessaryTag)
-        : ConcurrentJSLockerBase(NoLockingNecessary)
+    ConcurrentJSLockerImpl(NoLockingNecessaryTag)
+        : ConcurrentJSLockerBase<Lock>(NoLockingNecessary)
 #if !defined(NDEBUG)
         , m_disallowGC(WTF::nullopt)
 #endif
     {
     }
     
-    ConcurrentJSLocker(int) = delete;
+    ConcurrentJSLockerImpl(int) = delete;
 
 #if !defined(NDEBUG)
 private:
@@ -128,4 +130,7 @@ private:
 #endif
 };
 
+using ConcurrentJSLocker = ConcurrentJSLockerImpl<ConcurrentJSLock>;
+using GCSafeConcurrentJSLocker = GCSafeConcurrentJSLockerImpl<ConcurrentJSLock>;
+
 } // namespace JSC
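The refactoring above turns the concrete `ConcurrentJSLockerBase` into a class template over its lock type, so the same locker machinery can serve both the plain `ConcurrentJSLock` and the per-cell `JSCellLock` that Structure now reuses. A minimal standalone model of that pattern (illustrative names only, not JSC's real types) looks like this, including the early-unlock hook that `GCSafeConcurrentJSLockerImpl`'s destructor relies on:

```cpp
#include <cassert>

// Templating the locker over the lock type means one implementation works
// for any lock exposing lock()/unlock(), mirroring ConcurrentJSLockerImpl<Lock>.
template<typename Lock>
class LockerBase {
public:
    explicit LockerBase(Lock& lock)
        : m_lock(&lock)
    {
        m_lock->lock();
    }

    ~LockerBase()
    {
        if (m_lock)
            m_lock->unlock();
    }

    // Unlock before the destructor runs; afterwards the destructor is a no-op.
    // GCSafeConcurrentJSLockerImpl calls this so the lock is released before
    // its DeferGC member is destroyed.
    void unlockEarly()
    {
        m_lock->unlock();
        m_lock = nullptr;
    }

private:
    Lock* m_lock;
};

// A test lock that just counts lock()/unlock() calls.
struct CountingLock {
    int lockCount = 0;
    int unlockCount = 0;
    void lock() { ++lockCount; }
    void unlock() { ++unlockCount; }
};

// Returns 0 when every lock() was matched by exactly one unlock().
inline int lockBalanceAfterScope()
{
    CountingLock lock;
    {
        LockerBase<CountingLock> locker(lock);
    }
    return lock.lockCount - lock.unlockCount;
}

// unlockEarly() followed by destruction must not unlock twice.
inline bool unlockEarlyUnlocksExactlyOnce()
{
    CountingLock lock;
    {
        LockerBase<CountingLock> locker(lock);
        locker.unlockEarly();
    }
    return lock.lockCount == 1 && lock.unlockCount == 1;
}
```

This sketch only models the shape of the change; the real lockers also integrate with `AbstractLocker`, `NoLockingNecessaryTag`, and DeferGC as shown in the hunks above.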
index 665d088..d5656f8 100644
@@ -24,6 +24,7 @@
 
 #include "CallData.h"
 #include "CellState.h"
+#include "ConcurrentJSLock.h"
 #include "ConstructData.h"
 #include "EnumerationMode.h"
 #include "Heap.h"
@@ -288,6 +289,9 @@ private:
     JS_EXPORT_PRIVATE void unlockSlow();
 };
 
+using ConcurrentJSCellLocker = ConcurrentJSLockerImpl<JSCellLock>;
+using GCSafeConcurrentJSCellLocker = GCSafeConcurrentJSLockerImpl<JSCellLock>;
+
 // FIXME: Refer to Subspace by reference.
 // https://bugs.webkit.org/show_bug.cgi?id=166988
 template<typename Type>
index acd29de..3a5121d 100644
@@ -2005,7 +2005,7 @@ bool JSObject::deleteProperty(JSCell* cell, JSGlobalObject* globalObject, Proper
 
         PropertyOffset offset = invalidOffset;
         if (structure->isUncacheableDictionary())
-            offset = structure->removePropertyWithoutTransition(vm, propertyName, [] (const GCSafeConcurrentJSLocker&, PropertyOffset, PropertyOffset) { });
+            offset = structure->removePropertyWithoutTransition(vm, propertyName, [](const GCSafeConcurrentJSCellLocker&, PropertyOffset, PropertyOffset) { });
         else {
             structure = Structure::removePropertyTransition(vm, structure, propertyName, offset, &deferredWatchpointFire);
             if (thisObject->m_butterfly && !structure->outOfLineCapacity() && !structure->hasIndexingHeader(thisObject)) {
@@ -3766,7 +3766,7 @@ void JSObject::convertToDictionary(VM& vm)
         vm, Structure::toCacheableDictionaryTransition(vm, structure(vm), &deferredWatchpointFire));
 }
 
-void JSObject::shiftButterflyAfterFlattening(const GCSafeConcurrentJSLocker&, VM& vm, Structure* structure, size_t outOfLineCapacityAfter)
+void JSObject::shiftButterflyAfterFlattening(const GCSafeConcurrentJSCellLocker&, VM& vm, Structure* structure, size_t outOfLineCapacityAfter)
 {
     // This could interleave visitChildren because some old structure could have been a non
     // dictionary structure. We have to be crazy careful. But, we are guaranteed to be holding
index ec6300e..651160b 100644
@@ -815,7 +815,7 @@ public:
     {
         structure(vm)->flattenDictionaryStructure(vm, this);
     }
-    void shiftButterflyAfterFlattening(const GCSafeConcurrentJSLocker&, VM&, Structure* structure, size_t outOfLineCapacityAfter);
+    void shiftButterflyAfterFlattening(const GCSafeConcurrentJSCellLocker&, VM&, Structure*, size_t outOfLineCapacityAfter);
 
     JSGlobalObject* globalObject() const
     {
@@ -1329,7 +1329,7 @@ inline JSValue JSObject::getPrototype(VM& vm, JSGlobalObject* globalObject)
 // flatten an object.
 inline JSValue JSObject::getDirectConcurrently(Structure* structure, PropertyOffset offset) const
 {
-    ConcurrentJSLocker locker(structure->lock());
+    ConcurrentJSCellLocker locker(structure->cellLock());
     if (!structure->isValidOffset(offset))
         return { };
     return getDirect(offset);
index a670034..ecf43c4 100644
@@ -216,7 +216,7 @@ ALWAYS_INLINE PropertyOffset JSObject::prepareToPutDirectWithoutTransition(VM& v
     PropertyOffset result;
     structure->addPropertyWithoutTransition(
         vm, propertyName, attributes,
-        [&] (const GCSafeConcurrentJSLocker&, PropertyOffset offset, PropertyOffset newMaxOffset) {
+        [&](const GCSafeConcurrentJSCellLocker&, PropertyOffset offset, PropertyOffset newMaxOffset) {
             unsigned newOutOfLineCapacity = Structure::outOfLineCapacity(newMaxOffset);
             if (newOutOfLineCapacity != oldOutOfLineCapacity) {
                 Butterfly* butterfly = allocateMoreOutOfLineStorage(vm, oldOutOfLineCapacity, newOutOfLineCapacity);
index b02b347..a1337c6 100644
@@ -44,6 +44,7 @@ void printInternal(PrintStream& out, JSC::JSType type)
     CASE(StringType)
     CASE(SymbolType)
     CASE(BigIntType)
+    CASE(StructureRareDataType)
     CASE(CustomGetterSetterType)
     CASE(APIValueWrapperType)
     CASE(NativeExecutableType)
index 1479925..cd2953c 100644
@@ -29,6 +29,7 @@ enum JSType : uint8_t {
     SymbolType,
     BigIntType,
 
+    StructureRareDataType,
     GetterSetterType,
     CustomGetterSetterType,
     APIValueWrapperType,
index eda446b..64c800b 100644
@@ -91,7 +91,7 @@ bool StructureTransitionTable::contains(UniquedStringImpl* rep, unsigned attribu
 {
     if (isUsingSingleSlot()) {
         Structure* transition = singleTransition();
-        return transition && transition->m_transitionPropertyName == rep && transition->transitionPropertyAttributes() == attributes && transition->isPropertyDeletionTransition() == !isAddition;
+        return transition && transition->transitionPropertyName() == rep && transition->transitionPropertyAttributes() == attributes && transition->isPropertyDeletionTransition() == !isAddition;
     }
     return map()->get(StructureTransitionTable::Hash::Key(rep, attributes, isAddition));
 }
@@ -100,7 +100,7 @@ inline Structure* StructureTransitionTable::get(UniquedStringImpl* rep, unsigned
 {
     if (isUsingSingleSlot()) {
         Structure* transition = singleTransition();
-        return (transition && transition->m_transitionPropertyName == rep && transition->transitionPropertyAttributes() == attributes && transition->isPropertyDeletionTransition() == !isAddition) ? transition : 0;
+        return (transition && transition->transitionPropertyName() == rep && transition->transitionPropertyAttributes() == attributes && transition->isPropertyDeletionTransition() == !isAddition) ? transition : nullptr;
     }
     return map()->get(StructureTransitionTable::Hash::Key(rep, attributes, isAddition));
 }
@@ -123,7 +123,7 @@ void StructureTransitionTable::add(VM& vm, Structure* structure)
     }
 
     // Add the structure to the map.
-    map()->set(StructureTransitionTable::Hash::Key(structure->m_transitionPropertyName.get(), structure->transitionPropertyAttributes(), !structure->isPropertyDeletionTransition()), structure);
+    map()->set(StructureTransitionTable::Hash::Key(structure->transitionPropertyName(), structure->transitionPropertyAttributes(), !structure->isPropertyDeletionTransition()), structure);
 }
 
 void Structure::dumpStatistics()
@@ -142,7 +142,7 @@ void Structure::dumpStatistics()
         switch (structure->m_transitionTable.size()) {
             case 0:
                 ++numberLeaf;
-                if (!structure->previousID())
+                if (!structure->previousID(structure->vm()))
                     ++numberSingletons;
                 break;
 
@@ -151,7 +151,7 @@ void Structure::dumpStatistics()
                 break;
         }
 
-        if (PropertyTable* table = structure->propertyTableOrNull()) {
+        if (PropertyTable* table = structure->propertyTableUnsafeOrNull()) {
             ++numberWithPropertyMaps;
             totalPropertyMapsSize += table->sizeInMemory();
         }
@@ -174,15 +174,12 @@ void Structure::dumpStatistics()
 Structure::Structure(VM& vm, JSGlobalObject* globalObject, JSValue prototype, const TypeInfo& typeInfo, const ClassInfo* classInfo, IndexingType indexingType, unsigned inlineCapacity)
     : JSCell(vm, vm.structureStructure.get())
     , m_blob(vm.heap.structureIDTable().allocateID(this), indexingType, typeInfo)
-    , m_outOfLineTypeFlags(typeInfo.outOfLineTypeFlags())
-    , m_inlineCapacity(inlineCapacity)
-    , m_bitField(0)
     , m_globalObject(vm, this, globalObject, WriteBarrier<JSGlobalObject>::MayBeNull)
     , m_prototype(vm, this, prototype)
-    , m_classInfo(classInfo)
     , m_transitionWatchpointSet(IsWatched)
-    , m_propertyHash(0)
 {
+    setInlineCapacity(inlineCapacity);
+    setClassInfo(classInfo);
     setDictionaryKind(NoneDictionaryKind);
     setIsPinnedPropertyTable(false);
     setHasGetterSetterProperties(classInfo->hasStaticSetterOrReadonlyProperties());
@@ -200,31 +197,30 @@ Structure::Structure(VM& vm, JSGlobalObject* globalObject, JSValue prototype, co
     setIsPropertyDeletionTransition(false);
     setTransitionOffset(vm, invalidOffset);
     setMaxOffset(vm, invalidOffset);
+    setOutOfLineTypeFlags(typeInfo.outOfLineTypeFlags());
+
     ASSERT(inlineCapacity <= JSFinalObject::maxInlineCapacity());
     ASSERT(static_cast<PropertyOffset>(inlineCapacity) < firstOutOfLineOffset);
     ASSERT(!hasRareData());
-    ASSERT(hasReadOnlyOrGetterSetterPropertiesExcludingProto() || !m_classInfo->hasStaticSetterOrReadonlyProperties());
-    ASSERT(hasGetterSetterProperties() || !m_classInfo->hasStaticSetterOrReadonlyProperties());
-    ASSERT(!this->typeInfo().overridesGetCallData() || m_classInfo->methodTable.getCallData != &JSCell::getCallData);
+    ASSERT(hasReadOnlyOrGetterSetterPropertiesExcludingProto() || !this->classInfo()->hasStaticSetterOrReadonlyProperties());
+    ASSERT(hasGetterSetterProperties() || !this->classInfo()->hasStaticSetterOrReadonlyProperties());
+    ASSERT(!this->typeInfo().overridesGetCallData() || this->classInfo()->methodTable.getCallData != &JSCell::getCallData);
 }
 
 const ClassInfo Structure::s_info = { "Structure", nullptr, nullptr, nullptr, CREATE_METHOD_TABLE(Structure) };
 
 Structure::Structure(VM& vm)
     : JSCell(CreatingEarlyCell)
-    , m_inlineCapacity(0)
-    , m_bitField(0)
     , m_prototype(vm, this, jsNull())
-    , m_classInfo(info())
     , m_transitionWatchpointSet(IsWatched)
-    , m_propertyHash(0)
 {
+    setInlineCapacity(0);
+    setClassInfo(info());
     setDictionaryKind(NoneDictionaryKind);
     setIsPinnedPropertyTable(false);
-    setHasGetterSetterProperties(m_classInfo->hasStaticSetterOrReadonlyProperties());
+    setHasGetterSetterProperties(classInfo()->hasStaticSetterOrReadonlyProperties());
     setHasCustomGetterSetterProperties(false);
-    setHasReadOnlyOrGetterSetterPropertiesExcludingProto(m_classInfo->hasStaticSetterOrReadonlyProperties());
+    setHasReadOnlyOrGetterSetterPropertiesExcludingProto(classInfo()->hasStaticSetterOrReadonlyProperties());
     setHasUnderscoreProtoPropertyExcludingOriginalProto(false);
     setIsQuickPropertyAccessAllowedForEnumeration(true);
     setTransitionPropertyAttributes(0);
@@ -240,23 +236,26 @@ Structure::Structure(VM& vm)
  
     TypeInfo typeInfo = TypeInfo(CellType, StructureFlags);
     m_blob = StructureIDBlob(vm.heap.structureIDTable().allocateID(this), 0, typeInfo);
-    m_outOfLineTypeFlags = typeInfo.outOfLineTypeFlags();
+    setOutOfLineTypeFlags(typeInfo.outOfLineTypeFlags());
 
-    ASSERT(hasReadOnlyOrGetterSetterPropertiesExcludingProto() || !m_classInfo->hasStaticSetterOrReadonlyProperties());
-    ASSERT(hasGetterSetterProperties() || !m_classInfo->hasStaticSetterOrReadonlyProperties());
-    ASSERT(!this->typeInfo().overridesGetCallData() || m_classInfo->methodTable.getCallData != &JSCell::getCallData);
+    ASSERT(hasReadOnlyOrGetterSetterPropertiesExcludingProto() || !classInfo()->hasStaticSetterOrReadonlyProperties());
+    ASSERT(hasGetterSetterProperties() || !classInfo()->hasStaticSetterOrReadonlyProperties());
+    ASSERT(!this->typeInfo().overridesGetCallData() || classInfo()->methodTable.getCallData != &JSCell::getCallData);
 }
 
 Structure::Structure(VM& vm, Structure* previous, DeferredStructureTransitionWatchpointFire* deferred)
     : JSCell(vm, vm.structureStructure.get())
-    , m_inlineCapacity(previous->m_inlineCapacity)
-    , m_bitField(0)
-    , m_prototype(vm, this, previous->m_prototype.get())
-    , m_classInfo(previous->m_classInfo)
-    , m_transitionWatchpointSet(IsWatched)
+#if CPU(ADDRESS64)
+    , m_propertyHashAndSeenProperties(previous->m_propertyHashAndSeenProperties)
+#else
     , m_propertyHash(previous->m_propertyHash)
     , m_seenProperties(previous->m_seenProperties)
+#endif
+    , m_prototype(vm, this, previous->m_prototype.get())
+    , m_transitionWatchpointSet(IsWatched)
 {
+    setInlineCapacity(previous->inlineCapacity());
+    setClassInfo(previous->classInfo());
     setDictionaryKind(previous->dictionaryKind());
     setIsPinnedPropertyTable(false);
     setHasBeenFlattenedBefore(previous->hasBeenFlattenedBefore());
@@ -277,7 +276,7 @@ Structure::Structure(VM& vm, Structure* previous, DeferredStructureTransitionWat
  
     TypeInfo typeInfo = previous->typeInfo();
     m_blob = StructureIDBlob(vm.heap.structureIDTable().allocateID(this), previous->indexingModeIncludingHistory(), typeInfo);
-    m_outOfLineTypeFlags = typeInfo.outOfLineTypeFlags();
+    setOutOfLineTypeFlags(typeInfo.outOfLineTypeFlags());
 
     ASSERT(!previous->typeInfo().structureIsImmortal());
     setPreviousID(vm, previous);
@@ -289,9 +288,9 @@ Structure::Structure(VM& vm, Structure* previous, DeferredStructureTransitionWat
 
     if (previous->m_globalObject)
         m_globalObject.set(vm, this, previous->m_globalObject.get());
-    ASSERT(hasReadOnlyOrGetterSetterPropertiesExcludingProto() || !m_classInfo->hasStaticSetterOrReadonlyProperties());
-    ASSERT(hasGetterSetterProperties() || !m_classInfo->hasStaticSetterOrReadonlyProperties());
-    ASSERT(!this->typeInfo().overridesGetCallData() || m_classInfo->methodTable.getCallData != &JSCell::getCallData);
+    ASSERT(hasReadOnlyOrGetterSetterPropertiesExcludingProto() || !classInfo()->hasStaticSetterOrReadonlyProperties());
+    ASSERT(hasGetterSetterProperties() || !classInfo()->hasStaticSetterOrReadonlyProperties());
+    ASSERT(!this->typeInfo().overridesGetCallData() || classInfo()->methodTable.getCallData != &JSCell::getCallData);
 }
 
 Structure::~Structure()
@@ -313,7 +312,7 @@ Structure* Structure::create(PolyProtoTag, VM& vm, JSGlobalObject* globalObject,
     unsigned oldOutOfLineCapacity = result->outOfLineCapacity();
     result->addPropertyWithoutTransition(
         vm, vm.propertyNames->builtinNames().polyProtoName(), static_cast<unsigned>(PropertyAttribute::DontEnum),
-        [&] (const GCSafeConcurrentJSLocker&, PropertyOffset offset, PropertyOffset newMaxOffset) {
+        [&](const GCSafeConcurrentJSCellLocker&, PropertyOffset offset, PropertyOffset newMaxOffset) {
             RELEASE_ASSERT(Structure::outOfLineCapacity(newMaxOffset) == oldOutOfLineCapacity);
             RELEASE_ASSERT(offset == knownPolyProtoOffset);
             RELEASE_ASSERT(isInlineOffset(knownPolyProtoOffset));
@@ -329,15 +328,15 @@ bool Structure::isValidPrototype(JSValue prototype)
     return prototype.isNull() || (prototype.isObject() && prototype.getObject()->mayBePrototype());
 }
 
-void Structure::findStructuresAndMapForMaterialization(Vector<Structure*, 8>& structures, Structure*& structure, PropertyTable*& table)
+void Structure::findStructuresAndMapForMaterialization(VM& vm, Vector<Structure*, 8>& structures, Structure*& structure, PropertyTable*& table)
 {
     ASSERT(structures.isEmpty());
-    table = 0;
+    table = nullptr;
 
-    for (structure = this; structure; structure = structure->previousID()) {
-        structure->m_lock.lock();
+    for (structure = this; structure; structure = structure->previousID(vm)) {
+        structure->cellLock().lock();
         
-        table = structure->propertyTableOrNull();
+        table = structure->propertyTableUnsafeOrNull();
         if (table) {
             // Leave the structure locked, so that the caller can do things to it atomically
             // before it loses its property table.
@@ -345,7 +344,7 @@ void Structure::findStructuresAndMapForMaterialization(Vector<Structure*, 8>& st
         }
         
         structures.append(structure);
-        structure->m_lock.unlock();
+        structure->cellLock().unlock();
     }
     
     ASSERT(!structure);
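The loop in `findStructuresAndMapForMaterialization` walks the previous-structure chain, locking each node only while inspecting it, and deliberately returns with the table-owning node still locked so the caller can mutate it atomically. A self-contained sketch of that hand-over-hand walk (illustrative `Node`/`DemoLock` types, not JSC code) might look like:

```cpp
#include <cassert>
#include <vector>

// A toy lock that records whether it is currently held.
struct DemoLock {
    bool held = false;
    void lock() { held = true; }
    void unlock() { held = false; }
};

// Stand-in for a Structure in the previous-ID chain; only one node in the
// chain is expected to own a property table.
struct Node {
    Node* previous = nullptr;
    int* table = nullptr;
    DemoLock lock;
};

// Walk the chain, locking each node while it is examined. Nodes without a
// table are appended to `visited` and unlocked before moving on; the node
// that owns the table is returned still locked, on purpose, so the caller
// can do atomic work on it before releasing the lock.
inline Node* findTableHolder(Node* start, std::vector<Node*>& visited)
{
    for (Node* node = start; node; node = node->previous) {
        node->lock.lock();
        if (node->table)
            return node; // left locked for the caller
        visited.push_back(node);
        node->lock.unlock();
    }
    return nullptr;
}
```

In the real function the lock is now the cell's own lock (`cellLock()`) rather than the removed `m_lock` member, which is exactly what the hunk above changes.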
@@ -363,34 +362,35 @@ PropertyTable* Structure::materializePropertyTable(VM& vm, bool setPropertyTable
     Structure* structure;
     PropertyTable* table;
     
-    findStructuresAndMapForMaterialization(structures, structure, table);
+    findStructuresAndMapForMaterialization(vm, structures, structure, table);
     
-    unsigned capacity = numberOfSlotsForMaxOffset(maxOffset(), m_inlineCapacity);
+    unsigned capacity = numberOfSlotsForMaxOffset(maxOffset(), inlineCapacity());
     if (table) {
         table = table->copy(vm, capacity);
-        structure->m_lock.unlock();
+        structure->cellLock().unlock();
     } else
         table = PropertyTable::create(vm, capacity);
     
     // Must hold the lock on this structure, since we will be modifying this structure's
     // property map. We don't want getConcurrently() to see the property map in a half-baked
     // state.
-    GCSafeConcurrentJSLocker locker(m_lock, vm.heap);
+    GCSafeConcurrentJSCellLocker locker(cellLock(), vm.heap);
     if (setPropertyTable)
         this->setPropertyTable(vm, table);
 
     for (size_t i = structures.size(); i--;) {
         structure = structures[i];
-        if (!structure->m_transitionPropertyName)
+        UniquedStringImpl* transitionPropertyName = structure->transitionPropertyName();
+        if (!transitionPropertyName)
             continue;
         if (structure->isPropertyDeletionTransition()) {
-            auto item = table->find(structure->m_transitionPropertyName.get());
+            auto item = table->find(transitionPropertyName);
             ASSERT(item.first);
             table->remove(item);
             table->addDeletedOffset(structure->transitionOffset());
             continue;
         }
-        PropertyMapEntry entry(structure->m_transitionPropertyName.get(), structure->transitionOffset(), structure->transitionPropertyAttributes());
+        PropertyMapEntry entry(transitionPropertyName, structure->transitionOffset(), structure->transitionPropertyAttributes());
         auto nextOffset = table->nextOffset(structure->inlineCapacity());
         ASSERT_UNUSED(nextOffset, nextOffset == structure->transitionOffset());
         auto result = table->add(entry);
@@ -425,7 +425,7 @@ Structure* Structure::addPropertyTransitionToExistingStructureImpl(Structure* st
         return existingTransition;
     }
 
-    return 0;
+    return nullptr;
 }
 
 Structure* Structure::addPropertyTransitionToExistingStructure(Structure* structure, PropertyName propertyName, unsigned attributes, PropertyOffset& offset)
@@ -436,7 +436,7 @@ Structure* Structure::addPropertyTransitionToExistingStructure(Structure* struct
 
 Structure* Structure::addPropertyTransitionToExistingStructureConcurrently(Structure* structure, UniquedStringImpl* uid, unsigned attributes, PropertyOffset& offset)
 {
-    ConcurrentJSLocker locker(structure->m_lock);
+    ConcurrentJSCellLocker locker(structure->cellLock());
     return addPropertyTransitionToExistingStructureImpl(structure, uid, attributes, offset);
 }
 
@@ -497,8 +497,7 @@ Structure* Structure::addNewPropertyTransition(VM& vm, Structure* structure, Pro
     }
     
     Structure* transition = create(vm, structure, deferred);
-
-    transition->m_cachedPrototypeChain.setMayBeNull(vm, transition, structure->m_cachedPrototypeChain.get());
+    transition->setCachedPrototypeChain(vm, structure->cachedPrototypeChain());
     
     // While we are adding the property, rematerializing the property table is super weird: we already
     // have a m_transitionPropertyName and transitionPropertyAttributes but the m_transitionOffset is still wrong. If the
@@ -510,12 +509,12 @@ Structure* Structure::addNewPropertyTransition(VM& vm, Structure* structure, Pro
     // case all is well.  If it wasn't for the lock, the GC would have TOCTOU: if could read
     // protectPropertyTableWhileTransitioning before we set it to true, and then blow the table away after.
     {
-        ConcurrentJSLocker locker(transition->m_lock);
+        ConcurrentJSCellLocker locker(transition->cellLock());
         transition->setProtectPropertyTableWhileTransitioning(true);
+        transition->setTransitionPropertyName(locker, propertyName.uid());
     }
 
     transition->m_blob.setIndexingModeIncludingHistory(structure->indexingModeIncludingHistory() & ~CopyOnWrite);
-    transition->m_transitionPropertyName = propertyName.uid();
     transition->setTransitionPropertyAttributes(attributes);
     transition->setPropertyTable(vm, structure->takePropertyTableOrCloneIfPinned(vm));
     transition->setMaxOffset(vm, structure->maxOffset());
@@ -530,7 +529,7 @@ Structure* Structure::addNewPropertyTransition(VM& vm, Structure* structure, Pro
 
     checkOffset(transition->transitionOffset(), transition->inlineCapacity());
     {
-        GCSafeConcurrentJSLocker locker(structure->m_lock, vm.heap);
+        GCSafeConcurrentJSCellLocker locker(structure->cellLock(), vm.heap);
         structure->m_transitionTable.add(vm, transition);
     }
     transition->checkOffsetConsistency();
@@ -575,7 +574,7 @@ Structure* Structure::removeNewPropertyTransition(VM& vm, Structure* structure,
     ASSERT(!Structure::removePropertyTransitionFromExistingStructure(vm, structure, propertyName, offset, deferred));
 
     int transitionCount = 0;
-    for (auto* s = structure; s && transitionCount <= s_maxTransitionLength; s = s->previousID())
+    for (auto* s = structure; s && transitionCount <= s_maxTransitionLength; s = s->previousID(vm))
         ++transitionCount;
 
     if (transitionCount > s_maxTransitionLength) {
@@ -587,16 +586,16 @@ Structure* Structure::removeNewPropertyTransition(VM& vm, Structure* structure,
     }
 
     Structure* transition = create(vm, structure, deferred);
-    transition->m_cachedPrototypeChain.setMayBeNull(vm, transition, structure->m_cachedPrototypeChain.get());
+    transition->setCachedPrototypeChain(vm, structure->cachedPrototypeChain());
 
     // While we are deleting the property, we need to make sure the table is not cleared.
     {
-        ConcurrentJSLocker locker(transition->m_lock);
+        ConcurrentJSCellLocker locker(transition->cellLock());
         transition->setProtectPropertyTableWhileTransitioning(true);
+        transition->setTransitionPropertyName(locker, propertyName.uid());
     }
 
     transition->m_blob.setIndexingModeIncludingHistory(structure->indexingModeIncludingHistory() & ~CopyOnWrite);
-    transition->m_transitionPropertyName = propertyName.uid();
     transition->setPropertyTable(vm, structure->takePropertyTableOrCloneIfPinned(vm));
     transition->setMaxOffset(vm, structure->maxOffset());
     transition->setIsPropertyDeletionTransition(true);
@@ -612,7 +611,7 @@ Structure* Structure::removeNewPropertyTransition(VM& vm, Structure* structure,
 
     checkOffset(transition->transitionOffset(), transition->inlineCapacity());
     {
-        GCSafeConcurrentJSLocker locker(structure->m_lock, vm.heap);
+        GCSafeConcurrentJSCellLocker locker(structure->cellLock(), vm.heap);
         structure->m_transitionTable.add(vm, transition);
     }
     transition->checkOffsetConsistency();
@@ -630,7 +629,7 @@ Structure* Structure::changePrototypeTransition(VM& vm, Structure* structure, JS
     transition->m_prototype.set(vm, transition, prototype);
 
     PropertyTable* table = structure->copyPropertyTableForPinning(vm);
-    transition->pin(holdLock(transition->m_lock), vm, table);
+    transition->pin(holdLock(transition->cellLock()), vm, table);
     transition->setMaxOffset(vm, structure->maxOffset());
     
     transition->checkOffsetConsistency();
@@ -643,7 +642,7 @@ Structure* Structure::attributeChangeTransition(VM& vm, Structure* structure, Pr
         Structure* transition = create(vm, structure);
 
         PropertyTable* table = structure->copyPropertyTableForPinning(vm);
-        transition->pin(holdLock(transition->m_lock), vm, table);
+        transition->pin(holdLock(transition->cellLock()), vm, table);
         transition->setMaxOffset(vm, structure->maxOffset());
         
         structure = transition;
@@ -665,7 +664,7 @@ Structure* Structure::toDictionaryTransition(VM& vm, Structure* structure, Dicti
     Structure* transition = create(vm, structure, deferred);
 
     PropertyTable* table = structure->copyPropertyTableForPinning(vm);
-    transition->pin(holdLock(transition->m_lock), vm, table);
+    transition->pin(holdLock(transition->cellLock()), vm, table);
     transition->setMaxOffset(vm, structure->maxOffset());
     transition->setDictionaryKind(kind);
     transition->setHasBeenDictionary(true);
@@ -701,12 +700,12 @@ Structure* Structure::preventExtensionsTransition(VM& vm, Structure* structure)
 
 PropertyTable* Structure::takePropertyTableOrCloneIfPinned(VM& vm)
 {
-    // This must always return a property table. It can't return null.
-    PropertyTable* result = propertyTableOrNull();
+    // This function must always return a property table. It can't return null.
+    PropertyTable* result = propertyTableUnsafeOrNull();
     if (result) {
         if (isPinnedPropertyTable())
             return result->copy(vm, result->size() + 1);
-        ConcurrentJSLocker locker(m_lock);
+        ConcurrentJSCellLocker locker(cellLock());
         setPropertyTable(vm, nullptr);
         return result;
     }
@@ -743,10 +742,10 @@ Structure* Structure::nonPropertyTransitionSlow(VM& vm, Structure* structure, No
         // table doesn't know how to take into account such wholesale edits.
 
         PropertyTable* table = structure->copyPropertyTableForPinning(vm);
-        transition->pinForCaching(holdLock(transition->m_lock), vm, table);
+        transition->pinForCaching(holdLock(transition->cellLock()), vm, table);
         transition->setMaxOffset(vm, structure->maxOffset());
         
-        table = transition->propertyTableOrNull();
+        table = transition->propertyTableUnsafeOrNull();
         RELEASE_ASSERT(table);
         for (auto& entry : *table) {
             if (setsDontDeleteOnAllProperties(transitionKind))
@@ -761,14 +760,14 @@ Structure* Structure::nonPropertyTransitionSlow(VM& vm, Structure* structure, No
     }
     
     if (setsReadOnlyOnNonAccessorProperties(transitionKind)
-        && !transition->propertyTableOrNull()->isEmpty())
+        && !transition->propertyTableUnsafeOrNull()->isEmpty())
         transition->setHasReadOnlyOrGetterSetterPropertiesExcludingProto(true);
     
     if (structure->isDictionary()) {
         PropertyTable* table = transition->ensurePropertyTable(vm);
-        transition->pin(holdLock(transition->m_lock), vm, table);
+        transition->pin(holdLock(transition->cellLock()), vm, table);
     } else {
-        auto locker = holdLock(structure->m_lock);
+        auto locker = holdLock(structure->cellLock());
         structure->m_transitionTable.add(vm, transition);
     }
 
@@ -820,14 +819,14 @@ Structure* Structure::flattenDictionaryStructure(VM& vm, JSObject* object)
     ASSERT(isDictionary());
     ASSERT(object->structure(vm) == this);
     
-    GCSafeConcurrentJSLocker locker(m_lock, vm.heap);
+    GCSafeConcurrentJSCellLocker locker(cellLock(), vm.heap);
     
     object->setStructureIDDirectly(nuke(id()));
     WTF::storeStoreFence();
 
     size_t beforeOutOfLineCapacity = this->outOfLineCapacity();
     if (isUncacheableDictionary()) {
-        PropertyTable* table = propertyTableOrNull();
+        PropertyTable* table = propertyTableUnsafeOrNull();
         ASSERT(table);
 
         size_t propertyCount = table->size();
@@ -841,14 +840,14 @@ Structure* Structure::flattenDictionaryStructure(VM& vm, JSObject* object)
         auto offset = invalidOffset;
         for (PropertyTable::iterator iter = table->begin(); iter != end; ++iter, ++i) {
             values[i] = object->getDirect(iter->offset);
-            offset = iter->offset = offsetForPropertyNumber(i, m_inlineCapacity);
+            offset = iter->offset = offsetForPropertyNumber(i, inlineCapacity());
         }
         setMaxOffset(vm, offset);
         ASSERT(transitionOffset() == invalidOffset);
         
         // Copies in our values to their compacted locations.
         for (unsigned i = 0; i < propertyCount; i++)
-            object->putDirect(vm, offsetForPropertyNumber(i, m_inlineCapacity), values[i]);
+            object->putDirect(vm, offsetForPropertyNumber(i, inlineCapacity()), values[i]);
 
         table->clearDeletedOffsets();
 
@@ -894,27 +893,32 @@ Structure* Structure::flattenDictionaryStructure(VM& vm, JSObject* object)
     return this;
 }
 
-void Structure::pin(const AbstractLocker&, VM& vm, PropertyTable* table)
+void Structure::pin(const AbstractLocker& locker, VM& vm, PropertyTable* table)
 {
     setIsPinnedPropertyTable(true);
     setPropertyTable(vm, table);
     clearPreviousID();
-    m_transitionPropertyName = nullptr;
+    setTransitionPropertyName(locker, nullptr);
 }
 
-void Structure::pinForCaching(const AbstractLocker&, VM& vm, PropertyTable* table)
+void Structure::pinForCaching(const AbstractLocker& locker, VM& vm, PropertyTable* table)
 {
     setIsPinnedPropertyTable(true);
     setPropertyTable(vm, table);
-    m_transitionPropertyName = nullptr;
+    setTransitionPropertyName(locker, nullptr);
 }
 
 void Structure::allocateRareData(VM& vm)
 {
     ASSERT(!hasRareData());
-    StructureRareData* rareData = StructureRareData::create(vm, previousID());
+    StructureRareData* rareData = StructureRareData::create(vm, cachedPrototypeChain());
     WTF::storeStoreFence();
-    m_previousOrRareData.set(vm, this, rareData);
+#if CPU(ADDRESS64)
+    m_inlineCapacityAndCachedPrototypeChainOrRareData.setPointer(rareData);
+    vm.heap.writeBarrier(this, rareData);
+#else
+    m_cachedPrototypeChainOrRareData.set(vm, this, rareData);
+#endif
     ASSERT(hasRareData());
 }
 
@@ -928,7 +932,7 @@ WatchpointSet* Structure::ensurePropertyReplacementWatchpointSet(VM& vm, Propert
     
     if (!hasRareData())
         allocateRareData(vm);
-    ConcurrentJSLocker locker(m_lock);
+    ConcurrentJSCellLocker locker(cellLock());
     StructureRareData* rareData = this->rareData();
     if (!rareData->m_replacementWatchpointSets) {
         rareData->m_replacementWatchpointSets =
@@ -998,7 +1002,7 @@ PropertyMapStatisticsExitLogger::~PropertyMapStatisticsExitLogger()
 
 PropertyTable* Structure::copyPropertyTableForPinning(VM& vm)
 {
-    if (PropertyTable* table = propertyTableOrNull())
+    if (PropertyTable* table = propertyTableUnsafeOrNull())
         return PropertyTable::clone(vm, *table);
     bool setPropertyTable = false;
     return materializePropertyTable(vm, setPropertyTable);
@@ -1038,14 +1042,14 @@ PropertyOffset Structure::add(VM& vm, PropertyName propertyName, unsigned attrib
 {
     return add<ShouldPin::No>(
         vm, propertyName, attributes,
-        [this, &vm] (const GCSafeConcurrentJSLocker&, PropertyOffset, PropertyOffset newMaxOffset) {
+        [this, &vm](const GCSafeConcurrentJSCellLocker&, PropertyOffset, PropertyOffset newMaxOffset) {
             setMaxOffset(vm, newMaxOffset);
         });
 }
 
 PropertyOffset Structure::remove(VM& vm, PropertyName propertyName)
 {
-    return remove<ShouldPin::No>(vm, propertyName, [this, &vm] (const GCSafeConcurrentJSLocker&, PropertyOffset, PropertyOffset newMaxOffset) {
+    return remove<ShouldPin::No>(vm, propertyName, [this, &vm](const GCSafeConcurrentJSCellLocker&, PropertyOffset, PropertyOffset newMaxOffset) {
         setMaxOffset(vm, newMaxOffset);
     });
 }
@@ -1116,25 +1120,21 @@ void Structure::visitChildren(JSCell* cell, SlotVisitor& visitor)
 
     Base::visitChildren(thisObject, visitor);
     
-    ConcurrentJSLocker locker(thisObject->m_lock);
+    ConcurrentJSCellLocker locker(thisObject->cellLock());
     
     visitor.append(thisObject->m_globalObject);
-    if (!thisObject->isObject())
-        thisObject->m_cachedPrototypeChain.clear();
-    else {
-        visitor.append(thisObject->m_prototype);
-        visitor.append(thisObject->m_cachedPrototypeChain);
-    }
-    visitor.append(thisObject->m_previousOrRareData);
+    visitor.append(thisObject->m_prototype);
+    visitor.appendUnbarriered(thisObject->previousID(visitor.vm()));
+    visitor.appendUnbarriered(thisObject->cachedPrototypeChainOrRareData());
 
     if (thisObject->isPinnedPropertyTable() || thisObject->protectPropertyTableWhileTransitioning()) {
         // NOTE: This can interleave in pin(), in which case it may see a null property table.
         // That's fine, because then the barrier will fire and we will scan this again.
-        visitor.append(thisObject->m_propertyTableUnsafe);
+        visitor.appendUnbarriered(thisObject->propertyTableUnsafeOrNull());
     } else if (visitor.isAnalyzingHeap())
-        visitor.append(thisObject->m_propertyTableUnsafe);
-    else if (thisObject->m_propertyTableUnsafe)
-        thisObject->m_propertyTableUnsafe.clear();
+        visitor.appendUnbarriered(thisObject->propertyTableUnsafeOrNull());
+    else if (thisObject->propertyTableUnsafeOrNull())
+        thisObject->clearPropertyTable();
 }
 
 bool Structure::isCheapDuringGC(VM& vm)
@@ -1290,7 +1290,7 @@ bool Structure::canCachePropertyNameEnumerator(VM& vm) const
     if (!this->canCacheOwnKeys())
         return false;
 
-    StructureChain* structureChain = m_cachedPrototypeChain.get();
+    StructureChain* structureChain = cachedPrototypeChain();
     ASSERT(structureChain);
     StructureID* currentStructureID = structureChain->head();
     while (true) {
index 52cd3c7..a5b1bc5 100644 (file)
@@ -42,6 +42,7 @@
 #include "TinyBloomFilter.h"
 #include "Watchpoint.h"
 #include "WriteBarrierInlines.h"
+#include <wtf/CompactRefPtrTuple.h>
 #include <wtf/PrintStream.h>
 
 namespace WTF {
@@ -258,9 +259,18 @@ public:
     {
         return typeInfo().getOwnPropertySlotIsImpure();
     }
+
+    TypeInfo::OutOfLineTypeFlags outOfLineTypeFlags() const
+    {
+#if CPU(ADDRESS64)
+        return m_outOfLineTypeFlagsAndPropertyTableUnsafe.type();
+#else
+        return m_outOfLineTypeFlags;
+#endif
+    }
     
     // Type accessors.
-    TypeInfo typeInfo() const { return m_blob.typeInfo(m_outOfLineTypeFlags); }
+    TypeInfo typeInfo() const { return m_blob.typeInfo(outOfLineTypeFlags()); }
     bool isObject() const { return typeInfo().isObject(); }
 
     IndexingType indexingType() const { return m_blob.indexingModeIncludingHistory() & AllWritableArrayTypes; }
@@ -315,24 +325,24 @@ public:
     
     bool hasRareData() const
     {
-        return isRareData(m_previousOrRareData.get());
+        return isRareData(cachedPrototypeChainOrRareData());
     }
 
     StructureRareData* rareData()
     {
         ASSERT(hasRareData());
-        return static_cast<StructureRareData*>(m_previousOrRareData.get());
+        return static_cast<StructureRareData*>(cachedPrototypeChainOrRareData());
     }
 
     const StructureRareData* rareData() const
     {
         ASSERT(hasRareData());
-        return static_cast<const StructureRareData*>(m_previousOrRareData.get());
+        return static_cast<const StructureRareData*>(cachedPrototypeChainOrRareData());
     }
 
     const StructureRareData* rareDataConcurrently() const
     {
-        JSCell* cell = m_previousOrRareData.get();
+        JSCell* cell = cachedPrototypeChainOrRareData();
         if (isRareData(cell))
             return static_cast<StructureRareData*>(cell);
         return nullptr;
@@ -345,21 +355,21 @@ public:
         return rareData();
     }
     
-    Structure* previousID() const
+    Structure* previousID(VM& vm) const
     {
-        ASSERT(structure()->classInfo() == info());
-        // This is so written because it's used concurrently. We only load from m_previousOrRareData
-        // once, and this load is guaranteed atomic.
-        JSCell* cell = m_previousOrRareData.get();
-        if (isRareData(cell))
-            return static_cast<StructureRareData*>(cell)->previousID();
-        return static_cast<Structure*>(cell);
+        if (!m_previousID)
+            return nullptr;
+        return vm.getStructure(m_previousID);
     }
     bool transitivelyTransitionedFrom(Structure* structureToFind);
 
     PropertyOffset maxOffset() const
     {
+#if CPU(ADDRESS64)
+        uint16_t maxOffset = m_maxOffsetAndTransitionPropertyName.type();
+#else
         uint16_t maxOffset = m_maxOffset;
+#endif
         if (maxOffset == shortInvalidOffset)
             return invalidOffset;
         if (maxOffset == useRareDataFlag)
@@ -369,22 +379,45 @@ public:
 
     void setMaxOffset(VM& vm, PropertyOffset offset)
     {
-        if (offset == invalidOffset)
-            m_maxOffset = shortInvalidOffset;
-        else if (offset < useRareDataFlag && offset < shortInvalidOffset)
-            m_maxOffset = offset;
-        else if (m_maxOffset == useRareDataFlag)
+        ASSERT(!isCompilationThread() && !Thread::mayBeGCThread());
+        auto commit = [&](uint16_t value) {
+#if CPU(ADDRESS64)
+            m_maxOffsetAndTransitionPropertyName.setType(value);
+#else
+            m_maxOffset = value;
+#endif
+        };
+
+        if (offset == invalidOffset) {
+            commit(shortInvalidOffset);
+            return;
+        }
+        if (offset < useRareDataFlag && offset < shortInvalidOffset) {
+            commit(offset);
+            return;
+        }
+#if CPU(ADDRESS64)
+        uint16_t maxOffset = m_maxOffsetAndTransitionPropertyName.type();
+#else
+        uint16_t maxOffset = m_maxOffset;
+#endif
+        if (maxOffset == useRareDataFlag) {
             rareData()->m_maxOffset = offset;
-        else {
-            ensureRareData(vm)->m_maxOffset = offset;
-            WTF::storeStoreFence();
-            m_maxOffset = useRareDataFlag;
+            return;
         }
+
+        ensureRareData(vm)->m_maxOffset = offset;
+        WTF::storeStoreFence();
+        commit(useRareDataFlag);
     }
 
     PropertyOffset transitionOffset() const
     {
+#if CPU(ADDRESS64)
+        uint16_t transitionOffset = m_transitionOffsetAndClassInfo.type();
+#else
         uint16_t transitionOffset = m_transitionOffset;
+#endif
         if (transitionOffset == shortInvalidOffset)
             return invalidOffset;
         if (transitionOffset == useRareDataFlag)
@@ -394,17 +427,36 @@ public:
 
     void setTransitionOffset(VM& vm, PropertyOffset offset)
     {
-        if (offset == invalidOffset)
-            m_transitionOffset = shortInvalidOffset;
-        else if (offset < useRareDataFlag && offset < shortInvalidOffset)
-            m_transitionOffset = offset;
-        else if (m_transitionOffset == useRareDataFlag)
+        ASSERT(!isCompilationThread() && !Thread::mayBeGCThread());
+        auto commit = [&](uint16_t value) {
+#if CPU(ADDRESS64)
+            m_transitionOffsetAndClassInfo.setType(value);
+#else
+            m_transitionOffset = value;
+#endif
+        };
+
+        if (offset == invalidOffset) {
+            commit(shortInvalidOffset);
+            return;
+        }
+        if (offset < useRareDataFlag && offset < shortInvalidOffset) {
+            commit(offset);
+            return;
+        }
+#if CPU(ADDRESS64)
+        uint16_t transitionOffset = m_transitionOffsetAndClassInfo.type();
+#else
+        uint16_t transitionOffset = m_transitionOffset;
+#endif
+        if (transitionOffset == useRareDataFlag) {
             rareData()->m_transitionOffset = offset;
-        else {
-            ensureRareData(vm)->m_transitionOffset = offset;
-            WTF::storeStoreFence();
-            m_transitionOffset = useRareDataFlag;
+            return;
         }
+
+        ensureRareData(vm)->m_transitionOffset = offset;
+        WTF::storeStoreFence();
+        commit(useRareDataFlag);
     }
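The rewritten setMaxOffset/setTransitionOffset share one encoding trick: a 16-bit field stores small offsets directly and reserves two sentinel values, shortInvalidOffset and useRareDataFlag, meaning "invalid" and "the real value lives in the rare data". A minimal standalone sketch of that encoding follows; the concrete sentinel constants and the EncodedOffset/rareDataOffset names are illustrative assumptions, not JSC's actual values:

```cpp
#include <cstdint>

constexpr int32_t invalidOffset = -1;
constexpr uint16_t shortInvalidOffset = 0xFFFE; // assumed sentinel value
constexpr uint16_t useRareDataFlag = 0xFFFD;    // assumed sentinel value

struct EncodedOffset {
    uint16_t storage { shortInvalidOffset };
    int32_t rareDataOffset { invalidOffset }; // stands in for rareData()->m_maxOffset

    int32_t get() const
    {
        if (storage == shortInvalidOffset)
            return invalidOffset;
        if (storage == useRareDataFlag)
            return rareDataOffset; // spilled: read from the side table
        return storage;
    }

    void set(int32_t offset)
    {
        if (offset == invalidOffset) {
            storage = shortInvalidOffset;
            return;
        }
        if (offset < useRareDataFlag && offset < shortInvalidOffset) {
            storage = static_cast<uint16_t>(offset); // fits inline
            return;
        }
        // Too large for 16 bits minus the sentinels: spill to the side
        // table first, then publish the flag (mirroring the storeStoreFence
        // ordering in the real setters).
        rareDataOffset = offset;
        storage = useRareDataFlag;
    }
};
```

This is why the real setters write `rareData()->m_maxOffset` before committing useRareDataFlag: a concurrent reader that observes the flag must also observe the spilled value.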
 
     static unsigned outOfLineCapacity(PropertyOffset maxOffset)
@@ -440,17 +492,18 @@ public:
     {
         return outOfLineSize(maxOffset());
     }
-    bool hasInlineStorage() const
-    {
-        return !!m_inlineCapacity;
-    }
+    bool hasInlineStorage() const { return !!inlineCapacity(); }
     unsigned inlineCapacity() const
     {
+#if CPU(ADDRESS64)
+        return static_cast<uint8_t>(m_inlineCapacityAndCachedPrototypeChainOrRareData.type());
+#else
         return m_inlineCapacity;
+#endif
     }
     unsigned inlineSize() const
     {
-        return std::min<unsigned>(maxOffset() + 1, m_inlineCapacity);
+        return std::min<unsigned>(maxOffset() + 1, inlineCapacity());
     }
     unsigned totalStorageCapacity() const
     {
@@ -462,12 +515,12 @@ public:
     {
         return JSC::isValidOffset(offset)
             && offset <= maxOffset()
-            && (offset < m_inlineCapacity || offset >= firstOutOfLineOffset);
+            && (offset < static_cast<int>(inlineCapacity()) || offset >= firstOutOfLineOffset);
     }
 
     bool hijacksIndexingHeader() const
     {
-        return isTypedView(m_classInfo->typedArrayStorageType);
+        return isTypedView(classInfo()->typedArrayStorageType);
     }
     
     bool couldHaveIndexingHeader() const
@@ -535,7 +588,14 @@ public:
 
     void setObjectToStringValue(JSGlobalObject*, VM&, JSString* value, PropertySlot toStringTagSymbolSlot);
 
-    const ClassInfo* classInfo() const { return m_classInfo; }
+    const ClassInfo* classInfo() const
+    {
+#if CPU(ADDRESS64)
+        return m_transitionOffsetAndClassInfo.pointer();
+#else
+        return m_classInfo;
+#endif
+    }
 
     static ptrdiff_t structureIDOffset()
     {
@@ -552,29 +612,39 @@ public:
         return OBJECT_OFFSETOF(Structure, m_globalObject);
     }
 
-    static ptrdiff_t classInfoOffset()
+    static ptrdiff_t offsetOfClassInfo()
     {
+#if CPU(ADDRESS64)
+        return OBJECT_OFFSETOF(Structure, m_transitionOffsetAndClassInfo);
+#else
         return OBJECT_OFFSETOF(Structure, m_classInfo);
+#endif
     }
-        
+
     static ptrdiff_t indexingModeIncludingHistoryOffset()
     {
         return OBJECT_OFFSETOF(Structure, m_blob) + StructureIDBlob::indexingModeIncludingHistoryOffset();
     }
-    
-    static ptrdiff_t propertyTableUnsafeOffset()
-    {
-        return OBJECT_OFFSETOF(Structure, m_propertyTableUnsafe);
-    }
 
-    static ptrdiff_t inlineCapacityOffset()
+#if CPU(LITTLE_ENDIAN)
+    static ptrdiff_t offsetOfInlineCapacity()
     {
+#if CPU(ADDRESS64)
+        return OBJECT_OFFSETOF(Structure, m_inlineCapacityAndCachedPrototypeChainOrRareData) + CompactPointerTuple<JSCell*, uint16_t>::offsetOfType();
+#else
         return OBJECT_OFFSETOF(Structure, m_inlineCapacity);
+#endif
     }
+#endif
 
-    static ptrdiff_t previousOrRareDataOffset()
+    static ptrdiff_t offsetOfCachedPrototypeChainOrRareData()
     {
-        return OBJECT_OFFSETOF(Structure, m_previousOrRareData);
+#if CPU(ADDRESS64)
+        return OBJECT_OFFSETOF(Structure, m_inlineCapacityAndCachedPrototypeChainOrRareData);
+#else
+        return OBJECT_OFFSETOF(Structure, m_cachedPrototypeChainOrRareData);
+#endif
     }
 
     static Structure* createStructure(VM&);
@@ -653,12 +723,17 @@ public:
     
     static void dumpContextHeader(PrintStream&);
     
-    ConcurrentJSLock& lock() { return m_lock; }
-
-    unsigned propertyHash() const { return m_propertyHash; }
-
     static bool shouldConvertToPolyProto(const Structure* a, const Structure* b);
 
+    UniquedStringImpl* transitionPropertyName() const
+    {
+#if CPU(ADDRESS64)
+        return m_maxOffsetAndTransitionPropertyName.pointer();
+#else
+        return m_transitionPropertyName.get();
+#endif
+    }
+
     struct PropertyHashEntry {
         const HashTable* table;
         const HashTableValue* value;
@@ -668,6 +743,21 @@ public:
     DECLARE_EXPORT_INFO;
 
 private:
+    bool ruleOutUnseenProperty(UniquedStringImpl*) const;
+#if CPU(ADDRESS64)
+    // As the propertyHash, 64-bit environments use a 16-bit property hash packed with the seenProperties set.
+    uintptr_t propertyHash() const { return m_propertyHashAndSeenProperties.data(); }
+#else
+    uint32_t propertyHash() const { return m_propertyHash; }
+#endif
+    TinyBloomFilter seenProperties() const;
+    void addPropertyHashAndSeenProperty(unsigned, UniquedStringImpl*);
+
+    void setTransitionPropertyName(const AbstractLocker&, UniquedStringImpl* transitionPropertyName)
+    {
+        m_maxOffsetAndTransitionPropertyName.setPointer(transitionPropertyName);
+    }
+
     typedef enum { 
         NoneDictionaryKind = 0,
         CachedDictionaryKind = 1,
@@ -684,6 +774,7 @@ public:
     {\
         m_bitField &= ~(s_##lowerName##Mask << offset);\
         m_bitField |= (newValue & s_##lowerName##Mask) << offset;\
+        ASSERT(newValue == lowerName());\
     }
 
     DEFINE_BITFIELD(DictionaryKind, dictionaryKind, DictionaryKind, 2, 0);
@@ -691,18 +782,18 @@ public:
     DEFINE_BITFIELD(bool, hasGetterSetterProperties, HasGetterSetterProperties, 1, 3);
     DEFINE_BITFIELD(bool, hasReadOnlyOrGetterSetterPropertiesExcludingProto, HasReadOnlyOrGetterSetterPropertiesExcludingProto, 1, 4);
     DEFINE_BITFIELD(bool, isQuickPropertyAccessAllowedForEnumeration, IsQuickPropertyAccessAllowedForEnumeration, 1, 5);
-    DEFINE_BITFIELD(unsigned, transitionPropertyAttributes, TransitionPropertyAttributes, 14, 6);
-    DEFINE_BITFIELD(bool, didPreventExtensions, DidPreventExtensions, 1, 20);
-    DEFINE_BITFIELD(bool, didTransition, DidTransition, 1, 21);
-    DEFINE_BITFIELD(bool, staticPropertiesReified, StaticPropertiesReified, 1, 22);
-    DEFINE_BITFIELD(bool, hasBeenFlattenedBefore, HasBeenFlattenedBefore, 1, 23);
-    DEFINE_BITFIELD(bool, hasCustomGetterSetterProperties, HasCustomGetterSetterProperties, 1, 24);
-    DEFINE_BITFIELD(bool, didWatchInternalProperties, DidWatchInternalProperties, 1, 25);
-    DEFINE_BITFIELD(bool, transitionWatchpointIsLikelyToBeFired, TransitionWatchpointIsLikelyToBeFired, 1, 26);
-    DEFINE_BITFIELD(bool, hasBeenDictionary, HasBeenDictionary, 1, 27);
-    DEFINE_BITFIELD(bool, protectPropertyTableWhileTransitioning, ProtectPropertyTableWhileTransitioning, 1, 28);
-    DEFINE_BITFIELD(bool, hasUnderscoreProtoPropertyExcludingOriginalProto, HasUnderscoreProtoPropertyExcludingOriginalProto, 1, 29);
-    DEFINE_BITFIELD(bool, isPropertyDeletionTransition, IsPropertyDeletionTransition, 1, 30);
+    DEFINE_BITFIELD(unsigned, transitionPropertyAttributes, TransitionPropertyAttributes, 8, 6);
+    DEFINE_BITFIELD(bool, didPreventExtensions, DidPreventExtensions, 1, 14);
+    DEFINE_BITFIELD(bool, didTransition, DidTransition, 1, 15);
+    DEFINE_BITFIELD(bool, staticPropertiesReified, StaticPropertiesReified, 1, 16);
+    DEFINE_BITFIELD(bool, hasBeenFlattenedBefore, HasBeenFlattenedBefore, 1, 17);
+    DEFINE_BITFIELD(bool, hasCustomGetterSetterProperties, HasCustomGetterSetterProperties, 1, 18);
+    DEFINE_BITFIELD(bool, didWatchInternalProperties, DidWatchInternalProperties, 1, 19);
+    DEFINE_BITFIELD(bool, transitionWatchpointIsLikelyToBeFired, TransitionWatchpointIsLikelyToBeFired, 1, 20);
+    DEFINE_BITFIELD(bool, hasBeenDictionary, HasBeenDictionary, 1, 21);
+    DEFINE_BITFIELD(bool, protectPropertyTableWhileTransitioning, ProtectPropertyTableWhileTransitioning, 1, 22);
+    DEFINE_BITFIELD(bool, hasUnderscoreProtoPropertyExcludingOriginalProto, HasUnderscoreProtoPropertyExcludingOriginalProto, 1, 23);
+    DEFINE_BITFIELD(bool, isPropertyDeletionTransition, IsPropertyDeletionTransition, 1, 24);
 
     static_assert(s_bitWidthOfTransitionPropertyAttributes <= sizeof(TransitionPropertyAttributes) * 8);
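The hunk above narrows transitionPropertyAttributes from 14 to 8 bits and adds an ASSERT to the DEFINE_BITFIELD setter so that any value needing more bits is caught immediately. A sketch of the expanded pattern for one field (names and widths mirror the macro, but this is an illustrative stand-in, not the generated code):

```cpp
#include <cassert>
#include <cstdint>

class BitfieldSketch {
public:
    // An 8-bit field at bit offset 6, mirroring transitionPropertyAttributes.
    static constexpr uint32_t s_mask = (1u << 8) - 1;
    static constexpr unsigned s_offset = 6;

    uint32_t transitionPropertyAttributes() const
    {
        return (m_bitField >> s_offset) & s_mask;
    }

    void setTransitionPropertyAttributes(uint32_t newValue)
    {
        m_bitField &= ~(s_mask << s_offset);               // clear the field
        m_bitField |= (newValue & s_mask) << s_offset;     // store the new value
        // The assertion the diff adds: fires if newValue was truncated,
        // i.e. it did not fit in the narrowed 8-bit field.
        assert(newValue == transitionPropertyAttributes());
    }

private:
    uint32_t m_bitField { 0 };
};
```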
 
@@ -721,7 +812,7 @@ private:
     // and the list of structures that we visited before we got to it. If it returns a
     // non-null structure, it will also lock the structure that it returns; it is your job
     // to unlock it.
-    void findStructuresAndMapForMaterialization(Vector<Structure*, 8>& structures, Structure*&, PropertyTable*&);
+    void findStructuresAndMapForMaterialization(VM&, Vector<Structure*, 8>& structures, Structure*&, PropertyTable*&);
     
     static Structure* toDictionaryTransition(VM&, Structure*, DictionaryKind, DeferredStructureTransitionWatchpointFire* = nullptr);
 
@@ -738,9 +829,9 @@ private:
     // This may grab the lock, or not. Do not call when holding the Structure's lock.
     PropertyTable* ensurePropertyTableIfNotEmpty(VM& vm)
     {
-        if (PropertyTable* result = m_propertyTableUnsafe.get())
+        if (PropertyTable* result = propertyTableUnsafeOrNull())
             return result;
-        if (!previousID())
+        if (!previousID(vm))
             return nullptr;
         return materializePropertyTable(vm);
     }
@@ -748,39 +839,37 @@ private:
     // This may grab the lock, or not. Do not call when holding the Structure's lock.
     PropertyTable* ensurePropertyTable(VM& vm)
     {
-        if (PropertyTable* result = m_propertyTableUnsafe.get())
+        if (PropertyTable* result = propertyTableUnsafeOrNull())
             return result;
         return materializePropertyTable(vm);
     }
     
-    PropertyTable* propertyTableOrNull() const
+    PropertyTable* propertyTableUnsafeOrNull() const
     {
+#if CPU(ADDRESS64)
+        return m_outOfLineTypeFlagsAndPropertyTableUnsafe.pointer();
+#else
         return m_propertyTableUnsafe.get();
+#endif
     }
     
     // This will grab the lock. Do not call when holding the Structure's lock.
     JS_EXPORT_PRIVATE PropertyTable* materializePropertyTable(VM&, bool setPropertyTable = true);
     
     void setPropertyTable(VM& vm, PropertyTable* table);
+    void clearPropertyTable();
     
     PropertyTable* takePropertyTableOrCloneIfPinned(VM&);
     PropertyTable* copyPropertyTableForPinning(VM&);
 
     void setPreviousID(VM&, Structure*);
-
-    void clearPreviousID()
-    {
-        if (hasRareData())
-            rareData()->clearPreviousID();
-        else
-            m_previousOrRareData.clear();
-    }
+    void clearPreviousID();
 
     int transitionCountEstimate() const
     {
         // Since the number of transitions is often the same as the last offset (except if there are deletes)
         // we keep the size of Structure down by not storing both.
-        return numberOfSlotsForMaxOffset(maxOffset(), m_inlineCapacity);
+        return numberOfSlotsForMaxOffset(maxOffset(), inlineCapacity());
     }
 
     bool isValid(JSGlobalObject*, StructureChain* cachedPrototypeChain, JSObject* base) const;
@@ -791,7 +880,7 @@ private:
     
     bool isRareData(JSCell* cell) const
     {
-        return cell && cell->structureID() != structureID();
+        return cell && cell->type() == StructureRareDataType;
     }
 
     template<typename DetailsFunc>
@@ -802,48 +891,79 @@ private:
     
     void startWatchingInternalProperties(VM&);
 
+    StructureChain* cachedPrototypeChain() const;
+    void setCachedPrototypeChain(VM&, StructureChain*);
+
+    void setOutOfLineTypeFlags(TypeInfo::OutOfLineTypeFlags);
+    void setClassInfo(const ClassInfo*);
+    void setInlineCapacity(uint8_t);
+
+    JSCell* cachedPrototypeChainOrRareData() const
+    {
+#if CPU(ADDRESS64)
+        return m_inlineCapacityAndCachedPrototypeChainOrRareData.pointer();
+#else
+        return m_cachedPrototypeChainOrRareData.get();
+#endif
+    }
+
     static constexpr int s_maxTransitionLength = 64;
     static constexpr int s_maxTransitionLengthForNonEvalPutById = 512;
 
     // These need to be properly aligned at the beginning of the 'Structure'
     // part of the object.
     StructureIDBlob m_blob;
-    TypeInfo::OutOfLineTypeFlags m_outOfLineTypeFlags;
-
-    uint8_t m_inlineCapacity;
-
-    ConcurrentJSLock m_lock;
-
-    uint32_t m_bitField;
-
-    WriteBarrier<JSGlobalObject> m_globalObject;
-    WriteBarrier<Unknown> m_prototype;
-    mutable WriteBarrier<StructureChain> m_cachedPrototypeChain;
-
-    WriteBarrier<JSCell> m_previousOrRareData;
 
+    // The property table pointer should be accessed through ensurePropertyTable(). During GC, the m_propertyTableUnsafe part of this field may be set to 0 by another thread.
+    // During a Heap Snapshot GC we avoid clearing the table, so it is safe to use.
+#if CPU(ADDRESS64)
+public:
+    static constexpr uintptr_t classInfoMask = CompactPointerTuple<const ClassInfo*, uint16_t>::pointerMask;
+    static constexpr uintptr_t cachedPrototypeChainOrRareDataMask = CompactPointerTuple<JSCell*, uint16_t>::pointerMask;
+private:
+    // Structure is one of the most frequently allocated data structures. Moreover, a Structure tends to stay alive for a long time!
+    // This motivates the extra-complicated hack below that optimizes sizeof(Structure).
+    //
+    // We combine 16-bit data and a 64-bit pointer into one pointer-sized field to (1) save memory while (2) retaining atomic load/store.
+    // The key here is analyzing the data access patterns carefully. They are categorized into three types.
+    //     1. ImmutableAfterConstruction
+    //     2. MutableFromAnyThread
+    //     3. MutableFromMainThread
+    //  We assume that loads can happen on any thread. Under this assumption, a MutableFromAnyThread field paired with MutableFromMainThread (or MutableFromAnyThread) data is the racy combination.
+    //  Other pairs work well. We carefully put assertions in the setters, analyze the access patterns, and pick appropriate pairs for the Structure fields.
+    CompactPointerTuple<PropertyTable*, TypeInfo::OutOfLineTypeFlags> m_outOfLineTypeFlagsAndPropertyTableUnsafe; // ImmutableAfterConstruction(m_outOfLineTypeFlags) and MutableFromAnyThread(m_propertyTableUnsafe).
+    CompactRefPtrTuple<UniquedStringImpl, uint16_t> m_maxOffsetAndTransitionPropertyName; // MutableFromMainThread(m_maxOffset) and MutableFromMainThread(m_transitionPropertyName).
+    CompactPointerTuple<const ClassInfo*, uint16_t> m_transitionOffsetAndClassInfo; // MutableFromMainThread(m_transitionOffset) and ImmutableAfterConstruction(m_classInfo).
+    CompactPointerTuple<JSCell*, uint16_t> m_inlineCapacityAndCachedPrototypeChainOrRareData; // ImmutableAfterConstruction(m_inlineCapacity) and MutableFromMainThread(m_cachedPrototypeChainOrRareData).
+    CompactPointerTuple<UniquedStringImpl*, uint16_t> m_propertyHashAndSeenProperties; // MutableFromMainThread(m_propertyHash) and MutableFromMainThread(m_seenProperties).
+#else
+    TypeInfo::OutOfLineTypeFlags m_outOfLineTypeFlags { 0 };
+    uint8_t m_inlineCapacity { 0 };
+    uint32_t m_propertyHash { 0 };
+    uint16_t m_transitionOffset { 0 };
+    uint16_t m_maxOffset { 0 };
+    WriteBarrier<PropertyTable> m_propertyTableUnsafe;
+    const ClassInfo* m_classInfo { nullptr };
+    WriteBarrier<JSCell> m_cachedPrototypeChainOrRareData;
+    uintptr_t m_seenProperties { 0 };
     RefPtr<UniquedStringImpl> m_transitionPropertyName;
-
-    const ClassInfo* m_classInfo;
+#endif
+    StructureID m_previousID { 0 };
+    uint32_t m_bitField { 0 };
 
     StructureTransitionTable m_transitionTable;
-
-    // Should be accessed through ensurePropertyTable(). During GC, it may be set to 0 by another thread.
-    // During a Heap Snapshot GC we avoid clearing the table so it is safe to use.
-    WriteBarrier<PropertyTable> m_propertyTableUnsafe;
+    WriteBarrier<JSGlobalObject> m_globalObject;
+    WriteBarrier<Unknown> m_prototype;
 
     mutable InlineWatchpointSet m_transitionWatchpointSet;
 
     COMPILE_ASSERT(firstOutOfLineOffset < 256, firstOutOfLineOffset_fits);
 
-    uint16_t m_transitionOffset;
-    uint16_t m_maxOffset;
-
-    uint32_t m_propertyHash;
-    TinyBloomFilter m_seenProperties;
-
     friend class VMInspector;
     friend class JSDollarVMHelper;
 };
+#if CPU(ADDRESS64)
+static_assert(sizeof(Structure) <= 96, "Do not increase sizeof(Structure), it immediately causes memory regression");
+#endif
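The CompactPointerTuple packing the field comments describe relies on 64-bit platforms using only the low 48 bits of a pointer, leaving 16 spare high bits for data in a single atomically loadable word. A simplified sketch of the idea (not the actual WTF::CompactPointerTuple, and it assumes the usual 48-bit user-space address layout):

```cpp
#include <cstdint>

template<typename T>
class CompactPointerTupleSketch {
public:
    // Low 48 bits hold the pointer; high 16 bits hold the payload.
    static constexpr uintptr_t pointerMask = (uintptr_t(1) << 48) - 1;

    T* pointer() const { return reinterpret_cast<T*>(m_word & pointerMask); }
    uint16_t type() const { return static_cast<uint16_t>(m_word >> 48); }

    void setPointer(T* p)
    {
        m_word = (m_word & ~pointerMask) | (reinterpret_cast<uintptr_t>(p) & pointerMask);
    }

    void setType(uint16_t t)
    {
        m_word = (m_word & pointerMask) | (static_cast<uintptr_t>(t) << 48);
    }

private:
    uintptr_t m_word { 0 }; // one word: loads and stores stay atomic
};
```

Because both halves live in one word, a reader racing with a writer sees a consistent pointer/payload pair, which is the property the ImmutableAfterConstruction/MutableFromAnyThread pairing analysis depends on.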
 
 } // namespace JSC
index 0cd98d5..3d68219 100644 (file)
@@ -144,7 +144,7 @@ ALWAYS_INLINE PropertyOffset Structure::get(VM& vm, PropertyName propertyName, u
     ASSERT(!isCompilationThread());
     ASSERT(structure(vm)->classInfo() == info());
 
-    if (m_seenProperties.ruleOut(bitwise_cast<uintptr_t>(propertyName.uid())))
+    if (ruleOutUnseenProperty(propertyName.uid()))
         return invalidOffset;
 
     PropertyTable* propertyTable = ensurePropertyTableIfNotEmpty(vm);
@@ -159,29 +159,57 @@ ALWAYS_INLINE PropertyOffset Structure::get(VM& vm, PropertyName propertyName, u
     return entry->offset;
 }
 
+inline bool Structure::ruleOutUnseenProperty(UniquedStringImpl* uid) const
+{
+    ASSERT(uid);
+    return seenProperties().ruleOut(bitwise_cast<uintptr_t>(uid));
+}
+
+inline TinyBloomFilter Structure::seenProperties() const
+{
+#if CPU(ADDRESS64)
+    return TinyBloomFilter(bitwise_cast<uintptr_t>(m_propertyHashAndSeenProperties.pointer()));
+#else
+    return TinyBloomFilter(m_seenProperties);
+#endif
+}
+
+inline void Structure::addPropertyHashAndSeenProperty(unsigned hash, UniquedStringImpl* pointer)
+{
+#if CPU(ADDRESS64)
+    m_propertyHashAndSeenProperties.setType(m_propertyHashAndSeenProperties.type() ^ hash);
+    m_propertyHashAndSeenProperties.setPointer(bitwise_cast<UniquedStringImpl*>(bitwise_cast<uintptr_t>(m_propertyHashAndSeenProperties.pointer()) | bitwise_cast<uintptr_t>(pointer)));
+#else
+    m_propertyHash = m_propertyHash ^ hash;
+    m_seenProperties = bitwise_cast<uintptr_t>(pointer) | m_seenProperties;
+#endif
+}
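The seenProperties()/ruleOutUnseenProperty() pair above implements a tiny Bloom filter over property-name pointers: every added pointer's bits are OR-ed into one word, and a lookup can be ruled out when it has a set bit the filter lacks. A minimal sketch mirroring JSC's TinyBloomFilter semantics (illustrative, not the real class):

```cpp
#include <cstdint>

class TinyBloomFilterSketch {
public:
    void add(uintptr_t bits) { m_bits |= bits; }

    // True means "definitely never added"; false means "possibly added"
    // (false positives are fine: the caller falls back to the property table).
    bool ruleOut(uintptr_t bits) const { return (bits & m_bits) != bits; }

private:
    uintptr_t m_bits { 0 };
};
```

This is why Structure::get can return invalidOffset without materializing the property table: a single AND/compare proves the name was never added on this chain.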
+
 template<typename Functor>
 void Structure::forEachPropertyConcurrently(const Functor& functor)
 {
     Vector<Structure*, 8> structures;
     Structure* tableStructure;
     PropertyTable* table;
+    VM& vm = this->vm();
     
-    findStructuresAndMapForMaterialization(structures, tableStructure, table);
+    findStructuresAndMapForMaterialization(vm, structures, tableStructure, table);
 
     HashSet<UniquedStringImpl*> seenProperties;
 
-    for (auto* structure : structures) {
-        if (!structure->m_transitionPropertyName || seenProperties.contains(structure->m_transitionPropertyName.get()))
+    for (Structure* structure : structures) {
+        UniquedStringImpl* transitionPropertyName = structure->transitionPropertyName();
+        if (!transitionPropertyName || seenProperties.contains(transitionPropertyName))
             continue;
 
-        seenProperties.add(structure->m_transitionPropertyName.get());
+        seenProperties.add(transitionPropertyName);
 
         if (structure->isPropertyDeletionTransition())
             continue;
 
-        if (!functor(PropertyMapEntry(structure->m_transitionPropertyName.get(), structure->transitionOffset(), structure->transitionPropertyAttributes()))) {
+        if (!functor(PropertyMapEntry(transitionPropertyName, structure->transitionOffset(), structure->transitionPropertyAttributes()))) {
             if (table)
-                tableStructure->m_lock.unlock();
+                tableStructure->cellLock().unlock();
             return;
         }
     }
@@ -192,11 +220,11 @@ void Structure::forEachPropertyConcurrently(const Functor& functor)
                 continue;
 
             if (!functor(entry)) {
-                tableStructure->m_lock.unlock();
+                tableStructure->cellLock().unlock();
                 return;
             }
         }
-        tableStructure->m_lock.unlock();
+        tableStructure->cellLock().unlock();
     }
 }
 
@@ -235,7 +263,8 @@ inline bool Structure::masqueradesAsUndefined(JSGlobalObject* lexicalGlobalObjec
 
 inline bool Structure::transitivelyTransitionedFrom(Structure* structureToFind)
 {
-    for (Structure* current = this; current; current = current->previousID()) {
+    VM& vm = this->vm();
+    for (Structure* current = this; current; current = current->previousID(vm)) {
         if (current == structureToFind)
             return true;
     }
@@ -302,15 +331,41 @@ inline JSValue Structure::prototypeForLookup(JSGlobalObject* globalObject, JSCel
     return prototypeForLookupPrimitiveImpl(globalObject, this);
 }
 
+inline StructureChain* Structure::cachedPrototypeChain() const
+{
+    JSCell* cell = cachedPrototypeChainOrRareData();
+    if (isRareData(cell))
+        return jsCast<StructureRareData*>(cell)->cachedPrototypeChain();
+    return jsCast<StructureChain*>(cell);
+}
+
+inline void Structure::setCachedPrototypeChain(VM& vm, StructureChain* chain)
+{
+    ASSERT(isObject());
+    ASSERT(!isCompilationThread() && !Thread::mayBeGCThread());
+    JSCell* cell = cachedPrototypeChainOrRareData();
+    if (isRareData(cell)) {
+        jsCast<StructureRareData*>(cell)->setCachedPrototypeChain(vm, chain);
+        return;
+    }
+#if CPU(ADDRESS64)
+    m_inlineCapacityAndCachedPrototypeChainOrRareData.setPointer(chain);
+    vm.heap.writeBarrier(this, chain);
+#else
+    m_cachedPrototypeChainOrRareData.set(vm, this, chain);
+#endif
+}
+
 inline StructureChain* Structure::prototypeChain(VM& vm, JSGlobalObject* globalObject, JSObject* base) const
 {
+    ASSERT(this->isObject());
     ASSERT(base->structure(vm) == this);
     // We cache our prototype chain so our clients can share it.
-    if (!isValid(globalObject, m_cachedPrototypeChain.get(), base)) {
+    if (!isValid(globalObject, cachedPrototypeChain(), base)) {
         JSValue prototype = prototypeForLookup(globalObject, base);
-        m_cachedPrototypeChain.set(vm, this, StructureChain::create(vm, prototype.isNull() ? nullptr : asObject(prototype)));
+        const_cast<Structure*>(this)->setCachedPrototypeChain(vm, StructureChain::create(vm, prototype.isNull() ? nullptr : asObject(prototype)));
     }
-    return m_cachedPrototypeChain.get();
+    return cachedPrototypeChain();
 }
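The `cachedPrototypeChainOrRareData` accessors above fold two previously separate fields into one cell pointer, discriminated by the cell's type. A minimal standalone sketch of that pattern (illustrative types only, not JSC's real classes — JSC discriminates via the cell's `StructureRareDataType`, modeled here as a plain flag):

```cpp
#include <cassert>

// One field holds either the rare-data cell or the cached prototype chain
// directly; a type check on the cell tells the two apart.
struct Cell { bool isRareData { false }; };

struct Chain : Cell { };

struct RareData : Cell {
    Cell* cachedChain { nullptr }; // the chain lives inside the rare data
    RareData() { isRareData = true; }
};

// Mirrors Structure::cachedPrototypeChain(): unwrap the rare data if present.
inline Cell* cachedPrototypeChain(Cell* chainOrRareData)
{
    if (!chainOrRareData)
        return nullptr;
    if (chainOrRareData->isRareData)
        return static_cast<RareData*>(chainOrRareData)->cachedChain;
    return chainOrRareData;
}
```

The payoff is one pointer-sized field where the structure previously needed two (rare data and cached chain), at the cost of a branch on access.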
 
 inline StructureChain* Structure::prototypeChain(JSGlobalObject* globalObject, JSObject* base) const
@@ -350,7 +405,7 @@ inline void Structure::didReplaceProperty(PropertyOffset offset)
 
 inline WatchpointSet* Structure::propertyReplacementWatchpointSet(PropertyOffset offset)
 {
-    ConcurrentJSLocker locker(m_lock);
+    ConcurrentJSCellLocker locker(cellLock());
     if (!hasRareData())
         return nullptr;
     WTF::loadLoadFence();
@@ -371,16 +426,16 @@ ALWAYS_INLINE bool Structure::checkOffsetConsistency(PropertyTable* propertyTabl
         return true;
     
     unsigned totalSize = propertyTable->propertyStorageSize();
-    unsigned inlineOverflowAccordingToTotalSize = totalSize < m_inlineCapacity ? 0 : totalSize - m_inlineCapacity;
+    unsigned inlineOverflowAccordingToTotalSize = totalSize < inlineCapacity() ? 0 : totalSize - inlineCapacity();
 
     auto fail = [&] (const char* description) {
         dataLog("Detected offset inconsistency: ", description, "!\n");
         dataLog("this = ", RawPointer(this), "\n");
         dataLog("transitionOffset = ", transitionOffset(), "\n");
         dataLog("maxOffset = ", maxOffset(), "\n");
-        dataLog("m_inlineCapacity = ", m_inlineCapacity, "\n");
+        dataLog("m_inlineCapacity = ", inlineCapacity(), "\n");
         dataLog("propertyTable = ", RawPointer(propertyTable), "\n");
-        dataLog("numberOfSlotsForMaxOffset = ", numberOfSlotsForMaxOffset(maxOffset(), m_inlineCapacity), "\n");
+        dataLog("numberOfSlotsForMaxOffset = ", numberOfSlotsForMaxOffset(maxOffset(), inlineCapacity()), "\n");
         dataLog("totalSize = ", totalSize, "\n");
         dataLog("inlineOverflowAccordingToTotalSize = ", inlineOverflowAccordingToTotalSize, "\n");
         dataLog("numberOfOutOfLineSlotsForMaxOffset = ", numberOfOutOfLineSlotsForMaxOffset(maxOffset()), "\n");
@@ -388,7 +443,7 @@ ALWAYS_INLINE bool Structure::checkOffsetConsistency(PropertyTable* propertyTabl
         UNREACHABLE_FOR_PLATFORM();
     };
     
-    if (numberOfSlotsForMaxOffset(maxOffset(), m_inlineCapacity) != totalSize)
+    if (numberOfSlotsForMaxOffset(maxOffset(), inlineCapacity()) != totalSize)
         fail("numberOfSlotsForMaxOffset doesn't match totalSize");
     if (inlineOverflowAccordingToTotalSize != numberOfOutOfLineSlotsForMaxOffset(maxOffset()))
         fail("inlineOverflowAccordingToTotalSize doesn't match numberOfOutOfLineSlotsForMaxOffset");
@@ -398,7 +453,7 @@ ALWAYS_INLINE bool Structure::checkOffsetConsistency(PropertyTable* propertyTabl
 
 ALWAYS_INLINE bool Structure::checkOffsetConsistency() const
 {
-    PropertyTable* propertyTable = propertyTableOrNull();
+    PropertyTable* propertyTable = propertyTableUnsafeOrNull();
 
     if (!propertyTable) {
         ASSERT(!isPinnedPropertyTable());
@@ -439,7 +494,7 @@ inline PropertyOffset Structure::add(VM& vm, PropertyName propertyName, unsigned
 {
     PropertyTable* table = ensurePropertyTable(vm);
 
-    GCSafeConcurrentJSLocker locker(m_lock, vm.heap);
+    GCSafeConcurrentJSCellLocker locker(cellLock(), vm.heap);
 
     switch (shouldPin) {
     case ShouldPin::Yes:
@@ -460,10 +515,9 @@ inline PropertyOffset Structure::add(VM& vm, PropertyName propertyName, unsigned
 
     auto rep = propertyName.uid();
 
-    PropertyOffset newOffset = table->nextOffset(m_inlineCapacity);
+    PropertyOffset newOffset = table->nextOffset(inlineCapacity());
 
-    m_propertyHash = m_propertyHash ^ rep->existingSymbolAwareHash();
-    m_seenProperties.add(bitwise_cast<uintptr_t>(rep));
+    addPropertyHashAndSeenProperty(rep->existingSymbolAwareHash(), rep);
 
     auto result = table->add(PropertyMapEntry(rep, newOffset, attributes));
     ASSERT_UNUSED(result, result.second);
@@ -482,7 +536,7 @@ template<Structure::ShouldPin shouldPin, typename Func>
 inline PropertyOffset Structure::remove(VM& vm, PropertyName propertyName, const Func& func)
 {
     PropertyTable* table = ensurePropertyTable(vm);
-    GCSafeConcurrentJSLocker locker(m_lock, vm.heap);
+    GCSafeConcurrentJSCellLocker locker(cellLock(), vm.heap);
 
     switch (shouldPin) {
     case ShouldPin::Yes:
@@ -532,7 +586,7 @@ inline PropertyOffset Structure::removePropertyWithoutTransition(VM& vm, Propert
 {
     ASSERT(isUncacheableDictionary());
     ASSERT(isPinnedPropertyTable());
-    ASSERT(propertyTableOrNull());
+    ASSERT(propertyTableUnsafeOrNull());
     
     return remove<ShouldPin::Yes>(vm, propertyName, func);
 }
@@ -550,15 +604,60 @@ ALWAYS_INLINE void Structure::setGlobalObject(VM& vm, JSGlobalObject* globalObje
 
 ALWAYS_INLINE void Structure::setPropertyTable(VM& vm, PropertyTable* table)
 {
+#if CPU(ADDRESS64)
+    m_outOfLineTypeFlagsAndPropertyTableUnsafe.setPointer(table);
+    vm.heap.writeBarrier(this, table);
+#else
     m_propertyTableUnsafe.setMayBeNull(vm, this, table);
+#endif
+}
+
+ALWAYS_INLINE void Structure::clearPropertyTable()
+{
+#if CPU(ADDRESS64)
+    m_outOfLineTypeFlagsAndPropertyTableUnsafe.setPointer(nullptr);
+#else
+    m_propertyTableUnsafe.clear();
+#endif
+}
+
+ALWAYS_INLINE void Structure::setOutOfLineTypeFlags(TypeInfo::OutOfLineTypeFlags outOfLineTypeFlags)
+{
+#if CPU(ADDRESS64)
+    m_outOfLineTypeFlagsAndPropertyTableUnsafe.setType(outOfLineTypeFlags);
+#else
+    m_outOfLineTypeFlags = outOfLineTypeFlags;
+#endif
+}
+
+ALWAYS_INLINE void Structure::setInlineCapacity(uint8_t inlineCapacity)
+{
+#if CPU(ADDRESS64)
+    m_inlineCapacityAndCachedPrototypeChainOrRareData.setType(inlineCapacity);
+#else
+    m_inlineCapacity = inlineCapacity;
+#endif
+}
+
+ALWAYS_INLINE void Structure::setClassInfo(const ClassInfo* classInfo)
+{
+#if CPU(ADDRESS64)
+    m_transitionOffsetAndClassInfo.setPointer(classInfo);
+#else
+    m_classInfo = classInfo;
+#endif
 }
 
 ALWAYS_INLINE void Structure::setPreviousID(VM& vm, Structure* structure)
 {
-    if (hasRareData())
-        rareData()->setPreviousID(vm, structure);
-    else
-        m_previousOrRareData.set(vm, this, structure);
+    ASSERT(structure);
+    m_previousID = structure->id();
+    vm.heap.writeBarrier(this, structure);
+}
+
+inline void Structure::clearPreviousID()
+{
+    m_previousID = 0;
 }
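`setPreviousID` above now stores a 32-bit `StructureID` rather than an 8-byte `WriteBarrier<Structure>`, with `previousID(vm)` resolving the ID back to a pointer through the VM's structure table. A hypothetical sketch of that indirection (the table and names here are illustrative, not JSC's actual StructureIDTable API):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct Structure;

// Stand-in for the VM-owned ID-to-pointer table; id 0 means "no structure".
static std::vector<Structure*> structureTable { nullptr };

struct Structure {
    uint32_t id { 0 };
    uint32_t previousID { 0 }; // 4 bytes instead of an 8-byte pointer
};

// Mirrors Structure::previousID(vm): translate the stored ID via the table.
inline Structure* previous(const Structure& s)
{
    return s.previousID ? structureTable[s.previousID] : nullptr;
}
```

This is why callers in the patch (e.g. `transitivelyTransitionedFrom`, `functionGetStructureTransitionList`) now pass `vm` to `previousID`: the lookup needs the table.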
 
 ALWAYS_INLINE bool Structure::shouldConvertToPolyProto(const Structure* a, const Structure* b)
index b5a98db..a40294d 100644 (file)
@@ -40,12 +40,12 @@ const ClassInfo StructureRareData::s_info = { "StructureRareData", nullptr, null
 
 Structure* StructureRareData::createStructure(VM& vm, JSGlobalObject* globalObject, JSValue prototype)
 {
-    return Structure::create(vm, globalObject, prototype, TypeInfo(CellType, StructureFlags), info());
+    return Structure::create(vm, globalObject, prototype, TypeInfo(StructureRareDataType, StructureFlags), info());
 }
 
-StructureRareData* StructureRareData::create(VM& vm, Structure* previous)
+StructureRareData* StructureRareData::create(VM& vm, StructureChain* chain)
 {
-    StructureRareData* rareData = new (NotNull, allocateCell<StructureRareData>(vm.heap)) StructureRareData(vm, previous);
+    StructureRareData* rareData = new (NotNull, allocateCell<StructureRareData>(vm.heap)) StructureRareData(vm, chain);
     rareData->finishCreation(vm);
     return rareData;
 }
@@ -55,13 +55,13 @@ void StructureRareData::destroy(JSCell* cell)
     static_cast<StructureRareData*>(cell)->StructureRareData::~StructureRareData();
 }
 
-StructureRareData::StructureRareData(VM& vm, Structure* previous)
+StructureRareData::StructureRareData(VM& vm, StructureChain* chain)
     : JSCell(vm, vm.structureRareDataStructure.get())
     , m_maxOffset(invalidOffset)
     , m_transitionOffset(invalidOffset)
 {
-    if (previous)
-        m_previous.set(vm, this, previous);
+    if (chain)
+        m_cachedPrototypeChain.set(vm, this, chain);
 }
 
 void StructureRareData::visitChildren(JSCell* cell, SlotVisitor& visitor)
@@ -70,7 +70,7 @@ void StructureRareData::visitChildren(JSCell* cell, SlotVisitor& visitor)
     ASSERT_GC_OBJECT_INHERITS(thisObject, info());
 
     Base::visitChildren(thisObject, visitor);
-    visitor.append(thisObject->m_previous);
+    visitor.append(thisObject->m_cachedPrototypeChain);
     visitor.appendUnbarriered(thisObject->objectToStringValue());
     visitor.append(thisObject->m_cachedPropertyNameEnumerator);
     auto* cachedOwnKeys = thisObject->m_cachedOwnKeys.unvalidatedGet();
index 07ed904..fe903bb 100644 (file)
@@ -35,6 +35,7 @@ namespace JSC {
 
 class JSPropertyNameEnumerator;
 class Structure;
+class StructureChain;
 class ObjectToStringAdaptiveInferredPropertyValueWatchpoint;
 class ObjectToStringAdaptiveStructureWatchpoint;
 
@@ -49,7 +50,7 @@ public:
         return &vm.structureRareDataSpace;
     }
 
-    static StructureRareData* create(VM&, Structure*);
+    static StructureRareData* create(VM&, StructureChain*);
 
     static constexpr bool needsDestruction = true;
     static void destroy(JSCell*);
@@ -58,12 +59,11 @@ public:
 
     static Structure* createStructure(VM&, JSGlobalObject*, JSValue prototype);
 
-    Structure* previousID() const
+    StructureChain* cachedPrototypeChain() const
     {
-        return m_previous.get();
+        return m_cachedPrototypeChain.get();
     }
-    void setPreviousID(VM&, Structure*);
-    void clearPreviousID();
+    void setCachedPrototypeChain(VM&, StructureChain*);
 
     JSString* objectToStringValue() const;
     void setObjectToStringValue(JSGlobalObject*, VM&, Structure* baseStructure, JSString* value, PropertySlot toStringTagSymbolSlot);
@@ -102,9 +102,9 @@ private:
 
     void clearObjectToStringValue();
 
-    StructureRareData(VM&, Structure*);
+    StructureRareData(VM&, StructureChain*);
 
-    WriteBarrier<Structure> m_previous;
+    WriteBarrier<StructureChain> m_cachedPrototypeChain;
     WriteBarrier<JSString> m_objectToStringValue;
     // FIXME: We should have some story for clearing these property names caches in GC.
     // https://bugs.webkit.org/show_bug.cgi?id=192659
index e2fd504..94c49a8 100644 (file)
 
 namespace JSC {
 
-inline void StructureRareData::setPreviousID(VM& vm, Structure* structure)
+inline void StructureRareData::setCachedPrototypeChain(VM& vm, StructureChain* chain)
 {
-    m_previous.set(vm, this, structure);
-}
-
-inline void StructureRareData::clearPreviousID()
-{
-    m_previous.clear();
+    m_cachedPrototypeChain.setMayBeNull(vm, this, chain);
 }
 
 inline JSString* StructureRareData::objectToStringValue() const
index f48928c..ad29209 100644 (file)
@@ -150,7 +150,7 @@ class StructureTransitionTable {
         // We encode (2) and (3) into (1)'s empty bits since a pointer is 48bit and lower 3 bits are usable because of alignment.
         struct Key {
             friend struct Hash;
-            static_assert(WTF_OS_CONSTANT_EFFECTIVE_ADDRESS_WIDTH <= 48);
+            static_assert(OS_CONSTANT(EFFECTIVE_ADDRESS_WIDTH) <= 48);
             static constexpr uintptr_t isAdditionMask = 1ULL;
             static constexpr uintptr_t stringMask = ((1ULL << 48) - 1) & (~isAdditionMask);
             static constexpr unsigned attributesShift = 48;
index 9a7f49a..8b846ba 100644 (file)
@@ -2804,7 +2804,7 @@ EncodedJSValue JSC_HOST_CALL JSDollarVMHelper::functionGetStructureTransitionLis
         return JSValue::encode(jsNull());
     Vector<Structure*, 8> structures;
 
-    for (auto* structure = obj->structure(); structure; structure = structure->previousID())
+    for (auto* structure = obj->structure(); structure; structure = structure->previousID(vm))
         structures.append(structure);
 
     JSArray* result = JSArray::tryCreate(vm, globalObject->arrayStructureForIndexingTypeDuringAllocation(ArrayWithContiguous), 0);
@@ -2818,8 +2818,8 @@ EncodedJSValue JSC_HOST_CALL JSDollarVMHelper::functionGetStructureTransitionLis
         RETURN_IF_EXCEPTION(scope, { });
         result->push(globalObject, JSValue(structure->maxOffset()));
         RETURN_IF_EXCEPTION(scope, { });
-        if (structure->m_transitionPropertyName)
-            result->push(globalObject, jsString(vm, String(*structure->m_transitionPropertyName)));
+        if (auto* transitionPropertyName = structure->transitionPropertyName())
+            result->push(globalObject, jsString(vm, String(*transitionPropertyName)));
         else
             result->push(globalObject, jsNull());
         RETURN_IF_EXCEPTION(scope, { });
index 16ecdaa..2bff704 100644 (file)
@@ -275,7 +275,7 @@ MacroAssemblerCodePtr<JSEntryPtrTag> WebAssemblyFunction::jsCallEntrypointSlow()
 
             stackLimitGPRIsClobbered = true;
             jit.emitLoadStructure(vm, scratchGPR, scratchGPR, stackLimitGPR);
-            jit.loadPtr(CCallHelpers::Address(scratchGPR, Structure::classInfoOffset()), scratchGPR);
+            jit.emitLoadClassInfoFromStructure(scratchGPR, scratchGPR);
 
             static_assert(std::is_final<WebAssemblyFunction>::value, "We do not check for subtypes below");
             static_assert(std::is_final<WebAssemblyWrapperFunction>::value, "We do not check for subtypes below");
index cf12f27..407a38a 100644 (file)
@@ -1,3 +1,21 @@
+2020-02-23  Yusuke Suzuki  <ysuzuki@apple.com>
+
+        [JSC] Shrink Structure
+        https://bugs.webkit.org/show_bug.cgi?id=207827
+
+        Reviewed by Saam Barati.
+
+        Make CompactPointerTuple usable for storing 16 bits data.
+
+        * WTF.xcodeproj/project.pbxproj:
+        * wtf/CMakeLists.txt:
+        * wtf/CompactPointerTuple.h:
+        * wtf/CompactRefPtrTuple.h: Added.
+        * wtf/text/StringImpl.h:
+        * wtf/text/SymbolImpl.h:
+        (WTF::SymbolImpl::hashForSymbol const):
+        (WTF::SymbolImpl::SymbolImpl):
+
 2020-02-21  Antti Koivisto  <antti@apple.com>
 
         REGRESSION(r257072): MotionMark | Mac | -10%
index 896eae6..1a61143 100644 (file)
                E360C7652127B85C00C90F0E /* Unexpected.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Unexpected.h; sourceTree = "<group>"; };
                E36895CB23A445CD008DD4C8 /* PackedRef.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = PackedRef.h; sourceTree = "<group>"; };
                E36895CC23A445EE008DD4C8 /* PackedRefPtr.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = PackedRefPtr.h; sourceTree = "<group>"; };
+               E38020DB2401C0930037CA9E /* CompactRefPtrTuple.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = CompactRefPtrTuple.h; sourceTree = "<group>"; };
                E388886D20C9095100E632BC /* WorkerPool.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = WorkerPool.cpp; sourceTree = "<group>"; };
                E388886E20C9095100E632BC /* WorkerPool.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WorkerPool.h; sourceTree = "<group>"; };
                E38C41241EB4E04C0042957D /* CPUTimeCocoa.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CPUTimeCocoa.cpp; sourceTree = "<group>"; };
                                0F66B2811DC97BAB004A1D3F /* ClockType.h */,
                                0FC4EDE51696149600F65041 /* CommaPrinter.h */,
                                E3CF76902115D6BA0091DE48 /* CompactPointerTuple.h */,
+                               E38020DB2401C0930037CA9E /* CompactRefPtrTuple.h */,
                                0F8F2B8F172E00F0007DBDA5 /* CompilationThread.cpp */,
                                0F8F2B90172E00F0007DBDA5 /* CompilationThread.h */,
                                A8A47270151A825A004123FF /* Compiler.h */,
index 84f8bed..ce0dab6 100644 (file)
@@ -30,6 +30,7 @@ set(WTF_PUBLIC_HEADERS
     ClockType.h
     CommaPrinter.h
     CompactPointerTuple.h
+    CompactRefPtrTuple.h
     CompilationThread.h
     Compiler.h
     CompletionHandler.h
index b8f201e..33e4f95 100644 (file)
@@ -1,5 +1,6 @@
 /*
  * Copyright (C) 2018 Yusuke Suzuki <utatane.tea@gmail.com>.
+ * Copyright (C) 2020 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 
 namespace WTF {
 
-// The goal of this class is folding a pointer and 1 byte value into 8 bytes in both 32bit and 64bit architectures.
+// The goal of this class is folding a pointer and 2 bytes value into 8 bytes in both 32bit and 64bit architectures.
 // 32bit architecture just has a pair of byte and pointer, which should be 8 bytes.
-// In 64bit, we use the upper 5 bits and lower 3 bits (zero due to alignment) since these bits are safe to use even
-// with 5-level page tables where the effective pointer width is 57bits.
+// We are assuming 48bit pointers here, which is also assumed in JSValue anyway.
 template<typename PointerType, typename Type>
 class CompactPointerTuple final {
     WTF_MAKE_FAST_ALLOCATED;
 public:
-    static_assert(sizeof(Type) == 1, "");
+    static_assert(sizeof(Type) <= 2, "");
     static_assert(std::is_pointer<PointerType>::value, "");
     static_assert(std::is_integral<Type>::value || std::is_enum<Type>::value, "");
+    using UnsignedType = std::make_unsigned_t<std::conditional_t<std::is_same_v<Type, bool>, uint8_t, Type>>;
+    static_assert(sizeof(UnsignedType) == sizeof(Type));
 
     CompactPointerTuple() = default;
 
 #if CPU(ADDRESS64)
 public:
-    static constexpr uint64_t encodeType(uint8_t type)
-    {
-        // Encode 8bit type UUUDDDDD into 64bit data DDDDD..56bit..UUU.
-        return (static_cast<uint64_t>(type) << 59) | (static_cast<uint64_t>(type) >> 5);
-    }
-    static constexpr uint8_t decodeType(uint64_t value)
+    static constexpr unsigned maxNumberOfBitsInPointer = 48;
+    static_assert(OS_CONSTANT(EFFECTIVE_ADDRESS_WIDTH) <= maxNumberOfBitsInPointer);
+
+#if CPU(LITTLE_ENDIAN)
+    static ptrdiff_t offsetOfType()
     {
-        // Decode 64bit data DDDDD..56bit..UUU into 8bit type UUUDDDDD.
-        return static_cast<uint8_t>((value >> 59) | (value << 5));
+        return maxNumberOfBitsInPointer / 8;
     }
+#endif
 
-    static constexpr uint64_t typeMask = encodeType(UINT8_MAX);
-    static_assert(0xF800000000000007ULL == typeMask, "");
-    static constexpr uint64_t pointerMask = ~typeMask;
+    static constexpr uint64_t pointerMask = (1ULL << maxNumberOfBitsInPointer) - 1;
 
     CompactPointerTuple(PointerType pointer, Type type)
-        : m_data(bitwise_cast<uint64_t>(pointer) | encodeType(static_cast<uint8_t>(type)))
+        : m_data(encode(pointer, type))
     {
-        ASSERT((bitwise_cast<uint64_t>(pointer) & 0b111) == 0x0);
+        ASSERT(this->type() == type);
+        ASSERT(this->pointer() == pointer);
     }
 
     PointerType pointer() const { return bitwise_cast<PointerType>(m_data & pointerMask); }
     void setPointer(PointerType pointer)
     {
-        static_assert(alignof(typename std::remove_pointer<PointerType>::type) >= alignof(void*), "");
-        ASSERT((bitwise_cast<uint64_t>(pointer) & 0b111) == 0x0);
-        m_data = CompactPointerTuple(pointer, type()).m_data;
+        m_data = encode(pointer, type());
+        ASSERT(this->pointer() == pointer);
     }
 
-    Type type() const { return static_cast<Type>(decodeType(m_data)); }
+    Type type() const { return decodeType(m_data); }
     void setType(Type type)
     {
-        m_data = CompactPointerTuple(pointer(), type).m_data;
+        m_data = encode(pointer(), type);
+        ASSERT(this->type() == type);
     }
 
+    uint64_t data() const { return m_data; }
+
 private:
+    static constexpr uint64_t encodeType(Type type)
+    {
+        return static_cast<uint64_t>(static_cast<UnsignedType>(type)) << maxNumberOfBitsInPointer;
+    }
+    static constexpr Type decodeType(uint64_t value)
+    {
+        return static_cast<Type>(static_cast<UnsignedType>(value >> maxNumberOfBitsInPointer));
+    }
+
+    static uint64_t encode(PointerType pointer, Type type)
+    {
+        return bitwise_cast<uint64_t>(pointer) | encodeType(type);
+    }
+
     uint64_t m_data { 0 };
 #else
 public:
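The new `CompactPointerTuple` encoding above keeps the pointer in the low 48 bits and the (up to 16-bit) type in the high 16 bits, replacing the old 8-bit rotate scheme. A self-contained sketch of the same bit layout, assuming 48-bit effective addresses as the patch does:

```cpp
#include <cassert>
#include <cstdint>

constexpr unsigned maxNumberOfBitsInPointer = 48;
constexpr uint64_t pointerMask = (1ULL << maxNumberOfBitsInPointer) - 1;

// Pointer occupies bits 0..47; the 16-bit type occupies bits 48..63.
inline uint64_t encode(void* pointer, uint16_t type)
{
    return reinterpret_cast<uint64_t>(pointer)
        | (static_cast<uint64_t>(type) << maxNumberOfBitsInPointer);
}

inline void* decodePointer(uint64_t data)
{
    return reinterpret_cast<void*>(data & pointerMask);
}

inline uint16_t decodeType(uint64_t data)
{
    return static_cast<uint16_t>(data >> maxNumberOfBitsInPointer);
}
```

Widening the type field from 8 to 16 bits is what lets Structure pack `OutOfLineTypeFlags`, `transitionOffset`, `maxOffset`, and `m_inlineCapacity` into the spare bits of its pointer fields.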
diff --git a/Source/WTF/wtf/CompactRefPtrTuple.h b/Source/WTF/wtf/CompactRefPtrTuple.h
new file mode 100644 (file)
index 0000000..bf3d3fe
--- /dev/null
@@ -0,0 +1,69 @@
+/*
+ * Copyright (C) 2020 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include <wtf/CompactPointerTuple.h>
+#include <wtf/RefPtr.h>
+
+namespace WTF {
+
+template<typename T, typename Type>
+class CompactRefPtrTuple final {
+    WTF_MAKE_FAST_ALLOCATED;
+    WTF_MAKE_NONCOPYABLE(CompactRefPtrTuple);
+public:
+    CompactRefPtrTuple() = default;
+    ~CompactRefPtrTuple()
+    {
+        derefIfNotNull(m_data.pointer());
+    }
+
+    T* pointer() const
+    {
+        return m_data.pointer();
+    }
+
+    void setPointer(T* pointer)
+    {
+        refIfNotNull(pointer);
+        auto* old = m_data.pointer();
+        m_data.setPointer(pointer);
+        derefIfNotNull(old);
+    }
+
+    Type type() const { return m_data.type(); }
+    void setType(Type type)
+    {
+        m_data.setType(type);
+    }
+
+private:
+    CompactPointerTuple<T*, Type> m_data;
+};
+
+} // namespace WTF
+
+using WTF::CompactRefPtrTuple;
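`CompactRefPtrTuple::setPointer` above refs the incoming pointer before derefing the old one, which keeps self-assignment safe. A minimal sketch of just that ordering discipline (toy ref-counted type, not WTF code):

```cpp
#include <cassert>

struct Counted {
    unsigned refCount { 0 };
    void ref() { ++refCount; }
    void deref() { --refCount; }
};

struct TuplePtr {
    Counted* ptr { nullptr };

    void setPointer(Counted* p)
    {
        if (p)
            p->ref();          // ref the new pointer first...
        Counted* old = ptr;
        ptr = p;
        if (old)
            old->deref();      // ...then deref the old one, so setPointer(ptr)
                               // never drops the last reference mid-swap.
    }
};
```

Had the order been reversed, `t.setPointer(t.ptr)` could destroy the object before re-reffing it.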
index 6bbbefe..0043563 100644 (file)
@@ -189,8 +189,8 @@ public:
 
     static constexpr unsigned MaxLength = StringImplShape::MaxLength;
 
-    // The bottom 6 bits in the hash are flags.
-    static constexpr const unsigned s_flagCount = 6;
+    // The bottom 6 bits in the hash are flags, but reserve 8 bits since StringHash only has 24 bits anyway.
+    static constexpr const unsigned s_flagCount = 8;
 
 private:
     static constexpr const unsigned s_flagMask = (1u << s_flagCount) - 1;
index ed0f13e..6310dc5 100644 (file)
@@ -41,7 +41,7 @@ public:
     static constexpr Flags s_flagIsRegistered = 0b010u;
     static constexpr Flags s_flagIsPrivate = 0b100u;
 
-    unsigned hashForSymbol() const { return m_hashForSymbol; }
+    unsigned hashForSymbol() const { return m_hashForSymbolShiftedWithFlagCount >> s_flagCount; }
     bool isNullSymbol() const { return m_flags & s_flagIsNullSymbol; }
     bool isRegistered() const { return m_flags & s_flagIsRegistered; }
     bool isPrivate() const { return m_flags & s_flagIsPrivate; }
@@ -60,7 +60,7 @@ public:
         constexpr StaticSymbolImpl(const char (&characters)[characterCount], Flags flags = s_flagDefault)
             : StringImplShape(s_refCountFlagIsStaticString, characterCount - 1, characters,
                 s_hashFlag8BitBuffer | s_hashFlagDidReportCost | StringSymbol | BufferInternal | (StringHasher::computeLiteralHashAndMaskTop8Bits(characters) << s_flagCount), ConstructWithConstExpr)
-            , m_hashForSymbol(StringHasher::computeLiteralHashAndMaskTop8Bits(characters) << s_flagCount)
+            , m_hashForSymbolShiftedWithFlagCount(StringHasher::computeLiteralHashAndMaskTop8Bits(characters) << s_flagCount)
             , m_flags(flags)
         {
         }
@@ -69,7 +69,7 @@ public:
         constexpr StaticSymbolImpl(const char16_t (&characters)[characterCount], Flags flags = s_flagDefault)
             : StringImplShape(s_refCountFlagIsStaticString, characterCount - 1, characters,
                 s_hashFlagDidReportCost | StringSymbol | BufferInternal | (StringHasher::computeLiteralHashAndMaskTop8Bits(characters) << s_flagCount), ConstructWithConstExpr)
-            , m_hashForSymbol(StringHasher::computeLiteralHashAndMaskTop8Bits(characters) << s_flagCount)
+            , m_hashForSymbolShiftedWithFlagCount(StringHasher::computeLiteralHashAndMaskTop8Bits(characters) << s_flagCount)
             , m_flags(flags)
         {
         }
@@ -80,7 +80,7 @@ public:
         }
 
         StringImpl* m_owner { nullptr }; // We do not make StaticSymbolImpl BufferSubstring. Thus we can make this nullptr.
-        unsigned m_hashForSymbol;
+        unsigned m_hashForSymbolShiftedWithFlagCount;
         Flags m_flags;
     };
 
@@ -92,7 +92,7 @@ protected:
     SymbolImpl(const LChar* characters, unsigned length, Ref<StringImpl>&& base, Flags flags = s_flagDefault)
         : UniquedStringImpl(CreateSymbol, characters, length)
         , m_owner(&base.leakRef())
-        , m_hashForSymbol(nextHashForSymbol())
+        , m_hashForSymbolShiftedWithFlagCount(nextHashForSymbol())
         , m_flags(flags)
     {
         ASSERT(StringImpl::tailOffset<StringImpl*>() == OBJECT_OFFSETOF(SymbolImpl, m_owner));
@@ -101,7 +101,7 @@ protected:
     SymbolImpl(const UChar* characters, unsigned length, Ref<StringImpl>&& base, Flags flags = s_flagDefault)
         : UniquedStringImpl(CreateSymbol, characters, length)
         , m_owner(&base.leakRef())
-        , m_hashForSymbol(nextHashForSymbol())
+        , m_hashForSymbolShiftedWithFlagCount(nextHashForSymbol())
         , m_flags(flags)
     {
         ASSERT(StringImpl::tailOffset<StringImpl*>() == OBJECT_OFFSETOF(SymbolImpl, m_owner));
@@ -110,7 +110,7 @@ protected:
     SymbolImpl(Flags flags = s_flagDefault)
         : UniquedStringImpl(CreateSymbol)
         , m_owner(StringImpl::empty())
-        , m_hashForSymbol(nextHashForSymbol())
+        , m_hashForSymbolShiftedWithFlagCount(nextHashForSymbol())
         , m_flags(flags | s_flagIsNullSymbol)
     {
         ASSERT(StringImpl::tailOffset<StringImpl*>() == OBJECT_OFFSETOF(SymbolImpl, m_owner));
@@ -119,7 +119,7 @@ protected:
     // The pointer to the owner string should be immediately following after the StringImpl layout,
     // since we would like to align the layout of SymbolImpl to the one of BufferSubstring StringImpl.
     StringImpl* m_owner;
-    unsigned m_hashForSymbol;
+    unsigned m_hashForSymbolShiftedWithFlagCount;
     Flags m_flags { s_flagDefault };
 };
 static_assert(sizeof(SymbolImpl) == sizeof(SymbolImpl::StaticSymbolImpl), "");
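The `SymbolImpl` rename above reflects that the symbol hash is now stored pre-shifted by `s_flagCount` (raised to 8), leaving the bottom byte of the field free, consistent with StringImpl's hash-plus-flags layout; `hashForSymbol()` shifts the flags back out. The arithmetic, as a sketch (24-bit hashes per `computeLiteralHashAndMaskTop8Bits`):

```cpp
#include <cassert>

constexpr unsigned s_flagCount = 8;

// Store: a 24-bit hash shifted up by the flag count; bottom 8 bits stay zero.
inline unsigned storeShifted(unsigned hash24)
{
    return hash24 << s_flagCount;
}

// Retrieve: shift the flag bits back out, recovering the original hash.
inline unsigned hashForSymbol(unsigned stored)
{
    return stored >> s_flagCount;
}
```

Since the top 8 bits of the hash are masked off and the bottom 8 are reserved for flags, the shifted value still fits in 32 bits with no information lost.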