GetByVal should use polymorphic access and hook into a status object
author     sbarati@apple.com <sbarati@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 20 Nov 2019 05:53:38 +0000 (05:53 +0000)
committer  sbarati@apple.com <sbarati@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 20 Nov 2019 05:53:38 +0000 (05:53 +0000)
https://bugs.webkit.org/show_bug.cgi?id=202767

Reviewed by Keith Miller.

This patch puts get_by_val in our normal IC caching infrastructure. This means
building it on top of StructureStubInfo and PolymorphicAccess. For this to
work, AccessCase now supports all the array load variants that we used to have
fast paths for. For identifier-based variants, we just fall back to the
code we've already implemented, but only after doing a runtime check that
the identifier matches the expected identifier. This allows us to reuse all
the IC infrastructure we have for get_by_id.
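
To make the runtime identifier check concrete, here is a simplified,
self-contained sketch (illustrative names only, not JSC's actual types):
because identifiers are uniqued, the guard is a single pointer comparison
between the incoming key's uid and the uid the access case was cached for,
and only on a match does the cached get_by_id-style path run.

    #include <string>

    // Stand-in for a uniqued string/symbol impl; uniquing makes pointer
    // identity equivalent to value identity, so the guard is one comparison.
    struct UniquedKeyImpl { std::string characters; };

    struct IdentifierCaseModel {
        const UniquedKeyImpl* expectedUid; // the identifier this case was built for

        // True: the cached fast path may run. False: fall through to the next
        // access case or, ultimately, the generic slow path.
        bool identifierGuard(const UniquedKeyImpl* incomingUid) const
        {
            return incomingUid == expectedUid;
        }
    };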

Our compilation strategy is that the baseline JIT always emits a get_by_val
IC. If that IC goes to the slow path, the DFG/FTL won't also emit the same IC,
since it's probable that we're seeing a megamorphic switch over strings. This
was needed to keep this patch neutral on Speedometer 2. It's likely there is
room to improve this heuristic: https://bugs.webkit.org/show_bug.cgi?id=204336
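
A minimal sketch of that heuristic, using hypothetical names rather than the
real JSC API:

    // Hypothetical tiering decision modeling the paragraph above: the baseline
    // JIT always emits a get_by_val IC; the DFG/FTL only emit one again if the
    // baseline IC never went to the slow path (a slow-path hit is treated as a
    // sign of a megamorphic switch over string keys).
    bool shouldEmitGetByValIC(bool isBaselineTier, bool baselineICTookSlowPath)
    {
        if (isBaselineTier)
            return true;
        return !baselineICTookSlowPath;
    }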

This now allows us to have inline caches that contain both array loads and uses
of different identifiers; they just show up as different access cases inside
polymorphic access.
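
Conceptually, the stub is now a list of heterogeneous cases tried in order,
with the slow path as the final fallback. A simplified stand-alone model (not
JSC's PolymorphicAccess; all names here are illustrative):

    #include <cstdint>
    #include <functional>
    #include <optional>
    #include <vector>

    // Each case either handles the access (array load, load of "foo", load of
    // "bar", ...) or declines, in which case the next case is tried.
    using AccessHandler = std::function<std::optional<int64_t>(void* base, void* property)>;

    struct PolymorphicStubModel {
        std::vector<AccessHandler> cases;

        int64_t get(void* base, void* property, const AccessHandler& slowPath) const
        {
            for (const auto& accessCase : cases) {
                if (auto result = accessCase(base, property))
                    return *result;
            }
            // The slow path always produces a value.
            return *slowPath(base, property);
        }
    };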

This patch is a progression on various microbenchmarks, especially those with
uses of a fixed set of multiple identifiers. It's neutral on JetStream 2 and
Speedometer 2.

This patch also hooks get_by_val ICs into our ICStatus infrastructure. This
paves the way for eagerly throwing away baseline code, since when we go for an
FTL compile, we will be able to use the IC status from the prior compile
without relying on baseline-specific data structures.

There are a few interesting tidbits in this patch that are worth
highlighting.
- Unlike get_by_id, when we take an IC snapshot for a get_by_val
IC, we're not guaranteed the various identifiers in question will outlive
the compile (get_by_id ensures this since they're in the constant pool of
CodeBlock). For get_by_val, the Identifiers in question are dynamic fields
of AccessCase, and AccessCase may get destroyed as we're compiling concurrently.
Also, String's reference counting isn't thread-safe, so we can't just ref it.
Instead, we use a Box<Identifier> inside AccessCase. This allows us to safely
ref the Box without refing the underlying String. We're not worried about the
Box being destroyed while we're doing this, since we're holding a lock while
taking an IC snapshot inside GetByStatus. (A simplified sketch of this pattern
follows these notes.)
- We no longer hold onto the actual JS symbol object in the inline cache,
which is what we used to do for the by-val inline cache infos. Instead, this
patch extends the CheckStringIdent node to be able to handle symbols as well,
and renames CheckStringIdent to CheckIdent.
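
To illustrate the Box<Identifier> point above, here is a rough, self-contained
model using std::shared_ptr as a stand-in for the box (this is not WTF's Box
implementation): the outer handle carries its own atomic reference count, so
the compiler thread can keep the identifier alive without ever touching the
string's non-thread-safe refcount, and the handle is only copied while the IC
lock is held.

    #include <memory>
    #include <string>

    struct NonThreadSafeString {
        std::string characters;
        unsigned refCount { 1 }; // illustrative: only ever touched on the owning thread
    };

    // The "box": copying an IdentifierBox bumps only the shared_ptr control
    // block's atomic count; NonThreadSafeString::refCount is never touched.
    using IdentifierBox = std::shared_ptr<NonThreadSafeString>;

    struct AccessCaseModel {
        IdentifierBox identifier;
    };

    // Called with the IC lock held. The returned copy stays valid even if the
    // AccessCaseModel is destroyed while a concurrent compile is in flight.
    IdentifierBox snapshotIdentifier(const AccessCaseModel& accessCase)
    {
        return accessCase.identifier;
    }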

This patch also renames various IC related helpers from GetById* to GetBy*,
since they can both be used by get_by_val and get_by_id.

* JavaScriptCore.xcodeproj/project.pbxproj:
* Sources.txt:
* bytecode/AccessCase.cpp:
(JSC::AccessCase::AccessCase):
(JSC::AccessCase::create):
(JSC::AccessCase::fromStructureStubInfo):
(JSC::AccessCase::commit):
(JSC::AccessCase::guardedByStructureCheck const):
(JSC::AccessCase::guardedByStructureCheckSkippingConstantIdentifierCheck const):
(JSC::AccessCase::requiresIdentifierNameMatch const):
(JSC::AccessCase::requiresInt32PropertyCheck const):
(JSC::AccessCase::needsScratchFPR const):
(JSC::AccessCase::forEachDependentCell const):
(JSC::AccessCase::doesCalls const):
(JSC::AccessCase::canReplace const):
(JSC::AccessCase::dump const):
(JSC::AccessCase::generateWithGuard):
(JSC::AccessCase::generate):
(JSC::AccessCase::generateImpl):
(JSC::AccessCase::toTypedArrayType):
(JSC::AccessCase::checkConsistency):
* bytecode/AccessCase.h:
(JSC::AccessCase::uid const):
(JSC::AccessCase::identifier const):
(JSC::AccessCase::checkConsistency):
(JSC::AccessCase::AccessCase):
* bytecode/GetByIdStatus.cpp: Removed.
* bytecode/GetByIdStatus.h: Removed.
* bytecode/GetByIdVariant.cpp:
(JSC::GetByIdVariant::GetByIdVariant):
(JSC::GetByIdVariant::operator=):
(JSC::GetByIdVariant::attemptToMerge):
* bytecode/GetByIdVariant.h:
(JSC::GetByIdVariant::domAttribute const):
(JSC::GetByIdVariant::identifier const):
* bytecode/GetByStatus.cpp: Copied from Source/JavaScriptCore/bytecode/GetByIdStatus.cpp.
(JSC::GetByStatus::appendVariant):
(JSC::GetByStatus::computeFromLLInt):
(JSC::GetByStatus::computeFor):
(JSC::GetByStatus::GetByStatus):
(JSC::GetByStatus::computeForStubInfoWithoutExitSiteFeedback):
(JSC::GetByStatus::makesCalls const):
(JSC::GetByStatus::slowVersion const):
(JSC::GetByStatus::merge):
(JSC::GetByStatus::filter):
(JSC::GetByStatus::markIfCheap):
(JSC::GetByStatus::finalize):
(JSC::GetByStatus::singleIdentifier const):
(JSC::GetByStatus::dump const):
(JSC::GetByIdStatus::appendVariant): Deleted.
(JSC::GetByIdStatus::computeFromLLInt): Deleted.
(JSC::GetByIdStatus::computeFor): Deleted.
(JSC::GetByIdStatus::computeForStubInfo): Deleted.
(JSC::GetByIdStatus::GetByIdStatus): Deleted.
(JSC::GetByIdStatus::computeForStubInfoWithoutExitSiteFeedback): Deleted.
(JSC::GetByIdStatus::makesCalls const): Deleted.
(JSC::GetByIdStatus::slowVersion const): Deleted.
(JSC::GetByIdStatus::merge): Deleted.
(JSC::GetByIdStatus::filter): Deleted.
(JSC::GetByIdStatus::markIfCheap): Deleted.
(JSC::GetByIdStatus::finalize): Deleted.
(JSC::GetByIdStatus::dump const): Deleted.
* bytecode/GetByStatus.h: Copied from Source/JavaScriptCore/bytecode/GetByIdStatus.h.
(JSC::GetByStatus::GetByStatus):
(JSC::GetByStatus::moduleNamespaceObject const):
(JSC::GetByStatus::moduleEnvironment const):
(JSC::GetByStatus::scopeOffset const):
(JSC::GetByIdStatus::GetByIdStatus): Deleted.
(JSC::GetByIdStatus::state const): Deleted.
(JSC::GetByIdStatus::isSet const): Deleted.
(JSC::GetByIdStatus::operator bool const): Deleted.
(JSC::GetByIdStatus::isSimple const): Deleted.
(JSC::GetByIdStatus::isCustom const): Deleted.
(JSC::GetByIdStatus::isModuleNamespace const): Deleted.
(JSC::GetByIdStatus::numVariants const): Deleted.
(JSC::GetByIdStatus::variants const): Deleted.
(JSC::GetByIdStatus::at const): Deleted.
(JSC::GetByIdStatus::operator[] const): Deleted.
(JSC::GetByIdStatus::takesSlowPath const): Deleted.
(JSC::GetByIdStatus::wasSeenInJIT const): Deleted.
(JSC::GetByIdStatus::moduleNamespaceObject const): Deleted.
(JSC::GetByIdStatus::moduleEnvironment const): Deleted.
(JSC::GetByIdStatus::scopeOffset const): Deleted.
* bytecode/GetterSetterAccessCase.cpp:
(JSC::GetterSetterAccessCase::GetterSetterAccessCase):
(JSC::GetterSetterAccessCase::create):
* bytecode/GetterSetterAccessCase.h:
* bytecode/ICStatusMap.h:
* bytecode/InByIdStatus.cpp:
(JSC::InByIdStatus::computeForStubInfoWithoutExitSiteFeedback):
* bytecode/InlineAccess.cpp:
(JSC::InlineAccess::generateSelfPropertyAccess):
(JSC::InlineAccess::canGenerateSelfPropertyReplace):
(JSC::InlineAccess::generateSelfPropertyReplace):
(JSC::InlineAccess::isCacheableArrayLength):
(JSC::InlineAccess::generateArrayLength):
(JSC::InlineAccess::isCacheableStringLength):
(JSC::InlineAccess::generateStringLength):
(JSC::InlineAccess::generateSelfInAccess):
* bytecode/InstanceOfAccessCase.cpp:
(JSC::InstanceOfAccessCase::InstanceOfAccessCase):
* bytecode/InstanceOfStatus.cpp:
(JSC::InstanceOfStatus::computeForStubInfo):
* bytecode/IntrinsicGetterAccessCase.cpp:
(JSC::IntrinsicGetterAccessCase::IntrinsicGetterAccessCase):
(JSC::IntrinsicGetterAccessCase::create):
* bytecode/IntrinsicGetterAccessCase.h:
* bytecode/ModuleNamespaceAccessCase.cpp:
(JSC::ModuleNamespaceAccessCase::ModuleNamespaceAccessCase):
(JSC::ModuleNamespaceAccessCase::create):
* bytecode/ModuleNamespaceAccessCase.h:
* bytecode/PolymorphicAccess.cpp:
(JSC::AccessGenerationState::preserveLiveRegistersToStackForCall):
(JSC::PolymorphicAccess::addCases):
(JSC::PolymorphicAccess::addCase):
(JSC::PolymorphicAccess::commit):
(JSC::PolymorphicAccess::regenerate):
(WTF::printInternal):
* bytecode/PolymorphicAccess.h:
* bytecode/ProxyableAccessCase.cpp:
(JSC::ProxyableAccessCase::ProxyableAccessCase):
(JSC::ProxyableAccessCase::create):
* bytecode/ProxyableAccessCase.h:
* bytecode/PutByIdStatus.cpp:
(JSC::PutByIdStatus::computeForStubInfo):
* bytecode/RecordedStatuses.cpp:
(JSC::RecordedStatuses::addGetByStatus):
(JSC::RecordedStatuses::addGetByIdStatus): Deleted.
* bytecode/RecordedStatuses.h:
* bytecode/StructureStubInfo.cpp:
(JSC::StructureStubInfo::StructureStubInfo):
(JSC::StructureStubInfo::initGetByIdSelf):
(JSC::StructureStubInfo::initArrayLength):
(JSC::StructureStubInfo::initStringLength):
(JSC::StructureStubInfo::initPutByIdReplace):
(JSC::StructureStubInfo::initInByIdSelf):
(JSC::StructureStubInfo::deref):
(JSC::StructureStubInfo::aboutToDie):
(JSC::StructureStubInfo::addAccessCase):
(JSC::StructureStubInfo::reset):
(JSC::StructureStubInfo::visitWeakReferences):
(JSC::StructureStubInfo::propagateTransitions):
(JSC::StructureStubInfo::summary const):
(JSC::StructureStubInfo::containsPC const):
(JSC::StructureStubInfo::setCacheType):
(JSC::StructureStubInfo::checkConsistency):
* bytecode/StructureStubInfo.h:
(JSC::StructureStubInfo::getByIdSelfIdentifier):
(JSC::StructureStubInfo::thisValueIsInThisGPR const):
(JSC::StructureStubInfo::checkConsistency):
(JSC::StructureStubInfo::cacheType const):
(JSC::appropriateOptimizingGetByIdFunction):
(JSC::appropriateGenericGetByIdFunction):
* dfg/DFGAbstractInterpreterInlines.h:
(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
(JSC::DFG::AbstractInterpreter<AbstractStateType>::filterICStatus):
* dfg/DFGArgumentsEliminationPhase.cpp:
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::handleDOMJITGetter):
(JSC::DFG::ByteCodeParser::handleModuleNamespaceLoad):
(JSC::DFG::ByteCodeParser::load):
(JSC::DFG::ByteCodeParser::handleGetById):
(JSC::DFG::ByteCodeParser::parseGetById):
(JSC::DFG::ByteCodeParser::parseBlock):
(JSC::DFG::ByteCodeParser::handlePutByVal):
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGClobbersExitState.cpp:
(JSC::DFG::clobbersExitState):
* dfg/DFGConstantFoldingPhase.cpp:
(JSC::DFG::ConstantFoldingPhase::foldConstants):
* dfg/DFGDesiredIdentifiers.cpp:
(JSC::DFG::DesiredIdentifiers::processCodeBlockIdentifiersIfNeeded):
(JSC::DFG::DesiredIdentifiers::ensure):
(JSC::DFG::DesiredIdentifiers::at const):
(JSC::DFG::DesiredIdentifiers::reallyAdd):
* dfg/DFGDesiredIdentifiers.h:
* dfg/DFGDoesGC.cpp:
(JSC::DFG::doesGC):
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::fixupNode):
* dfg/DFGGraph.cpp:
(JSC::DFG::Graph::dump):
* dfg/DFGGraph.h:
* dfg/DFGInPlaceAbstractState.cpp:
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::link):
* dfg/DFGJITCompiler.h:
(JSC::DFG::JITCompiler::addGetByVal):
* dfg/DFGMayExit.cpp:
* dfg/DFGNode.h:
(JSC::DFG::Node::hasUidOperand):
(JSC::DFG::Node::hasGetByStatus):
(JSC::DFG::Node::getByStatus):
(JSC::DFG::Node::hasGetByIdStatus): Deleted.
(JSC::DFG::Node::getByIdStatus): Deleted.
* dfg/DFGNodeType.h:
* dfg/DFGObjectAllocationSinkingPhase.cpp:
* dfg/DFGPredictionPropagationPhase.cpp:
* dfg/DFGSafeToExecute.h:
(JSC::DFG::safeToExecute):
* dfg/DFGSpeculativeJIT.cpp:
(JSC::DFG::SpeculativeJIT::compileGetById):
(JSC::DFG::SpeculativeJIT::compileCheckIdent):
(JSC::DFG::SpeculativeJIT::compileCheckStringIdent): Deleted.
* dfg/DFGSpeculativeJIT.h:
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::cachedGetByIdWithThis):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::cachedGetByIdWithThis):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGVarargsForwardingPhase.cpp:
* ftl/FTLCapabilities.cpp:
(JSC::FTL::canCompile):
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::compileNode):
(JSC::FTL::DFG::LowerDFGToB3::compileCheckIdent):
(JSC::FTL::DFG::LowerDFGToB3::compileGetById):
(JSC::FTL::DFG::LowerDFGToB3::compileGetByVal):
(JSC::FTL::DFG::LowerDFGToB3::getByIdWithThis):
(JSC::FTL::DFG::LowerDFGToB3::compileCheckStringIdent): Deleted.
* jit/ICStats.h:
* jit/JIT.cpp:
(JSC::JIT::privateCompileSlowCases):
(JSC::JIT::link):
* jit/JIT.h:
* jit/JITInlineCacheGenerator.cpp:
(JSC::garbageStubInfo):
(JSC::JITGetByIdWithThisGenerator::JITGetByIdWithThisGenerator):
(JSC::JITInstanceOfGenerator::JITInstanceOfGenerator):
(JSC::JITGetByValGenerator::JITGetByValGenerator):
(JSC::JITGetByValGenerator::generateFastPath):
(JSC::JITGetByValGenerator::finalize):
* jit/JITInlineCacheGenerator.h:
(JSC::JITGetByValGenerator::JITGetByValGenerator):
(JSC::JITGetByValGenerator::slowPathJump const):
* jit/JITInlines.h:
(JSC::JIT::emitDoubleGetByVal): Deleted.
(JSC::JIT::emitContiguousGetByVal): Deleted.
(JSC::JIT::emitArrayStorageGetByVal): Deleted.
* jit/JITOperations.cpp:
(JSC::getByVal):
(JSC::tryGetByValOptimize): Deleted.
* jit/JITOperations.h:
* jit/JITPropertyAccess.cpp:
(JSC::JIT::emit_op_get_by_val):
(JSC::JIT::emitSlow_op_get_by_val):
(JSC::JIT::emit_op_try_get_by_id):
(JSC::JIT::emit_op_get_by_id_direct):
(JSC::JIT::emit_op_get_by_id):
(JSC::JIT::emit_op_get_by_id_with_this):
(JSC::JIT::emitGetByValWithCachedId): Deleted.
(JSC::JIT::privateCompileGetByVal): Deleted.
(JSC::JIT::privateCompileGetByValWithCachedId): Deleted.
(JSC::JIT::emitDirectArgumentsGetByVal): Deleted.
(JSC::JIT::emitScopedArgumentsGetByVal): Deleted.
(JSC::JIT::emitIntTypedArrayGetByVal): Deleted.
(JSC::JIT::emitFloatTypedArrayGetByVal): Deleted.
* jit/JITPropertyAccess32_64.cpp:
(JSC::JIT::emit_op_get_by_val):
(JSC::JIT::emit_op_try_get_by_id):
(JSC::JIT::emit_op_get_by_id_direct):
(JSC::JIT::emit_op_get_by_id):
(JSC::JIT::emit_op_get_by_id_with_this):
(JSC::JIT::emitGetByValWithCachedId): Deleted.
* jit/Repatch.cpp:
(JSC::appropriateOptimizingGetByFunction):
(JSC::appropriateGetByFunction):
(JSC::tryCacheGetBy):
(JSC::repatchGetBy):
(JSC::tryCacheArrayGetByVal):
(JSC::repatchArrayGetByVal):
(JSC::tryCachePutByID):
(JSC::tryCacheInByID):
(JSC::tryCacheInstanceOf):
(JSC::resetGetBy):
(JSC::appropriateOptimizingGetByIdFunction): Deleted.
(JSC::appropriateGetByIdFunction): Deleted.
(JSC::tryCacheGetByID): Deleted.
(JSC::repatchGetByID): Deleted.
(JSC::resetGetByID): Deleted.
* jit/Repatch.h:
* llint/LowLevelInterpreter.h:
* runtime/DOMAnnotation.h:
* runtime/JSCJSValue.cpp:
(JSC::JSValue::dumpInContextAssumingStructure const):
* runtime/Structure.h:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@252684 268f45cc-cd09-0410-ab3c-d52691b4dbfc

73 files changed:
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/Sources.txt
Source/JavaScriptCore/bytecode/AccessCase.cpp
Source/JavaScriptCore/bytecode/AccessCase.h
Source/JavaScriptCore/bytecode/GetByIdVariant.cpp
Source/JavaScriptCore/bytecode/GetByIdVariant.h
Source/JavaScriptCore/bytecode/GetByStatus.cpp [moved from Source/JavaScriptCore/bytecode/GetByIdStatus.cpp with 65% similarity]
Source/JavaScriptCore/bytecode/GetByStatus.h [moved from Source/JavaScriptCore/bytecode/GetByIdStatus.h with 75% similarity]
Source/JavaScriptCore/bytecode/GetterSetterAccessCase.cpp
Source/JavaScriptCore/bytecode/GetterSetterAccessCase.h
Source/JavaScriptCore/bytecode/ICStatusMap.h
Source/JavaScriptCore/bytecode/InByIdStatus.cpp
Source/JavaScriptCore/bytecode/InlineAccess.cpp
Source/JavaScriptCore/bytecode/InstanceOfAccessCase.cpp
Source/JavaScriptCore/bytecode/InstanceOfStatus.cpp
Source/JavaScriptCore/bytecode/IntrinsicGetterAccessCase.cpp
Source/JavaScriptCore/bytecode/IntrinsicGetterAccessCase.h
Source/JavaScriptCore/bytecode/ModuleNamespaceAccessCase.cpp
Source/JavaScriptCore/bytecode/ModuleNamespaceAccessCase.h
Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp
Source/JavaScriptCore/bytecode/PolymorphicAccess.h
Source/JavaScriptCore/bytecode/ProxyableAccessCase.cpp
Source/JavaScriptCore/bytecode/ProxyableAccessCase.h
Source/JavaScriptCore/bytecode/PutByIdStatus.cpp
Source/JavaScriptCore/bytecode/RecordedStatuses.cpp
Source/JavaScriptCore/bytecode/RecordedStatuses.h
Source/JavaScriptCore/bytecode/StructureStubInfo.cpp
Source/JavaScriptCore/bytecode/StructureStubInfo.h
Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp
Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
Source/JavaScriptCore/dfg/DFGClobberize.h
Source/JavaScriptCore/dfg/DFGClobbersExitState.cpp
Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp
Source/JavaScriptCore/dfg/DFGDesiredIdentifiers.cpp
Source/JavaScriptCore/dfg/DFGDesiredIdentifiers.h
Source/JavaScriptCore/dfg/DFGDoesGC.cpp
Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
Source/JavaScriptCore/dfg/DFGGraph.cpp
Source/JavaScriptCore/dfg/DFGGraph.h
Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.cpp
Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
Source/JavaScriptCore/dfg/DFGJITCompiler.h
Source/JavaScriptCore/dfg/DFGMayExit.cpp
Source/JavaScriptCore/dfg/DFGNode.h
Source/JavaScriptCore/dfg/DFGNodeType.h
Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp
Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
Source/JavaScriptCore/dfg/DFGSafeToExecute.h
Source/JavaScriptCore/dfg/DFGSpeculativeJIT.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT.h
Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
Source/JavaScriptCore/dfg/DFGVarargsForwardingPhase.cpp
Source/JavaScriptCore/ftl/FTLCapabilities.cpp
Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
Source/JavaScriptCore/jit/ICStats.h
Source/JavaScriptCore/jit/JIT.cpp
Source/JavaScriptCore/jit/JIT.h
Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp
Source/JavaScriptCore/jit/JITInlineCacheGenerator.h
Source/JavaScriptCore/jit/JITInlines.h
Source/JavaScriptCore/jit/JITOperations.cpp
Source/JavaScriptCore/jit/JITOperations.h
Source/JavaScriptCore/jit/JITPropertyAccess.cpp
Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp
Source/JavaScriptCore/jit/Repatch.cpp
Source/JavaScriptCore/jit/Repatch.h
Source/JavaScriptCore/llint/LowLevelInterpreter.h
Source/JavaScriptCore/runtime/DOMAnnotation.h
Source/JavaScriptCore/runtime/JSCJSValue.cpp
Source/JavaScriptCore/runtime/Structure.h

index 11091a6..465ad85 100644 (file)
                0F93275B1C20BCDF00CF6564 /* dynbench.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F93275A1C20BCDF00CF6564 /* dynbench.cpp */; };
                0F93275F1C21EF7F00CF6564 /* JSObjectInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F93275E1C21EF7F00CF6564 /* JSObjectInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F93329E14CA7DC50085F3C6 /* CallLinkStatus.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F93329414CA7DC10085F3C6 /* CallLinkStatus.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               0F9332A014CA7DCD0085F3C6 /* GetByIdStatus.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F93329614CA7DC10085F3C6 /* GetByIdStatus.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F9332A014CA7DCD0085F3C6 /* GetByStatus.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F93329614CA7DC10085F3C6 /* GetByStatus.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F9332A414CA7DD90085F3C6 /* PutByIdStatus.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F93329A14CA7DC10085F3C6 /* PutByIdStatus.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F9332A514CA7DDD0085F3C6 /* StructureSet.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F93329B14CA7DC10085F3C6 /* StructureSet.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F93B4AA18B92C4D00178A3F /* PutByIdVariant.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F93B4A818B92C4D00178A3F /* PutByIdVariant.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F93275E1C21EF7F00CF6564 /* JSObjectInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSObjectInlines.h; sourceTree = "<group>"; };
                0F93329314CA7DC10085F3C6 /* CallLinkStatus.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CallLinkStatus.cpp; sourceTree = "<group>"; };
                0F93329414CA7DC10085F3C6 /* CallLinkStatus.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallLinkStatus.h; sourceTree = "<group>"; };
-               0F93329514CA7DC10085F3C6 /* GetByIdStatus.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = GetByIdStatus.cpp; sourceTree = "<group>"; };
-               0F93329614CA7DC10085F3C6 /* GetByIdStatus.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GetByIdStatus.h; sourceTree = "<group>"; };
+               0F93329514CA7DC10085F3C6 /* GetByStatus.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = GetByStatus.cpp; sourceTree = "<group>"; };
+               0F93329614CA7DC10085F3C6 /* GetByStatus.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GetByStatus.h; sourceTree = "<group>"; };
                0F93329914CA7DC10085F3C6 /* PutByIdStatus.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = PutByIdStatus.cpp; sourceTree = "<group>"; };
                0F93329A14CA7DC10085F3C6 /* PutByIdStatus.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = PutByIdStatus.h; sourceTree = "<group>"; };
                0F93329B14CA7DC10085F3C6 /* StructureSet.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StructureSet.h; sourceTree = "<group>"; };
                                14AD91161DCA97FD0014F9FE /* FunctionCodeBlock.cpp */,
                                14AD91071DCA92940014F9FE /* FunctionCodeBlock.h */,
                                1498CAD5214BF36D00710879 /* GetByIdMetadata.h */,
-                               0F93329514CA7DC10085F3C6 /* GetByIdStatus.cpp */,
-                               0F93329614CA7DC10085F3C6 /* GetByIdStatus.h */,
+                               0F93329514CA7DC10085F3C6 /* GetByStatus.cpp */,
+                               0F93329614CA7DC10085F3C6 /* GetByStatus.h */,
                                0F0332C118B01763005F979A /* GetByIdVariant.cpp */,
                                0F0332C218B01763005F979A /* GetByIdVariant.h */,
                                14AD91081DCA92940014F9FE /* GlobalCodeBlock.h */,
                                0F2B66E017B6B5AB00A7AE3F /* GenericTypedArrayView.h in Headers */,
                                0F2B66E117B6B5AB00A7AE3F /* GenericTypedArrayViewInlines.h in Headers */,
                                1498CAD6214BF36D00710879 /* GetByIdMetadata.h in Headers */,
-                               0F9332A014CA7DCD0085F3C6 /* GetByIdStatus.h in Headers */,
+                               0F9332A014CA7DCD0085F3C6 /* GetByStatus.h in Headers */,
                                0F0332C418B01763005F979A /* GetByIdVariant.h in Headers */,
                                7964656A1B952FF0003059EE /* GetPutInfo.h in Headers */,
                                534E03581E53BF2F00213F64 /* GetterSetterAccessCase.h in Headers */,
index 2d7da42..3d70ed8 100644 (file)
@@ -226,7 +226,7 @@ bytecode/ExitingInlineKind.cpp
 bytecode/ExitingJITType.cpp
 bytecode/FullCodeOrigin.cpp
 bytecode/FunctionCodeBlock.cpp
-bytecode/GetByIdStatus.cpp
+bytecode/GetByStatus.cpp
 bytecode/GetByIdVariant.cpp
 bytecode/GetterSetterAccessCase.cpp
 bytecode/ICStatusMap.cpp
index 857bc6e..43745e8 100644 (file)
@@ -54,17 +54,18 @@ namespace AccessCaseInternal {
 static constexpr bool verbose = false;
 }
 
-AccessCase::AccessCase(VM& vm, JSCell* owner, AccessType type, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
+AccessCase::AccessCase(VM& vm, JSCell* owner, AccessType type, const Identifier& identifier, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
     : m_type(type)
     , m_offset(offset)
     , m_polyProtoAccessChain(WTFMove(prototypeAccessChain))
+    , m_identifier(Box<Identifier>::create(identifier))
 {
     m_structure.setMayBeNull(vm, owner, structure);
     m_conditionSet = conditionSet;
     RELEASE_ASSERT(m_conditionSet.isValid());
 }
 
-std::unique_ptr<AccessCase> AccessCase::create(VM& vm, JSCell* owner, AccessType type, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
+std::unique_ptr<AccessCase> AccessCase::create(VM& vm, JSCell* owner, AccessType type, const Identifier& identifier, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
 {
     switch (type) {
     case InHit:
@@ -77,17 +78,33 @@ std::unique_ptr<AccessCase> AccessCase::create(VM& vm, JSCell* owner, AccessType
     case ModuleNamespaceLoad:
     case Replace:
     case InstanceOfGeneric:
+    case IndexedInt32Load:
+    case IndexedDoubleLoad:
+    case IndexedContiguousLoad:
+    case IndexedArrayStorageLoad:
+    case IndexedScopedArgumentsLoad:
+    case IndexedDirectArgumentsLoad:
+    case IndexedTypedArrayInt8Load:
+    case IndexedTypedArrayUint8Load:
+    case IndexedTypedArrayUint8ClampedLoad:
+    case IndexedTypedArrayInt16Load:
+    case IndexedTypedArrayUint16Load:
+    case IndexedTypedArrayInt32Load:
+    case IndexedTypedArrayUint32Load:
+    case IndexedTypedArrayFloat32Load:
+    case IndexedTypedArrayFloat64Load:
+    case IndexedStringLoad:
         RELEASE_ASSERT(!prototypeAccessChain);
         break;
     default:
         RELEASE_ASSERT_NOT_REACHED();
     };
 
-    return std::unique_ptr<AccessCase>(new AccessCase(vm, owner, type, offset, structure, conditionSet, WTFMove(prototypeAccessChain)));
+    return std::unique_ptr<AccessCase>(new AccessCase(vm, owner, type, identifier, offset, structure, conditionSet, WTFMove(prototypeAccessChain)));
 }
 
 std::unique_ptr<AccessCase> AccessCase::create(
-    VM& vm, JSCell* owner, PropertyOffset offset, Structure* oldStructure, Structure* newStructure,
+    VM& vm, JSCell* owner, const Identifier& identifier, PropertyOffset offset, Structure* oldStructure, Structure* newStructure,
     const ObjectPropertyConditionSet& conditionSet, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
 {
     RELEASE_ASSERT(oldStructure == newStructure->previousID());
@@ -100,7 +117,7 @@ std::unique_ptr<AccessCase> AccessCase::create(
         return nullptr;
     }
 
-    return std::unique_ptr<AccessCase>(new AccessCase(vm, owner, Transition, offset, newStructure, conditionSet, WTFMove(prototypeAccessChain)));
+    return std::unique_ptr<AccessCase>(new AccessCase(vm, owner, Transition, identifier, offset, newStructure, conditionSet, WTFMove(prototypeAccessChain)));
 }
 
 AccessCase::~AccessCase()
@@ -108,23 +125,28 @@ AccessCase::~AccessCase()
 }
 
 std::unique_ptr<AccessCase> AccessCase::fromStructureStubInfo(
-    VM& vm, JSCell* owner, StructureStubInfo& stubInfo)
+    VM& vm, JSCell* owner, const Identifier& identifier, StructureStubInfo& stubInfo)
 {
-    switch (stubInfo.cacheType) {
+    switch (stubInfo.cacheType()) {
     case CacheType::GetByIdSelf:
-        return ProxyableAccessCase::create(vm, owner, Load, stubInfo.u.byIdSelf.offset, stubInfo.u.byIdSelf.baseObjectStructure.get());
+        RELEASE_ASSERT(stubInfo.hasConstantIdentifier);
+        return ProxyableAccessCase::create(vm, owner, Load, identifier, stubInfo.u.byIdSelf.offset, stubInfo.u.byIdSelf.baseObjectStructure.get());
 
     case CacheType::PutByIdReplace:
-        return AccessCase::create(vm, owner, Replace, stubInfo.u.byIdSelf.offset, stubInfo.u.byIdSelf.baseObjectStructure.get());
+        RELEASE_ASSERT(stubInfo.hasConstantIdentifier);
+        return AccessCase::create(vm, owner, Replace, identifier, stubInfo.u.byIdSelf.offset, stubInfo.u.byIdSelf.baseObjectStructure.get());
 
     case CacheType::InByIdSelf:
-        return AccessCase::create(vm, owner, InHit, stubInfo.u.byIdSelf.offset, stubInfo.u.byIdSelf.baseObjectStructure.get());
+        RELEASE_ASSERT(stubInfo.hasConstantIdentifier);
+        return AccessCase::create(vm, owner, InHit, identifier, stubInfo.u.byIdSelf.offset, stubInfo.u.byIdSelf.baseObjectStructure.get());
 
     case CacheType::ArrayLength:
-        return AccessCase::create(vm, owner, AccessCase::ArrayLength);
+        RELEASE_ASSERT(stubInfo.hasConstantIdentifier);
+        return AccessCase::create(vm, owner, AccessCase::ArrayLength, identifier);
 
     case CacheType::StringLength:
-        return AccessCase::create(vm, owner, AccessCase::StringLength);
+        RELEASE_ASSERT(stubInfo.hasConstantIdentifier);
+        return AccessCase::create(vm, owner, AccessCase::StringLength, identifier);
 
     default:
         return nullptr;
@@ -148,7 +170,7 @@ std::unique_ptr<AccessCase> AccessCase::clone() const
     return result;
 }
 
-Vector<WatchpointSet*, 2> AccessCase::commit(VM& vm, const Identifier& ident)
+Vector<WatchpointSet*, 2> AccessCase::commit(VM& vm)
 {
     // It's fine to commit something that is already committed. That arises when we switch to using
     // newly allocated watchpoints. When it happens, it's not efficient - but we think that's OK
@@ -158,11 +180,11 @@ Vector<WatchpointSet*, 2> AccessCase::commit(VM& vm, const Identifier& ident)
     Vector<WatchpointSet*, 2> result;
     Structure* structure = this->structure();
 
-    if (!ident.isNull()) {
+    if (!m_identifier->isNull()) {
         if ((structure && structure->needImpurePropertyWatchpoint())
             || m_conditionSet.needImpurePropertyWatchpoint()
             || (m_polyProtoAccessChain && m_polyProtoAccessChain->needImpurePropertyWatchpoint()))
-            result.append(vm.ensureWatchpointSetForImpureProperty(ident));
+            result.append(vm.ensureWatchpointSetForImpureProperty(*m_identifier));
     }
 
     if (additionalSet())
@@ -181,7 +203,14 @@ Vector<WatchpointSet*, 2> AccessCase::commit(VM& vm, const Identifier& ident)
     return result;
 }
 
-bool AccessCase::guardedByStructureCheck() const
+bool AccessCase::guardedByStructureCheck(const StructureStubInfo& stubInfo) const
+{
+    if (!stubInfo.hasConstantIdentifier)
+        return false;
+    return guardedByStructureCheckSkippingConstantIdentifierCheck(); 
+}
+
+bool AccessCase::guardedByStructureCheckSkippingConstantIdentifierCheck() const
 {
     if (viaProxy())
         return false;
@@ -198,12 +227,167 @@ bool AccessCase::guardedByStructureCheck() const
     case InstanceOfHit:
     case InstanceOfMiss:
     case InstanceOfGeneric:
+    case IndexedInt32Load:
+    case IndexedDoubleLoad:
+    case IndexedContiguousLoad:
+    case IndexedArrayStorageLoad:
+    case IndexedScopedArgumentsLoad:
+    case IndexedDirectArgumentsLoad:
+    case IndexedTypedArrayInt8Load:
+    case IndexedTypedArrayUint8Load:
+    case IndexedTypedArrayUint8ClampedLoad:
+    case IndexedTypedArrayInt16Load:
+    case IndexedTypedArrayUint16Load:
+    case IndexedTypedArrayInt32Load:
+    case IndexedTypedArrayUint32Load:
+    case IndexedTypedArrayFloat32Load:
+    case IndexedTypedArrayFloat64Load:
+    case IndexedStringLoad:
         return false;
     default:
         return true;
     }
 }
 
+bool AccessCase::requiresIdentifierNameMatch() const
+{
+    switch (m_type) {
+    case Load:
+    // We don't currently have a by_val for these puts, but we do care about the identifier.
+    case Transition:
+    case Replace: 
+    case Miss:
+    case GetGetter:
+    case Getter:
+    case Setter:
+    case CustomValueGetter:
+    case CustomAccessorGetter:
+    case CustomValueSetter:
+    case CustomAccessorSetter:
+    case IntrinsicGetter:
+    case InHit:
+    case InMiss:
+    case ArrayLength:
+    case StringLength:
+    case DirectArgumentsLength:
+    case ScopedArgumentsLength:
+    case ModuleNamespaceLoad:
+        return true;
+    case InstanceOfHit:
+    case InstanceOfMiss:
+    case InstanceOfGeneric:
+    case IndexedInt32Load:
+    case IndexedDoubleLoad:
+    case IndexedContiguousLoad:
+    case IndexedArrayStorageLoad:
+    case IndexedScopedArgumentsLoad:
+    case IndexedDirectArgumentsLoad:
+    case IndexedTypedArrayInt8Load:
+    case IndexedTypedArrayUint8Load:
+    case IndexedTypedArrayUint8ClampedLoad:
+    case IndexedTypedArrayInt16Load:
+    case IndexedTypedArrayUint16Load:
+    case IndexedTypedArrayInt32Load:
+    case IndexedTypedArrayUint32Load:
+    case IndexedTypedArrayFloat32Load:
+    case IndexedTypedArrayFloat64Load:
+    case IndexedStringLoad:
+        return false;
+    }
+}
+
+bool AccessCase::requiresInt32PropertyCheck() const
+{
+    switch (m_type) {
+    case Load:
+    case Transition:
+    case Replace: 
+    case Miss:
+    case GetGetter:
+    case Getter:
+    case Setter:
+    case CustomValueGetter:
+    case CustomAccessorGetter:
+    case CustomValueSetter:
+    case CustomAccessorSetter:
+    case IntrinsicGetter:
+    case InHit:
+    case InMiss:
+    case ArrayLength:
+    case StringLength:
+    case DirectArgumentsLength:
+    case ScopedArgumentsLength:
+    case ModuleNamespaceLoad:
+    case InstanceOfHit:
+    case InstanceOfMiss:
+    case InstanceOfGeneric:
+        return false;
+    case IndexedInt32Load:
+    case IndexedDoubleLoad:
+    case IndexedContiguousLoad:
+    case IndexedArrayStorageLoad:
+    case IndexedScopedArgumentsLoad:
+    case IndexedDirectArgumentsLoad:
+    case IndexedTypedArrayInt8Load:
+    case IndexedTypedArrayUint8Load:
+    case IndexedTypedArrayUint8ClampedLoad:
+    case IndexedTypedArrayInt16Load:
+    case IndexedTypedArrayUint16Load:
+    case IndexedTypedArrayInt32Load:
+    case IndexedTypedArrayUint32Load:
+    case IndexedTypedArrayFloat32Load:
+    case IndexedTypedArrayFloat64Load:
+    case IndexedStringLoad:
+        return true;
+    }
+}
+
+bool AccessCase::needsScratchFPR() const
+{
+    switch (m_type) {
+    case Load:
+    case Transition:
+    case Replace: 
+    case Miss:
+    case GetGetter:
+    case Getter:
+    case Setter:
+    case CustomValueGetter:
+    case CustomAccessorGetter:
+    case CustomValueSetter:
+    case CustomAccessorSetter:
+    case IntrinsicGetter:
+    case InHit:
+    case InMiss:
+    case ArrayLength:
+    case StringLength:
+    case DirectArgumentsLength:
+    case ScopedArgumentsLength:
+    case ModuleNamespaceLoad:
+    case InstanceOfHit:
+    case InstanceOfMiss:
+    case InstanceOfGeneric:
+    case IndexedInt32Load:
+    case IndexedContiguousLoad:
+    case IndexedArrayStorageLoad:
+    case IndexedScopedArgumentsLoad:
+    case IndexedDirectArgumentsLoad:
+    case IndexedTypedArrayInt8Load:
+    case IndexedTypedArrayUint8Load:
+    case IndexedTypedArrayUint8ClampedLoad:
+    case IndexedTypedArrayInt16Load:
+    case IndexedTypedArrayUint16Load:
+    case IndexedTypedArrayInt32Load:
+    case IndexedStringLoad:
+        return false;
+    case IndexedDoubleLoad:
+    case IndexedTypedArrayFloat32Load:
+    case IndexedTypedArrayFloat64Load:
+    case IndexedTypedArrayUint32Load:
+        return true;
+    }
+}
+
 template<typename Functor>
 void AccessCase::forEachDependentCell(const Functor& functor) const
 {
@@ -263,6 +447,22 @@ void AccessCase::forEachDependentCell(const Functor& functor) const
     case DirectArgumentsLength:
     case ScopedArgumentsLength:
     case InstanceOfGeneric:
+    case IndexedInt32Load:
+    case IndexedDoubleLoad:
+    case IndexedContiguousLoad:
+    case IndexedArrayStorageLoad:
+    case IndexedScopedArgumentsLoad:
+    case IndexedDirectArgumentsLoad:
+    case IndexedTypedArrayInt8Load:
+    case IndexedTypedArrayUint8Load:
+    case IndexedTypedArrayUint8ClampedLoad:
+    case IndexedTypedArrayInt16Load:
+    case IndexedTypedArrayUint16Load:
+    case IndexedTypedArrayInt32Load:
+    case IndexedTypedArrayUint32Load:
+    case IndexedTypedArrayFloat32Load:
+    case IndexedTypedArrayFloat64Load:
+    case IndexedStringLoad:
         break;
     }
 }
@@ -297,6 +497,22 @@ bool AccessCase::doesCalls(Vector<JSCell*>* cellsToMarkIfDoesCalls) const
     case InstanceOfHit:
     case InstanceOfMiss:
     case InstanceOfGeneric:
+    case IndexedInt32Load:
+    case IndexedDoubleLoad:
+    case IndexedContiguousLoad:
+    case IndexedArrayStorageLoad:
+    case IndexedScopedArgumentsLoad:
+    case IndexedDirectArgumentsLoad:
+    case IndexedTypedArrayInt8Load:
+    case IndexedTypedArrayUint8Load:
+    case IndexedTypedArrayUint8ClampedLoad:
+    case IndexedTypedArrayInt16Load:
+    case IndexedTypedArrayUint16Load:
+    case IndexedTypedArrayInt32Load:
+    case IndexedTypedArrayUint32Load:
+    case IndexedTypedArrayFloat32Load:
+    case IndexedTypedArrayFloat64Load:
+    case IndexedStringLoad:
         doesCalls = false;
         break;
     }
@@ -330,13 +546,33 @@ bool AccessCase::canReplace(const AccessCase& other) const
     //
     // Note that if A->guardedByStructureCheck() && B->guardedByStructureCheck() then
     // A->canReplace(B) == B->canReplace(A).
+
+    if (*m_identifier != *other.m_identifier)
+        return false;
     
     switch (type()) {
+    case IndexedInt32Load:
+    case IndexedDoubleLoad:
+    case IndexedContiguousLoad:
+    case IndexedArrayStorageLoad:
     case ArrayLength:
     case StringLength:
     case DirectArgumentsLength:
     case ScopedArgumentsLength:
+    case IndexedScopedArgumentsLoad:
+    case IndexedDirectArgumentsLoad:
+    case IndexedTypedArrayInt8Load:
+    case IndexedTypedArrayUint8Load:
+    case IndexedTypedArrayUint8ClampedLoad:
+    case IndexedTypedArrayInt16Load:
+    case IndexedTypedArrayUint16Load:
+    case IndexedTypedArrayInt32Load:
+    case IndexedTypedArrayUint32Load:
+    case IndexedTypedArrayFloat32Load:
+    case IndexedTypedArrayFloat64Load:
+    case IndexedStringLoad:
         return other.type() == type();
+
     case ModuleNamespaceLoad: {
         if (other.type() != type())
             return false;
@@ -344,6 +580,7 @@ bool AccessCase::canReplace(const AccessCase& other) const
         auto& otherCase = this->as<ModuleNamespaceAccessCase>();
         return thisCase.moduleNamespaceObject() == otherCase.moduleNamespaceObject();
     }
+
     case InstanceOfHit:
     case InstanceOfMiss: {
         if (other.type() != type())
@@ -354,6 +591,7 @@ bool AccessCase::canReplace(const AccessCase& other) const
         
         return structure() == other.structure();
     }
+
     case InstanceOfGeneric:
         switch (other.type()) {
         case InstanceOfGeneric:
@@ -363,7 +601,21 @@ bool AccessCase::canReplace(const AccessCase& other) const
         default:
             return false;
         }
-    default:
+
+    case Load:
+    case Transition:
+    case Replace:
+    case Miss:
+    case GetGetter:
+    case Getter:
+    case Setter:
+    case CustomValueGetter:
+    case CustomAccessorGetter:
+    case CustomValueSetter:
+    case CustomAccessorSetter:
+    case IntrinsicGetter:
+    case InHit:
+    case InMiss:
         if (other.type() != type())
             return false;
 
@@ -377,7 +629,7 @@ bool AccessCase::canReplace(const AccessCase& other) const
                 && *m_polyProtoAccessChain == *other.m_polyProtoAccessChain;
         }
 
-        if (!guardedByStructureCheck() || !other.guardedByStructureCheck())
+        if (!guardedByStructureCheckSkippingConstantIdentifierCheck() || !other.guardedByStructureCheckSkippingConstantIdentifierCheck())
             return false;
 
         return structure() == other.structure();
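Editorial note: since access cases in one stub can now be keyed by different identifiers, canReplace() first requires both cases to agree on the identifier before any of the per-type rules apply. A minimal standalone sketch of that ordering (plain C++ with stand-in types, not JSC's classes):

    #include <string>

    struct CaseModel {
        std::string identifier;  // stand-in for AccessCase::m_identifier
        int type;                // stand-in for AccessCase::AccessType
        const void* structure;   // stand-in for the guarded Structure*
    };

    // Mirrors the rule above: cases guarding different identifiers can never
    // replace one another; only then do the per-type rules apply (simplified
    // here to "same type and same structure").
    bool canReplaceModel(const CaseModel& a, const CaseModel& b)
    {
        if (a.identifier != b.identifier)
            return false;
        if (a.type != b.type)
            return false;
        return a.structure == b.structure;
    }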
@@ -392,10 +644,9 @@ void AccessCase::dump(PrintStream& out) const
 
     out.print(comma, m_state);
 
+    out.print(comma, "ident = '", *m_identifier, "'");
     if (isValidOffset(m_offset))
         out.print(comma, "offset = ", m_offset);
-    if (!m_conditionSet.isEmpty())
-        out.print(comma, "conditions = ", m_conditionSet);
 
     if (m_polyProtoAccessChain) {
         out.print(comma, "prototype access chain = ");
@@ -407,6 +658,9 @@ void AccessCase::dump(PrintStream& out) const
             out.print(comma, "structure = ", pointerDump(m_structure.get()));
     }
 
+    if (!m_conditionSet.isEmpty())
+        out.print(comma, "conditions = ", m_conditionSet);
+
     dumpImpl(out, comma);
     out.print(")");
 }
@@ -457,6 +711,8 @@ void AccessCase::generateWithGuard(
 {
     SuperSamplerScope superSamplerScope(false);
 
+    checkConsistency(*state.stubInfo);
+
     RELEASE_ASSERT(m_state == Committed);
     m_state = Generated;
 
@@ -467,7 +723,17 @@ void AccessCase::generateWithGuard(
     GPRReg baseGPR = state.baseGPR;
     GPRReg scratchGPR = state.scratchGPR;
 
-    UNUSED_PARAM(vm);
+    if (requiresIdentifierNameMatch() && !stubInfo.hasConstantIdentifier) {
+        RELEASE_ASSERT(!m_identifier->isNull());
+        GPRReg propertyGPR = state.u.propertyGPR;
+        // The non-rope string check is done inside PolymorphicAccess.
+
+        if (uid()->isSymbol())
+            jit.loadPtr(MacroAssembler::Address(propertyGPR, Symbol::offsetOfSymbolImpl()), scratchGPR);
+        else
+            jit.loadPtr(MacroAssembler::Address(propertyGPR, JSString::offsetOfValue()), scratchGPR);
+        fallThrough.append(jit.branchPtr(CCallHelpers::NotEqual, scratchGPR, CCallHelpers::TrustedImmPtr(uid())));
+    }
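Editorial note: for a get_by_val IC the identifier is not a compile-time constant, so every case that requires an identifier match re-checks the incoming key's uid before running its body and falls through to the next case on a mismatch. A dependency-free model of that dispatch (illustrative names only):

    #include <vector>

    struct IdentifierCaseModel {
        const void* expectedUid;  // StringImpl*/SymbolImpl* stand-in baked into the case
        int (*fastPath)(int);     // stand-in for the generated fast-path body
    };

    // A miss on the uid comparison is a "fallThrough" to the next case; a miss
    // on every case ends up in the IC slow path (modelled by returning -1).
    int dispatchByIdentifierModel(const std::vector<IdentifierCaseModel>& cases, const void* incomingUid, int argument)
    {
        for (const IdentifierCaseModel& accessCase : cases) {
            if (accessCase.expectedUid != incomingUid)
                continue;
            return accessCase.fastPath(argument);
        }
        return -1;
    }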
 
     auto emitDefaultGuard = [&] () {
         if (m_polyProtoAccessChain) {
@@ -593,6 +859,320 @@ void AccessCase::generateWithGuard(
         return;
     }
 
+    case IndexedScopedArgumentsLoad: {
+        // This code is written such that the result could alias with the base or the property.
+        GPRReg propertyGPR = state.u.propertyGPR;
+
+        jit.load8(CCallHelpers::Address(baseGPR, JSCell::typeInfoTypeOffset()), scratchGPR);
+        fallThrough.append(jit.branch32(CCallHelpers::NotEqual, scratchGPR, CCallHelpers::TrustedImm32(ScopedArgumentsType)));
+
+        ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
+        allocator.lock(baseGPR);
+        allocator.lock(valueRegs.gpr());
+        allocator.lock(propertyGPR);
+        allocator.lock(scratchGPR);
+        
+        GPRReg scratch2GPR = allocator.allocateScratchGPR();
+        GPRReg scratch3GPR = allocator.allocateScratchGPR();
+        
+        ScratchRegisterAllocator::PreservedState preservedState = allocator.preserveReusedRegistersByPushing(
+            jit, ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
+
+        CCallHelpers::JumpList failAndIgnore;
+
+        failAndIgnore.append(jit.branch32(CCallHelpers::AboveOrEqual, propertyGPR, CCallHelpers::Address(baseGPR, ScopedArguments::offsetOfTotalLength())));
+        
+        jit.loadPtr(CCallHelpers::Address(baseGPR, ScopedArguments::offsetOfTable()), scratchGPR);
+        jit.load32(CCallHelpers::Address(scratchGPR, ScopedArgumentsTable::offsetOfLength()), scratch2GPR);
+        auto overflowCase = jit.branch32(CCallHelpers::AboveOrEqual, propertyGPR, scratch2GPR);
+
+        jit.loadPtr(CCallHelpers::Address(baseGPR, ScopedArguments::offsetOfScope()), scratch2GPR);
+        jit.loadPtr(CCallHelpers::Address(scratchGPR, ScopedArgumentsTable::offsetOfArguments()), scratchGPR);
+        jit.zeroExtend32ToPtr(propertyGPR, scratch3GPR);
+        jit.load32(CCallHelpers::BaseIndex(scratchGPR, scratch3GPR, CCallHelpers::TimesFour), scratchGPR);
+        failAndIgnore.append(jit.branch32(CCallHelpers::Equal, scratchGPR, CCallHelpers::TrustedImm32(ScopeOffset::invalidOffset)));
+        jit.loadValue(CCallHelpers::BaseIndex(scratch2GPR, scratchGPR, CCallHelpers::TimesEight, JSLexicalEnvironment::offsetOfVariables()), valueRegs);
+        auto done = jit.jump();
+
+        overflowCase.link(&jit);
+        jit.sub32(propertyGPR, scratch2GPR);
+        jit.neg32(scratch2GPR);
+        jit.loadPtr(CCallHelpers::Address(baseGPR, ScopedArguments::offsetOfStorage()), scratch3GPR);
+        jit.loadValue(CCallHelpers::BaseIndex(scratch3GPR, scratch2GPR, CCallHelpers::TimesEight), JSValueRegs(scratchGPR));
+        failAndIgnore.append(jit.branchIfEmpty(scratchGPR));
+        jit.move(scratchGPR, valueRegs.gpr());
+
+        done.link(&jit);
+
+        allocator.restoreReusedRegistersByPopping(jit, preservedState);
+        state.succeed();
+
+        if (allocator.didReuseRegisters()) {
+            failAndIgnore.link(&jit);
+            allocator.restoreReusedRegistersByPopping(jit, preservedState);
+            state.failAndIgnore.append(jit.jump());
+        } else
+            state.failAndIgnore.append(failAndIgnore);
+
+        return;
+    }
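Editorial note: the ScopedArguments fast path has two storage locations. Indices below the table length resolve through the table into the lexical scope (unless that table slot is invalid), and larger in-bounds indices come from overflow storage, which may hold a hole. A simplified C++ model, with ordinary containers standing in for the JSC objects:

    #include <cstddef>
    #include <optional>
    #include <vector>

    struct ScopedArgumentsModel {
        std::vector<std::optional<size_t>> table;            // ScopeOffset per named argument; nullopt == invalid
        std::vector<double> scopeVariables;                   // JSLexicalEnvironment variables stand-in
        std::vector<std::optional<double>> overflowStorage;   // sized totalLength - table.size(); nullopt == hole
        size_t totalLength = 0;
    };

    // nullopt stands for the failAndIgnore exits in the generated code.
    std::optional<double> scopedArgumentsLoadModel(const ScopedArgumentsModel& args, size_t index)
    {
        if (index >= args.totalLength)
            return std::nullopt;                               // out of bounds
        if (index < args.table.size()) {
            if (!args.table[index])
                return std::nullopt;                           // invalid ScopeOffset
            return args.scopeVariables[*args.table[index]];    // load out of the scope
        }
        return args.overflowStorage[index - args.table.size()]; // overflow storage; may be a hole
    }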
+
+    case IndexedDirectArgumentsLoad: {
+        // This code is written such that the result could alias with the base or the property.
+        GPRReg propertyGPR = state.u.propertyGPR;
+        jit.load8(CCallHelpers::Address(baseGPR, JSCell::typeInfoTypeOffset()), scratchGPR);
+        fallThrough.append(jit.branch32(CCallHelpers::NotEqual, scratchGPR, CCallHelpers::TrustedImm32(DirectArgumentsType)));
+        
+        jit.load32(CCallHelpers::Address(baseGPR, DirectArguments::offsetOfLength()), scratchGPR);
+        state.failAndRepatch.append(jit.branch32(CCallHelpers::AboveOrEqual, propertyGPR, scratchGPR));
+        state.failAndRepatch.append(jit.branchTestPtr(CCallHelpers::NonZero, CCallHelpers::Address(baseGPR, DirectArguments::offsetOfMappedArguments())));
+        jit.zeroExtend32ToPtr(propertyGPR, scratchGPR);
+        jit.loadValue(CCallHelpers::BaseIndex(baseGPR, scratchGPR, CCallHelpers::TimesEight, DirectArguments::storageOffset()), valueRegs);
+        state.succeed();
+        return;
+    }
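Editorial note: DirectArguments is simpler. The case bails to the repatch path if the index is out of bounds or if any arguments are mapped, and otherwise loads straight out of inline storage. As a rough model:

    #include <cstddef>
    #include <optional>
    #include <vector>

    struct DirectArgumentsModel {
        std::vector<double> storage;      // inline argument storage stand-in
        bool hasMappedArguments = false;  // non-null DirectArguments::offsetOfMappedArguments()
    };

    // nullopt stands for state.failAndRepatch in the generated code.
    std::optional<double> directArgumentsLoadModel(const DirectArgumentsModel& args, size_t index)
    {
        if (index >= args.storage.size() || args.hasMappedArguments)
            return std::nullopt;
        return args.storage[index];
    }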
+
+    case IndexedTypedArrayInt8Load:
+    case IndexedTypedArrayUint8Load:
+    case IndexedTypedArrayUint8ClampedLoad:
+    case IndexedTypedArrayInt16Load:
+    case IndexedTypedArrayUint16Load:
+    case IndexedTypedArrayInt32Load:
+    case IndexedTypedArrayUint32Load:
+    case IndexedTypedArrayFloat32Load:
+    case IndexedTypedArrayFloat64Load: {
+        // This code is written such that the result could alias with the base or the property.
+
+        TypedArrayType type = toTypedArrayType(m_type);
+
+        GPRReg propertyGPR = state.u.propertyGPR;
+
+        jit.load8(CCallHelpers::Address(baseGPR, JSCell::typeInfoTypeOffset()), scratchGPR);
+        fallThrough.append(jit.branch32(CCallHelpers::NotEqual, scratchGPR, CCallHelpers::TrustedImm32(typeForTypedArrayType(type))));
+
+        jit.load32(CCallHelpers::Address(baseGPR, JSArrayBufferView::offsetOfLength()), scratchGPR);
+        state.failAndRepatch.append(jit.branch32(CCallHelpers::AboveOrEqual, propertyGPR, scratchGPR));
+
+        ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
+        allocator.lock(baseGPR);
+        allocator.lock(valueRegs.gpr());
+        allocator.lock(propertyGPR);
+        allocator.lock(scratchGPR);
+        GPRReg scratch2GPR = allocator.allocateScratchGPR();
+
+        ScratchRegisterAllocator::PreservedState preservedState = allocator.preserveReusedRegistersByPushing(
+            jit, ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
+
+        jit.loadPtr(CCallHelpers::Address(baseGPR, JSArrayBufferView::offsetOfVector()), scratch2GPR);
+        jit.cageConditionally(Gigacage::Primitive, scratch2GPR, scratchGPR, scratchGPR);
+
+        jit.signExtend32ToPtr(propertyGPR, scratchGPR);
+        if (isInt(type)) {
+            switch (elementSize(type)) {
+            case 1:
+                if (JSC::isSigned(type))
+                    jit.load8SignedExtendTo32(CCallHelpers::BaseIndex(scratch2GPR, scratchGPR, CCallHelpers::TimesOne), valueRegs.gpr());
+                else
+                    jit.load8(CCallHelpers::BaseIndex(scratch2GPR, scratchGPR, CCallHelpers::TimesOne), valueRegs.gpr());
+                break;
+            case 2:
+                if (JSC::isSigned(type))
+                    jit.load16SignedExtendTo32(CCallHelpers::BaseIndex(scratch2GPR, scratchGPR, CCallHelpers::TimesTwo), valueRegs.gpr());
+                else
+                    jit.load16(CCallHelpers::BaseIndex(scratch2GPR, scratchGPR, CCallHelpers::TimesTwo), valueRegs.gpr());
+                break;
+            case 4:
+                jit.load32(CCallHelpers::BaseIndex(scratch2GPR, scratchGPR, CCallHelpers::TimesFour), valueRegs.gpr());
+                break;
+            default:
+                CRASH();
+            }
+
+            CCallHelpers::Jump done;
+            if (type == TypeUint32) {
+                RELEASE_ASSERT(state.scratchFPR != InvalidFPRReg);
+                auto canBeInt = jit.branch32(CCallHelpers::GreaterThanOrEqual, valueRegs.gpr(), CCallHelpers::TrustedImm32(0));
+                
+                jit.convertInt32ToDouble(valueRegs.gpr(), state.scratchFPR);
+                jit.addDouble(CCallHelpers::AbsoluteAddress(&CCallHelpers::twoToThe32), state.scratchFPR);
+                jit.boxDouble(state.scratchFPR, valueRegs);
+                done = jit.jump();
+                canBeInt.link(&jit);
+            }
+
+            jit.boxInt32(valueRegs.gpr(), valueRegs);
+            if (done.isSet())
+                done.link(&jit);
+        } else {
+            ASSERT(isFloat(type));
+            RELEASE_ASSERT(state.scratchFPR != InvalidFPRReg);
+            switch (elementSize(type)) {
+            case 4:
+                jit.loadFloat(CCallHelpers::BaseIndex(scratch2GPR, scratchGPR, CCallHelpers::TimesFour), state.scratchFPR);
+                jit.convertFloatToDouble(state.scratchFPR, state.scratchFPR);
+                break;
+            case 8: {
+                jit.loadDouble(CCallHelpers::BaseIndex(scratch2GPR, scratchGPR, CCallHelpers::TimesEight), state.scratchFPR);
+                break;
+            }
+            default:
+                CRASH();
+            }
+
+            jit.purifyNaN(state.scratchFPR);
+            jit.boxDouble(state.scratchFPR, valueRegs);
+        }
+
+        allocator.restoreReusedRegistersByPopping(jit, preservedState);
+        state.succeed();
+
+        return;
+    }
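Editorial note: the typed-array cases load the raw element and then box it; the least obvious step is Uint32, where a value above INT32_MAX is recovered as a double by reading it as a negative int32 and adding 2^32. A plain C++ sketch of just that decision:

    #include <cstdint>
    #include <variant>

    // A uint32 element that fits in int32 is boxed as an int32; anything larger
    // becomes a double via the int32 reading plus 2^32, which is what the
    // generated code computes in scratchFPR.
    std::variant<int32_t, double> boxUint32Model(uint32_t loadedValue)
    {
        int32_t asInt32 = static_cast<int32_t>(loadedValue);
        if (asInt32 >= 0)
            return asInt32;
        return static_cast<double>(asInt32) + 4294967296.0;
    }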
+
+    case IndexedStringLoad: {
+        // This code is written such that the result could alias with the base or the property.
+        GPRReg propertyGPR = state.u.propertyGPR;
+
+        fallThrough.append(jit.branchIfNotString(baseGPR));
+
+        ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
+        allocator.lock(baseGPR);
+        allocator.lock(valueRegs.gpr());
+        allocator.lock(propertyGPR);
+        allocator.lock(scratchGPR);
+        GPRReg scratch2GPR = allocator.allocateScratchGPR();
+
+        CCallHelpers::JumpList failAndIgnore;
+
+        ScratchRegisterAllocator::PreservedState preservedState = allocator.preserveReusedRegistersByPushing(
+            jit, ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
+
+        jit.loadPtr(CCallHelpers::Address(baseGPR, JSString::offsetOfValue()), scratch2GPR);
+        failAndIgnore.append(jit.branchIfRopeStringImpl(scratch2GPR));
+        jit.load32(CCallHelpers::Address(scratch2GPR, StringImpl::lengthMemoryOffset()), scratchGPR);
+
+        failAndIgnore.append(jit.branch32(CCallHelpers::AboveOrEqual, propertyGPR, scratchGPR));
+
+        jit.load32(CCallHelpers::Address(scratch2GPR, StringImpl::flagsOffset()), scratchGPR);
+        jit.loadPtr(CCallHelpers::Address(scratch2GPR, StringImpl::dataOffset()), scratch2GPR);
+        auto is16Bit = jit.branchTest32(CCallHelpers::Zero, scratchGPR, CCallHelpers::TrustedImm32(StringImpl::flagIs8Bit()));
+        jit.zeroExtend32ToPtr(propertyGPR, scratchGPR);
+        jit.load8(CCallHelpers::BaseIndex(scratch2GPR, scratchGPR, CCallHelpers::TimesOne, 0), scratch2GPR);
+        auto is8BitLoadDone = jit.jump();
+        is16Bit.link(&jit);
+        jit.zeroExtend32ToPtr(propertyGPR, scratchGPR);
+        jit.load16(CCallHelpers::BaseIndex(scratch2GPR, scratchGPR, CCallHelpers::TimesTwo, 0), scratch2GPR);
+        is8BitLoadDone.link(&jit);
+
+        failAndIgnore.append(jit.branch32(CCallHelpers::Above, scratch2GPR, CCallHelpers::TrustedImm32(maxSingleCharacterString)));
+        jit.move(CCallHelpers::TrustedImmPtr(vm.smallStrings.singleCharacterStrings()), scratchGPR);
+        jit.loadPtr(CCallHelpers::BaseIndex(scratchGPR, scratch2GPR, CCallHelpers::ScalePtr, 0), valueRegs.gpr());
+        allocator.restoreReusedRegistersByPopping(jit, preservedState);
+        state.succeed();
+
+        if (allocator.didReuseRegisters()) {
+            failAndIgnore.link(&jit);
+            allocator.restoreReusedRegistersByPopping(jit, preservedState);
+            state.failAndIgnore.append(jit.jump());
+        } else
+            state.failAndIgnore.append(failAndIgnore);
+
+        return;
+    }
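Editorial note: IndexedStringLoad only handles results that are cached single-character strings. A rope, an out-of-bounds index, or a code unit above maxSingleCharacterString all bail to failAndIgnore; otherwise the character indexes the VM's small-strings table. A simplified model (a vector of strings stands in for vm.smallStrings, and std::u16string is always "resolved", so the rope check is omitted):

    #include <cstddef>
    #include <optional>
    #include <string>
    #include <vector>

    struct SmallStringsModel {
        // One cached string per code unit up to maxSingleCharacterString.
        std::vector<std::string> singleCharacterStrings;
    };

    // nullopt stands for the failAndIgnore exits in the generated code.
    std::optional<std::string> stringCharacterLoadModel(const SmallStringsModel& smallStrings, const std::u16string& impl, size_t index)
    {
        if (index >= impl.size())
            return std::nullopt;                                   // out of bounds
        char16_t character = impl[index];
        if (character >= smallStrings.singleCharacterStrings.size())
            return std::nullopt;                                   // above maxSingleCharacterString
        return smallStrings.singleCharacterStrings[character];     // reuse the cached single-character string
    }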
+
+    case IndexedInt32Load:
+    case IndexedDoubleLoad:
+    case IndexedContiguousLoad:
+    case IndexedArrayStorageLoad: {
+        // This code is written such that the result could alias with the base or the property.
+        GPRReg propertyGPR = state.u.propertyGPR;
+
+        // The int32 index check is done inside PolymorphicAccess.
+        jit.load8(CCallHelpers::Address(baseGPR, JSCell::indexingTypeAndMiscOffset()), scratchGPR);
+        jit.and32(CCallHelpers::TrustedImm32(IndexingShapeMask), scratchGPR);
+
+        CCallHelpers::Jump isOutOfBounds;
+        CCallHelpers::Jump isEmpty;
+
+        ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
+        allocator.lock(baseGPR);
+        allocator.lock(valueRegs.gpr());
+        allocator.lock(propertyGPR);
+        allocator.lock(scratchGPR);
+        GPRReg scratch2GPR = allocator.allocateScratchGPR();
+        ScratchRegisterAllocator::PreservedState preservedState;
+
+        CCallHelpers::JumpList failAndIgnore;
+        auto preserveReusedRegisters = [&] {
+            preservedState = allocator.preserveReusedRegistersByPushing(jit, ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
+        };
+
+        if (m_type == IndexedArrayStorageLoad) {
+            jit.add32(CCallHelpers::TrustedImm32(-ArrayStorageShape), scratchGPR, scratchGPR);
+            fallThrough.append(jit.branch32(CCallHelpers::Above, scratchGPR, CCallHelpers::TrustedImm32(SlowPutArrayStorageShape - ArrayStorageShape)));
+
+            preserveReusedRegisters();
+
+            jit.loadPtr(CCallHelpers::Address(baseGPR, JSObject::butterflyOffset()), scratchGPR);
+            isOutOfBounds = jit.branch32(CCallHelpers::AboveOrEqual, propertyGPR, CCallHelpers::Address(scratchGPR, ArrayStorage::vectorLengthOffset()));
+
+            jit.zeroExtend32ToPtr(propertyGPR, scratch2GPR);
+            jit.loadValue(CCallHelpers::BaseIndex(scratchGPR, scratch2GPR, CCallHelpers::TimesEight, ArrayStorage::vectorOffset()), JSValueRegs(scratchGPR));
+            isEmpty = jit.branchIfEmpty(scratchGPR);
+            jit.move(scratchGPR, valueRegs.gpr());
+        } else {
+            IndexingType expectedShape;
+            switch (m_type) {
+            case IndexedInt32Load:
+                expectedShape = Int32Shape;
+                break;
+            case IndexedDoubleLoad:
+                expectedShape = DoubleShape;
+                break;
+            case IndexedContiguousLoad:
+                expectedShape = ContiguousShape;
+                break;
+            default:
+                RELEASE_ASSERT_NOT_REACHED();
+                break;
+            }
+
+            fallThrough.append(jit.branch32(CCallHelpers::NotEqual, scratchGPR, CCallHelpers::TrustedImm32(expectedShape)));
+
+            preserveReusedRegisters();
+
+            jit.loadPtr(CCallHelpers::Address(baseGPR, JSObject::butterflyOffset()), scratchGPR);
+            isOutOfBounds = jit.branch32(CCallHelpers::AboveOrEqual, propertyGPR, CCallHelpers::Address(scratchGPR, Butterfly::offsetOfPublicLength()));
+            jit.zeroExtend32ToPtr(propertyGPR, scratch2GPR);
+            if (m_type == IndexedDoubleLoad) {
+                RELEASE_ASSERT(state.scratchFPR != InvalidFPRReg);
+                jit.loadDouble(CCallHelpers::BaseIndex(scratchGPR, scratch2GPR, CCallHelpers::TimesEight), state.scratchFPR);
+                isEmpty = jit.branchIfNaN(state.scratchFPR);
+                jit.boxDouble(state.scratchFPR, valueRegs);
+            } else {
+                jit.loadValue(CCallHelpers::BaseIndex(scratchGPR, scratch2GPR, CCallHelpers::TimesEight), JSValueRegs(scratchGPR));
+                isEmpty = jit.branchIfEmpty(scratchGPR);
+                jit.move(scratchGPR, valueRegs.gpr());
+            }
+        }
+
+        allocator.restoreReusedRegistersByPopping(jit, preservedState);
+        state.succeed();
+
+        if (allocator.didReuseRegisters()) {
+            isOutOfBounds.link(&jit);
+            isEmpty.link(&jit);
+            allocator.restoreReusedRegistersByPopping(jit, preservedState);
+            state.failAndIgnore.append(jit.jump());
+        } else {
+            state.failAndIgnore.append(isOutOfBounds);
+            state.failAndIgnore.append(isEmpty);
+        }
+
+        return;
+    }
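Editorial note: the contiguous-shape cases all follow the same pattern: check the butterfly's indexing shape, bounds-check against the public length (vector length for ArrayStorage), and treat an empty slot as a miss, where the Double shape represents the hole as NaN. A condensed model of the Int32/Contiguous/Double flavors:

    #include <cmath>
    #include <cstddef>
    #include <optional>
    #include <vector>

    // Int32/Contiguous shapes mark holes with an empty value (std::optional
    // here); the Double shape stores holes as NaN; ArrayStorage is the same
    // idea but bounds-checks against the vector length.
    std::optional<double> contiguousLoadModel(const std::vector<std::optional<double>>& butterfly, size_t index, bool isDoubleShape)
    {
        if (index >= butterfly.size())
            return std::nullopt;                  // isOutOfBounds
        std::optional<double> value = butterfly[index];
        if (!value)
            return std::nullopt;                  // isEmpty
        if (isDoubleShape && std::isnan(*value))
            return std::nullopt;                  // NaN is the hole in a double array
        return value;
    }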
+
     case InstanceOfHit:
     case InstanceOfMiss:
         emitDefaultGuard();
@@ -693,8 +1273,11 @@ void AccessCase::generateWithGuard(
 void AccessCase::generate(AccessGenerationState& state)
 {
     RELEASE_ASSERT(m_state == Committed);
+    RELEASE_ASSERT(state.stubInfo->hasConstantIdentifier);
     m_state = Generated;
 
+    checkConsistency(*state.stubInfo);
+
     generateImpl(state);
 }
 
@@ -710,10 +1293,9 @@ void AccessCase::generateImpl(AccessGenerationState& state)
     VM& vm = state.m_vm;
     CodeBlock* codeBlock = jit.codeBlock();
     StructureStubInfo& stubInfo = *state.stubInfo;
-    const Identifier& ident = *state.ident;
     JSValueRegs valueRegs = state.valueRegs;
     GPRReg baseGPR = state.baseGPR;
-    GPRReg thisGPR = state.u.thisGPR != InvalidGPRReg ? state.u.thisGPR : baseGPR;
+    GPRReg thisGPR = stubInfo.thisValueIsInThisGPR() ? state.u.thisGPR : baseGPR;
     GPRReg scratchGPR = state.scratchGPR;
 
     for (const ObjectPropertyCondition& condition : m_conditionSet) {
@@ -1055,10 +1637,11 @@ void AccessCase::generateImpl(AccessGenerationState& state)
             // FIXME: Revisit JSGlobalObject.
             // https://bugs.webkit.org/show_bug.cgi?id=203204
             if (m_type == CustomValueGetter || m_type == CustomAccessorGetter) {
+                RELEASE_ASSERT(!m_identifier->isNull());
                 jit.setupArguments<PropertySlot::GetValueFunc>(
                     CCallHelpers::TrustedImmPtr(codeBlock->globalObject()),
                     CCallHelpers::CellValue(baseForCustom),
-                    CCallHelpers::TrustedImmPtr(ident.impl()));
+                    CCallHelpers::TrustedImmPtr(uid()));
             } else {
                 jit.setupArguments<PutPropertySlot::PutValueFunc>(
                     CCallHelpers::TrustedImmPtr(codeBlock->globalObject()),
@@ -1321,6 +1904,22 @@ void AccessCase::generateImpl(AccessGenerationState& state)
     case ScopedArgumentsLength:
     case ModuleNamespaceLoad:
     case InstanceOfGeneric:
+    case IndexedInt32Load:
+    case IndexedDoubleLoad:
+    case IndexedContiguousLoad:
+    case IndexedArrayStorageLoad:
+    case IndexedScopedArgumentsLoad:
+    case IndexedDirectArgumentsLoad:
+    case IndexedTypedArrayInt8Load:
+    case IndexedTypedArrayUint8Load:
+    case IndexedTypedArrayUint8ClampedLoad:
+    case IndexedTypedArrayInt16Load:
+    case IndexedTypedArrayUint16Load:
+    case IndexedTypedArrayInt32Load:
+    case IndexedTypedArrayUint32Load:
+    case IndexedTypedArrayFloat32Load:
+    case IndexedTypedArrayFloat64Load:
+    case IndexedStringLoad:
         // These need to be handled by generateWithGuard(), since the guard is part of the
         // algorithm. We can be sure that nobody will call generate() directly for these since they
         // are not guarded by structure checks.
@@ -1330,6 +1929,44 @@ void AccessCase::generateImpl(AccessGenerationState& state)
     RELEASE_ASSERT_NOT_REACHED();
 }
 
+TypedArrayType AccessCase::toTypedArrayType(AccessType accessType)
+{
+    switch (accessType) {
+    case IndexedTypedArrayInt8Load:
+        return TypeInt8;
+    case IndexedTypedArrayUint8Load:
+        return TypeUint8;
+    case IndexedTypedArrayUint8ClampedLoad:
+        return TypeUint8Clamped;
+    case IndexedTypedArrayInt16Load:
+        return TypeInt16;
+    case IndexedTypedArrayUint16Load:
+        return TypeUint16;
+    case IndexedTypedArrayInt32Load:
+        return TypeInt32;
+    case IndexedTypedArrayUint32Load:
+        return TypeUint32;
+    case IndexedTypedArrayFloat32Load:
+        return TypeFloat32;
+    case IndexedTypedArrayFloat64Load:
+        return TypeFloat64;
+    default:
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+}
+
+#if !ASSERT_DISABLED
+void AccessCase::checkConsistency(StructureStubInfo& stubInfo)
+{
+    RELEASE_ASSERT(!(requiresInt32PropertyCheck() && requiresIdentifierNameMatch()));
+
+    if (stubInfo.hasConstantIdentifier) {
+        RELEASE_ASSERT(!requiresInt32PropertyCheck());
+        RELEASE_ASSERT(requiresIdentifierNameMatch());
+    }
+}
+#endif
+
 } // namespace JSC
 
 #endif
index 3508f83..5d56e99 100644
@@ -30,6 +30,7 @@
 #include "JSFunctionInlines.h"
 #include "ObjectPropertyConditionSet.h"
 #include "PolyProtoAccessChain.h"
+#include <wtf/Box.h>
 #include <wtf/CommaPrinter.h>
 
 namespace JSC {
@@ -101,7 +102,23 @@ public:
         ModuleNamespaceLoad,
         InstanceOfHit,
         InstanceOfMiss,
-        InstanceOfGeneric
+        InstanceOfGeneric,
+        IndexedInt32Load,
+        IndexedDoubleLoad,
+        IndexedContiguousLoad,
+        IndexedArrayStorageLoad,
+        IndexedScopedArgumentsLoad,
+        IndexedDirectArgumentsLoad,
+        IndexedTypedArrayInt8Load,
+        IndexedTypedArrayUint8Load,
+        IndexedTypedArrayUint8ClampedLoad,
+        IndexedTypedArrayInt16Load,
+        IndexedTypedArrayUint16Load,
+        IndexedTypedArrayInt32Load,
+        IndexedTypedArrayUint32Load,
+        IndexedTypedArrayFloat32Load,
+        IndexedTypedArrayFloat64Load,
+        IndexedStringLoad
     };
 
     enum State : uint8_t {
@@ -123,14 +140,14 @@ public:
         return std::unique_ptr<AccessCaseType>(new AccessCaseType(arguments...));
     }
 
-    static std::unique_ptr<AccessCase> create(VM&, JSCell* owner, AccessType, PropertyOffset = invalidOffset,
+    static std::unique_ptr<AccessCase> create(VM&, JSCell* owner, AccessType, const Identifier&, PropertyOffset = invalidOffset,
         Structure* = nullptr, const ObjectPropertyConditionSet& = ObjectPropertyConditionSet(), std::unique_ptr<PolyProtoAccessChain> = nullptr);
 
     // This create method should be used for transitions.
-    static std::unique_ptr<AccessCase> create(VM&, JSCell* owner, PropertyOffset, Structure* oldStructure,
+    static std::unique_ptr<AccessCase> create(VM&, JSCell* owner, const Identifier&, PropertyOffset, Structure* oldStructure,
         Structure* newStructure, const ObjectPropertyConditionSet&, std::unique_ptr<PolyProtoAccessChain>);
     
-    static std::unique_ptr<AccessCase> fromStructureStubInfo(VM&, JSCell* owner, StructureStubInfo&);
+    static std::unique_ptr<AccessCase> fromStructureStubInfo(VM&, JSCell* owner, const Identifier&, StructureStubInfo&);
 
     AccessType type() const { return m_type; }
     State state() const { return m_state; }
@@ -142,7 +159,7 @@ public:
             return m_structure->previousID();
         return m_structure.get();
     }
-    bool guardedByStructureCheck() const;
+    bool guardedByStructureCheck(const StructureStubInfo&) const;
 
     Structure* newStructure() const
     {
@@ -196,9 +213,25 @@ public:
     {
         return !!m_polyProtoAccessChain;
     }
+
+    bool requiresIdentifierNameMatch() const;
+    bool requiresInt32PropertyCheck() const;
+    bool needsScratchFPR() const;
+
+    static TypedArrayType toTypedArrayType(AccessType);
+
+    UniquedStringImpl* uid() const { return m_identifier->impl(); }
+    Box<Identifier> identifier() const { return m_identifier; }
+
+#if !ASSERT_DISABLED
+    void checkConsistency(StructureStubInfo&);
+#else
+    ALWAYS_INLINE void checkConsistency(StructureStubInfo&) { }
+#endif
     
 protected:
-    AccessCase(VM&, JSCell* owner, AccessType, PropertyOffset, Structure*, const ObjectPropertyConditionSet&, std::unique_ptr<PolyProtoAccessChain>);
+    AccessCase(VM&, JSCell* owner, AccessType, const Identifier&, PropertyOffset, Structure*, const ObjectPropertyConditionSet&, std::unique_ptr<PolyProtoAccessChain>);
     AccessCase(AccessCase&&) = default;
     AccessCase(const AccessCase& other)
         : m_type(other.m_type)
@@ -207,6 +240,7 @@ protected:
         , m_offset(other.m_offset)
         , m_structure(other.m_structure)
         , m_conditionSet(other.m_conditionSet)
+        , m_identifier(other.m_identifier)
     {
         if (other.m_polyProtoAccessChain)
             m_polyProtoAccessChain = other.m_polyProtoAccessChain->clone();
@@ -231,7 +265,7 @@ private:
 
     // Perform any action that must be performed before the end of the epoch in which the case
     // was created. Returns a set of watchpoint sets that will need to be watched.
-    Vector<WatchpointSet*, 2> commit(VM&, const Identifier&);
+    Vector<WatchpointSet*, 2> commit(VM&);
 
     // Fall through on success. Two kinds of failures are supported: fall-through, which means that we
     // should try a different case; and failure, which means that this was the right case but it needs
@@ -243,6 +277,8 @@ private:
 
     void generateImpl(AccessGenerationState&);
 
+    bool guardedByStructureCheckSkippingConstantIdentifierCheck() const;
+
     AccessType m_type;
     State m_state { Primordial };
 protected:
@@ -261,6 +297,8 @@ private:
     ObjectPropertyConditionSet m_conditionSet;
 
     std::unique_ptr<PolyProtoAccessChain> m_polyProtoAccessChain;
+
+    Box<Identifier> m_identifier; // We use this indirection so the concurrent compiler can safely ref this Box without refing the underlying Identifier's String, whose reference count is not thread safe.
 };
 
 } // namespace JSC
index ed8d493..04c7a3a 100644
 namespace JSC {
 
 GetByIdVariant::GetByIdVariant(
+    Box<Identifier> identifier,
     const StructureSet& structureSet, PropertyOffset offset,
     const ObjectPropertyConditionSet& conditionSet,
     std::unique_ptr<CallLinkStatus> callLinkStatus,
     JSFunction* intrinsicFunction,
     FunctionPtr<OperationPtrTag> customAccessorGetter,
-    Optional<DOMAttributeAnnotation> domAttribute)
+    std::unique_ptr<DOMAttributeAnnotation> domAttribute)
     : m_structureSet(structureSet)
     , m_conditionSet(conditionSet)
     , m_offset(offset)
     , m_callLinkStatus(WTFMove(callLinkStatus))
     , m_intrinsicFunction(intrinsicFunction)
     , m_customAccessorGetter(customAccessorGetter)
-    , m_domAttribute(domAttribute)
+    , m_domAttribute(WTFMove(domAttribute))
+    , m_identifier(WTFMove(identifier))
 {
     if (!structureSet.size()) {
         ASSERT(offset == invalidOffset);
@@ -58,19 +60,23 @@ GetByIdVariant::GetByIdVariant(
 GetByIdVariant::~GetByIdVariant() { }
 
 GetByIdVariant::GetByIdVariant(const GetByIdVariant& other)
-    : GetByIdVariant()
+    : GetByIdVariant(other.m_identifier)
 {
     *this = other;
 }
 
 GetByIdVariant& GetByIdVariant::operator=(const GetByIdVariant& other)
 {
+    m_identifier = other.m_identifier;
     m_structureSet = other.m_structureSet;
     m_conditionSet = other.m_conditionSet;
     m_offset = other.m_offset;
     m_intrinsicFunction = other.m_intrinsicFunction;
     m_customAccessorGetter = other.m_customAccessorGetter;
-    m_domAttribute = other.m_domAttribute;
+    if (other.m_domAttribute)
+        m_domAttribute = WTF::makeUnique<DOMAttributeAnnotation>(*other.m_domAttribute);
+    else
+        m_domAttribute = nullptr;
     if (other.m_callLinkStatus)
         m_callLinkStatus = makeUnique<CallLinkStatus>(*other.m_callLinkStatus);
     else
@@ -101,6 +107,12 @@ inline bool GetByIdVariant::canMergeIntrinsicStructures(const GetByIdVariant& ot
 
 bool GetByIdVariant::attemptToMerge(const GetByIdVariant& other)
 {
+    if (!!m_identifier != !!other.m_identifier)
+        return false;
+
+    if (m_identifier && (*m_identifier != *other.m_identifier))
+        return false;
+
     if (m_offset != other.m_offset)
         return false;
     
index 10a252e..6865e57 100644
@@ -29,6 +29,7 @@
 #include "ObjectPropertyConditionSet.h"
 #include "PropertyOffset.h"
 #include "StructureSet.h"
+#include <wtf/Box.h>
 
 namespace JSC {
 namespace DOMJIT {
@@ -36,19 +37,20 @@ class GetterSetter;
 }
 
 class CallLinkStatus;
-class GetByIdStatus;
+class GetByStatus;
 struct DumpContext;
 
 class GetByIdVariant {
     WTF_MAKE_FAST_ALLOCATED;
 public:
     GetByIdVariant(
+        Box<Identifier>,
         const StructureSet& structureSet = StructureSet(), PropertyOffset offset = invalidOffset,
         const ObjectPropertyConditionSet& = ObjectPropertyConditionSet(),
         std::unique_ptr<CallLinkStatus> = nullptr,
         JSFunction* = nullptr,
         FunctionPtr<OperationPtrTag> customAccessorGetter = nullptr,
-        Optional<DOMAttributeAnnotation> = WTF::nullopt);
+        std::unique_ptr<DOMAttributeAnnotation> = nullptr);
 
     ~GetByIdVariant();
     
@@ -68,7 +70,7 @@ public:
     JSFunction* intrinsicFunction() const { return m_intrinsicFunction; }
     Intrinsic intrinsic() const { return m_intrinsicFunction ? m_intrinsicFunction->intrinsic() : NoIntrinsic; }
     FunctionPtr<OperationPtrTag> customAccessorGetter() const { return m_customAccessorGetter; }
-    Optional<DOMAttributeAnnotation> domAttribute() const { return m_domAttribute; }
+    DOMAttributeAnnotation* domAttribute() const { return m_domAttribute.get(); }
 
     bool isPropertyUnset() const { return offset() == invalidOffset; }
 
@@ -79,9 +81,11 @@ public:
     
     void dump(PrintStream&) const;
     void dumpInContext(PrintStream&, DumpContext*) const;
+
+    Box<Identifier> identifier() const { return m_identifier; }
     
 private:
-    friend class GetByIdStatus;
+    friend class GetByStatus;
 
     bool canMergeIntrinsicStructures(const GetByIdVariant&) const;
     
@@ -91,7 +95,8 @@ private:
     std::unique_ptr<CallLinkStatus> m_callLinkStatus;
     JSFunction* m_intrinsicFunction;
     FunctionPtr<OperationPtrTag> m_customAccessorGetter;
-    Optional<DOMAttributeAnnotation> m_domAttribute;
+    std::unique_ptr<DOMAttributeAnnotation> m_domAttribute;
+    Box<Identifier> m_identifier; // We use this indirection to allow ref/deref in the concurrent compiler.
 };
 
 } // namespace JSC
@@ -24,7 +24,7 @@
  */
 
 #include "config.h"
-#include "GetByIdStatus.h"
+#include "GetByStatus.h"
 
 #include "BytecodeStructs.h"
 #include "CodeBlock.h"
@@ -47,71 +47,79 @@ namespace DOMJIT {
 class GetterSetter;
 }
 
-bool GetByIdStatus::appendVariant(const GetByIdVariant& variant)
+bool GetByStatus::appendVariant(const GetByIdVariant& variant)
 {
     return appendICStatusVariant(m_variants, variant);
 }
 
-GetByIdStatus GetByIdStatus::computeFromLLInt(CodeBlock* profiledBlock, BytecodeIndex bytecodeIndex, UniquedStringImpl* uid)
+GetByStatus GetByStatus::computeFromLLInt(CodeBlock* profiledBlock, BytecodeIndex bytecodeIndex)
 {
     VM& vm = profiledBlock->vm();
     
     auto instruction = profiledBlock->instructions().at(bytecodeIndex.offset());
 
     StructureID structureID;
+    const Identifier* identifier = nullptr;
     switch (instruction->opcodeID()) {
     case op_get_by_id: {
         auto& metadata = instruction->as<OpGetById>().metadata(profiledBlock);
         // FIXME: We should not just bail if we see a get_by_id_proto_load.
         // https://bugs.webkit.org/show_bug.cgi?id=158039
         if (metadata.m_modeMetadata.mode != GetByIdMode::Default)
-            return GetByIdStatus(NoInformation, false);
+            return GetByStatus(NoInformation, false);
         structureID = metadata.m_modeMetadata.defaultMode.structureID;
+
+        identifier = &(profiledBlock->identifier(instruction->as<OpGetById>().m_property));
         break;
     }
     case op_get_by_id_direct:
         structureID = instruction->as<OpGetByIdDirect>().metadata(profiledBlock).m_structureID;
+        identifier = &(profiledBlock->identifier(instruction->as<OpGetByIdDirect>().m_property));
         break;
     case op_try_get_by_id: {
         // FIXME: We should not just bail if we see a try_get_by_id.
         // https://bugs.webkit.org/show_bug.cgi?id=158039
-        return GetByIdStatus(NoInformation, false);
+        return GetByStatus(NoInformation, false);
     }
 
+    case op_get_by_val:
+        return GetByStatus(NoInformation, false);
+
     default: {
         ASSERT_NOT_REACHED();
-        return GetByIdStatus(NoInformation, false);
+        return GetByStatus(NoInformation, false);
     }
     }
 
     if (!structureID)
-        return GetByIdStatus(NoInformation, false);
+        return GetByStatus(NoInformation, false);
 
     Structure* structure = vm.heap.structureIDTable().get(structureID);
 
     if (structure->takesSlowPathInDFGForImpureProperty())
-        return GetByIdStatus(NoInformation, false);
+        return GetByStatus(NoInformation, false);
 
     unsigned attributes;
-    PropertyOffset offset = structure->getConcurrently(uid, attributes);
+    PropertyOffset offset = structure->getConcurrently(identifier->impl(), attributes);
     if (!isValidOffset(offset))
-        return GetByIdStatus(NoInformation, false);
+        return GetByStatus(NoInformation, false);
     if (attributes & PropertyAttribute::CustomAccessorOrValue)
-        return GetByIdStatus(NoInformation, false);
+        return GetByStatus(NoInformation, false);
 
-    return GetByIdStatus(Simple, false, GetByIdVariant(StructureSet(structure), offset));
+    GetByStatus result(Simple, false);
+    result.appendVariant(GetByIdVariant(nullptr, StructureSet(structure), offset));
+    return result;
 }
 
-GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, ICStatusMap& map, BytecodeIndex bytecodeIndex, UniquedStringImpl* uid, ExitFlag didExit, CallLinkStatus::ExitSiteData callExitSiteData)
+GetByStatus GetByStatus::computeFor(CodeBlock* profiledBlock, ICStatusMap& map, BytecodeIndex bytecodeIndex, ExitFlag didExit, CallLinkStatus::ExitSiteData callExitSiteData)
 {
     ConcurrentJSLocker locker(profiledBlock->m_lock);
 
-    GetByIdStatus result;
+    GetByStatus result;
 
 #if ENABLE(DFG_JIT)
     result = computeForStubInfoWithoutExitSiteFeedback(
-        locker, profiledBlock, map.get(CodeOrigin(bytecodeIndex)).stubInfo, uid,
-        callExitSiteData);
+        locker, profiledBlock, map.get(CodeOrigin(bytecodeIndex)).stubInfo, callExitSiteData);
     
     if (didExit)
         return result.slowVersion();
@@ -122,62 +130,48 @@ GetByIdStatus GetByIdStatus::computeFor(CodeBlock* profiledBlock, ICStatusMap& m
 #endif
 
     if (!result)
-        return computeFromLLInt(profiledBlock, bytecodeIndex, uid);
+        return computeFromLLInt(profiledBlock, bytecodeIndex);
     
     return result;
 }
 
-#if ENABLE(DFG_JIT)
-GetByIdStatus GetByIdStatus::computeForStubInfo(const ConcurrentJSLocker& locker, CodeBlock* profiledBlock, StructureStubInfo* stubInfo, CodeOrigin codeOrigin, UniquedStringImpl* uid)
-{
-    BytecodeIndex bytecodeIndex = codeOrigin.bytecodeIndex();
-    GetByIdStatus result = GetByIdStatus::computeForStubInfoWithoutExitSiteFeedback(
-        locker, profiledBlock, stubInfo, uid,
-        CallLinkStatus::computeExitSiteData(profiledBlock, bytecodeIndex));
-
-    if (!result.takesSlowPath() && hasBadCacheExitSite(profiledBlock, bytecodeIndex))
-        return result.slowVersion();
-    return result;
-}
-#endif // ENABLE(DFG_JIT)
-
 #if ENABLE(JIT)
-GetByIdStatus::GetByIdStatus(const ModuleNamespaceAccessCase& accessCase)
-    : m_moduleNamespaceObject(accessCase.moduleNamespaceObject())
-    , m_moduleEnvironment(accessCase.moduleEnvironment())
-    , m_scopeOffset(accessCase.scopeOffset())
+GetByStatus::GetByStatus(const ModuleNamespaceAccessCase& accessCase)
+    : m_moduleNamespaceData(Box<ModuleNamespaceData>::create(ModuleNamespaceData { accessCase.moduleNamespaceObject(), accessCase.moduleEnvironment(), accessCase.scopeOffset(), accessCase.identifier() }))
     , m_state(ModuleNamespace)
     , m_wasSeenInJIT(true)
 {
 }
 
-GetByIdStatus GetByIdStatus::computeForStubInfoWithoutExitSiteFeedback(
-    const ConcurrentJSLocker& locker, CodeBlock* profiledBlock, StructureStubInfo* stubInfo, UniquedStringImpl* uid,
-    CallLinkStatus::ExitSiteData callExitSiteData)
+GetByStatus GetByStatus::computeForStubInfoWithoutExitSiteFeedback(
+    const ConcurrentJSLocker& locker, CodeBlock* profiledBlock, StructureStubInfo* stubInfo, CallLinkStatus::ExitSiteData callExitSiteData)
 {
     StubInfoSummary summary = StructureStubInfo::summary(stubInfo);
     if (!isInlineable(summary))
-        return GetByIdStatus(summary);
+        return GetByStatus(summary);
     
     // Finally figure out if we can derive an access strategy.
-    GetByIdStatus result;
+    GetByStatus result;
     result.m_state = Simple;
     result.m_wasSeenInJIT = true; // This is interesting for bytecode dumping only.
-    switch (stubInfo->cacheType) {
+    switch (stubInfo->cacheType()) {
     case CacheType::Unset:
-        return GetByIdStatus(NoInformation);
+        return GetByStatus(NoInformation);
         
     case CacheType::GetByIdSelf: {
         Structure* structure = stubInfo->u.byIdSelf.baseObjectStructure.get();
         if (structure->takesSlowPathInDFGForImpureProperty())
-            return GetByIdStatus(JSC::slowVersion(summary));
+            return GetByStatus(JSC::slowVersion(summary));
+        Box<Identifier> identifier = stubInfo->getByIdSelfIdentifier();
+        UniquedStringImpl* uid = identifier->impl();
+        RELEASE_ASSERT(uid);
+        GetByIdVariant variant(WTFMove(identifier));
         unsigned attributes;
-        GetByIdVariant variant;
         variant.m_offset = structure->getConcurrently(uid, attributes);
         if (!isValidOffset(variant.m_offset))
-            return GetByIdStatus(JSC::slowVersion(summary));
+            return GetByStatus(JSC::slowVersion(summary));
         if (attributes & PropertyAttribute::CustomAccessorOrValue)
-            return GetByIdStatus(JSC::slowVersion(summary));
+            return GetByStatus(JSC::slowVersion(summary));
         
         variant.m_structureSet.add(structure);
         bool didAppend = result.appendVariant(variant);
@@ -191,7 +185,7 @@ GetByIdStatus GetByIdStatus::computeForStubInfoWithoutExitSiteFeedback(
             const AccessCase& access = list->at(0);
             switch (access.type()) {
             case AccessCase::ModuleNamespaceLoad:
-                return GetByIdStatus(access.as<ModuleNamespaceAccessCase>());
+                return GetByStatus(access.as<ModuleNamespaceAccessCase>());
             default:
                 break;
             }
@@ -200,10 +194,17 @@ GetByIdStatus GetByIdStatus::computeForStubInfoWithoutExitSiteFeedback(
         for (unsigned listIndex = 0; listIndex < list->size(); ++listIndex) {
             const AccessCase& access = list->at(listIndex);
             if (access.viaProxy())
-                return GetByIdStatus(JSC::slowVersion(summary));
+                return GetByStatus(JSC::slowVersion(summary));
 
             if (access.usesPolyProto())
-                return GetByIdStatus(JSC::slowVersion(summary));
+                return GetByStatus(JSC::slowVersion(summary));
+
+            if (!access.requiresIdentifierNameMatch()) {
+                // FIXME: We could use this for indexed loads in the future. This is pretty solid profiling
+                // information, and probably better than ArrayProfile when it's available.
+                // https://bugs.webkit.org/show_bug.cgi?id=204215
+                return GetByStatus(JSC::slowVersion(summary));
+            }
             
             Structure* structure = access.structure();
             if (!structure) {
@@ -213,24 +214,25 @@ GetByIdStatus GetByIdStatus::computeForStubInfoWithoutExitSiteFeedback(
                 // shouldn't have to use value profiling to discover something that the AccessCase
                 // could have told us. But, it works well enough. So, our only concern here is to not
                 // crash on null structure.
-                return GetByIdStatus(JSC::slowVersion(summary));
+                return GetByStatus(JSC::slowVersion(summary));
             }
             
             ComplexGetStatus complexGetStatus = ComplexGetStatus::computeFor(
-                structure, access.conditionSet(), uid);
+                structure, access.conditionSet(), access.uid());
              
             switch (complexGetStatus.kind()) {
             case ComplexGetStatus::ShouldSkip:
                 continue;
                  
             case ComplexGetStatus::TakesSlowPath:
-                return GetByIdStatus(JSC::slowVersion(summary));
+                return GetByStatus(JSC::slowVersion(summary));
                  
             case ComplexGetStatus::Inlineable: {
                 std::unique_ptr<CallLinkStatus> callLinkStatus;
                 JSFunction* intrinsicFunction = nullptr;
                 FunctionPtr<OperationPtrTag> customAccessorGetter;
-                Optional<DOMAttributeAnnotation> domAttribute;
+                std::unique_ptr<DOMAttributeAnnotation> domAttribute;
+                bool haveDOMAttribute = false;
 
                 switch (access.type()) {
                 case AccessCase::Load:
@@ -252,37 +254,38 @@ GetByIdStatus GetByIdStatus::computeForStubInfoWithoutExitSiteFeedback(
                 }
                 case AccessCase::CustomAccessorGetter: {
                     customAccessorGetter = access.as<GetterSetterAccessCase>().customAccessor();
-                    domAttribute = access.as<GetterSetterAccessCase>().domAttribute();
-                    if (!domAttribute)
-                        return GetByIdStatus(JSC::slowVersion(summary));
+                    if (!access.as<GetterSetterAccessCase>().domAttribute())
+                        return GetByStatus(JSC::slowVersion(summary));
+                    domAttribute = WTF::makeUnique<DOMAttributeAnnotation>(*access.as<GetterSetterAccessCase>().domAttribute());
+                    haveDOMAttribute = true;
                     result.m_state = Custom;
                     break;
                 }
                 default: {
                     // FIXME: It would be totally sweet to support more of these at some point in the
                     // future. https://bugs.webkit.org/show_bug.cgi?id=133052
-                    return GetByIdStatus(JSC::slowVersion(summary));
+                    return GetByStatus(JSC::slowVersion(summary));
                 } }
 
                 ASSERT((AccessCase::Miss == access.type()) == (access.offset() == invalidOffset));
                 GetByIdVariant variant(
-                    StructureSet(structure), complexGetStatus.offset(),
+                    access.identifier(), StructureSet(structure), complexGetStatus.offset(),
                     complexGetStatus.conditionSet(), WTFMove(callLinkStatus),
                     intrinsicFunction,
                     customAccessorGetter,
-                    domAttribute);
+                    WTFMove(domAttribute));
 
                 if (!result.appendVariant(variant))
-                    return GetByIdStatus(JSC::slowVersion(summary));
+                    return GetByStatus(JSC::slowVersion(summary));
 
-                if (domAttribute) {
+                if (haveDOMAttribute) {
                     // Give up when custom accesses are not merged into one.
                     if (result.numVariants() != 1)
-                        return GetByIdStatus(JSC::slowVersion(summary));
+                        return GetByStatus(JSC::slowVersion(summary));
                 } else {
                     // Give up when custom access and simple access are mixed.
                     if (result.m_state == Custom)
-                        return GetByIdStatus(JSC::slowVersion(summary));
+                        return GetByStatus(JSC::slowVersion(summary));
                 }
                 break;
             } }
@@ -292,30 +295,30 @@ GetByIdStatus GetByIdStatus::computeForStubInfoWithoutExitSiteFeedback(
     }
         
     default:
-        return GetByIdStatus(JSC::slowVersion(summary));
+        return GetByStatus(JSC::slowVersion(summary));
     }
     
     RELEASE_ASSERT_NOT_REACHED();
-    return GetByIdStatus();
+    return GetByStatus();
 }
 
-GetByIdStatus GetByIdStatus::computeFor(
+GetByStatus GetByStatus::computeFor(
     CodeBlock* profiledBlock, ICStatusMap& baselineMap,
-    ICStatusContextStack& icContextStack, CodeOrigin codeOrigin, UniquedStringImpl* uid)
+    ICStatusContextStack& icContextStack, CodeOrigin codeOrigin)
 {
     BytecodeIndex bytecodeIndex = codeOrigin.bytecodeIndex();
     CallLinkStatus::ExitSiteData callExitSiteData = CallLinkStatus::computeExitSiteData(profiledBlock, bytecodeIndex);
     ExitFlag didExit = hasBadCacheExitSite(profiledBlock, bytecodeIndex);
-    
+
     for (ICStatusContext* context : icContextStack) {
         ICStatus status = context->get(codeOrigin);
         
-        auto bless = [&] (const GetByIdStatus& result) -> GetByIdStatus {
+        auto bless = [&] (const GetByStatus& result) -> GetByStatus {
             if (!context->isInlined(codeOrigin)) {
                 // Merge with baseline result, which also happens to contain exit data for both
                 // inlined and not-inlined.
-                GetByIdStatus baselineResult = computeFor(
-                    profiledBlock, baselineMap, bytecodeIndex, uid, didExit,
+                GetByStatus baselineResult = computeFor(
+                    profiledBlock, baselineMap, bytecodeIndex, didExit,
                     callExitSiteData);
                 baselineResult.merge(result);
                 return baselineResult;
@@ -326,11 +329,11 @@ GetByIdStatus GetByIdStatus::computeFor(
         };
         
         if (status.stubInfo) {
-            GetByIdStatus result;
+            GetByStatus result;
             {
                 ConcurrentJSLocker locker(context->optimizedCodeBlock->m_lock);
                 result = computeForStubInfoWithoutExitSiteFeedback(
-                    locker, context->optimizedCodeBlock, status.stubInfo, uid, callExitSiteData);
+                    locker, context->optimizedCodeBlock, status.stubInfo, callExitSiteData);
             }
             if (result.isSet())
                 return bless(result);
@@ -340,10 +343,10 @@ GetByIdStatus GetByIdStatus::computeFor(
             return bless(*status.getStatus);
     }
     
-    return computeFor(profiledBlock, baselineMap, bytecodeIndex, uid, didExit, callExitSiteData);
+    return computeFor(profiledBlock, baselineMap, bytecodeIndex, didExit, callExitSiteData);
 }
 
-GetByIdStatus GetByIdStatus::computeFor(const StructureSet& set, UniquedStringImpl* uid)
+GetByStatus GetByStatus::computeFor(const StructureSet& set, UniquedStringImpl* uid)
 {
     // For now we only handle the super simple self access case. We could handle the
     // prototype case in the future.
@@ -353,40 +356,40 @@ GetByIdStatus GetByIdStatus::computeFor(const StructureSet& set, UniquedStringIm
     // GetById and GetByIdDirect.
     
     if (set.isEmpty())
-        return GetByIdStatus();
+        return GetByStatus();
 
     if (parseIndex(*uid))
-        return GetByIdStatus(TakesSlowPath);
+        return GetByStatus(TakesSlowPath);
     
-    GetByIdStatus result;
+    GetByStatus result;
     result.m_state = Simple;
     result.m_wasSeenInJIT = false;
     for (unsigned i = 0; i < set.size(); ++i) {
         Structure* structure = set[i];
         if (structure->typeInfo().overridesGetOwnPropertySlot() && structure->typeInfo().type() != GlobalObjectType)
-            return GetByIdStatus(TakesSlowPath);
+            return GetByStatus(TakesSlowPath);
         
         if (!structure->propertyAccessesAreCacheable())
-            return GetByIdStatus(TakesSlowPath);
+            return GetByStatus(TakesSlowPath);
         
         unsigned attributes;
         PropertyOffset offset = structure->getConcurrently(uid, attributes);
         if (!isValidOffset(offset))
-            return GetByIdStatus(TakesSlowPath); // It's probably a prototype lookup. Give up on life for now, even though we could totally be way smarter about it.
+            return GetByStatus(TakesSlowPath); // It's probably a prototype lookup. Give up on life for now, even though we could totally be way smarter about it.
         if (attributes & PropertyAttribute::Accessor)
-            return GetByIdStatus(MakesCalls); // We could be smarter here, like strength-reducing this to a Call.
+            return GetByStatus(MakesCalls); // We could be smarter here, like strength-reducing this to a Call.
         if (attributes & PropertyAttribute::CustomAccessorOrValue)
-            return GetByIdStatus(TakesSlowPath);
+            return GetByStatus(TakesSlowPath);
         
-        if (!result.appendVariant(GetByIdVariant(structure, offset)))
-            return GetByIdStatus(TakesSlowPath);
+        if (!result.appendVariant(GetByIdVariant(nullptr, structure, offset)))
+            return GetByStatus(TakesSlowPath);
     }
     
     return result;
 }
 #endif // ENABLE(JIT)
 
-bool GetByIdStatus::makesCalls() const
+bool GetByStatus::makesCalls() const
 {
     switch (m_state) {
     case NoInformation:
@@ -408,18 +411,18 @@ bool GetByIdStatus::makesCalls() const
     return false;
 }
 
-GetByIdStatus GetByIdStatus::slowVersion() const
+GetByStatus GetByStatus::slowVersion() const
 {
-    return GetByIdStatus(makesCalls() ? MakesCalls : TakesSlowPath, wasSeenInJIT());
+    return GetByStatus(makesCalls() ? MakesCalls : TakesSlowPath, wasSeenInJIT());
 }
 
-void GetByIdStatus::merge(const GetByIdStatus& other)
+void GetByStatus::merge(const GetByStatus& other)
 {
     if (other.m_state == NoInformation)
         return;
     
     auto mergeSlow = [&] () {
-        *this = GetByIdStatus((makesCalls() || other.makesCalls()) ? MakesCalls : TakesSlowPath);
+        *this = GetByStatus((makesCalls() || other.makesCalls()) ? MakesCalls : TakesSlowPath);
     };
     
     switch (m_state) {
@@ -442,13 +445,13 @@ void GetByIdStatus::merge(const GetByIdStatus& other)
         if (other.m_state != ModuleNamespace)
             return mergeSlow();
         
-        if (m_moduleNamespaceObject != other.m_moduleNamespaceObject)
+        if (m_moduleNamespaceData->m_moduleNamespaceObject != other.m_moduleNamespaceData->m_moduleNamespaceObject)
             return mergeSlow();
         
-        if (m_moduleEnvironment != other.m_moduleEnvironment)
+        if (m_moduleNamespaceData->m_moduleEnvironment != other.m_moduleNamespaceData->m_moduleEnvironment)
             return mergeSlow();
         
-        if (m_scopeOffset != other.m_scopeOffset)
+        if (m_moduleNamespaceData->m_scopeOffset != other.m_moduleNamespaceData->m_scopeOffset)
             return mergeSlow();
         
         return;
@@ -461,7 +464,7 @@ void GetByIdStatus::merge(const GetByIdStatus& other)
     RELEASE_ASSERT_NOT_REACHED();
 }
 
-void GetByIdStatus::filter(const StructureSet& set)
+void GetByStatus::filter(const StructureSet& set)
 {
     if (m_state != Simple)
         return;
@@ -470,26 +473,55 @@ void GetByIdStatus::filter(const StructureSet& set)
         m_state = NoInformation;
 }
 
-void GetByIdStatus::markIfCheap(SlotVisitor& visitor)
+void GetByStatus::markIfCheap(SlotVisitor& visitor)
 {
     for (GetByIdVariant& variant : m_variants)
         variant.markIfCheap(visitor);
 }
 
-bool GetByIdStatus::finalize(VM& vm)
+bool GetByStatus::finalize(VM& vm)
 {
     for (GetByIdVariant& variant : m_variants) {
         if (!variant.finalize(vm))
             return false;
     }
-    if (m_moduleNamespaceObject && !vm.heap.isMarked(m_moduleNamespaceObject))
-        return false;
-    if (m_moduleEnvironment && !vm.heap.isMarked(m_moduleEnvironment))
-        return false;
+    if (isModuleNamespace()) {
+        if (m_moduleNamespaceData->m_moduleNamespaceObject && !vm.heap.isMarked(m_moduleNamespaceData->m_moduleNamespaceObject))
+            return false;
+        if (m_moduleNamespaceData->m_moduleEnvironment && !vm.heap.isMarked(m_moduleNamespaceData->m_moduleEnvironment))
+            return false;
+    }
     return true;
 }
 
-void GetByIdStatus::dump(PrintStream& out) const
+Box<Identifier> GetByStatus::singleIdentifier() const
+{
+    if (isModuleNamespace()) {
+        Box<Identifier> result = m_moduleNamespaceData->m_identifier;
+        if (!result || result->isNull())
+            return nullptr;
+        return result;
+    }
+
+    if (m_variants.isEmpty())
+        return nullptr;
+
+    Box<Identifier> result = m_variants.first().identifier();
+    if (!result)
+        return nullptr;
+    if (result->isNull())
+        return nullptr;
+    for (size_t i = 1; i < m_variants.size(); ++i) {
+        Box<Identifier> uid = m_variants[i].identifier();
+        if (!uid)
+            return nullptr;
+        if (*uid != *result)
+            return nullptr;
+    }
+    return result;
+}
+
+void GetByStatus::dump(PrintStream& out) const
 {
     out.print("(");
     switch (m_state) {
@@ -43,7 +43,7 @@ class JSModuleNamespaceObject;
 class ModuleNamespaceAccessCase;
 class StructureStubInfo;
 
-class GetByIdStatus {
+class GetByStatus {
     WTF_MAKE_FAST_ALLOCATED;
 public:
     enum State : uint8_t {
@@ -62,18 +62,18 @@ public:
         MakesCalls,
     };
 
-    GetByIdStatus()
+    GetByStatus()
         : m_state(NoInformation)
     {
     }
     
-    explicit GetByIdStatus(State state)
+    explicit GetByStatus(State state)
         : m_state(state)
     {
         ASSERT(state == NoInformation || state == TakesSlowPath || state == MakesCalls);
     }
     
-    explicit GetByIdStatus(StubInfoSummary summary)
+    explicit GetByStatus(StubInfoSummary summary)
         : m_wasSeenInJIT(true)
     {
         switch (summary) {
@@ -94,23 +94,15 @@ public:
         RELEASE_ASSERT_NOT_REACHED();
     }
     
-    GetByIdStatus(
-        State state, bool wasSeenInJIT, const GetByIdVariant& variant = GetByIdVariant())
+    GetByStatus(
+        State state, bool wasSeenInJIT)
         : m_state(state)
         , m_wasSeenInJIT(wasSeenInJIT)
     {
-        ASSERT((state == Simple || state == Custom) == variant.isSet());
-        m_variants.append(variant);
     }
     
-    static GetByIdStatus computeFor(CodeBlock*, ICStatusMap&, BytecodeIndex, UniquedStringImpl* uid, ExitFlag, CallLinkStatus::ExitSiteData);
-    static GetByIdStatus computeFor(const StructureSet&, UniquedStringImpl* uid);
-    
-    static GetByIdStatus computeFor(CodeBlock* baselineBlock, ICStatusMap& baselineMap, ICStatusContextStack& dfgContextStack, CodeOrigin, UniquedStringImpl* uid);
-
-#if ENABLE(DFG_JIT)
-    static GetByIdStatus computeForStubInfo(const ConcurrentJSLocker&, CodeBlock* baselineBlock, StructureStubInfo*, CodeOrigin, UniquedStringImpl* uid);
-#endif
+    static GetByStatus computeFor(CodeBlock* baselineBlock, ICStatusMap& baselineMap, ICStatusContextStack& dfgContextStack, CodeOrigin);
+    static GetByStatus computeFor(const StructureSet&, UniquedStringImpl*);
 
     State state() const { return m_state; }
     
@@ -128,40 +120,46 @@ public:
     bool takesSlowPath() const { return m_state == TakesSlowPath || m_state == MakesCalls || m_state == Custom || m_state == ModuleNamespace; }
     bool makesCalls() const;
     
-    GetByIdStatus slowVersion() const;
+    GetByStatus slowVersion() const;
     
     bool wasSeenInJIT() const { return m_wasSeenInJIT; }
     
-    void merge(const GetByIdStatus&);
+    void merge(const GetByStatus&);
     
     // Attempts to reduce the set of variants to fit the given structure set. This may be approximate.
     void filter(const StructureSet&);
 
-    JSModuleNamespaceObject* moduleNamespaceObject() const { return m_moduleNamespaceObject; }
-    JSModuleEnvironment* moduleEnvironment() const { return m_moduleEnvironment; }
-    ScopeOffset scopeOffset() const { return m_scopeOffset; }
+    JSModuleNamespaceObject* moduleNamespaceObject() const { return m_moduleNamespaceData->m_moduleNamespaceObject; }
+    JSModuleEnvironment* moduleEnvironment() const { return m_moduleNamespaceData->m_moduleEnvironment; }
+    ScopeOffset scopeOffset() const { return m_moduleNamespaceData->m_scopeOffset; }
     
     void markIfCheap(SlotVisitor&);
     bool finalize(VM&); // Return true if this gets to live.
+
+    bool appendVariant(const GetByIdVariant&);
     
     void dump(PrintStream&) const;
+
+    Box<Identifier> singleIdentifier() const;
     
 private:
 #if ENABLE(JIT)
-    GetByIdStatus(const ModuleNamespaceAccessCase&);
-    static GetByIdStatus computeForStubInfoWithoutExitSiteFeedback(
-        const ConcurrentJSLocker&, CodeBlock* profiledBlock, StructureStubInfo*,
-        UniquedStringImpl* uid, CallLinkStatus::ExitSiteData);
+    GetByStatus(const ModuleNamespaceAccessCase&);
+    static GetByStatus computeForStubInfoWithoutExitSiteFeedback(
+        const ConcurrentJSLocker&, CodeBlock* profiledBlock, StructureStubInfo*, CallLinkStatus::ExitSiteData);
 #endif
-    static GetByIdStatus computeFromLLInt(CodeBlock*, BytecodeIndex, UniquedStringImpl* uid);
-    
-    bool appendVariant(const GetByIdVariant&);
-    
-    
+    static GetByStatus computeFromLLInt(CodeBlock*, BytecodeIndex);
+    static GetByStatus computeFor(CodeBlock*, ICStatusMap&, BytecodeIndex, ExitFlag, CallLinkStatus::ExitSiteData);
+
+    struct ModuleNamespaceData {
+        JSModuleNamespaceObject* m_moduleNamespaceObject { nullptr };
+        JSModuleEnvironment* m_moduleEnvironment { nullptr };
+        ScopeOffset m_scopeOffset { };
+        Box<Identifier> m_identifier;
+    };
+
     Vector<GetByIdVariant, 1> m_variants;
-    JSModuleNamespaceObject* m_moduleNamespaceObject { nullptr };
-    JSModuleEnvironment* m_moduleEnvironment { nullptr };
-    ScopeOffset m_scopeOffset { };
+    Box<ModuleNamespaceData> m_moduleNamespaceData;
     State m_state;
     bool m_wasSeenInJIT { false };
 };
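
GetByStatus::singleIdentifier() above returns the one Box<Identifier> shared by the module namespace data or by all variants, and nullptr as soon as any variant has no identifier, a null identifier, or a name different from the first. A hypothetical caller could look like the following sketch; the helper names are stand-ins, not code from this patch:

    // Sketch only: "status" is a GetByStatus snapshot for one get/get_by_val site;
    // the emit* helpers below are hypothetical, not functions from this patch.
    Box<Identifier> uid = status.singleIdentifier();
    if (uid) {
        // Every recorded case agrees on one non-null property name, so later
        // compiler phases can key their checks and speculation on *uid.
        emitAccessKeyedOnIdentifier(base, property, uid);
    } else {
        // Mixed, missing, or null identifiers: keep the fully generic path.
        emitGenericAccess(base, property);
    }
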
index 37d90e4..49e117a 100644 (file)
@@ -42,15 +42,15 @@ namespace GetterSetterAccessCaseInternal {
 static constexpr bool verbose = false;
 }
 
-GetterSetterAccessCase::GetterSetterAccessCase(VM& vm, JSCell* owner, AccessType accessType, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet, bool viaProxy, WatchpointSet* additionalSet, JSObject* customSlotBase, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
-    : Base(vm, owner, accessType, offset, structure, conditionSet, viaProxy, additionalSet, WTFMove(prototypeAccessChain))
+GetterSetterAccessCase::GetterSetterAccessCase(VM& vm, JSCell* owner, AccessType accessType, const Identifier& identifier, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet, bool viaProxy, WatchpointSet* additionalSet, JSObject* customSlotBase, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
+    : Base(vm, owner, accessType, identifier, offset, structure, conditionSet, viaProxy, additionalSet, WTFMove(prototypeAccessChain))
 {
     m_customSlotBase.setMayBeNull(vm, owner, customSlotBase);
 }
 
 
 std::unique_ptr<AccessCase> GetterSetterAccessCase::create(
-    VM& vm, JSCell* owner, AccessType type, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet,
+    VM& vm, JSCell* owner, AccessType type, const Identifier& identifier, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet,
     bool viaProxy, WatchpointSet* additionalSet, FunctionPtr<OperationPtrTag> customGetter, JSObject* customSlotBase,
     Optional<DOMAttributeAnnotation> domAttribute, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
 {
@@ -63,18 +63,18 @@ std::unique_ptr<AccessCase> GetterSetterAccessCase::create(
         ASSERT_NOT_REACHED();
     };
 
-    std::unique_ptr<GetterSetterAccessCase> result(new GetterSetterAccessCase(vm, owner, type, offset, structure, conditionSet, viaProxy, additionalSet, customSlotBase, WTFMove(prototypeAccessChain)));
+    std::unique_ptr<GetterSetterAccessCase> result(new GetterSetterAccessCase(vm, owner, type, identifier, offset, structure, conditionSet, viaProxy, additionalSet, customSlotBase, WTFMove(prototypeAccessChain)));
     result->m_domAttribute = domAttribute;
     result->m_customAccessor = customGetter ? FunctionPtr<OperationPtrTag>(customGetter) : nullptr;
     return result;
 }
 
-std::unique_ptr<AccessCase> GetterSetterAccessCase::create(VM& vm, JSCell* owner, AccessType type, Structure* structure, PropertyOffset offset,
+std::unique_ptr<AccessCase> GetterSetterAccessCase::create(VM& vm, JSCell* owner, AccessType type, Structure* structure, const Identifier& identifier, PropertyOffset offset,
     const ObjectPropertyConditionSet& conditionSet, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain, FunctionPtr<OperationPtrTag> customSetter,
     JSObject* customSlotBase)
 {
     ASSERT(type == Setter || type == CustomValueSetter || type == CustomAccessorSetter);
-    std::unique_ptr<GetterSetterAccessCase> result(new GetterSetterAccessCase(vm, owner, type, offset, structure, conditionSet, false, nullptr, customSlotBase, WTFMove(prototypeAccessChain)));
+    std::unique_ptr<GetterSetterAccessCase> result(new GetterSetterAccessCase(vm, owner, type, identifier, offset, structure, conditionSet, false, nullptr, customSlotBase, WTFMove(prototypeAccessChain)));
     result->m_customAccessor = customSetter ? FunctionPtr<OperationPtrTag>(customSetter) : nullptr;
     return result;
 }
index 273ce3a..98f3cf7 100644 (file)
@@ -49,11 +49,11 @@ public:
     void emitDOMJITGetter(AccessGenerationState&, const DOMJIT::GetterSetter*, GPRReg baseForGetGPR);
 
     static std::unique_ptr<AccessCase> create(
-        VM&, JSCell* owner, AccessType, PropertyOffset, Structure*,
+        VM&, JSCell* owner, AccessType, const Identifier&, PropertyOffset, Structure*,
         const ObjectPropertyConditionSet&, bool viaProxy, WatchpointSet* additionalSet, FunctionPtr<OperationPtrTag> customGetter,
         JSObject* customSlotBase, Optional<DOMAttributeAnnotation>, std::unique_ptr<PolyProtoAccessChain>);
 
-    static std::unique_ptr<AccessCase> create(VM&, JSCell* owner, AccessType, Structure*, PropertyOffset,
+    static std::unique_ptr<AccessCase> create(VM&, JSCell* owner, AccessType, Structure*, const Identifier&, PropertyOffset,
         const ObjectPropertyConditionSet&, std::unique_ptr<PolyProtoAccessChain>,
         FunctionPtr<OperationPtrTag> customSetter = nullptr, JSObject* customSlotBase = nullptr);
 
@@ -65,7 +65,7 @@ public:
     FunctionPtr<OperationPtrTag> customAccessor() const { return m_customAccessor; }
 
 private:
-    GetterSetterAccessCase(VM&, JSCell*, AccessType, PropertyOffset, Structure*, const ObjectPropertyConditionSet&, bool viaProxy, WatchpointSet* additionalSet, JSObject* customSlotBase, std::unique_ptr<PolyProtoAccessChain>);
+    GetterSetterAccessCase(VM&, JSCell*, AccessType, const Identifier&, PropertyOffset, Structure*, const ObjectPropertyConditionSet&, bool viaProxy, WatchpointSet* additionalSet, JSObject* customSlotBase, std::unique_ptr<PolyProtoAccessChain>);
 
     GetterSetterAccessCase(const GetterSetterAccessCase&);
 
index 40b3144..dbd5a88 100644 (file)
@@ -34,7 +34,7 @@ namespace JSC {
 class CallLinkInfo;
 class CallLinkStatus;
 class CodeBlock;
-class GetByIdStatus;
+class GetByStatus;
 class InByIdStatus;
 class PutByIdStatus;
 class StructureStubInfo;
@@ -45,7 +45,7 @@ struct ICStatus {
     CallLinkInfo* callLinkInfo { nullptr };
     ByValInfo* byValInfo { nullptr };
     CallLinkStatus* callStatus { nullptr };
-    GetByIdStatus* getStatus { nullptr };
+    GetByStatus* getStatus { nullptr };
     InByIdStatus* inStatus { nullptr };
     PutByIdStatus* putStatus { nullptr };
 };
index 9237053..5ae0bdc 100644 (file)
@@ -130,7 +130,7 @@ InByIdStatus InByIdStatus::computeForStubInfoWithoutExitSiteFeedback(const Concu
     // Finally figure out if we can derive an access strategy.
     InByIdStatus result;
     result.m_state = Simple;
-    switch (stubInfo->cacheType) {
+    switch (stubInfo->cacheType()) {
     case CacheType::Unset:
         return InByIdStatus(NoInformation);
 
index d7e6c35..d4e07ea 100644 (file)
@@ -178,6 +178,9 @@ ALWAYS_INLINE static bool linkCodeInline(const char* name, CCallHelpers& jit, St
 
 bool InlineAccess::generateSelfPropertyAccess(StructureStubInfo& stubInfo, Structure* structure, PropertyOffset offset)
 {
+    if (!stubInfo.hasConstantIdentifier)
+        return false;
+
     CCallHelpers jit;
     
     GPRReg base = stubInfo.baseGPR();
@@ -226,6 +229,9 @@ ALWAYS_INLINE static bool hasFreeRegister(StructureStubInfo& stubInfo)
 
 bool InlineAccess::canGenerateSelfPropertyReplace(StructureStubInfo& stubInfo, PropertyOffset offset)
 {
+    if (!stubInfo.hasConstantIdentifier)
+        return false;
+
     if (isInlineOffset(offset))
         return true;
 
@@ -234,6 +240,9 @@ bool InlineAccess::canGenerateSelfPropertyReplace(StructureStubInfo& stubInfo, P
 
 bool InlineAccess::generateSelfPropertyReplace(StructureStubInfo& stubInfo, Structure* structure, PropertyOffset offset)
 {
+    if (!stubInfo.hasConstantIdentifier)
+        return false;
+
     ASSERT(canGenerateSelfPropertyReplace(stubInfo, offset));
 
     CCallHelpers jit;
@@ -268,6 +277,9 @@ bool InlineAccess::isCacheableArrayLength(StructureStubInfo& stubInfo, JSArray*
 {
     ASSERT(array->indexingType() & IsArray);
 
+    if (!stubInfo.hasConstantIdentifier)
+        return false;
+
     if (!hasFreeRegister(stubInfo))
         return false;
 
@@ -278,6 +290,9 @@ bool InlineAccess::generateArrayLength(StructureStubInfo& stubInfo, JSArray* arr
 {
     ASSERT(isCacheableArrayLength(stubInfo, array));
 
+    if (!stubInfo.hasConstantIdentifier)
+        return false;
+
     CCallHelpers jit;
 
     GPRReg base = stubInfo.baseGPR();
@@ -300,6 +315,9 @@ bool InlineAccess::generateArrayLength(StructureStubInfo& stubInfo, JSArray* arr
 
 bool InlineAccess::isCacheableStringLength(StructureStubInfo& stubInfo)
 {
+    if (!stubInfo.hasConstantIdentifier)
+        return false;
+
     return hasFreeRegister(stubInfo);
 }
 
@@ -307,6 +325,9 @@ bool InlineAccess::generateStringLength(StructureStubInfo& stubInfo)
 {
     ASSERT(isCacheableStringLength(stubInfo));
 
+    if (!stubInfo.hasConstantIdentifier)
+        return false;
+
     CCallHelpers jit;
 
     GPRReg base = stubInfo.baseGPR();
@@ -340,6 +361,9 @@ bool InlineAccess::generateSelfInAccess(StructureStubInfo& stubInfo, Structure*
 {
     CCallHelpers jit;
 
+    if (!stubInfo.hasConstantIdentifier)
+        return false;
+
     GPRReg base = stubInfo.baseGPR();
     JSValueRegs value = stubInfo.valueRegs();
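
Each InlineAccess generator above now begins with the same bail-out on !stubInfo.hasConstantIdentifier, since these patchable self-access fast paths assume a fixed property name. A minimal standalone model of that guard, using a stand-in type rather than the real StructureStubInfo:

    // Stand-in for illustration; the real flag is StructureStubInfo::hasConstantIdentifier,
    // which is false when the property is only known at run time (e.g. a by-val stub).
    struct StubInfoModel {
        bool hasConstantIdentifier { true };
    };

    // All of the inline fast paths patched above (self load, self replace, array length,
    // string length, self "in") decline when no constant identifier is available.
    static bool canUseInlineFastPath(const StubInfoModel& stubInfo)
    {
        return stubInfo.hasConstantIdentifier;
    }
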
 
index 7e9d9f7..0979127 100644 (file)
@@ -59,7 +59,7 @@ InstanceOfAccessCase::~InstanceOfAccessCase()
 InstanceOfAccessCase::InstanceOfAccessCase(
     VM& vm, JSCell* owner, AccessType accessType, Structure* structure,
     const ObjectPropertyConditionSet& conditionSet, JSObject* prototype)
-    : Base(vm, owner, accessType, invalidOffset, structure, conditionSet, nullptr)
+    : Base(vm, owner, accessType, Identifier(), invalidOffset, structure, conditionSet, nullptr)
     , m_prototype(vm, owner, prototype)
 {
 }
index ed00e69..ab694e5 100644 (file)
@@ -75,7 +75,7 @@ InstanceOfStatus InstanceOfStatus::computeForStubInfo(const ConcurrentJSLocker&,
     if (!isInlineable(summary))
         return InstanceOfStatus(summary);
     
-    if (stubInfo->cacheType != CacheType::Stub)
+    if (stubInfo->cacheType() != CacheType::Stub)
         return TakesSlowPath; // This is conservative. It could be that we have no information.
     
     PolymorphicAccess* list = stubInfo->u.stub;
index 152941f..d689323 100644 (file)
 
 namespace JSC {
 
-IntrinsicGetterAccessCase::IntrinsicGetterAccessCase(VM& vm, JSCell* owner, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet, JSFunction* intrinsicFunction, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
-    : Base(vm, owner, IntrinsicGetter, offset, structure, conditionSet, WTFMove(prototypeAccessChain))
+IntrinsicGetterAccessCase::IntrinsicGetterAccessCase(VM& vm, JSCell* owner, const Identifier& identifier, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet, JSFunction* intrinsicFunction, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
+    : Base(vm, owner, IntrinsicGetter, identifier, offset, structure, conditionSet, WTFMove(prototypeAccessChain))
 {
     m_intrinsicFunction.set(vm, owner, intrinsicFunction);
 }
 
-std::unique_ptr<AccessCase> IntrinsicGetterAccessCase::create(VM& vm, JSCell* owner, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet, JSFunction* intrinsicFunction, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
+std::unique_ptr<AccessCase> IntrinsicGetterAccessCase::create(VM& vm, JSCell* owner, const Identifier& identifier, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet, JSFunction* intrinsicFunction, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
 {
-    return std::unique_ptr<AccessCase>(new IntrinsicGetterAccessCase(vm, owner, offset, structure, conditionSet, intrinsicFunction, WTFMove(prototypeAccessChain)));
+    return std::unique_ptr<AccessCase>(new IntrinsicGetterAccessCase(vm, owner, identifier, offset, structure, conditionSet, intrinsicFunction, WTFMove(prototypeAccessChain)));
 }
 
 IntrinsicGetterAccessCase::~IntrinsicGetterAccessCase()
index b3980e4..377d311 100644 (file)
@@ -42,14 +42,14 @@ public:
     static bool canEmitIntrinsicGetter(JSFunction*, Structure*);
     void emitIntrinsicGetter(AccessGenerationState&);
 
-    static std::unique_ptr<AccessCase> create(VM&, JSCell*, PropertyOffset, Structure*, const ObjectPropertyConditionSet&, JSFunction* intrinsicFunction, std::unique_ptr<PolyProtoAccessChain>);
+    static std::unique_ptr<AccessCase> create(VM&, JSCell*, const Identifier&, PropertyOffset, Structure*, const ObjectPropertyConditionSet&, JSFunction* intrinsicFunction, std::unique_ptr<PolyProtoAccessChain>);
 
     std::unique_ptr<AccessCase> clone() const override;
 
     ~IntrinsicGetterAccessCase();
 
 private:
-    IntrinsicGetterAccessCase(VM&, JSCell*, PropertyOffset, Structure*, const ObjectPropertyConditionSet&, JSFunction* intrinsicFunction, std::unique_ptr<PolyProtoAccessChain>);
+    IntrinsicGetterAccessCase(VM&, JSCell*, const Identifier&, PropertyOffset, Structure*, const ObjectPropertyConditionSet&, JSFunction* intrinsicFunction, std::unique_ptr<PolyProtoAccessChain>);
 
     WriteBarrier<JSFunction> m_intrinsicFunction;
 };
index b647fa5..f273fcb 100644 (file)
 
 namespace JSC {
 
-ModuleNamespaceAccessCase::ModuleNamespaceAccessCase(VM& vm, JSCell* owner, JSModuleNamespaceObject* moduleNamespaceObject, JSModuleEnvironment* moduleEnvironment, ScopeOffset scopeOffset)
-    : Base(vm, owner, ModuleNamespaceLoad, invalidOffset, nullptr, ObjectPropertyConditionSet(), nullptr)
+ModuleNamespaceAccessCase::ModuleNamespaceAccessCase(VM& vm, JSCell* owner, const Identifier& identifier, JSModuleNamespaceObject* moduleNamespaceObject, JSModuleEnvironment* moduleEnvironment, ScopeOffset scopeOffset)
+    : Base(vm, owner, ModuleNamespaceLoad, identifier, invalidOffset, nullptr, ObjectPropertyConditionSet(), nullptr)
     , m_scopeOffset(scopeOffset)
 {
     m_moduleNamespaceObject.set(vm, owner, moduleNamespaceObject);
     m_moduleEnvironment.set(vm, owner, moduleEnvironment);
 }
 
-std::unique_ptr<AccessCase> ModuleNamespaceAccessCase::create(VM& vm, JSCell* owner, JSModuleNamespaceObject* moduleNamespaceObject, JSModuleEnvironment* moduleEnvironment, ScopeOffset scopeOffset)
+std::unique_ptr<AccessCase> ModuleNamespaceAccessCase::create(VM& vm, JSCell* owner, const Identifier& identifier, JSModuleNamespaceObject* moduleNamespaceObject, JSModuleEnvironment* moduleEnvironment, ScopeOffset scopeOffset)
 {
-    return std::unique_ptr<AccessCase>(new ModuleNamespaceAccessCase(vm, owner, moduleNamespaceObject, moduleEnvironment, scopeOffset));
+    return std::unique_ptr<AccessCase>(new ModuleNamespaceAccessCase(vm, owner, identifier, moduleNamespaceObject, moduleEnvironment, scopeOffset));
 }
 
 ModuleNamespaceAccessCase::~ModuleNamespaceAccessCase()
index 333075f..79566d9 100644 (file)
@@ -43,7 +43,7 @@ public:
     JSModuleEnvironment* moduleEnvironment() const { return m_moduleEnvironment.get(); }
     ScopeOffset scopeOffset() const { return m_scopeOffset; }
 
-    static std::unique_ptr<AccessCase> create(VM&, JSCell* owner, JSModuleNamespaceObject*, JSModuleEnvironment*, ScopeOffset);
+    static std::unique_ptr<AccessCase> create(VM&, JSCell* owner, const Identifier&, JSModuleNamespaceObject*, JSModuleEnvironment*, ScopeOffset);
 
     std::unique_ptr<AccessCase> clone() const override;
 
@@ -52,7 +52,7 @@ public:
     ~ModuleNamespaceAccessCase();
 
 private:
-    ModuleNamespaceAccessCase(VM&, JSCell* owner, JSModuleNamespaceObject*, JSModuleEnvironment*, ScopeOffset);
+    ModuleNamespaceAccessCase(VM&, JSCell* owner, const Identifier&, JSModuleNamespaceObject*, JSModuleEnvironment*, ScopeOffset);
 
     WriteBarrier<JSModuleNamespaceObject> m_moduleNamespaceObject;
     WriteBarrier<JSModuleEnvironment> m_moduleEnvironment;
index ffbb17f..2fb671e 100644 (file)
@@ -113,7 +113,7 @@ auto AccessGenerationState::preserveLiveRegistersToStackForCall(const RegisterSe
 {
     RegisterSet liveRegisters = liveRegistersForCall();
     liveRegisters.merge(extra);
-    
+
     unsigned extraStackPadding = 0;
     unsigned numberOfStackBytesUsedForRegisterPreservation = ScratchRegisterAllocator::preserveRegistersToStackForCall(*jit, liveRegisters, extraStackPadding);
     return SpillState {
@@ -227,7 +227,7 @@ PolymorphicAccess::~PolymorphicAccess() { }
 
 AccessGenerationResult PolymorphicAccess::addCases(
     const GCSafeConcurrentJSLocker& locker, VM& vm, CodeBlock* codeBlock, StructureStubInfo& stubInfo,
-    const Identifier& ident, Vector<std::unique_ptr<AccessCase>, 2> originalCasesToAdd)
+    Vector<std::unique_ptr<AccessCase>, 2> originalCasesToAdd)
 {
     SuperSamplerScope superSamplerScope(false);
     
@@ -310,7 +310,7 @@ AccessGenerationResult PolymorphicAccess::addCases(
     // Now add things to the new list. Note that at this point, we will still have old cases that
     // may be replaced by the new ones. That's fine. We will sort that out when we regenerate.
     for (auto& caseToAdd : casesToAdd) {
-        commit(locker, vm, m_watchpoints, codeBlock, stubInfo, ident, *caseToAdd);
+        commit(locker, vm, m_watchpoints, codeBlock, stubInfo, *caseToAdd);
         m_list.append(WTFMove(caseToAdd));
     }
     
@@ -321,12 +321,11 @@ AccessGenerationResult PolymorphicAccess::addCases(
 }
 
 AccessGenerationResult PolymorphicAccess::addCase(
-    const GCSafeConcurrentJSLocker& locker, VM& vm, CodeBlock* codeBlock, StructureStubInfo& stubInfo,
-    const Identifier& ident, std::unique_ptr<AccessCase> newAccess)
+    const GCSafeConcurrentJSLocker& locker, VM& vm, CodeBlock* codeBlock, StructureStubInfo& stubInfo, std::unique_ptr<AccessCase> newAccess)
 {
     Vector<std::unique_ptr<AccessCase>, 2> newAccesses;
     newAccesses.append(WTFMove(newAccess));
-    return addCases(locker, vm, codeBlock, stubInfo, ident, WTFMove(newAccesses));
+    return addCases(locker, vm, codeBlock, stubInfo, WTFMove(newAccesses));
 }
 
 bool PolymorphicAccess::visitWeak(VM& vm) const
@@ -363,7 +362,7 @@ void PolymorphicAccess::dump(PrintStream& out) const
 
 void PolymorphicAccess::commit(
     const GCSafeConcurrentJSLocker&, VM& vm, std::unique_ptr<WatchpointsOnStructureStubInfo>& watchpoints, CodeBlock* codeBlock,
-    StructureStubInfo& stubInfo, const Identifier& ident, AccessCase& accessCase)
+    StructureStubInfo& stubInfo, AccessCase& accessCase)
 {
     // NOTE: We currently assume that this is relatively rare. It mainly arises for accesses to
     // properties on DOM nodes. For sure we cache many DOM node accesses, but even in
@@ -371,7 +370,7 @@ void PolymorphicAccess::commit(
     // vanilla objects or exotic objects from within JSC (like Arguments, those are super popular).
     // Those common kinds of JSC object accesses don't hit this case.
     
-    for (WatchpointSet* set : accessCase.commit(vm, ident)) {
+    for (WatchpointSet* set : accessCase.commit(vm)) {
         Watchpoint* watchpoint =
             WatchpointsOnStructureStubInfo::ensureReferenceAndAddWatchpoint(
                 watchpoints, codeBlock, &stubInfo);
@@ -381,41 +380,22 @@ void PolymorphicAccess::commit(
 }
 
 AccessGenerationResult PolymorphicAccess::regenerate(
-    const GCSafeConcurrentJSLocker& locker, VM& vm, CodeBlock* codeBlock, StructureStubInfo& stubInfo, const Identifier& ident)
+    const GCSafeConcurrentJSLocker& locker, VM& vm, CodeBlock* codeBlock, StructureStubInfo& stubInfo)
 {
     SuperSamplerScope superSamplerScope(false);
     
     if (PolymorphicAccessInternal::verbose)
         dataLog("Regenerate with m_list: ", listDump(m_list), "\n");
-    
+
     AccessGenerationState state(vm, codeBlock->globalObject());
 
     state.access = this;
     state.stubInfo = &stubInfo;
-    state.ident = &ident;
     
     state.baseGPR = stubInfo.baseGPR();
     state.u.thisGPR = stubInfo.patch.u.thisGPR;
     state.valueRegs = stubInfo.valueRegs();
 
-    ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
-    state.allocator = &allocator;
-    allocator.lock(state.baseGPR);
-    if (state.u.thisGPR != InvalidGPRReg)
-        allocator.lock(state.u.thisGPR);
-    allocator.lock(state.valueRegs);
-#if USE(JSVALUE32_64)
-    allocator.lock(stubInfo.patch.baseTagGPR);
-#endif
-
-    state.scratchGPR = allocator.allocateScratchGPR();
-    
-    CCallHelpers jit(codeBlock);
-    state.jit = &jit;
-
-    state.preservedReusedRegisterState =
-        allocator.preserveReusedRegistersByPushing(jit, ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
-
     // Regenerating is our opportunity to figure out what our list of cases should look like. We
     // do this here. The newly produced 'cases' list may be smaller than m_list. We don't edit
     // m_list in-place because we may still fail, in which case we want the PolymorphicAccess object
@@ -463,6 +443,32 @@ AccessGenerationResult PolymorphicAccess::regenerate(
             m_list[dstIndex++] = WTFMove(someCase);
     }
     m_list.resize(dstIndex);
+
+    ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
+    state.allocator = &allocator;
+    allocator.lock(state.baseGPR);
+    if (state.u.thisGPR != InvalidGPRReg)
+        allocator.lock(state.u.thisGPR);
+    allocator.lock(state.valueRegs);
+#if USE(JSVALUE32_64)
+    allocator.lock(stubInfo.patch.baseTagGPR);
+#endif
+
+    state.scratchGPR = allocator.allocateScratchGPR();
+
+    for (auto& accessCase : cases) {
+        if (accessCase->needsScratchFPR()) {
+            state.scratchFPR = allocator.allocateScratchFPR();
+            break;
+        }
+    }
+    
+    CCallHelpers jit(codeBlock);
+    state.jit = &jit;
+
+    state.preservedReusedRegisterState =
+        allocator.preserveReusedRegistersByPushing(jit, ScratchRegisterAllocator::ExtraStackSpace::NoExtraSpace);
+
     
     bool generatedFinalCode = false;
 
@@ -472,7 +478,7 @@ AccessGenerationResult PolymorphicAccess::regenerate(
         && stubInfo.accessType == AccessType::InstanceOf) {
         while (!cases.isEmpty())
             m_list.append(cases.takeLast());
-        cases.append(AccessCase::create(vm, codeBlock, AccessCase::InstanceOfGeneric));
+        cases.append(AccessCase::create(vm, codeBlock, AccessCase::InstanceOfGeneric, Identifier()));
         generatedFinalCode = true;
     }
 
@@ -484,9 +490,21 @@ AccessGenerationResult PolymorphicAccess::regenerate(
     
     bool allGuardedByStructureCheck = true;
     bool hasJSGetterSetterCall = false;
+    bool needsInt32PropertyCheck = false;
+    bool needsStringPropertyCheck = false;
+    bool needsSymbolPropertyCheck = false;
     for (auto& newCase : cases) {
-        commit(locker, vm, state.watchpoints, codeBlock, stubInfo, ident, *newCase);
-        allGuardedByStructureCheck &= newCase->guardedByStructureCheck();
+        if (!stubInfo.hasConstantIdentifier) {
+            if (newCase->requiresIdentifierNameMatch()) {
+                if (newCase->uid()->isSymbol())
+                    needsSymbolPropertyCheck = true;
+                else
+                    needsStringPropertyCheck = true;
+            } else if (newCase->requiresInt32PropertyCheck())
+                needsInt32PropertyCheck = true;
+        }
+        commit(locker, vm, state.watchpoints, codeBlock, stubInfo, *newCase);
+        allGuardedByStructureCheck &= newCase->guardedByStructureCheck(stubInfo);
         if (newCase->type() == AccessCase::Getter || newCase->type() == AccessCase::Setter)
             hasJSGetterSetterCall = true;
     }
@@ -499,14 +517,84 @@ AccessGenerationResult PolymorphicAccess::regenerate(
         // We need to resort to a cascade. A cascade also happens to be optimal if we only have just
         // one case.
         CCallHelpers::JumpList fallThrough;
+        if (needsInt32PropertyCheck || needsStringPropertyCheck || needsSymbolPropertyCheck) {
+            if (needsInt32PropertyCheck) {
+                CCallHelpers::Jump notInt32;
+
+                if (!stubInfo.propertyIsInt32)
+                    notInt32 = jit.branchIfNotInt32(state.u.propertyGPR);
+                for (unsigned i = cases.size(); i--;) {
+                    fallThrough.link(&jit);
+                    fallThrough.clear();
+                    if (cases[i]->requiresInt32PropertyCheck())
+                        cases[i]->generateWithGuard(state, fallThrough);
+                }
+
+                if (needsStringPropertyCheck || needsSymbolPropertyCheck) {
+                    if (notInt32.isSet())
+                        notInt32.link(&jit);
+                    fallThrough.link(&jit);
+                    fallThrough.clear();
+                } else {
+                    if (notInt32.isSet())
+                        state.failAndRepatch.append(notInt32);
+                }
+            }
+
+            if (needsStringPropertyCheck) {
+                GPRReg propertyGPR = state.u.propertyGPR;
+                CCallHelpers::JumpList notString;
+                if (!stubInfo.propertyIsString) {
+                    notString.append(jit.branchIfNotCell(propertyGPR));
+                    notString.append(jit.branchIfNotString(propertyGPR));
+                }
+                jit.loadPtr(MacroAssembler::Address(propertyGPR, JSString::offsetOfValue()), state.scratchGPR);
+
+                state.failAndRepatch.append(jit.branchIfRopeStringImpl(state.scratchGPR));
+
+                for (unsigned i = cases.size(); i--;) {
+                    fallThrough.link(&jit);
+                    fallThrough.clear();
+                    if (cases[i]->requiresIdentifierNameMatch() && !cases[i]->uid()->isSymbol())
+                        cases[i]->generateWithGuard(state, fallThrough);
+                }
+
+                if (needsSymbolPropertyCheck) {
+                    notString.link(&jit);
+                    fallThrough.link(&jit);
+                    fallThrough.clear();
+                } else
+                    state.failAndRepatch.append(notString);
+            }
+
+            if (needsSymbolPropertyCheck) {
+                CCallHelpers::JumpList notSymbol;
+                if (!stubInfo.propertyIsSymbol) {
+                    GPRReg propertyGPR = state.u.propertyGPR;
+                    notSymbol.append(jit.branchIfNotCell(propertyGPR));
+                    notSymbol.append(jit.branchIfNotSymbol(propertyGPR));
+                }
+
+                for (unsigned i = cases.size(); i--;) {
+                    fallThrough.link(&jit);
+                    fallThrough.clear();
+                    if (cases[i]->requiresIdentifierNameMatch() && cases[i]->uid()->isSymbol())
+                        cases[i]->generateWithGuard(state, fallThrough);
+                }
 
-        // Cascade through the list, preferring newer entries.
-        for (unsigned i = cases.size(); i--;) {
-            fallThrough.link(&jit);
-            fallThrough.clear();
-            cases[i]->generateWithGuard(state, fallThrough);
+                state.failAndRepatch.append(notSymbol);
+            }
+        } else {
+            // Cascade through the list, preferring newer entries.
+            for (unsigned i = cases.size(); i--;) {
+                fallThrough.link(&jit);
+                fallThrough.clear();
+                cases[i]->generateWithGuard(state, fallThrough);
+            }
         }
+
         state.failAndRepatch.append(fallThrough);
+
     } else {
         jit.load32(
             CCallHelpers::Address(state.baseGPR, JSCell::structureIDOffset()),
@@ -739,6 +827,54 @@ void printInternal(PrintStream& out, AccessCase::AccessType type)
     case AccessCase::InstanceOfGeneric:
         out.print("InstanceOfGeneric");
         return;
+    case AccessCase::IndexedInt32Load:
+        out.print("IndexedInt32Load");
+        return;
+    case AccessCase::IndexedDoubleLoad:
+        out.print("IndexedDoubleLoad");
+        return;
+    case AccessCase::IndexedContiguousLoad:
+        out.print("IndexedContiguousLoad");
+        return;
+    case AccessCase::IndexedArrayStorageLoad:
+        out.print("IndexedArrayStorageLoad");
+        return;
+    case AccessCase::IndexedScopedArgumentsLoad:
+        out.print("IndexedScopedArgumentsLoad");
+        return;
+    case AccessCase::IndexedDirectArgumentsLoad:
+        out.print("IndexedDirectArgumentsLoad");
+        return;
+    case AccessCase::IndexedTypedArrayInt8Load:
+        out.print("IndexedTypedArrayInt8Load");
+        return;
+    case AccessCase::IndexedTypedArrayUint8Load:
+        out.print("IndexedTypedArrayUint8Load");
+        return;
+    case AccessCase::IndexedTypedArrayUint8ClampedLoad:
+        out.print("IndexedTypedArrayUint8ClampedLoad");
+        return;
+    case AccessCase::IndexedTypedArrayInt16Load:
+        out.print("IndexedTypedArrayInt16Load");
+        return;
+    case AccessCase::IndexedTypedArrayUint16Load:
+        out.print("IndexedTypedArrayUint16Load");
+        return;
+    case AccessCase::IndexedTypedArrayInt32Load:
+        out.print("IndexedTypedArrayInt32Load");
+        return;
+    case AccessCase::IndexedTypedArrayUint32Load:
+        out.print("IndexedTypedArrayUint32Load");
+        return;
+    case AccessCase::IndexedTypedArrayFloat32Load:
+        out.print("IndexedTypedArrayFloat32Load");
+        return;
+    case AccessCase::IndexedTypedArrayFloat64Load:
+        out.print("IndexedTypedArrayFloat64Load");
+        return;
+    case AccessCase::IndexedStringLoad:
+        out.print("IndexedStringLoad");
+        return;
     }
 
     RELEASE_ASSERT_NOT_REACHED();
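
When the stub has no constant identifier, the regenerated code above dispatches in three tiers: int32 (indexed) cases, then string-named cases after a rope check, then symbol-named cases, with each tier falling through to the next and finally to fail-and-repatch; within a tier, newer cases are preferred. The following standalone sketch models only that case-selection order, with stand-in types instead of the real AccessCase:

    #include <cstddef>
    #include <vector>

    // Stand-ins for illustration only; the real class is JSC::AccessCase.
    struct CaseModel {
        bool requiresInt32PropertyCheck { false };  // indexed (array-like) load case
        bool requiresIdentifierNameMatch { false }; // named load case
        bool identifierIsSymbol { false };          // named case keyed by a Symbol
    };
    enum class PropertyKind { Int32, String, Symbol };

    // Returns the index of the first (newest) case whose guard the cascade would try
    // for this property kind, or -1 when no case applies (fail-and-repatch in the
    // real stub). Newer cases win because the list is walked back to front.
    inline std::ptrdiff_t selectCase(PropertyKind kind, const std::vector<CaseModel>& cases)
    {
        for (std::size_t i = cases.size(); i--;) {
            const CaseModel& c = cases[i];
            if (kind == PropertyKind::Int32) {
                if (c.requiresInt32PropertyCheck)
                    return static_cast<std::ptrdiff_t>(i);
                continue;
            }
            if (c.requiresIdentifierNameMatch && c.identifierIsSymbol == (kind == PropertyKind::Symbol))
                return static_cast<std::ptrdiff_t>(i);
        }
        return -1;
    }
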
index 1ebe3c1..5de3b47 100644 (file)
@@ -137,12 +137,12 @@ public:
     // When this fails (returns GaveUp), this will leave the old stub intact but you should not try
     // to call this method again for that PolymorphicAccess instance.
     AccessGenerationResult addCases(
-        const GCSafeConcurrentJSLocker&, VM&, CodeBlock*, StructureStubInfo&, const Identifier&, Vector<std::unique_ptr<AccessCase>, 2>);
+        const GCSafeConcurrentJSLocker&, VM&, CodeBlock*, StructureStubInfo&, Vector<std::unique_ptr<AccessCase>, 2>);
 
     AccessGenerationResult addCase(
-        const GCSafeConcurrentJSLocker&, VM&, CodeBlock*, StructureStubInfo&, const Identifier&, std::unique_ptr<AccessCase>);
+        const GCSafeConcurrentJSLocker&, VM&, CodeBlock*, StructureStubInfo&, std::unique_ptr<AccessCase>);
     
-    AccessGenerationResult regenerate(const GCSafeConcurrentJSLocker&, VM&, CodeBlock*, StructureStubInfo&, const Identifier&);
+    AccessGenerationResult regenerate(const GCSafeConcurrentJSLocker&, VM&, CodeBlock*, StructureStubInfo&);
     
     bool isEmpty() const { return m_list.isEmpty(); }
     unsigned size() const { return m_list.size(); }
@@ -177,7 +177,7 @@ private:
     
     void commit(
         const GCSafeConcurrentJSLocker&, VM&, std::unique_ptr<WatchpointsOnStructureStubInfo>&, CodeBlock*, StructureStubInfo&,
-        const Identifier&, AccessCase&);
+        AccessCase&);
 
     ListType m_list;
     RefPtr<JITStubRoutine> m_stubRoutine;
@@ -209,10 +209,11 @@ struct AccessGenerationState {
     union {
         GPRReg thisGPR;
         GPRReg prototypeGPR;
+        GPRReg propertyGPR;
     } u;
     JSValueRegs valueRegs;
     GPRReg scratchGPR { InvalidGPRReg };
-    const Identifier* ident;
+    FPRReg scratchFPR { InvalidFPRReg };
     std::unique_ptr<WatchpointsOnStructureStubInfo> watchpoints;
     Vector<WriteBarrier<JSCell>> weakReferences;
 
index 8a3ef81..136a9ff 100644 (file)
 
 namespace JSC {
 
-ProxyableAccessCase::ProxyableAccessCase(VM& vm, JSCell* owner, AccessType accessType, PropertyOffset offset, Structure* structure,
+ProxyableAccessCase::ProxyableAccessCase(VM& vm, JSCell* owner, AccessType accessType, const Identifier& identifier, PropertyOffset offset, Structure* structure,
     const ObjectPropertyConditionSet& conditionSet, bool viaProxy, WatchpointSet* additionalSet, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
-    : Base(vm, owner, accessType, offset, structure, conditionSet, WTFMove(prototypeAccessChain))
+    : Base(vm, owner, accessType, identifier, offset, structure, conditionSet, WTFMove(prototypeAccessChain))
     , m_additionalSet(additionalSet)
 {
     m_viaProxy = viaProxy;
 }
 
-std::unique_ptr<AccessCase> ProxyableAccessCase::create(VM& vm, JSCell* owner, AccessType type, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet, bool viaProxy, WatchpointSet* additionalSet, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
+std::unique_ptr<AccessCase> ProxyableAccessCase::create(VM& vm, JSCell* owner, AccessType type, const Identifier& identifier, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet, bool viaProxy, WatchpointSet* additionalSet, std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain)
 {
     ASSERT(type == Load || type == Miss || type == GetGetter);
-    return std::unique_ptr<AccessCase>(new ProxyableAccessCase(vm, owner, type, offset, structure, conditionSet, viaProxy, additionalSet, WTFMove(prototypeAccessChain)));
+    return std::unique_ptr<AccessCase>(new ProxyableAccessCase(vm, owner, type, identifier, offset, structure, conditionSet, viaProxy, additionalSet, WTFMove(prototypeAccessChain)));
 }
 
 ProxyableAccessCase::~ProxyableAccessCase()
index a8e8cb2..ec5184e 100644 (file)
@@ -37,7 +37,7 @@ public:
 
     WatchpointSet* additionalSet() const override { return m_additionalSet.get(); }
 
-    static std::unique_ptr<AccessCase> create(VM&, JSCell*, AccessType, PropertyOffset, Structure*, const ObjectPropertyConditionSet& = ObjectPropertyConditionSet(),
+    static std::unique_ptr<AccessCase> create(VM&, JSCell*, AccessType, const Identifier&, PropertyOffset, Structure*, const ObjectPropertyConditionSet& = ObjectPropertyConditionSet(),
         bool viaProxy = false, WatchpointSet* additionalSet = nullptr, std::unique_ptr<PolyProtoAccessChain> = nullptr);
 
     void dumpImpl(PrintStream&, CommaPrinter&) const override;
@@ -46,7 +46,7 @@ public:
     ~ProxyableAccessCase();
 
 protected:
-    ProxyableAccessCase(VM&, JSCell*, AccessType, PropertyOffset, Structure*, const ObjectPropertyConditionSet&, bool viaProxy, WatchpointSet* additionalSet, std::unique_ptr<PolyProtoAccessChain>);
+    ProxyableAccessCase(VM&, JSCell*, AccessType, const Identifier&, PropertyOffset, Structure*, const ObjectPropertyConditionSet&, bool viaProxy, WatchpointSet* additionalSet, std::unique_ptr<PolyProtoAccessChain>);
 
 private:
     RefPtr<WatchpointSet> m_additionalSet;
index 3250be5..f495bef 100644 (file)
@@ -133,7 +133,7 @@ PutByIdStatus PutByIdStatus::computeForStubInfo(
     if (!isInlineable(summary))
         return PutByIdStatus(summary);
     
-    switch (stubInfo->cacheType) {
+    switch (stubInfo->cacheType()) {
     case CacheType::Unset:
         // This means that we attempted to cache but failed for some reason.
         return PutByIdStatus(JSC::slowVersion(summary));
index 53f208b..653eca1 100644 (file)
@@ -51,10 +51,10 @@ CallLinkStatus* RecordedStatuses::addCallLinkStatus(const CodeOrigin& codeOrigin
     return result;
 }
 
-GetByIdStatus* RecordedStatuses::addGetByIdStatus(const CodeOrigin& codeOrigin, const GetByIdStatus& status)
+GetByStatus* RecordedStatuses::addGetByStatus(const CodeOrigin& codeOrigin, const GetByStatus& status)
 {
-    auto statusPtr = makeUnique<GetByIdStatus>(status);
-    GetByIdStatus* result = statusPtr.get();
+    auto statusPtr = makeUnique<GetByStatus>(status);
+    GetByStatus* result = statusPtr.get();
     gets.append(std::make_pair(codeOrigin, WTFMove(statusPtr)));
     return result;
 }
index a03b4f6..7d33c8b 100644 (file)
@@ -26,7 +26,7 @@
 #pragma once
 
 #include "CallLinkStatus.h"
-#include "GetByIdStatus.h"
+#include "GetByStatus.h"
 #include "InByIdStatus.h"
 #include "PutByIdStatus.h"
 
@@ -44,7 +44,7 @@ struct RecordedStatuses {
     RecordedStatuses(RecordedStatuses&& other);
     
     CallLinkStatus* addCallLinkStatus(const CodeOrigin&, const CallLinkStatus&);
-    GetByIdStatus* addGetByIdStatus(const CodeOrigin&, const GetByIdStatus&);
+    GetByStatus* addGetByStatus(const CodeOrigin&, const GetByStatus&);
     PutByIdStatus* addPutByIdStatus(const CodeOrigin&, const PutByIdStatus&);
     InByIdStatus* addInByIdStatus(const CodeOrigin&, const InByIdStatus&);
     
@@ -65,7 +65,7 @@ struct RecordedStatuses {
     }
     
     Vector<std::pair<CodeOrigin, std::unique_ptr<CallLinkStatus>>> calls;
-    Vector<std::pair<CodeOrigin, std::unique_ptr<GetByIdStatus>>> gets;
+    Vector<std::pair<CodeOrigin, std::unique_ptr<GetByStatus>>> gets;
     Vector<std::pair<CodeOrigin, std::unique_ptr<PutByIdStatus>>> puts;
     Vector<std::pair<CodeOrigin, std::unique_ptr<InByIdStatus>>> ins;
 };
index d7b3b64..f1392bb 100644 (file)
@@ -41,7 +41,7 @@ static constexpr bool verbose = false;
 
 StructureStubInfo::StructureStubInfo(AccessType accessType)
     : accessType(accessType)
-    , cacheType(CacheType::Unset)
+    , m_cacheType(CacheType::Unset)
     , countdown(1) // For a totally clear stub, we'll patch it after the first execution.
     , repatchCount(0)
     , numberOfCoolDowns(0)
@@ -51,6 +51,10 @@ StructureStubInfo::StructureStubInfo(AccessType accessType)
     , everConsidered(false)
     , prototypeIsKnownObject(false)
     , sawNonCell(false)
+    , hasConstantIdentifier(true)
+    , propertyIsString(false)
+    , propertyIsInt32(false)
+    , propertyIsSymbol(false)
 {
 }
 
@@ -58,9 +62,11 @@ StructureStubInfo::~StructureStubInfo()
 {
 }
 
-void StructureStubInfo::initGetByIdSelf(CodeBlock* codeBlock, Structure* baseObjectStructure, PropertyOffset offset)
+void StructureStubInfo::initGetByIdSelf(CodeBlock* codeBlock, Structure* baseObjectStructure, PropertyOffset offset, const Identifier& identifier)
 {
-    cacheType = CacheType::GetByIdSelf;
+    ASSERT(hasConstantIdentifier);
+    setCacheType(CacheType::GetByIdSelf);
+    m_getByIdSelfIdentifier = Box<Identifier>::create(identifier);
     
     u.byIdSelf.baseObjectStructure.set(
         codeBlock->vm(), codeBlock, baseObjectStructure);
@@ -69,17 +75,17 @@ void StructureStubInfo::initGetByIdSelf(CodeBlock* codeBlock, Structure* baseObj
 
 void StructureStubInfo::initArrayLength()
 {
-    cacheType = CacheType::ArrayLength;
+    setCacheType(CacheType::ArrayLength);
 }
 
 void StructureStubInfo::initStringLength()
 {
-    cacheType = CacheType::StringLength;
+    setCacheType(CacheType::StringLength);
 }
 
 void StructureStubInfo::initPutByIdReplace(CodeBlock* codeBlock, Structure* baseObjectStructure, PropertyOffset offset)
 {
-    cacheType = CacheType::PutByIdReplace;
+    setCacheType(CacheType::PutByIdReplace);
     
     u.byIdSelf.baseObjectStructure.set(
         codeBlock->vm(), codeBlock, baseObjectStructure);
@@ -88,7 +94,7 @@ void StructureStubInfo::initPutByIdReplace(CodeBlock* codeBlock, Structure* base
 
 void StructureStubInfo::initInByIdSelf(CodeBlock* codeBlock, Structure* baseObjectStructure, PropertyOffset offset)
 {
-    cacheType = CacheType::InByIdSelf;
+    setCacheType(CacheType::InByIdSelf);
 
     u.byIdSelf.baseObjectStructure.set(
         codeBlock->vm(), codeBlock, baseObjectStructure);
@@ -97,7 +103,7 @@ void StructureStubInfo::initInByIdSelf(CodeBlock* codeBlock, Structure* baseObje
 
 void StructureStubInfo::deref()
 {
-    switch (cacheType) {
+    switch (m_cacheType) {
     case CacheType::Stub:
         delete u.stub;
         return;
@@ -115,7 +121,7 @@ void StructureStubInfo::deref()
 
 void StructureStubInfo::aboutToDie()
 {
-    switch (cacheType) {
+    switch (m_cacheType) {
     case CacheType::Stub:
         u.stub->aboutToDie();
         return;
@@ -134,6 +140,8 @@ void StructureStubInfo::aboutToDie()
 AccessGenerationResult StructureStubInfo::addAccessCase(
     const GCSafeConcurrentJSLocker& locker, CodeBlock* codeBlock, const Identifier& ident, std::unique_ptr<AccessCase> accessCase)
 {
+    checkConsistency();
+
     VM& vm = codeBlock->vm();
     ASSERT(vm.heap.isDeferred());
     AccessGenerationResult result = ([&] () -> AccessGenerationResult {
@@ -145,8 +153,8 @@ AccessGenerationResult StructureStubInfo::addAccessCase(
         
         AccessGenerationResult result;
         
-        if (cacheType == CacheType::Stub) {
-            result = u.stub->addCase(locker, vm, codeBlock, *this, ident, WTFMove(accessCase));
+        if (m_cacheType == CacheType::Stub) {
+            result = u.stub->addCase(locker, vm, codeBlock, *this, WTFMove(accessCase));
             
             if (StructureStubInfoInternal::verbose)
                 dataLog("Had stub, result: ", result, "\n");
@@ -163,14 +171,13 @@ AccessGenerationResult StructureStubInfo::addAccessCase(
             
             Vector<std::unique_ptr<AccessCase>, 2> accessCases;
             
-            std::unique_ptr<AccessCase> previousCase =
-                AccessCase::fromStructureStubInfo(vm, codeBlock, *this);
+            std::unique_ptr<AccessCase> previousCase = AccessCase::fromStructureStubInfo(vm, codeBlock, ident, *this);
             if (previousCase)
                 accessCases.append(WTFMove(previousCase));
             
             accessCases.append(WTFMove(accessCase));
             
-            result = access->addCases(locker, vm, codeBlock, *this, ident, WTFMove(accessCases));
+            result = access->addCases(locker, vm, codeBlock, *this, WTFMove(accessCases));
             
             if (StructureStubInfoInternal::verbose)
                 dataLog("Created stub, result: ", result, "\n");
@@ -183,7 +190,7 @@ AccessGenerationResult StructureStubInfo::addAccessCase(
                 return result;
             }
             
-            cacheType = CacheType::Stub;
+            setCacheType(CacheType::Stub);
             u.stub = access.release();
         }
         
@@ -209,7 +216,7 @@ AccessGenerationResult StructureStubInfo::addAccessCase(
         // PolymorphicAccess.
         bufferedStructures.clear();
         
-        result = u.stub->regenerate(locker, vm, codeBlock, *this, ident);
+        result = u.stub->regenerate(locker, vm, codeBlock, *this);
         
         if (StructureStubInfoInternal::verbose)
             dataLog("Regeneration result: ", result, "\n");
@@ -232,7 +239,7 @@ void StructureStubInfo::reset(CodeBlock* codeBlock)
 {
     bufferedStructures.clear();
 
-    if (cacheType == CacheType::Unset)
+    if (m_cacheType == CacheType::Unset)
         return;
 
     if (Options::verboseOSR()) {
@@ -242,17 +249,20 @@ void StructureStubInfo::reset(CodeBlock* codeBlock)
     }
 
     switch (accessType) {
-    case AccessType::TryGet:
-        resetGetByID(codeBlock, *this, GetByIDKind::Try);
+    case AccessType::TryGetById:
+        resetGetBy(codeBlock, *this, GetByKind::Try);
+        break;
+    case AccessType::GetById:
+        resetGetBy(codeBlock, *this, GetByKind::Normal);
         break;
-    case AccessType::Get:
-        resetGetByID(codeBlock, *this, GetByIDKind::Normal);
+    case AccessType::GetByIdWithThis:
+        resetGetBy(codeBlock, *this, GetByKind::WithThis);
         break;
-    case AccessType::GetWithThis:
-        resetGetByID(codeBlock, *this, GetByIDKind::WithThis);
+    case AccessType::GetByIdDirect:
+        resetGetBy(codeBlock, *this, GetByKind::Direct);
         break;
-    case AccessType::GetDirect:
-        resetGetByID(codeBlock, *this, GetByIDKind::Direct);
+    case AccessType::GetByVal:
+        resetGetBy(codeBlock, *this, GetByKind::NormalByVal);
         break;
     case AccessType::Put:
         resetPutByID(codeBlock, *this);
@@ -266,7 +276,7 @@ void StructureStubInfo::reset(CodeBlock* codeBlock)
     }
     
     deref();
-    cacheType = CacheType::Unset;
+    setCacheType(CacheType::Unset);
 }
 
 void StructureStubInfo::visitWeakReferences(CodeBlock* codeBlock)
@@ -278,7 +288,7 @@ void StructureStubInfo::visitWeakReferences(CodeBlock* codeBlock)
             return vm.heap.isMarked(structure);
         });
 
-    switch (cacheType) {
+    switch (m_cacheType) {
     case CacheType::GetByIdSelf:
     case CacheType::PutByIdReplace:
     case CacheType::InByIdSelf:
@@ -299,7 +309,7 @@ void StructureStubInfo::visitWeakReferences(CodeBlock* codeBlock)
 
 bool StructureStubInfo::propagateTransitions(SlotVisitor& visitor)
 {
-    switch (cacheType) {
+    switch (m_cacheType) {
     case CacheType::Unset:
     case CacheType::ArrayLength:
     case CacheType::StringLength:
@@ -320,7 +330,7 @@ StubInfoSummary StructureStubInfo::summary() const
 {
     StubInfoSummary takesSlowPath = StubInfoSummary::TakesSlowPath;
     StubInfoSummary simple = StubInfoSummary::Simple;
-    if (cacheType == CacheType::Stub) {
+    if (m_cacheType == CacheType::Stub) {
         PolymorphicAccess* list = u.stub;
         for (unsigned i = 0; i < list->size(); ++i) {
             const AccessCase& access = list->at(i);
@@ -351,11 +361,29 @@ StubInfoSummary StructureStubInfo::summary(const StructureStubInfo* stubInfo)
 
 bool StructureStubInfo::containsPC(void* pc) const
 {
-    if (cacheType != CacheType::Stub)
+    if (m_cacheType != CacheType::Stub)
         return false;
     return u.stub->containsPC(pc);
 }
 
+void StructureStubInfo::setCacheType(CacheType newCacheType)
+{
+    if (m_cacheType == CacheType::GetByIdSelf)
+        m_getByIdSelfIdentifier = nullptr;
+    m_cacheType = newCacheType;
+}
+
+#if !ASSERT_DISABLED
+void StructureStubInfo::checkConsistency()
+{
+    if (thisValueIsInThisGPR()) {
+        // We currently use a union for both "thisGPR" and "propertyGPR". If this were
+        // not the case, we'd need to take one of them out of the union.
+        RELEASE_ASSERT(hasConstantIdentifier);
+    }
+}
+#endif
+
 #endif // ENABLE(JIT)
 
 } // namespace JSC
index 81412e4..34b39c6 100644 (file)
@@ -36,6 +36,7 @@
 #include "StructureSet.h"
 #include "StructureStubClearingWatchpoint.h"
 #include "StubInfoSummary.h"
+#include <wtf/Box.h>
 
 namespace JSC {
 
@@ -46,10 +47,11 @@ class AccessGenerationResult;
 class PolymorphicAccess;
 
 enum class AccessType : int8_t {
-    Get,
-    GetWithThis,
-    GetDirect,
-    TryGet,
+    GetById,
+    GetByIdWithThis,
+    GetByIdDirect,
+    TryGetById,
+    GetByVal,
     Put,
     In,
     InstanceOf
@@ -72,7 +74,7 @@ public:
     StructureStubInfo(AccessType);
     ~StructureStubInfo();
 
-    void initGetByIdSelf(CodeBlock*, Structure* baseObjectStructure, PropertyOffset);
+    void initGetByIdSelf(CodeBlock*, Structure* baseObjectStructure, PropertyOffset, const Identifier&);
     void initArrayLength();
     void initStringLength();
     void initPutByIdReplace(CodeBlock*, Structure* baseObjectStructure, PropertyOffset);
@@ -168,7 +170,9 @@ public:
     bool containsPC(void* pc) const;
 
     CodeOrigin codeOrigin;
-    CallSiteIndex callSiteIndex;
+private:
+    Box<Identifier> m_getByIdSelfIdentifier;
+public:
 
     union {
         struct {
@@ -177,6 +181,12 @@ public:
         } byIdSelf;
         PolymorphicAccess* stub;
     } u;
+
+    Box<Identifier> getByIdSelfIdentifier()
+    {
+        RELEASE_ASSERT(m_cacheType == CacheType::GetByIdSelf);
+        return m_getByIdSelfIdentifier;
+    }
     
     // Represents those structures that already have buffered AccessCases in the PolymorphicAccess.
     // Note that it's always safe to clear this. If we clear it prematurely, then if we see the same
@@ -204,6 +214,7 @@ public:
         union {
             GPRReg thisGPR;
             GPRReg prototypeGPR;
+            GPRReg propertyGPR;
         } u;
 #if USE(JSVALUE32_64)
         GPRReg valueTagGPR;
@@ -236,18 +247,36 @@ public:
             patch.valueGPR);
     }
 
+    bool thisValueIsInThisGPR() const { return accessType == AccessType::GetByIdWithThis; }
+
+#if !ASSERT_DISABLED
+    void checkConsistency();
+#else
+    ALWAYS_INLINE void checkConsistency() { }
+#endif
 
     AccessType accessType;
-    CacheType cacheType;
+private:
+    CacheType m_cacheType;
+    void setCacheType(CacheType);
+public:
+    CacheType cacheType() const { return m_cacheType; }
     uint8_t countdown; // We repatch only when this is zero. If not zero, we decrement.
     uint8_t repatchCount;
     uint8_t numberOfCoolDowns;
+
+    CallSiteIndex callSiteIndex;
+
     uint8_t bufferingCountdown;
     bool resetByGC : 1;
     bool tookSlowPath : 1;
     bool everConsidered : 1;
     bool prototypeIsKnownObject : 1; // Only relevant for InstanceOf.
     bool sawNonCell : 1;
+    bool hasConstantIdentifier : 1;
+    bool propertyIsString : 1;
+    bool propertyIsInt32 : 1;
+    bool propertyIsSymbol : 1;
 };
 
 inline CodeOrigin getStructureStubInfoCodeOrigin(StructureStubInfo& structureStubInfo)
@@ -258,13 +287,13 @@ inline CodeOrigin getStructureStubInfoCodeOrigin(StructureStubInfo& structureStu
 inline auto appropriateOptimizingGetByIdFunction(AccessType type) -> decltype(&operationGetByIdOptimize)
 {
     switch (type) {
-    case AccessType::Get:
+    case AccessType::GetById:
         return operationGetByIdOptimize;
-    case AccessType::TryGet:
+    case AccessType::TryGetById:
         return operationTryGetByIdOptimize;
-    case AccessType::GetDirect:
+    case AccessType::GetByIdDirect:
         return operationGetByIdDirectOptimize;
-    case AccessType::GetWithThis:
+    case AccessType::GetByIdWithThis:
     default:
         ASSERT_NOT_REACHED();
         return nullptr;
@@ -274,13 +303,13 @@ inline auto appropriateOptimizingGetByIdFunction(AccessType type) -> decltype(&o
 inline auto appropriateGenericGetByIdFunction(AccessType type) -> decltype(&operationGetByIdGeneric)
 {
     switch (type) {
-    case AccessType::Get:
+    case AccessType::GetById:
         return operationGetByIdGeneric;
-    case AccessType::TryGet:
+    case AccessType::TryGetById:
         return operationTryGetByIdGeneric;
-    case AccessType::GetDirect:
+    case AccessType::GetByIdDirect:
         return operationGetByIdDirectGeneric;
-    case AccessType::GetWithThis:
+    case AccessType::GetByIdWithThis:
     default:
         ASSERT_NOT_REACHED();
         return nullptr;
index d0ae0a6..273b8e0 100644 (file)
@@ -33,7 +33,7 @@
 #include "DFGAbstractInterpreterClobberState.h"
 #include "DOMJITGetterSetter.h"
 #include "DOMJITSignature.h"
-#include "GetByIdStatus.h"
+#include "GetByStatus.h"
 #include "GetterSetter.h"
 #include "HashMapImpl.h"
 #include "JITOperations.h"
@@ -3123,7 +3123,7 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
         if (value.m_structure.isFinite()
             && (node->child1().useKind() == CellUse || !(value.m_type & ~SpecCell))) {
             UniquedStringImpl* uid = m_graph.identifiers()[node->identifierNumber()];
-            GetByIdStatus status = GetByIdStatus::computeFor(value.m_structure.toStructureSet(), uid);
+            GetByStatus status = GetByStatus::computeFor(value.m_structure.toStructureSet(), uid);
             if (status.isSimple()) {
                 // Figure out what the result is going to be - is it TOP, a constant, or maybe
                 // something more subtle?
@@ -3686,21 +3686,29 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
         break;
     }
 
-    case CheckStringIdent: {
+    case CheckIdent: {
         AbstractValue& value = forNode(node->child1());
         UniquedStringImpl* uid = node->uidOperand();
-        ASSERT(!(value.m_type & ~SpecStringIdent)); // Edge filtering should have already ensured this.
 
         JSValue childConstant = value.value();
         if (childConstant) {
-            ASSERT(childConstant.isString());
-            if (asString(childConstant)->tryGetValueImpl() == uid) {
-                m_state.setShouldTryConstantFolding(true);
-                break;
+            if (childConstant.isString()) {
+                if (asString(childConstant)->tryGetValueImpl() == uid) {
+                    m_state.setShouldTryConstantFolding(true);
+                    break;
+                }
+            } else if (childConstant.isSymbol()) {
+                if (&jsCast<Symbol*>(childConstant)->privateName().uid() == uid) {
+                    m_state.setShouldTryConstantFolding(true);
+                    break;
+                }
             }
         }
 
-        filter(value, SpecStringIdent);
+        if (node->child1().useKind() == StringIdentUse)
+            filter(value, SpecStringIdent);
+        else
+            filter(value, SpecSymbol);
         break;
     }
 
@@ -4022,7 +4030,7 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
     case ZombieHint:
     case ExitOK:
     case FilterCallLinkStatus:
-    case FilterGetByIdStatus:
+    case FilterGetByStatus:
     case FilterPutByIdStatus:
     case FilterInByIdStatus:
     case ClearCatchLocals:
@@ -4185,10 +4193,10 @@ void AbstractInterpreter<AbstractStateType>::filterICStatus(Node* node)
             node->callLinkStatus()->filter(m_vm, value);
         break;
         
-    case FilterGetByIdStatus: {
+    case FilterGetByStatus: {
         AbstractValue& value = forNode(node->child1());
         if (value.m_structure.isFinite())
-            node->getByIdStatus()->filter(value.m_structure.toStructureSet());
+            node->getByStatus()->filter(value.m_structure.toStructureSet());
         break;
     }
         
index ef65a5f..af2eda4 100644 (file)
@@ -400,7 +400,7 @@ private:
                     // butterfly's child and check if it's a candidate.
                     break;
                     
-                case FilterGetByIdStatus:
+                case FilterGetByStatus:
                 case FilterPutByIdStatus:
                 case FilterCallLinkStatus:
                 case FilterInByIdStatus:
@@ -1246,7 +1246,7 @@ private:
                     
                 case CheckArray:
                 case GetButterfly:
-                case FilterGetByIdStatus:
+                case FilterGetByStatus:
                 case FilterPutByIdStatus:
                 case FilterCallLinkStatus:
                 case FilterInByIdStatus: {
index 34f45e8..816cc53 100644 (file)
@@ -46,7 +46,7 @@
 #include "DFGGraph.h"
 #include "DFGJITCode.h"
 #include "FunctionCodeBlock.h"
-#include "GetByIdStatus.h"
+#include "GetByStatus.h"
 #include "GetterSetter.h"
 #include "Heap.h"
 #include "InByIdStatus.h"
@@ -192,7 +192,7 @@ private:
     Node* handlePutByOffset(Node* base, unsigned identifier, PropertyOffset, Node* value);
     Node* handleGetByOffset(SpeculatedType, Node* base, unsigned identifierNumber, PropertyOffset, NodeType = GetByOffset);
     bool handleDOMJITGetter(VirtualRegister result, const GetByIdVariant&, Node* thisNode, unsigned identifierNumber, SpeculatedType prediction);
-    bool handleModuleNamespaceLoad(VirtualRegister result, SpeculatedType, Node* base, GetByIdStatus);
+    bool handleModuleNamespaceLoad(VirtualRegister result, SpeculatedType, Node* base, GetByStatus);
 
     template<typename Bytecode>
     void handlePutByVal(Bytecode, unsigned instructionSize);
@@ -227,7 +227,7 @@ private:
     template<typename Op>
     void parseGetById(const Instruction*);
     void handleGetById(
-        VirtualRegister destination, SpeculatedType, Node* base, unsigned identifierNumber, GetByIdStatus, AccessType, unsigned instructionSize);
+        VirtualRegister destination, SpeculatedType, Node* base, unsigned identifierNumber, GetByStatus, AccessType, unsigned instructionSize);
     void emitPutById(
         Node* base, unsigned identifierNumber, Node* value,  const PutByIdStatus&, bool isDirect);
     void handlePutById(
@@ -3665,7 +3665,7 @@ bool ByteCodeParser::handleDOMJITGetter(VirtualRegister result, const GetByIdVar
     if (!variant.domAttribute())
         return false;
 
-    auto domAttribute = variant.domAttribute().value();
+    auto* domAttribute = variant.domAttribute();
 
     // We do not need to actually look up CustomGetterSetter here. Checking Structures or registering watchpoints is enough,
     // since replacement of CustomGetterSetter always incurs a Structure transition.
@@ -3674,16 +3674,19 @@ bool ByteCodeParser::handleDOMJITGetter(VirtualRegister result, const GetByIdVar
     addToGraph(CheckStructure, OpInfo(m_graph.addStructureSet(variant.structureSet())), thisNode);
     
     // We do not need to emit a CheckCell here. When the custom accessor is replaced with a different one, a Structure transition occurs.
-    addToGraph(CheckSubClass, OpInfo(domAttribute.classInfo), thisNode);
+    addToGraph(CheckSubClass, OpInfo(domAttribute->classInfo), thisNode);
     
     bool wasSeenInJIT = true;
-    addToGraph(FilterGetByIdStatus, OpInfo(m_graph.m_plan.recordedStatuses().addGetByIdStatus(currentCodeOrigin(), GetByIdStatus(GetByIdStatus::Custom, wasSeenInJIT, variant))), thisNode);
+    GetByStatus* status = m_graph.m_plan.recordedStatuses().addGetByStatus(currentCodeOrigin(), GetByStatus(GetByStatus::Custom, wasSeenInJIT));
+    bool success = status->appendVariant(variant);
+    RELEASE_ASSERT(success);
+    addToGraph(FilterGetByStatus, OpInfo(status), thisNode);
 
     CallDOMGetterData* callDOMGetterData = m_graph.m_callDOMGetterData.add();
     callDOMGetterData->customAccessorGetter = variant.customAccessorGetter();
     ASSERT(callDOMGetterData->customAccessorGetter);
 
-    if (const auto* domJIT = domAttribute.domJIT) {
+    if (const auto* domJIT = domAttribute->domJIT) {
         callDOMGetterData->domJIT = domJIT;
         Ref<DOMJIT::CallDOMGetterSnippet> snippet = domJIT->compiler()();
         callDOMGetterData->snippet = snippet.ptr();
@@ -3704,13 +3707,13 @@ bool ByteCodeParser::handleDOMJITGetter(VirtualRegister result, const GetByIdVar
     return true;
 }
 
-bool ByteCodeParser::handleModuleNamespaceLoad(VirtualRegister result, SpeculatedType prediction, Node* base, GetByIdStatus getById)
+bool ByteCodeParser::handleModuleNamespaceLoad(VirtualRegister result, SpeculatedType prediction, Node* base, GetByStatus getById)
 {
     if (m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadCell))
         return false;
     addToGraph(CheckCell, OpInfo(m_graph.freeze(getById.moduleNamespaceObject())), Edge(base, CellUse));
 
-    addToGraph(FilterGetByIdStatus, OpInfo(m_graph.m_plan.recordedStatuses().addGetByIdStatus(currentCodeOrigin(), getById)), base);
+    addToGraph(FilterGetByStatus, OpInfo(m_graph.m_plan.recordedStatuses().addGetByStatus(currentCodeOrigin(), getById)), base);
 
     // Ideally we wouldn't have to do this Phantom. But:
     //
@@ -4222,7 +4225,7 @@ Node* ByteCodeParser::load(
                 // the base not to have the property. We can only use ObjectPropertyCondition if all of
                 // the structures in the variant.structureSet() agree on the prototype (it would be
                 // hilariously rare if they didn't). Note that we are relying on structureSet() having
-                // at least one element. That will always be true here because of how GetByIdStatus/PutByIdStatus work.
+                // at least one element. That will always be true here because of how GetByStatus/PutByIdStatus work.
 
                 // FIXME: right now, if we have an OPCS, we have mono proto. However, this will
                 // need to be changed in the future once we have a hybrid data structure for
@@ -4311,9 +4314,9 @@ Node* ByteCodeParser::store(Node* base, unsigned identifier, const PutByIdVarian
 
 void ByteCodeParser::handleGetById(
     VirtualRegister destination, SpeculatedType prediction, Node* base, unsigned identifierNumber,
-    GetByIdStatus getByIdStatus, AccessType type, unsigned instructionSize)
+    GetByStatus getByStatus, AccessType type, unsigned instructionSize)
 {
-    // Attempt to reduce the set of things in the GetByIdStatus.
+    // Attempt to reduce the set of things in the GetByStatus.
     if (base->op() == NewObject) {
         bool ok = true;
         for (unsigned i = m_currentBlock->size(); i--;) {
@@ -4326,19 +4329,19 @@ void ByteCodeParser::handleGetById(
             }
         }
         if (ok)
-            getByIdStatus.filter(base->structure().get());
+            getByStatus.filter(base->structure().get());
     }
     
     NodeType getById;
-    if (type == AccessType::Get)
-        getById = getByIdStatus.makesCalls() ? GetByIdFlush : GetById;
-    else if (type == AccessType::TryGet)
+    if (type == AccessType::GetById)
+        getById = getByStatus.makesCalls() ? GetByIdFlush : GetById;
+    else if (type == AccessType::TryGetById)
         getById = TryGetById;
     else
-        getById = getByIdStatus.makesCalls() ? GetByIdDirectFlush : GetByIdDirect;
+        getById = getByStatus.makesCalls() ? GetByIdDirectFlush : GetByIdDirect;
 
-    if (getById != TryGetById && getByIdStatus.isModuleNamespace()) {
-        if (handleModuleNamespaceLoad(destination, prediction, base, getByIdStatus)) {
+    if (getById != TryGetById && getByStatus.isModuleNamespace()) {
+        if (handleModuleNamespaceLoad(destination, prediction, base, getByStatus)) {
             if (UNLIKELY(m_graph.compilation()))
                 m_graph.compilation()->noticeInlinedGetById();
             return;
@@ -4347,10 +4350,10 @@ void ByteCodeParser::handleGetById(
 
     // Special path for custom accessors, since a custom accessor's offset does not have any meaning.
     // So this is completely different from the Simple case. But we have a chance to optimize it when we use DOMJIT.
-    if (Options::useDOMJIT() && getByIdStatus.isCustom()) {
-        ASSERT(getByIdStatus.numVariants() == 1);
-        ASSERT(!getByIdStatus.makesCalls());
-        GetByIdVariant variant = getByIdStatus[0];
+    if (Options::useDOMJIT() && getByStatus.isCustom()) {
+        ASSERT(getByStatus.numVariants() == 1);
+        ASSERT(!getByStatus.makesCalls());
+        GetByIdVariant variant = getByStatus[0];
         ASSERT(variant.domAttribute());
         if (handleDOMJITGetter(destination, variant, base, identifierNumber, prediction)) {
             if (UNLIKELY(m_graph.compilation()))
@@ -4359,34 +4362,34 @@ void ByteCodeParser::handleGetById(
         }
     }
 
-    ASSERT(type == AccessType::Get || type == AccessType::GetDirect ||  !getByIdStatus.makesCalls());
-    if (!getByIdStatus.isSimple() || !getByIdStatus.numVariants() || !Options::useAccessInlining()) {
+    ASSERT(type == AccessType::GetById || type == AccessType::GetByIdDirect ||  !getByStatus.makesCalls());
+    if (!getByStatus.isSimple() || !getByStatus.numVariants() || !Options::useAccessInlining()) {
         set(destination,
             addToGraph(getById, OpInfo(identifierNumber), OpInfo(prediction), base));
         return;
     }
     
-    // FIXME: If we use the GetByIdStatus for anything then we should record it and insert a node
+    // FIXME: If we use the GetByStatus for anything then we should record it and insert a node
     // after everything else (like the GetByOffset or whatever) that will filter the recorded
-    // GetByIdStatus. That means that the constant folder also needs to do the same!
+    // GetByStatus. That means that the constant folder also needs to do the same!
     
-    if (getByIdStatus.numVariants() > 1) {
-        if (getByIdStatus.makesCalls() || !m_graph.m_plan.isFTL()
+    if (getByStatus.numVariants() > 1) {
+        if (getByStatus.makesCalls() || !m_graph.m_plan.isFTL()
             || !Options::usePolymorphicAccessInlining()
-            || getByIdStatus.numVariants() > Options::maxPolymorphicAccessInliningListSize()) {
+            || getByStatus.numVariants() > Options::maxPolymorphicAccessInliningListSize()) {
             set(destination,
                 addToGraph(getById, OpInfo(identifierNumber), OpInfo(prediction), base));
             return;
         }
 
-        addToGraph(FilterGetByIdStatus, OpInfo(m_graph.m_plan.recordedStatuses().addGetByIdStatus(currentCodeOrigin(), getByIdStatus)), base);
+        addToGraph(FilterGetByStatus, OpInfo(m_graph.m_plan.recordedStatuses().addGetByStatus(currentCodeOrigin(), getByStatus)), base);
 
         Vector<MultiGetByOffsetCase, 2> cases;
         
         // 1) Emit prototype structure checks for all chains. This may not be optimal
         //    if there is some rarely executed case in the chain that requires a lot of
         //    checks and those checks are not watchpointable.
-        for (const GetByIdVariant& variant : getByIdStatus.variants()) {
+        for (const GetByIdVariant& variant : getByStatus.variants()) {
             if (variant.intrinsic() != NoIntrinsic) {
                 set(destination,
                     addToGraph(getById, OpInfo(identifierNumber), OpInfo(prediction), base));
@@ -4423,10 +4426,10 @@ void ByteCodeParser::handleGetById(
         return;
     }
 
-    addToGraph(FilterGetByIdStatus, OpInfo(m_graph.m_plan.recordedStatuses().addGetByIdStatus(currentCodeOrigin(), getByIdStatus)), base);
+    addToGraph(FilterGetByStatus, OpInfo(m_graph.m_plan.recordedStatuses().addGetByStatus(currentCodeOrigin(), getByStatus)), base);
 
-    ASSERT(getByIdStatus.numVariants() == 1);
-    GetByIdVariant variant = getByIdStatus[0];
+    ASSERT(getByStatus.numVariants() == 1);
+    GetByIdVariant variant = getByStatus[0];
     
     Node* loadedValue = load(prediction, base, identifierNumber, variant);
     if (!loadedValue) {
@@ -4438,7 +4441,7 @@ void ByteCodeParser::handleGetById(
     if (UNLIKELY(m_graph.compilation()))
         m_graph.compilation()->noticeInlinedGetById();
 
-    ASSERT(type == AccessType::Get || type == AccessType::GetDirect || !variant.callLinkStatus());
+    ASSERT(type == AccessType::GetById || type == AccessType::GetByIdDirect || !variant.callLinkStatus());
     if (!variant.callLinkStatus() && variant.intrinsic() == NoIntrinsic) {
         set(destination, loadedValue);
         return;
@@ -4707,21 +4710,20 @@ void ByteCodeParser::parseGetById(const Instruction* currentInstruction)
     Node* base = get(bytecode.m_base);
     unsigned identifierNumber = m_inlineStackTop->m_identifierRemap[bytecode.m_property];
     
-    UniquedStringImpl* uid = m_graph.identifiers()[identifierNumber];
-    GetByIdStatus getByIdStatus = GetByIdStatus::computeFor(
+    GetByStatus getByStatus = GetByStatus::computeFor(
         m_inlineStackTop->m_profiledBlock,
         m_inlineStackTop->m_baselineMap, m_icContextStack,
-        currentCodeOrigin(), uid);
+        currentCodeOrigin());
 
-    AccessType type = AccessType::Get;
+    AccessType type = AccessType::GetById;
     unsigned opcodeLength = currentInstruction->size();
     if (Op::opcodeID == op_try_get_by_id)
-        type = AccessType::TryGet;
+        type = AccessType::TryGetById;
     else if (Op::opcodeID == op_get_by_id_direct)
-        type = AccessType::GetDirect;
+        type = AccessType::GetByIdDirect;
 
     handleGetById(
-        bytecode.m_dst, prediction, base, identifierNumber, getByIdStatus, type, opcodeLength);
+        bytecode.m_dst, prediction, base, identifierNumber, getByStatus, type, opcodeLength);
 
 }
 
@@ -5619,40 +5621,29 @@ void ByteCodeParser::parseBlock(unsigned limit)
 
             Node* base = get(bytecode.m_base);
             Node* property = get(bytecode.m_property);
-            bool compiledAsGetById = false;
-            GetByIdStatus getByIdStatus;
+            bool shouldCompileAsGetById = false;
+            GetByStatus getByStatus = GetByStatus::computeFor(m_inlineStackTop->m_profiledBlock, m_inlineStackTop->m_baselineMap, m_icContextStack, currentCodeOrigin());
             unsigned identifierNumber = 0;
             {
-                ConcurrentJSLocker locker(m_inlineStackTop->m_profiledBlock->m_lock);
-                ByValInfo* byValInfo = m_inlineStackTop->m_baselineMap.get(CodeOrigin(currentCodeOrigin().bytecodeIndex())).byValInfo;
                 // FIXME: When the bytecode is not compiled in the baseline JIT, byValInfo becomes null.
                 // At that time, there is no information.
-                if (byValInfo
-                    && byValInfo->stubInfo
-                    && !byValInfo->tookSlowPath
-                    && !m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadIdent)
+                if (!m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadIdent)
                     && !m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadType)
                     && !m_inlineStackTop->m_exitProfile.hasExitSite(m_currentIndex, BadCell)) {
-                    compiledAsGetById = true;
-                    identifierNumber = m_graph.identifiers().ensure(byValInfo->cachedId.impl());
-                    UniquedStringImpl* uid = m_graph.identifiers()[identifierNumber];
-
-                    if (Symbol* symbol = byValInfo->cachedSymbol.get()) {
-                        FrozenValue* frozen = m_graph.freezeStrong(symbol);
-                        addToGraph(CheckCell, OpInfo(frozen), property);
-                    } else {
-                        ASSERT(!uid->isSymbol());
-                        addToGraph(CheckStringIdent, OpInfo(uid), property);
-                    }
 
-                    getByIdStatus = GetByIdStatus::computeForStubInfo(
-                        locker, m_inlineStackTop->m_profiledBlock,
-                        byValInfo->stubInfo, currentCodeOrigin(), uid);
+                    // FIXME: In the future, we should be able to do something like MultiGetByOffset in a multi identifier mode.
+                    // That way, we could both switch on multiple structures and multiple identifiers (or int 32 properties).
+                    // https://bugs.webkit.org/show_bug.cgi?id=204216
+                    if (Box<Identifier> impl = getByStatus.singleIdentifier()) {
+                        identifierNumber = m_graph.identifiers().ensure(impl);
+                        shouldCompileAsGetById = true;
+                        addToGraph(CheckIdent, OpInfo(impl->impl()), property);
+                    }
                 }
             }
 
-            if (compiledAsGetById)
-                handleGetById(bytecode.m_dst, prediction, base, identifierNumber, getByIdStatus, AccessType::Get, currentInstruction->size());
+            if (shouldCompileAsGetById)
+                handleGetById(bytecode.m_dst, prediction, base, identifierNumber, getByStatus, AccessType::GetById, currentInstruction->size());
             else {
                 ArrayMode arrayMode = getArrayMode(bytecode.metadata(codeBlock).m_arrayProfile, Array::Read);
                 // FIXME: We could consider making this not vararg, since it only uses three child
@@ -5664,6 +5655,8 @@ void ByteCodeParser::parseBlock(unsigned limit)
                 Node* getByVal = addToGraph(Node::VarArg, GetByVal, OpInfo(arrayMode.asWord()), OpInfo(prediction));
                 m_exitOK = false; // GetByVal must be treated as if it clobbers exit state, since FixupPhase may make it generic.
                 set(bytecode.m_dst, getByVal);
+                if (getByStatus.takesSlowPath())
+                    m_graph.m_slowGetByVal.add(getByVal);
             }
 
             NEXT_OPCODE(op_get_by_val);
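
Summarizing the branch above: when the recorded GetByStatus saw exactly one identifier and there are no BadIdent/BadType/BadCell exit sites, the parser guards the property with CheckIdent and reuses the get_by_id path; otherwise it emits GetByVal, and if the status took the slow path the node is added to m_slowGetByVal so later tiers keep the generic call instead of stacking another IC on top of the baseline one. A compressed restatement of that heuristic with toy types (ToyStatus is not a real JSC class):

    // Toy restatement of the op_get_by_val parsing heuristic above.
    enum class ByValStrategy { CompileAsGetById, EmitGetByValIC, KeepGenericSlowPath };

    struct ToyStatus {
        bool sawSingleIdentifier;   // the baseline IC only ever saw one property name
        bool tookSlowPath;          // the baseline IC gave up (e.g. many distinct string keys)
    };

    ByValStrategy chooseByValStrategy(const ToyStatus& status, bool hasBadIdentTypeOrCellExit)
    {
        if (status.sawSingleIdentifier && !hasBadIdentTypeOrCellExit)
            return ByValStrategy::CompileAsGetById;    // CheckIdent on the key, then the get_by_id path
        if (status.tookSlowPath)
            return ByValStrategy::KeepGenericSlowPath; // later tiers call the generic operation
        return ByValStrategy::EmitGetByValIC;          // later tiers emit their own get_by_val IC
    }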
@@ -6542,8 +6535,8 @@ void ByteCodeParser::parseBlock(unsigned limit)
 
                 SpeculatedType prediction = getPrediction();
 
-                GetByIdStatus status = GetByIdStatus::computeFor(structure, uid);
-                if (status.state() != GetByIdStatus::Simple
+                GetByStatus status = GetByStatus::computeFor(structure, uid);
+                if (status.state() != GetByStatus::Simple
                     || status.numVariants() != 1
                     || status[0].structureSet().size() != 1) {
                     set(bytecode.m_dst, addToGraph(GetByIdFlush, OpInfo(identifierNumber), OpInfo(prediction), get(bytecode.m_scope)));
@@ -7455,7 +7448,7 @@ void ByteCodeParser::handlePutByVal(Bytecode bytecode, unsigned instructionSize)
                     addToGraph(CheckCell, OpInfo(frozen), property);
                 } else {
                     ASSERT(!uid->isSymbol());
-                    addToGraph(CheckStringIdent, OpInfo(uid), property);
+                    addToGraph(CheckIdent, OpInfo(uid), property);
                 }
 
                 putByIdStatus = PutByIdStatus::computeForStubInfo(
index 5df6300..88efee7 100644 (file)
@@ -433,8 +433,8 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
         write(SideState);
         return;
 
-    case CheckStringIdent:
-        def(PureValue(CheckStringIdent, AdjacencyList(AdjacencyList::Fixed, node->child1()), node->uidOperand()));
+    case CheckIdent:
+        def(PureValue(CheckIdent, AdjacencyList(AdjacencyList::Fixed, node->child1()), node->uidOperand()));
         return;
 
     case ConstantStoragePointer:
@@ -471,7 +471,7 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
     case PutHint:
     case InitializeEntrypointArguments:
     case FilterCallLinkStatus:
-    case FilterGetByIdStatus:
+    case FilterGetByStatus:
     case FilterPutByIdStatus:
     case FilterInByIdStatus:
         write(SideState);
index ef68fd5..0f81a8e 100644 (file)
@@ -80,7 +80,7 @@ bool clobbersExitState(Graph& graph, Node* node)
     case AllocatePropertyStorage:
     case ReallocatePropertyStorage:
     case FilterCallLinkStatus:
-    case FilterGetByIdStatus:
+    case FilterGetByStatus:
     case FilterPutByIdStatus:
     case FilterInByIdStatus:
         // These do clobber memory, but nothing that is observable. It may be nice to separate the
index 4a44513..e881378 100644 (file)
@@ -36,7 +36,7 @@
 #include "DFGInPlaceAbstractState.h"
 #include "DFGInsertionSet.h"
 #include "DFGPhase.h"
-#include "GetByIdStatus.h"
+#include "GetByStatus.h"
 #include "JSCInlines.h"
 #include "PutByIdStatus.h"
 #include "StructureCache.h"
@@ -311,7 +311,7 @@ private:
                 break;
             }
 
-            case CheckStringIdent: {
+            case CheckIdent: {
                 UniquedStringImpl* uid = node->uidOperand();
                 const UniquedStringImpl* constantUid = nullptr;
 
@@ -326,6 +326,9 @@ private:
                             if (impl->isAtom())
                                 constantUid = static_cast<const UniquedStringImpl*>(impl);
                         }
+                    } else if (childConstant.isSymbol()) {
+                        Symbol* symbol = jsCast<Symbol*>(childConstant);
+                        constantUid = &symbol->privateName().uid();
                     }
                 }
 
@@ -546,7 +549,7 @@ private:
                     || (node->child1().useKind() == UntypedUse || (baseValue.m_type & ~SpecCell)))
                     break;
                 
-                GetByIdStatus status = GetByIdStatus::computeFor(
+                GetByStatus status = GetByStatus::computeFor(
                     baseValue.m_structure.toStructureSet(), m_graph.identifiers()[identifierNumber]);
                 if (!status.isSimple())
                     break;
@@ -561,8 +564,8 @@ private:
                 
                 auto addFilterStatus = [&] () {
                     m_insertionSet.insertNode(
-                        indexInBlock, SpecNone, FilterGetByIdStatus, node->origin,
-                        OpInfo(m_graph.m_plan.recordedStatuses().addGetByIdStatus(node->origin.semantic, status)),
+                        indexInBlock, SpecNone, FilterGetByStatus, node->origin,
+                        OpInfo(m_graph.m_plan.recordedStatuses().addGetByStatus(node->origin.semantic, status)),
                         Edge(child));
                 };
                 
index be35647..5cd9319 100644 (file)
@@ -55,7 +55,7 @@ unsigned DesiredIdentifiers::numberOfIdentifiers()
     return m_codeBlock->numberOfIdentifiers() + m_addedIdentifiers.size();
 }
 
-unsigned DesiredIdentifiers::ensure(UniquedStringImpl* rep)
+void DesiredIdentifiers::processCodeBlockIdentifiersIfNeeded()
 {
     if (!m_didProcessIdentifiers) {
         // Do this now instead of the constructor so that we don't pay the price on the main
@@ -64,6 +64,11 @@ unsigned DesiredIdentifiers::ensure(UniquedStringImpl* rep)
             m_identifierNumberForName.add(m_codeBlock->identifier(index).impl(), index);
         m_didProcessIdentifiers = true;
     }
+}
+
+unsigned DesiredIdentifiers::ensure(UniquedStringImpl* rep)
+{
+    processCodeBlockIdentifiersIfNeeded();
 
     auto addResult = m_identifierNumberForName.add(rep, numberOfIdentifiers());
     unsigned result = addResult.iterator->value;
@@ -74,20 +79,45 @@ unsigned DesiredIdentifiers::ensure(UniquedStringImpl* rep)
     return result;
 }
 
+unsigned DesiredIdentifiers::ensure(Box<Identifier> rep)
+{
+    processCodeBlockIdentifiersIfNeeded();
+
+    UniquedStringImpl* impl = rep->impl();
+    auto addResult = m_identifierNumberForName.add(impl, numberOfIdentifiers());
+    unsigned result = addResult.iterator->value;
+    if (addResult.isNewEntry) {
+        m_addedIdentifiers.append(WTFMove(rep));
+        ASSERT(at(result) == impl);
+    }
+    return result;
+}
+
 UniquedStringImpl* DesiredIdentifiers::at(unsigned index) const
 {
     UniquedStringImpl* result;
     if (index < m_codeBlock->numberOfIdentifiers())
         result = m_codeBlock->identifier(index).impl();
-    else
-        result = m_addedIdentifiers[index - m_codeBlock->numberOfIdentifiers()];
+    else {
+        const auto& variant = m_addedIdentifiers[index - m_codeBlock->numberOfIdentifiers()];
+        if (WTF::holds_alternative<UniquedStringImpl*>(variant))
+            result = WTF::get<UniquedStringImpl*>(variant);
+        else
+            result = WTF::get<Box<Identifier>>(variant)->impl();
+    }
     ASSERT(result->hasAtLeastOneRef());
     return result;
 }
 
 void DesiredIdentifiers::reallyAdd(VM& vm, CommonData* commonData)
 {
-    for (auto rep : m_addedIdentifiers) {
+    for (const auto& variant : m_addedIdentifiers) {
+        UniquedStringImpl* rep;
+        if (WTF::holds_alternative<UniquedStringImpl*>(variant))
+            rep = WTF::get<UniquedStringImpl*>(variant);
+        else
+            rep = WTF::get<Box<Identifier>>(variant)->impl();
+
         ASSERT(rep->hasAtLeastOneRef());
         Identifier uid = Identifier::fromUid(vm, rep);
         {
index 1b2a5bc..c239e4a 100644 (file)
@@ -28,6 +28,7 @@
 #if ENABLE(DFG_JIT)
 
 #include <wtf/HashMap.h>
+#include <wtf/Variant.h>
 #include <wtf/text/UniquedStringImpl.h>
 
 namespace JSC {
@@ -47,6 +48,7 @@ public:
     
     unsigned numberOfIdentifiers();
     unsigned ensure(UniquedStringImpl*);
+    unsigned ensure(Box<Identifier>);
     
     UniquedStringImpl* at(unsigned index) const;
     
@@ -55,8 +57,10 @@ public:
     void reallyAdd(VM&, CommonData*);
     
 private:
+    void processCodeBlockIdentifiersIfNeeded();
+
     CodeBlock* m_codeBlock;
-    Vector<UniquedStringImpl*> m_addedIdentifiers;
+    Vector<Variant<UniquedStringImpl*, Box<Identifier>>> m_addedIdentifiers;
     HashMap<UniquedStringImpl*, unsigned> m_identifierNumberForName;
     bool m_didProcessIdentifiers;
 };
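
The DesiredIdentifiers change above boils down to storing two kinds of entries side by side: bare UniquedStringImpl* for identifiers that already sit in the CodeBlock's constant pool, and Box<Identifier> for identifiers harvested from a get_by_val IC, which the vector itself keeps alive. at() and reallyAdd() then extract the impl uniformly from either alternative. A self-contained sketch of that pattern with std::variant (ToyImpl, ToyIdentifier, and ToyBox are stand-ins, not WTF types):

    #include <cstddef>
    #include <memory>
    #include <string>
    #include <variant>
    #include <vector>

    using ToyImpl = std::string;                    // plays UniquedStringImpl
    struct ToyIdentifier { ToyImpl impl; };         // plays Identifier
    using ToyBox = std::shared_ptr<ToyIdentifier>;  // plays Box<Identifier>

    struct ToyDesiredIdentifiers {
        std::vector<std::variant<ToyImpl*, ToyBox>> added;

        void add(ToyImpl* raw) { added.emplace_back(raw); }               // impl owned elsewhere
        void add(ToyBox boxed) { added.emplace_back(std::move(boxed)); }  // vector keeps the box alive

        ToyImpl* at(size_t i)
        {
            auto& entry = added[i];
            if (std::holds_alternative<ToyImpl*>(entry))
                return std::get<ToyImpl*>(entry);
            return &std::get<ToyBox>(entry)->impl;  // uniform access: callers always get an impl
        }
    };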
index e9662ae..8541231 100644 (file)
@@ -134,7 +134,7 @@ bool doesGC(Graph& graph, Node* node)
     case CheckCell:
     case CheckNotEmpty:
     case AssertNotEmpty:
-    case CheckStringIdent:
+    case CheckIdent:
     case CompareBelow:
     case CompareBelowEq:
     case CompareEqPtr:
@@ -239,7 +239,7 @@ bool doesGC(Graph& graph, Node* node)
     case AtomicsIsLockFree:
     case MatchStructure:
     case FilterCallLinkStatus:
-    case FilterGetByIdStatus:
+    case FilterGetByStatus:
     case FilterPutByIdStatus:
     case FilterInByIdStatus:
     case DateGetInt32OrNaN:
index bba4304..aee1f00 100644 (file)
@@ -1726,8 +1726,11 @@ private:
             break;
         }
 
-        case CheckStringIdent: {
-            fixEdge<StringIdentUse>(node->child1());
+        case CheckIdent: {
+            if (node->uidOperand()->isSymbol())
+                fixEdge<SymbolUse>(node->child1());
+            else
+                fixEdge<StringIdentUse>(node->child1());
             break;
         }
             
@@ -2514,7 +2517,7 @@ private:
         case ExtractValueFromWeakMapGet:
         case CPUIntrinsic:
         case FilterCallLinkStatus:
-        case FilterGetByIdStatus:
+        case FilterGetByStatus:
         case FilterPutByIdStatus:
         case FilterInByIdStatus:
         case InvalidationPoint:
index 1b1b198..2675932 100644 (file)
@@ -371,8 +371,8 @@ void Graph::dump(PrintStream& out, const char* prefixStr, Node* node, DumpContex
         out.print(comma, pointerDumpInContext(node->constant(), context));
     if (node->hasCallLinkStatus())
         out.print(comma, *node->callLinkStatus());
-    if (node->hasGetByIdStatus())
-        out.print(comma, *node->getByIdStatus());
+    if (node->hasGetByStatus())
+        out.print(comma, *node->getByStatus());
     if (node->hasInByIdStatus())
         out.print(comma, *node->inByIdStatus());
     if (node->hasPutByIdStatus())
index 6c64d47..869faee 100644 (file)
@@ -1121,6 +1121,8 @@ public:
     RegisteredStructure stringStructure;
     RegisteredStructure symbolStructure;
 
+    HashSet<Node*> m_slowGetByVal;
+
 private:
     bool isStringPrototypeMethodSane(JSGlobalObject*, UniquedStringImpl*);
 
index ad252a5..99aeffc 100644 (file)
@@ -30,7 +30,7 @@
 
 #include "CodeBlock.h"
 #include "DFGBasicBlock.h"
-#include "GetByIdStatus.h"
+#include "GetByStatus.h"
 #include "JSCInlines.h"
 #include "PutByIdStatus.h"
 #include "StringObject.h"
index 92740a6..3d843ce 100644 (file)
@@ -252,6 +252,7 @@ void JITCompiler::link(LinkBuffer& linkBuffer)
 
     finalizeInlineCaches(m_getByIds, linkBuffer);
     finalizeInlineCaches(m_getByIdsWithThis, linkBuffer);
+    finalizeInlineCaches(m_getByVals, linkBuffer);
     finalizeInlineCaches(m_putByIds, linkBuffer);
     finalizeInlineCaches(m_inByIds, linkBuffer);
     finalizeInlineCaches(m_instanceOfs, linkBuffer);
index 6a139d2..68ca98d 100644 (file)
@@ -181,6 +181,11 @@ public:
     {
         m_getByIdsWithThis.append(InlineCacheWrapper<JITGetByIdWithThisGenerator>(gen, slowPath));
     }
+
+    void addGetByVal(const JITGetByValGenerator& gen, SlowPathGenerator* slowPath)
+    {
+        m_getByVals.append(InlineCacheWrapper<JITGetByValGenerator>(gen, slowPath));
+    }
     
     void addPutById(const JITPutByIdGenerator& gen, SlowPathGenerator* slowPath)
     {
@@ -341,6 +346,7 @@ private:
     
     Vector<InlineCacheWrapper<JITGetByIdGenerator>, 4> m_getByIds;
     Vector<InlineCacheWrapper<JITGetByIdWithThisGenerator>, 4> m_getByIdsWithThis;
+    Vector<InlineCacheWrapper<JITGetByValGenerator>, 4> m_getByVals;
     Vector<InlineCacheWrapper<JITPutByIdGenerator>, 4> m_putByIds;
     Vector<InlineCacheWrapper<JITInByIdGenerator>, 4> m_inByIds;
     Vector<InlineCacheWrapper<JITInstanceOfGenerator>, 4> m_instanceOfs;
index c7fa5c1..ae224cc 100644 (file)
@@ -105,7 +105,7 @@ ExitMode mayExitImpl(Graph& graph, Node* node, StateType& state)
     case RecordRegExpCachedResult:
     case NukeStructureAndSetButterfly:
     case FilterCallLinkStatus:
-    case FilterGetByIdStatus:
+    case FilterGetByStatus:
     case FilterPutByIdStatus:
     case FilterInByIdStatus:
         break;
index 2df78a9..3915937 100644 (file)
@@ -1873,7 +1873,7 @@ public:
 
     bool hasUidOperand()
     {
-        return op() == CheckStringIdent;
+        return op() == CheckIdent;
     }
 
     UniquedStringImpl* uidOperand()
@@ -2916,15 +2916,15 @@ public:
         return m_opInfo.as<CallLinkStatus*>();
     }
     
-    bool hasGetByIdStatus()
+    bool hasGetByStatus()
     {
-        return op() == FilterGetByIdStatus;
+        return op() == FilterGetByStatus;
     }
     
-    GetByIdStatus* getByIdStatus()
+    GetByStatus* getByStatus()
     {
-        ASSERT(hasGetByIdStatus());
-        return m_opInfo.as<GetByIdStatus*>();
+        ASSERT(hasGetByStatus());
+        return m_opInfo.as<GetByStatus*>();
     }
     
     bool hasInByIdStatus()
index f1c0a39..24c733d 100644 (file)
@@ -272,7 +272,7 @@ namespace JSC { namespace DFG {
     macro(AssertNotEmpty, NodeMustGenerate) \
     macro(CheckBadCell, NodeMustGenerate) \
     macro(CheckInBounds, NodeMustGenerate | NodeResultJS) \
-    macro(CheckStringIdent, NodeMustGenerate) \
+    macro(CheckIdent, NodeMustGenerate) \
     macro(CheckTypeInfoFlags, NodeMustGenerate) /* Takes an OpInfo with the flags you want to test are set */\
     macro(CheckSubClass, NodeMustGenerate) \
     macro(ParseInt, NodeMustGenerate | NodeResultJS) \
@@ -513,7 +513,7 @@ namespace JSC { namespace DFG {
     \
     /* Used to provide feedback to the IC profiler. */ \
     macro(FilterCallLinkStatus, NodeMustGenerate) \
-    macro(FilterGetByIdStatus, NodeMustGenerate) \
+    macro(FilterGetByStatus, NodeMustGenerate) \
     macro(FilterInByIdStatus, NodeMustGenerate) \
     macro(FilterPutByIdStatus, NodeMustGenerate) \
     /* Data view access */ \
index cfc3d7b..216c5ba 100644 (file)
@@ -1090,7 +1090,7 @@ private:
             break;
             
         case FilterCallLinkStatus:
-        case FilterGetByIdStatus:
+        case FilterGetByStatus:
         case FilterPutByIdStatus:
         case FilterInByIdStatus:
             break;
@@ -2378,7 +2378,7 @@ private:
             for (Node* node : *block) {
                 switch (node->op()) {
                 case FilterCallLinkStatus:
-                case FilterGetByIdStatus:
+                case FilterGetByStatus:
                 case FilterPutByIdStatus:
                 case FilterInByIdStatus:
                     if (node->child1()->isPhantomAllocation())
index 5a7abe8..8050516 100644 (file)
@@ -1348,7 +1348,7 @@ private:
         case CheckCell:
         case CheckNotEmpty:
         case AssertNotEmpty:
-        case CheckStringIdent:
+        case CheckIdent:
         case CheckBadCell:
         case PutStructure:
         case Phantom:
@@ -1373,7 +1373,7 @@ private:
         case WeakSetAdd:
         case WeakMapSet:
         case FilterCallLinkStatus:
-        case FilterGetByIdStatus:
+        case FilterGetByStatus:
         case FilterPutByIdStatus:
         case FilterInByIdStatus:
         case ClearCatchLocals:
index 5635c7c..15be42a 100644 (file)
@@ -295,7 +295,7 @@ bool safeToExecute(AbstractStateType& state, Graph& graph, Node* node, bool igno
     case CheckBadCell:
     case CheckNotEmpty:
     case AssertNotEmpty:
-    case CheckStringIdent:
+    case CheckIdent:
     case RegExpExec:
     case RegExpExecNonGlobalOrSticky:
     case RegExpTest:
@@ -523,7 +523,7 @@ bool safeToExecute(AbstractStateType& state, Graph& graph, Node* node, bool igno
         return false;
         
     case FilterCallLinkStatus:
-    case FilterGetByIdStatus:
+    case FilterGetByStatus:
     case FilterPutByIdStatus:
     case FilterInByIdStatus:
         // We don't want these to be moved anywhere other than where we put them, since we want them
index 109f79a..dfef474 100644 (file)
@@ -984,7 +984,7 @@ void SpeculativeJIT::useChildren(Node* node)
 
 void SpeculativeJIT::compileGetById(Node* node, AccessType accessType)
 {
-    ASSERT(accessType == AccessType::Get || accessType == AccessType::GetDirect || accessType == AccessType::TryGet);
+    ASSERT(accessType == AccessType::GetById || accessType == AccessType::GetByIdDirect || accessType == AccessType::TryGetById);
 
     switch (node->child1().useKind()) {
     case CellUse: {
@@ -7209,21 +7209,26 @@ void SpeculativeJIT::compileGetArrayLength(Node* node)
     } }
 }
 
-void SpeculativeJIT::compileCheckStringIdent(Node* node)
+void SpeculativeJIT::compileCheckIdent(Node* node)
 {
-    SpeculateCellOperand string(this, node->child1());
-    GPRTemporary storage(this);
-
-    GPRReg stringGPR = string.gpr();
-    GPRReg storageGPR = storage.gpr();
+    SpeculateCellOperand stringOrSymbol(this, node->child1());
+    GPRTemporary impl(this);
+    GPRReg stringOrSymbolGPR = stringOrSymbol.gpr();
+    GPRReg implGPR = impl.gpr();
 
-    speculateString(node->child1(), stringGPR);
-    speculateStringIdentAndLoadStorage(node->child1(), stringGPR, storageGPR);
+    if (node->child1().useKind() == StringIdentUse) {
+        speculateString(node->child1(), stringOrSymbolGPR);
+        speculateStringIdentAndLoadStorage(node->child1(), stringOrSymbolGPR, implGPR);
+    } else {
+        ASSERT(node->child1().useKind() == SymbolUse);
+        speculateSymbol(node->child1(), stringOrSymbolGPR);
+        m_jit.loadPtr(MacroAssembler::Address(stringOrSymbolGPR, Symbol::offsetOfSymbolImpl()), implGPR);
+    }
 
     UniquedStringImpl* uid = node->uidOperand();
     speculationCheck(
         BadIdent, JSValueSource(), nullptr,
-        m_jit.branchPtr(JITCompiler::NotEqual, storageGPR, TrustedImmPtr(uid)));
+        m_jit.branchPtr(JITCompiler::NotEqual, implGPR, TrustedImmPtr(uid)));
     noResult(node);
 }
 
index ed9ae10..c64f978 100644 (file)
@@ -716,6 +716,7 @@ public:
 
     void cachedGetById(CodeOrigin, JSValueRegs base, JSValueRegs result, unsigned identifierNumber, JITCompiler::Jump slowPathTarget, SpillRegistersMode, AccessType);
     void cachedPutById(CodeOrigin, GPRReg baseGPR, JSValueRegs valueRegs, GPRReg scratchGPR, unsigned identifierNumber, PutKind, JITCompiler::Jump slowPathTarget = JITCompiler::Jump(), SpillRegistersMode = NeedToSpill);
+    void cachedGetByVal(CodeOrigin, JSValueRegs base, JSValueRegs property, JSValueRegs result, JITCompiler::Jump slowPathTarget);
 
 #if USE(JSVALUE64)
     void cachedGetById(CodeOrigin, GPRReg baseGPR, GPRReg resultGPR, unsigned identifierNumber, JITCompiler::Jump slowPathTarget, SpillRegistersMode, AccessType);
@@ -1304,7 +1305,7 @@ public:
     void compileGetArrayLength(Node*);
 
     void compileCheckTypeInfoFlags(Node*);
-    void compileCheckStringIdent(Node*);
+    void compileCheckIdent(Node*);
 
     void compileParseInt(Node*);
     
index 7e57539..1ab7849 100644 (file)
@@ -235,7 +235,7 @@ void SpeculativeJIT::cachedGetByIdWithThis(
     CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(codeOrigin, m_stream->size());
     JITGetByIdWithThisGenerator gen(
         m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, identifierUID(identifierNumber),
-        JSValueRegs(resultTagGPR, resultPayloadGPR), JSValueRegs(baseTagGPROrNone, basePayloadGPR), JSValueRegs(thisTagGPR, thisPayloadGPR), AccessType::GetWithThis);
+        JSValueRegs(resultTagGPR, resultPayloadGPR), JSValueRegs(baseTagGPROrNone, basePayloadGPR), JSValueRegs(thisTagGPR, thisPayloadGPR));
     
     gen.generateFastPath(m_jit);
     
@@ -2254,6 +2254,8 @@ void SpeculativeJIT::compile(Node* node)
             break;
         }
         case Array::Generic: {
+            // FIXME: Implement IC here:
+            // https://bugs.webkit.org/show_bug.cgi?id=204082
             if (m_graph.varArgChild(node, 0).useKind() == ObjectUse) {
                 if (m_graph.varArgChild(node, 1).useKind() == StringUse) {
                     compileGetByValForObjectWithString(node);
@@ -3254,27 +3256,27 @@ void SpeculativeJIT::compile(Node* node)
     }
 
     case TryGetById: {
-        compileGetById(node, AccessType::TryGet);
+        compileGetById(node, AccessType::TryGetById);
         break;
     }
 
     case GetByIdDirect: {
-        compileGetById(node, AccessType::GetDirect);
+        compileGetById(node, AccessType::GetByIdDirect);
         break;
     }
 
     case GetByIdDirectFlush: {
-        compileGetByIdFlush(node, AccessType::GetDirect);
+        compileGetByIdFlush(node, AccessType::GetByIdDirect);
         break;
     }
 
     case GetById: {
-        compileGetById(node, AccessType::Get);
+        compileGetById(node, AccessType::GetById);
         break;
     }
 
     case GetByIdFlush: {
-        compileGetByIdFlush(node, AccessType::Get);
+        compileGetByIdFlush(node, AccessType::GetById);
         break;
     }
 
@@ -3342,8 +3344,8 @@ void SpeculativeJIT::compile(Node* node)
         break;
     }
 
-    case CheckStringIdent:
-        compileCheckStringIdent(node);
+    case CheckIdent:
+        compileCheckIdent(node);
         break;
 
     case GetExecutable: {
@@ -4106,7 +4108,7 @@ void SpeculativeJIT::compile(Node* node)
         break;
         
     case FilterCallLinkStatus:
-    case FilterGetByIdStatus:
+    case FilterGetByStatus:
     case FilterPutByIdStatus:
     case FilterInByIdStatus:
         m_interpreter.filterICStatus(node);
index fc0342c..3e638c9 100644 (file)
@@ -200,7 +200,7 @@ void SpeculativeJIT::cachedGetByIdWithThis(CodeOrigin codeOrigin, GPRReg baseGPR
     
     JITGetByIdWithThisGenerator gen(
         m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, identifierUID(identifierNumber),
-        JSValueRegs(resultGPR), JSValueRegs(baseGPR), JSValueRegs(thisGPR), AccessType::GetWithThis);
+        JSValueRegs(resultGPR), JSValueRegs(baseGPR), JSValueRegs(thisGPR));
     gen.generateFastPath(m_jit);
     
     JITCompiler::JumpList slowCases;
@@ -2408,28 +2408,74 @@ void SpeculativeJIT::compile(Node* node)
             break;
         }
         case Array::Generic: {
-            if (m_graph.varArgChild(node, 0).useKind() == ObjectUse) {
-                if (m_graph.varArgChild(node, 1).useKind() == StringUse) {
-                    compileGetByValForObjectWithString(node);
-                    break;
+            if (m_graph.m_slowGetByVal.contains(node)) {
+                if (m_graph.varArgChild(node, 0).useKind() == ObjectUse) {
+                    if (m_graph.varArgChild(node, 1).useKind() == StringUse) {
+                        compileGetByValForObjectWithString(node);
+                        break;
+                    }
+
+                    if (m_graph.varArgChild(node, 1).useKind() == SymbolUse) {
+                        compileGetByValForObjectWithSymbol(node);
+                        break;
+                    }
                 }
 
-                if (m_graph.varArgChild(node, 1).useKind() == SymbolUse) {
-                    compileGetByValForObjectWithSymbol(node);
-                    break;
-                }
+                JSValueOperand base(this, m_graph.varArgChild(node, 0));
+                JSValueOperand property(this, m_graph.varArgChild(node, 1));
+                GPRReg baseGPR = base.gpr();
+                GPRReg propertyGPR = property.gpr();
+                
+                flushRegisters();
+                GPRFlushedCallResult result(this);
+                callOperation(operationGetByVal, result.gpr(), TrustedImmPtr::weakPointer(m_graph, m_graph.globalObjectFor(node->origin.semantic)), baseGPR, propertyGPR);
+                m_jit.exceptionCheck();
+                
+                jsValueResult(result.gpr(), node);
+                break;
             }
-            JSValueOperand base(this, m_graph.varArgChild(node, 0));
-            JSValueOperand property(this, m_graph.varArgChild(node, 1));
+
+            speculate(node, m_graph.varArgChild(node, 0));
+            speculate(node, m_graph.varArgChild(node, 1));
+
+            JSValueOperand base(this, m_graph.varArgChild(node, 0), ManualOperandSpeculation);
+            JSValueOperand property(this, m_graph.varArgChild(node, 1), ManualOperandSpeculation);
+            GPRTemporary result(this, Reuse, property);
             GPRReg baseGPR = base.gpr();
             GPRReg propertyGPR = property.gpr();
+            GPRReg resultGPR = result.gpr();
+
+            CodeOrigin codeOrigin = node->origin.semantic;
+            CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(codeOrigin, m_stream->size());
+            RegisterSet usedRegisters = this->usedRegisters();
+
+            JITCompiler::JumpList slowCases;
+            if (!m_state.forNode(m_graph.varArgChild(node, 0)).isType(SpecCell))
+                slowCases.append(m_jit.branchIfNotCell(baseGPR));
+
+            JITGetByValGenerator gen(
+                m_jit.codeBlock(), codeOrigin, callSite, usedRegisters,
+                JSValueRegs(baseGPR), JSValueRegs(propertyGPR), JSValueRegs(resultGPR));
+
+            if (m_state.forNode(m_graph.varArgChild(node, 1)).isType(SpecString))
+                gen.stubInfo()->propertyIsString = true;
+            else if (m_state.forNode(m_graph.varArgChild(node, 1)).isType(SpecInt32Only))
+                gen.stubInfo()->propertyIsInt32 = true;
+            else if (m_state.forNode(m_graph.varArgChild(node, 1)).isType(SpecSymbol))
+                gen.stubInfo()->propertyIsSymbol = true;
+
+            gen.generateFastPath(m_jit);
             
-            flushRegisters();
-            GPRFlushedCallResult result(this);
-            callOperation(operationGetByVal, result.gpr(), TrustedImmPtr::weakPointer(m_graph, m_graph.globalObjectFor(node->origin.semantic)), baseGPR, propertyGPR);
-            m_jit.exceptionCheck();
+            slowCases.append(gen.slowPathJump());
+
+            std::unique_ptr<SlowPathGenerator> slowPath = slowPathCall(
+                slowCases, this, operationGetByValOptimize,
+                resultGPR, TrustedImmPtr::weakPointer(m_graph, m_graph.globalObjectFor(codeOrigin)), gen.stubInfo(), nullptr, baseGPR, propertyGPR);
             
-            jsValueResult(result.gpr(), node);
+            m_jit.addGetByVal(gen, slowPath.get());
+            addSlowPathGenerator(WTFMove(slowPath));
+
+            jsValueResult(resultGPR, node);
             break;
         }
         case Array::Int32:
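
A detail worth noting in the fast path above: before the IC is generated, the compiler copies what abstract interpretation already proved about the property operand onto the stub info via propertyIsString/propertyIsInt32/propertyIsSymbol, presumably so the stub-generation code can rely on the property kind without re-checking it dynamically. A toy sketch of that seeding step (ToyStubInfo and PropType are illustrative; only the three boolean flag names come from the diff above):

    // Illustrative seeding of statically proven property-type hints onto a stub.
    struct ToyStubInfo {
        bool propertyIsString { false };
        bool propertyIsInt32 { false };
        bool propertyIsSymbol { false };
    };

    enum class PropType { String, Int32, Symbol, Unknown };

    void seedPropertyHints(ToyStubInfo& stub, PropType provenType)
    {
        switch (provenType) {
        case PropType::String:  stub.propertyIsString = true; break;
        case PropType::Int32:   stub.propertyIsInt32 = true; break;
        case PropType::Symbol:  stub.propertyIsSymbol = true; break;
        case PropType::Unknown: break; // nothing proven; the stub must check dynamically
        }
    }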
@@ -3592,27 +3638,27 @@ void SpeculativeJIT::compile(Node* node)
     }
 
     case TryGetById: {
-        compileGetById(node, AccessType::TryGet);
+        compileGetById(node, AccessType::TryGetById);
         break;
     }
 
     case GetByIdDirect: {
-        compileGetById(node, AccessType::GetDirect);
+        compileGetById(node, AccessType::GetByIdDirect);
         break;
     }
 
     case GetByIdDirectFlush: {
-        compileGetByIdFlush(node, AccessType::GetDirect);
+        compileGetByIdFlush(node, AccessType::GetByIdDirect);
         break;
     }
 
     case GetById: {
-        compileGetById(node, AccessType::Get);
+        compileGetById(node, AccessType::GetById);
         break;
     }
 
     case GetByIdFlush: {
-        compileGetByIdFlush(node, AccessType::Get);
+        compileGetByIdFlush(node, AccessType::GetById);
         break;
     }
 
@@ -3691,8 +3737,8 @@ void SpeculativeJIT::compile(Node* node)
         break;
     }
 
-    case CheckStringIdent:
-        compileCheckStringIdent(node);
+    case CheckIdent:
+        compileCheckIdent(node);
         break;
 
     case GetExecutable: {
@@ -5179,7 +5225,7 @@ void SpeculativeJIT::compile(Node* node)
 #endif // ENABLE(FTL_JIT)
 
     case FilterCallLinkStatus:
-    case FilterGetByIdStatus:
+    case FilterGetByStatus:
     case FilterPutByIdStatus:
     case FilterInByIdStatus:
         m_interpreter.filterICStatus(node);
index 2762d99..26f0329 100644 (file)
@@ -194,7 +194,7 @@ private:
                 break;
             }
                 
-            case FilterGetByIdStatus:
+            case FilterGetByStatus:
             case FilterPutByIdStatus:
             case FilterCallLinkStatus:
             case FilterInByIdStatus:
@@ -388,7 +388,7 @@ private:
                 break;
             }
 
-            case FilterGetByIdStatus:
+            case FilterGetByStatus:
             case FilterPutByIdStatus:
             case FilterCallLinkStatus:
             case FilterInByIdStatus:
index 02f20e3..781b71e 100644 (file)
@@ -161,7 +161,7 @@ inline CapabilityLevel canCompile(Node* node)
     case CheckBadCell:
     case CheckNotEmpty:
     case AssertNotEmpty:
-    case CheckStringIdent:
+    case CheckIdent:
     case CheckTraps:
     case StringCharCodeAt:
     case StringCodePointAt:
@@ -381,7 +381,7 @@ inline CapabilityLevel canCompile(Node* node)
     case PutByValWithThis:
     case MatchStructure:
     case FilterCallLinkStatus:
-    case FilterGetByIdStatus:
+    case FilterGetByStatus:
     case FilterPutByIdStatus:
     case FilterInByIdStatus:
     case CreateThis:
index eed7df3..4f35043 100644 (file)
@@ -893,8 +893,8 @@ private:
         case CheckBadCell:
             compileCheckBadCell();
             break;
-        case CheckStringIdent:
-            compileCheckStringIdent();
+        case CheckIdent:
+            compileCheckIdent();
             break;
         case GetExecutable:
             compileGetExecutable();
@@ -907,18 +907,18 @@ private:
             compilePutStructure();
             break;
         case TryGetById:
-            compileGetById(AccessType::TryGet);
+            compileGetById(AccessType::TryGetById);
             break;
         case GetById:
         case GetByIdFlush:
-            compileGetById(AccessType::Get);
+            compileGetById(AccessType::GetById);
             break;
         case GetByIdWithThis:
             compileGetByIdWithThis();
             break;
         case GetByIdDirect:
         case GetByIdDirectFlush:
-            compileGetById(AccessType::GetDirect);
+            compileGetById(AccessType::GetByIdDirect);
             break;
         case InById:
             compileInById();
@@ -1546,7 +1546,7 @@ private:
             compileCallDOMGetter();
             break;
         case FilterCallLinkStatus:
-        case FilterGetByIdStatus:
+        case FilterGetByStatus:
         case FilterPutByIdStatus:
         case FilterInByIdStatus:
             compileFilterICStatus();
@@ -3447,10 +3447,16 @@ private:
             });
     }
 
-    void compileCheckStringIdent()
+    void compileCheckIdent()
     {
         UniquedStringImpl* uid = m_node->uidOperand();
-        LValue stringImpl = lowStringIdent(m_node->child1());
+        LValue stringImpl;
+        if (m_node->child1().useKind() == StringIdentUse)
+            stringImpl = lowStringIdent(m_node->child1());
+        else {
+            ASSERT(m_node->child1().useKind() == SymbolUse);
+            stringImpl = m_out.loadPtr(lowSymbol(m_node->child1()), m_heaps.Symbol_symbolImpl);
+        }
         speculate(BadIdent, noValue(), nullptr, m_out.notEqual(stringImpl, m_out.constIntPtr(uid)));
     }
 
@@ -3538,7 +3544,7 @@ private:
     
     void compileGetById(AccessType type)
     {
-        ASSERT(type == AccessType::Get || type == AccessType::TryGet || type == AccessType::GetDirect);
+        ASSERT(type == AccessType::GetById || type == AccessType::TryGetById || type == AccessType::GetByIdDirect);
         JSGlobalObject* globalObject = m_graph.globalObjectFor(m_node->origin.semantic);
         switch (m_node->child1().useKind()) {
         case CellUse: {
@@ -4447,24 +4453,112 @@ private:
         }
             
         case Array::Generic: {
-            if (m_graph.varArgChild(m_node, 0).useKind() == ObjectUse) {
-                if (m_graph.varArgChild(m_node, 1).useKind() == StringUse) {
-                    setJSValue(vmCall(
-                        Int64, operationGetByValObjectString, weakPointer(globalObject),
-                        lowObject(m_graph.varArgChild(m_node, 0)), lowString(m_graph.varArgChild(m_node, 1))));
-                    return;
-                }
+            if (m_graph.m_slowGetByVal.contains(m_node)) {
+                if (m_graph.varArgChild(m_node, 0).useKind() == ObjectUse) {
+                    if (m_graph.varArgChild(m_node, 1).useKind() == StringUse) {
+                        setJSValue(vmCall(
+                            Int64, operationGetByValObjectString, weakPointer(globalObject),
+                            lowObject(m_graph.varArgChild(m_node, 0)), lowString(m_graph.varArgChild(m_node, 1))));
+                        return;
+                    }
 
-                if (m_graph.varArgChild(m_node, 1).useKind() == SymbolUse) {
-                    setJSValue(vmCall(
-                        Int64, operationGetByValObjectSymbol, weakPointer(globalObject),
-                        lowObject(m_graph.varArgChild(m_node, 0)), lowSymbol(m_graph.varArgChild(m_node, 1))));
-                    return;
+                    if (m_graph.varArgChild(m_node, 1).useKind() == SymbolUse) {
+                        setJSValue(vmCall(
+                            Int64, operationGetByValObjectSymbol, weakPointer(globalObject),
+                            lowObject(m_graph.varArgChild(m_node, 0)), lowSymbol(m_graph.varArgChild(m_node, 1))));
+                        return;
+                    }
                 }
+
+                setJSValue(vmCall(
+                    Int64, operationGetByVal, weakPointer(globalObject),
+                    lowJSValue(m_graph.varArgChild(m_node, 0)), lowJSValue(m_graph.varArgChild(m_node, 1))));
+                return;
             }
-            setJSValue(vmCall(
-                Int64, operationGetByVal, weakPointer(globalObject),
-                lowJSValue(m_graph.varArgChild(m_node, 0)), lowJSValue(m_graph.varArgChild(m_node, 1))));
+
+            Node* node = m_node;
+
+            LValue base = lowJSValue(m_graph.varArgChild(node, 0), ManualOperandSpeculation);
+            LValue property = lowJSValue(m_graph.varArgChild(node, 1), ManualOperandSpeculation);
+
+            speculate(m_graph.varArgChild(node, 0));
+            speculate(m_graph.varArgChild(node, 1));
+            bool baseIsCell = abstractValue(m_graph.varArgChild(node, 0)).isType(SpecCell);
+            bool propertyIsString = false;
+            bool propertyIsInt32 = false;
+            bool propertyIsSymbol = false;
+            if (abstractValue(m_graph.varArgChild(node, 1)).isType(SpecString))
+                propertyIsString = true;
+            else if (abstractValue(m_graph.varArgChild(node, 1)).isType(SpecInt32Only))
+                propertyIsInt32 = true;
+            else if (abstractValue(m_graph.varArgChild(node, 1)).isType(SpecSymbol))
+                propertyIsSymbol = true;
+
+            PatchpointValue* patchpoint = m_out.patchpoint(Int64);
+            patchpoint->appendSomeRegister(base);
+            patchpoint->appendSomeRegister(property);
+            patchpoint->append(m_notCellMask, ValueRep::lateReg(GPRInfo::notCellMaskRegister));
+            patchpoint->append(m_numberTag, ValueRep::lateReg(GPRInfo::numberTagRegister));
+            patchpoint->clobber(RegisterSet::macroScratchRegisters());
+
+            RefPtr<PatchpointExceptionHandle> exceptionHandle = preparePatchpointForExceptions(patchpoint);
+
+            State* state = &m_ftlState;
+            patchpoint->setGenerator([=] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+                AllowMacroScratchRegisterUsage allowScratch(jit);
+
+                CallSiteIndex callSiteIndex = state->jitCode->common.addUniqueCallSiteIndex(node->origin.semantic);
+
+                // This is the direct exit target for operation calls.
+                Box<CCallHelpers::JumpList> exceptions = exceptionHandle->scheduleExitCreation(params)->jumps(jit);
+
+                // This is the exit for call IC's created by the IC for getters. We don't have
+                // to do anything weird other than call this, since it will associate the exit with
+                // the callsite index.
+                exceptionHandle->scheduleExitCreationForUnwind(params, callSiteIndex);
+
+                GPRReg resultGPR = params[0].gpr();
+                GPRReg baseGPR = params[1].gpr();
+                GPRReg propertyGPR = params[2].gpr();
+
+                auto generator = Box<JITGetByValGenerator>::create(
+                    jit.codeBlock(), node->origin.semantic, callSiteIndex, params.unavailableRegisters(),
+                    JSValueRegs(baseGPR), JSValueRegs(propertyGPR), JSValueRegs(resultGPR));
+
+                generator->stubInfo()->propertyIsString = propertyIsString;
+                generator->stubInfo()->propertyIsInt32 = propertyIsInt32;
+                generator->stubInfo()->propertyIsSymbol = propertyIsSymbol;
+
+                CCallHelpers::Jump notCell;
+                if (!baseIsCell)
+                    notCell = jit.branchIfNotCell(baseGPR);
+
+                generator->generateFastPath(jit);
+                CCallHelpers::Label done = jit.label();
+
+                params.addLatePath([=] (CCallHelpers& jit) {
+                    AllowMacroScratchRegisterUsage allowScratch(jit);
+
+                    if (notCell.isSet())
+                        notCell.link(&jit);
+                    generator->slowPathJump().link(&jit);
+                    CCallHelpers::Label slowPathBegin = jit.label();
+                    CCallHelpers::Call slowPathCall = callOperation(
+                        *state, params.unavailableRegisters(), jit, node->origin.semantic,
+                        exceptions.get(), operationGetByValOptimize, resultGPR,
+                        jit.codeBlock()->globalObjectFor(node->origin.semantic),
+                        CCallHelpers::TrustedImmPtr(generator->stubInfo()), CCallHelpers::TrustedImmPtr(nullptr), baseGPR, propertyGPR).call();
+                    jit.jump().linkTo(done, &jit);
+
+                    generator->reportSlowPathCall(slowPathBegin, slowPathCall);
+
+                    jit.addLinkTask([=] (LinkBuffer& linkBuffer) {
+                        generator->finalize(linkBuffer, linkBuffer);
+                    });
+                });
+            });
+
+            setJSValue(patchpoint);
             return;
         }
 
@@ -12567,7 +12661,7 @@ private:
                 auto generator = Box<JITGetByIdWithThisGenerator>::create(
                     jit.codeBlock(), node->origin.semantic, callSiteIndex,
                     params.unavailableRegisters(), uid, JSValueRegs(params[0].gpr()),
-                    JSValueRegs(params[1].gpr()), JSValueRegs(params[2].gpr()), AccessType::GetWithThis);
+                    JSValueRegs(params[1].gpr()), JSValueRegs(params[2].gpr()));
 
                 generator->generateFastPath(jit);
                 CCallHelpers::Label done = jit.label();
index 61f515f..5d30928 100644 (file)
@@ -38,9 +38,9 @@ namespace JSC {
 
 #define FOR_EACH_ICEVENT_KIND(macro) \
     macro(InvalidKind) \
-    macro(GetByIdAddAccessCase) \
-    macro(GetByIdReplaceWithJump) \
-    macro(GetByIdSelfPatch) \
+    macro(GetByAddAccessCase) \
+    macro(GetByReplaceWithJump) \
+    macro(GetBySelfPatch) \
     macro(InAddAccessCase) \
     macro(InReplaceWithJump) \
     macro(InstanceOfAddAccessCase) \
@@ -49,6 +49,7 @@ namespace JSC {
     macro(OperationGetByIdGeneric) \
     macro(OperationGetByIdBuildList) \
     macro(OperationGetByIdOptimize) \
+    macro(OperationGetByValOptimize) \
     macro(OperationGetByIdWithThisOptimize) \
     macro(OperationGenericIn) \
     macro(OperationInById) \
index a4d912e..17c0eda 100644 (file)
@@ -486,6 +486,7 @@ void JIT::privateCompileLinkPass()
 void JIT::privateCompileSlowCases()
 {
     m_getByIdIndex = 0;
+    m_getByValIndex = 0;
     m_getByIdWithThisIndex = 0;
     m_putByIdIndex = 0;
     m_inByIdIndex = 0;
@@ -852,6 +853,7 @@ CompilationResult JIT::link()
     }
     
     finalizeInlineCaches(m_getByIds, patchBuffer);
+    finalizeInlineCaches(m_getByVals, patchBuffer);
     finalizeInlineCaches(m_getByIdsWithThis, patchBuffer);
     finalizeInlineCaches(m_putByIds, patchBuffer);
     finalizeInlineCaches(m_inByIds, patchBuffer);
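
In the baseline JIT, get_by_val ICs now follow the same bookkeeping pattern as get_by_id: the fast-path pass appends a JITGetByValGenerator to m_getByVals, the slow-case pass walks that vector with the m_getByValIndex cursor reset above (mirroring how m_getByIdIndex is used), and link time finalizes the whole vector here. A toy of that vector-plus-cursor pattern (names are illustrative, not the real classes):

    #include <cstddef>
    #include <vector>

    struct ToyGenerator { /* fast-path labels, slow-path jumps, ... */ };

    struct ToyBaselineJIT {
        std::vector<ToyGenerator> getByVals;
        size_t getByValIndex { 0 };

        // Main pass: one generator per get_by_val site, appended in emission order.
        void emitFastPath() { getByVals.push_back(ToyGenerator()); }

        // Slow-case pass: must visit the generators in the same order they were appended.
        ToyGenerator& currentSlowCase() { return getByVals[getByValIndex++]; }
    };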
index f60f007..0ab73e7 100644 (file)
@@ -207,20 +207,6 @@ namespace JSC {
             return JIT(vm, codeBlock, bytecodeOffset).privateCompile(effort);
         }
         
-        static void compileGetByVal(const ConcurrentJSLocker& locker, VM& vm, CodeBlock* codeBlock, ByValInfo* byValInfo, ReturnAddressPtr returnAddress, JITArrayMode arrayMode)
-        {
-            JIT jit(vm, codeBlock);
-            jit.m_bytecodeIndex = byValInfo->bytecodeIndex;
-            jit.privateCompileGetByVal(locker, byValInfo, returnAddress, arrayMode);
-        }
-
-        static void compileGetByValWithCachedId(VM& vm, CodeBlock* codeBlock, ByValInfo* byValInfo, ReturnAddressPtr returnAddress, const Identifier& propertyName)
-        {
-            JIT jit(vm, codeBlock);
-            jit.m_bytecodeIndex = byValInfo->bytecodeIndex;
-            jit.privateCompileGetByValWithCachedId(byValInfo, returnAddress, propertyName);
-        }
-
         static void compilePutByVal(const ConcurrentJSLocker& locker, VM& vm, CodeBlock* codeBlock, ByValInfo* byValInfo, ReturnAddressPtr returnAddress, JITArrayMode arrayMode)
         {
             JIT jit(vm, codeBlock);
@@ -382,15 +368,6 @@ namespace JSC {
         JumpList emitArrayStorageLoad(const Instruction*, PatchableJump& badType);
         JumpList emitLoadForArrayMode(const Instruction*, JITArrayMode, PatchableJump& badType);
 
-        JumpList emitInt32GetByVal(const Instruction* instruction, PatchableJump& badType) { return emitContiguousGetByVal(instruction, badType, Int32Shape); }
-        JumpList emitDoubleGetByVal(const Instruction*, PatchableJump& badType);
-        JumpList emitContiguousGetByVal(const Instruction*, PatchableJump& badType, IndexingType expectedShape = ContiguousShape);
-        JumpList emitArrayStorageGetByVal(const Instruction*, PatchableJump& badType);
-        JumpList emitDirectArgumentsGetByVal(const Instruction*, PatchableJump& badType);
-        JumpList emitScopedArgumentsGetByVal(const Instruction*, PatchableJump& badType);
-        JumpList emitIntTypedArrayGetByVal(const Instruction*, PatchableJump& badType, TypedArrayType);
-        JumpList emitFloatTypedArrayGetByVal(const Instruction*, PatchableJump& badType, TypedArrayType);
-        
         // Property is in regT1, base is in regT0. regT2 contains indexing type.
         // The value to store is not yet loaded. Property is int-checked and
         // zero-extended. Base is cell checked. Structure is already profiled.
@@ -422,7 +399,6 @@ namespace JSC {
         // Identifier check helper for GetByVal and PutByVal.
         void emitByValIdentifierCheck(ByValInfo*, RegisterID cell, RegisterID scratch, const Identifier&, JumpList& slowCases);
 
-        JITGetByIdGenerator emitGetByValWithCachedId(ByValInfo*, OpGetByVal, const Identifier&, Jump& fastDoneCase, Jump& slowDoneCase, JumpList& slowCases);
         template<typename Op>
         JITPutByIdGenerator emitPutByValWithCachedId(ByValInfo*, Op, PutKind, const Identifier&, JumpList& doneCases, JumpList& slowCases);
 
@@ -927,6 +903,7 @@ namespace JSC {
         Vector<CallRecord> m_calls;
         Vector<Label> m_labels;
         Vector<JITGetByIdGenerator> m_getByIds;
+        Vector<JITGetByValGenerator> m_getByVals;
         Vector<JITGetByIdWithThisGenerator> m_getByIdsWithThis;
         Vector<JITPutByIdGenerator> m_putByIds;
         Vector<JITInByIdGenerator> m_inByIds;
@@ -947,6 +924,7 @@ namespace JSC {
         Label m_exceptionHandler;
 
         unsigned m_getByIdIndex { UINT_MAX };
+        unsigned m_getByValIndex { UINT_MAX };
         unsigned m_getByIdWithThisIndex { UINT_MAX };
         unsigned m_putByIdIndex { UINT_MAX };
         unsigned m_inByIdIndex { UINT_MAX };
index 9472671..cd6aa31 100644 (file)
@@ -38,7 +38,7 @@ namespace JSC {
 
 static StructureStubInfo* garbageStubInfo()
 {
-    static StructureStubInfo* stubInfo = new StructureStubInfo(AccessType::Get);
+    static StructureStubInfo* stubInfo = new StructureStubInfo(AccessType::GetById);
     return stubInfo;
 }
 
@@ -117,8 +117,8 @@ void JITGetByIdGenerator::generateFastPath(MacroAssembler& jit)
 
 JITGetByIdWithThisGenerator::JITGetByIdWithThisGenerator(
     CodeBlock* codeBlock, CodeOrigin codeOrigin, CallSiteIndex callSite, const RegisterSet& usedRegisters,
-    UniquedStringImpl*, JSValueRegs value, JSValueRegs base, JSValueRegs thisRegs, AccessType accessType)
-    : JITByIdGenerator(codeBlock, codeOrigin, callSite, accessType, usedRegisters, base, value)
+    UniquedStringImpl*, JSValueRegs value, JSValueRegs base, JSValueRegs thisRegs)
+    : JITByIdGenerator(codeBlock, codeOrigin, callSite, AccessType::GetByIdWithThis, usedRegisters, base, value)
 {
     RELEASE_ASSERT(thisRegs.payloadGPR() != thisRegs.tagGPR());
 
@@ -200,6 +200,8 @@ JITInstanceOfGenerator::JITInstanceOfGenerator(
         m_stubInfo->patch.usedRegisters.clear(scratch2);
 
     m_stubInfo->prototypeIsKnownObject = prototypeIsKnownObject;
+
+    m_stubInfo->hasConstantIdentifier = false;
 }
 
 void JITInstanceOfGenerator::generateFastPath(MacroAssembler& jit)
@@ -217,6 +219,33 @@ void JITInstanceOfGenerator::finalize(LinkBuffer& fastPath, LinkBuffer& slowPath
     fastPath.link(m_jump.m_jump, slowPath.locationOf<NoPtrTag>(m_slowPathBegin));
 }
 
+JITGetByValGenerator::JITGetByValGenerator(CodeBlock* codeBlock, CodeOrigin codeOrigin, CallSiteIndex callSiteIndex, const RegisterSet& usedRegisters, JSValueRegs base, JSValueRegs property, JSValueRegs result)
+    : Base(codeBlock, codeOrigin, callSiteIndex, AccessType::GetByVal, usedRegisters)
+    , m_base(base)
+    , m_result(result)
+{
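+    // A by-val access has no single constant identifier, so the stub is marked accordingly;
+    // the subscript's register is recorded so generated cases can check it at runtime.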
+    m_stubInfo->hasConstantIdentifier = false;
+
+    m_stubInfo->patch.baseGPR = base.payloadGPR();
+    m_stubInfo->patch.u.propertyGPR = property.payloadGPR();
+    m_stubInfo->patch.valueGPR = result.payloadGPR();
+}
+
+void JITGetByValGenerator::generateFastPath(MacroAssembler& jit)
+{
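+    // The fast path is initially just a patchable jump to the slow path; once cases are
+    // generated, it is repatched to point at the stub code.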
+    m_start = jit.label();
+    m_slowPathJump = jit.patchableJump();
+    m_done = jit.label();
+}
+
+void JITGetByValGenerator::finalize(
+    LinkBuffer& fastPath, LinkBuffer& slowPath)
+{
+    ASSERT(m_start.isSet());
+    Base::finalize(
+        fastPath, slowPath, fastPath.locationOf<JITStubRoutinePtrTag>(m_start));
+}
+
 } // namespace JSC
 
 #endif // ENABLE(JIT)
index 31ae4b2..bfb9021 100644 (file)
@@ -120,7 +120,7 @@ public:
 
     JITGetByIdWithThisGenerator(
         CodeBlock*, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters, UniquedStringImpl* propertyName,
-        JSValueRegs value, JSValueRegs base, JSValueRegs thisRegs, AccessType);
+        JSValueRegs value, JSValueRegs base, JSValueRegs thisRegs);
 
     void generateFastPath(MacroAssembler&);
 };
@@ -170,6 +170,35 @@ private:
     MacroAssembler::PatchableJump m_jump;
 };
 
+class JITGetByValGenerator : public JITInlineCacheGenerator {
+    using Base = JITInlineCacheGenerator;
+public:
+    JITGetByValGenerator() { }
+
+    JITGetByValGenerator(
+        CodeBlock*, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters,
+        JSValueRegs base, JSValueRegs property, JSValueRegs result);
+
+    MacroAssembler::Jump slowPathJump() const
+    {
+        ASSERT(m_slowPathJump.m_jump.isSet());
+        return m_slowPathJump.m_jump;
+    }
+
+    void finalize(
+        LinkBuffer& fastPathLinkBuffer, LinkBuffer& slowPathLinkBuffer);
+    
+    void generateFastPath(MacroAssembler&);
+
+private:
+    JSValueRegs m_base;
+    JSValueRegs m_result;
+
+    MacroAssembler::Label m_start;
+    MacroAssembler::PatchableJump m_slowPathJump;
+};
+
 template<typename VectorType>
 void finalizeInlineCaches(VectorType& vector, LinkBuffer& fastPath, LinkBuffer& slowPath)
 {
index 25bcc2b..fd2f6c5 100644 (file)
 
 namespace JSC {
 
-inline MacroAssembler::JumpList JIT::emitDoubleGetByVal(const Instruction* instruction, PatchableJump& badType)
-{
-#if USE(JSVALUE64)
-    JSValueRegs result = JSValueRegs(regT0);
-#else
-    JSValueRegs result = JSValueRegs(regT1, regT0);
-#endif
-    JumpList slowCases = emitDoubleLoad(instruction, badType);
-    boxDouble(fpRegT0, result);
-    return slowCases;
-}
-
 ALWAYS_INLINE MacroAssembler::JumpList JIT::emitLoadForArrayMode(const Instruction* currentInstruction, JITArrayMode arrayMode, PatchableJump& badType)
 {
     switch (arrayMode) {
@@ -60,16 +48,6 @@ ALWAYS_INLINE MacroAssembler::JumpList JIT::emitLoadForArrayMode(const Instructi
     return MacroAssembler::JumpList();
 }
 
-inline MacroAssembler::JumpList JIT::emitContiguousGetByVal(const Instruction* instruction, PatchableJump& badType, IndexingType expectedShape)
-{
-    return emitContiguousLoad(instruction, badType, expectedShape);
-}
-
-inline MacroAssembler::JumpList JIT::emitArrayStorageGetByVal(const Instruction* instruction, PatchableJump& badType)
-{
-    return emitArrayStorageLoad(instruction, badType);
-}
-
 ALWAYS_INLINE bool JIT::isOperandConstantDouble(int src)
 {
     return m_codeBlock->isConstantRegisterIndex(src) && getConstantOperand(src).isDouble();
index 3a9e85a..7d043b3 100644 (file)
@@ -205,7 +205,7 @@ EncodedJSValue JIT_OPERATION operationTryGetByIdOptimize(JSGlobalObject* globalO
 
     CodeBlock* codeBlock = callFrame->codeBlock();
     if (stubInfo->considerCaching(vm, codeBlock, baseValue.structureOrNull()) && !slot.isTaintedByOpaqueObject() && (slot.isCacheableValue() || slot.isCacheableGetter() || slot.isUnset()))
-        repatchGetByID(globalObject, codeBlock, baseValue, ident, slot, *stubInfo, GetByIDKind::Try);
+        repatchGetBy(globalObject, codeBlock, baseValue, ident, slot, *stubInfo, GetByKind::Try);
 
     return JSValue::encode(slot.getPureResult());
 }
@@ -261,7 +261,7 @@ EncodedJSValue JIT_OPERATION operationGetByIdDirectOptimize(JSGlobalObject* glob
 
     CodeBlock* codeBlock = callFrame->codeBlock();
     if (stubInfo->considerCaching(vm, codeBlock, baseValue.structureOrNull()))
-        repatchGetByID(globalObject, codeBlock, baseValue, ident, slot, *stubInfo, GetByIDKind::Direct);
+        repatchGetBy(globalObject, codeBlock, baseValue, ident, slot, *stubInfo, GetByKind::Direct);
 
     RELEASE_AND_RETURN(scope, JSValue::encode(found ? slot.getValue(globalObject, ident) : jsUndefined()));
 }
@@ -321,7 +321,7 @@ EncodedJSValue JIT_OPERATION operationGetByIdOptimize(JSGlobalObject* globalObje
         
         CodeBlock* codeBlock = callFrame->codeBlock();
         if (stubInfo->considerCaching(vm, codeBlock, baseValue.structureOrNull()))
-            repatchGetByID(globalObject, codeBlock, baseValue, ident, slot, *stubInfo, GetByIDKind::Normal);
+            repatchGetBy(globalObject, codeBlock, baseValue, ident, slot, *stubInfo, GetByKind::Normal);
         return found ? slot.getValue(globalObject, ident) : jsUndefined();
     }));
 }
@@ -378,7 +378,7 @@ EncodedJSValue JIT_OPERATION operationGetByIdWithThisOptimize(JSGlobalObject* gl
         
         CodeBlock* codeBlock = callFrame->codeBlock();
         if (stubInfo->considerCaching(vm, codeBlock, baseValue.structureOrNull()))
-            repatchGetByID(globalObject, codeBlock, baseValue, ident, slot, *stubInfo, GetByIDKind::WithThis);
+            repatchGetBy(globalObject, codeBlock, baseValue, ident, slot, *stubInfo, GetByKind::WithThis);
         return found ? slot.getValue(globalObject, ident) : jsUndefined();
     }));
 }
@@ -1919,7 +1919,7 @@ int32_t JIT_OPERATION operationInstanceOfCustom(JSGlobalObject* globalObject, En
 
 }
 
-static JSValue getByVal(JSGlobalObject* globalObject, CallFrame* callFrame, JSValue baseValue, JSValue subscript, ByValInfo* byValInfo, ReturnAddressPtr returnAddress)
+ALWAYS_INLINE static JSValue getByVal(JSGlobalObject* globalObject, CallFrame* callFrame, ArrayProfile* arrayProfile, JSValue baseValue, JSValue subscript)
 {
     UNUSED_PARAM(callFrame);
     VM& vm = globalObject->vm();
@@ -1933,8 +1933,6 @@ static JSValue getByVal(JSGlobalObject* globalObject, CallFrame* callFrame, JSVa
             if (existingAtomString) {
                 if (JSValue result = baseValue.asCell()->fastGetOwnProperty(vm, structure, existingAtomString.get())) {
                     ASSERT(callFrame->bytecodeIndex() != BytecodeIndex(0));
-                    if (byValInfo->stubInfo && byValInfo->cachedId.impl() != existingAtomString)
-                        byValInfo->tookSlowPath = true;
                     return result;
                 }
             }
@@ -1942,16 +1940,12 @@ static JSValue getByVal(JSGlobalObject* globalObject, CallFrame* callFrame, JSVa
     }
 
     if (subscript.isInt32()) {
-        ASSERT(callFrame->bytecodeIndex() != BytecodeIndex(0));
-        byValInfo->tookSlowPath = true;
-
         int32_t i = subscript.asInt32();
         if (isJSString(baseValue)) {
-            if (i >= 0 && asString(baseValue)->canGetIndex(i)) {
-                ctiPatchCallByReturnAddress(returnAddress, operationGetByValString);
+            if (i >= 0 && asString(baseValue)->canGetIndex(i))
                 RELEASE_AND_RETURN(scope, asString(baseValue)->getIndex(globalObject, i));
-            }
-            byValInfo->arrayProfile->setOutOfBounds();
+            if (arrayProfile)
+                arrayProfile->setOutOfBounds();
         } else if (baseValue.isObject()) {
             JSObject* object = asObject(baseValue);
             if (object->canGetIndexQuickly(i))
@@ -1970,7 +1964,8 @@ static JSValue getByVal(JSGlobalObject* globalObject, CallFrame* callFrame, JSVa
                 // FIXME: This will make us think that in-bounds typed array accesses are actually
                 // out-of-bounds.
                 // https://bugs.webkit.org/show_bug.cgi?id=149886
-                byValInfo->arrayProfile->setOutOfBounds();
+                if (arrayProfile)
+                    arrayProfile->setOutOfBounds();
             }
         }
 
@@ -1984,89 +1979,12 @@ static JSValue getByVal(JSGlobalObject* globalObject, CallFrame* callFrame, JSVa
     RETURN_IF_EXCEPTION(scope, JSValue());
 
     ASSERT(callFrame->bytecodeIndex() != BytecodeIndex(0));
-    if (byValInfo->stubInfo && (!isStringOrSymbol(subscript) || byValInfo->cachedId != property))
-        byValInfo->tookSlowPath = true;
-
     RELEASE_AND_RETURN(scope, baseValue.get(globalObject, property));
 }
 
-static OptimizationResult tryGetByValOptimize(JSGlobalObject* globalObject, CallFrame* callFrame, JSValue baseValue, JSValue subscript, ByValInfo* byValInfo, ReturnAddressPtr returnAddress)
-{
-    // See if it's worth optimizing this at all.
-    OptimizationResult optimizationResult = OptimizationResult::NotOptimized;
-
-    VM& vm = globalObject->vm();
-    auto scope = DECLARE_THROW_SCOPE(vm);
-
-    if (baseValue.isObject() && subscript.isInt32()) {
-        JSObject* object = asObject(baseValue);
-
-        ASSERT(callFrame->bytecodeIndex() != BytecodeIndex(0));
-        ASSERT(!byValInfo->stubRoutine);
-
-        if (hasOptimizableIndexing(object->structure(vm))) {
-            // Attempt to optimize.
-            Structure* structure = object->structure(vm);
-            JITArrayMode arrayMode = jitArrayModeForStructure(structure);
-            if (arrayMode != byValInfo->arrayMode) {
-                // If we reached this case, we got an interesting array mode we did not expect when we compiled.
-                // Let's update the profile to do better next time.
-                CodeBlock* codeBlock = callFrame->codeBlock();
-                ConcurrentJSLocker locker(codeBlock->m_lock);
-                byValInfo->arrayProfile->computeUpdatedPrediction(locker, codeBlock, structure);
-
-                JIT::compileGetByVal(locker, vm, codeBlock, byValInfo, returnAddress, arrayMode);
-                optimizationResult = OptimizationResult::Optimized;
-            }
-        }
-
-        // If we failed to patch and we have some object that intercepts indexed get, then don't even wait until 10 times.
-        if (optimizationResult != OptimizationResult::Optimized && object->structure(vm)->typeInfo().interceptsGetOwnPropertySlotByIndexEvenWhenLengthIsNotZero())
-            optimizationResult = OptimizationResult::GiveUp;
-    }
-
-    if (baseValue.isObject() && isStringOrSymbol(subscript)) {
-        const Identifier propertyName = subscript.toPropertyKey(globalObject);
-        RETURN_IF_EXCEPTION(scope, OptimizationResult::GiveUp);
-        if (subscript.isSymbol() || !parseIndex(propertyName)) {
-            ASSERT(callFrame->bytecodeIndex() != BytecodeIndex(0));
-            ASSERT(!byValInfo->stubRoutine);
-            if (byValInfo->seen) {
-                if (byValInfo->cachedId == propertyName) {
-                    JIT::compileGetByValWithCachedId(vm, callFrame->codeBlock(), byValInfo, returnAddress, propertyName);
-                    optimizationResult = OptimizationResult::Optimized;
-                } else {
-                    // Seem like a generic property access site.
-                    optimizationResult = OptimizationResult::GiveUp;
-                }
-            } else {
-                CodeBlock* codeBlock = callFrame->codeBlock();
-                ConcurrentJSLocker locker(codeBlock->m_lock);
-                byValInfo->seen = true;
-                byValInfo->cachedId = propertyName;
-                if (subscript.isSymbol())
-                    byValInfo->cachedSymbol.set(vm, codeBlock, asSymbol(subscript));
-                optimizationResult = OptimizationResult::SeenOnce;
-            }
-        }
-    }
-
-    if (optimizationResult != OptimizationResult::Optimized && optimizationResult != OptimizationResult::SeenOnce) {
-        // If we take slow path more than 10 times without patching then make sure we
-        // never make that mistake again. For cases where we see non-index-intercepting
-        // objects, this gives 10 iterations worth of opportunity for us to observe
-        // that the get_by_val may be polymorphic. We count up slowPathCount even if
-        // the result is GiveUp.
-        if (++byValInfo->slowPathCount >= 10)
-            optimizationResult = OptimizationResult::GiveUp;
-    }
-
-    return optimizationResult;
-}
-
 extern "C" {
 
-EncodedJSValue JIT_OPERATION operationGetByValGeneric(JSGlobalObject* globalObject, EncodedJSValue encodedBase, EncodedJSValue encodedSubscript, ByValInfo* byValInfo)
+EncodedJSValue JIT_OPERATION operationGetByValGeneric(JSGlobalObject* globalObject, StructureStubInfo* stubInfo, ArrayProfile* profile, EncodedJSValue encodedBase, EncodedJSValue encodedSubscript)
 {
     VM& vm = globalObject->vm();
     CallFrame* callFrame = DECLARE_CALL_FRAME(vm);
@@ -2074,11 +1992,12 @@ EncodedJSValue JIT_OPERATION operationGetByValGeneric(JSGlobalObject* globalObje
     JSValue baseValue = JSValue::decode(encodedBase);
     JSValue subscript = JSValue::decode(encodedSubscript);
 
-    JSValue result = getByVal(globalObject, callFrame, baseValue, subscript, byValInfo, ReturnAddressPtr(OUR_RETURN_ADDRESS));
-    return JSValue::encode(result);
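+    // The generic operation never repatches the IC, so note that this site took the slow path.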
+    stubInfo->tookSlowPath = true;
+
+    return JSValue::encode(getByVal(globalObject, callFrame, profile, baseValue, subscript));
 }
 
-EncodedJSValue JIT_OPERATION operationGetByValOptimize(JSGlobalObject* globalObject, EncodedJSValue encodedBase, EncodedJSValue encodedSubscript, ByValInfo* byValInfo)
+EncodedJSValue JIT_OPERATION operationGetByValOptimize(JSGlobalObject* globalObject, StructureStubInfo* stubInfo, ArrayProfile* profile, EncodedJSValue encodedBase, EncodedJSValue encodedSubscript)
 {
     VM& vm = globalObject->vm();
     CallFrame* callFrame = DECLARE_CALL_FRAME(vm);
@@ -2087,16 +2006,29 @@ EncodedJSValue JIT_OPERATION operationGetByValOptimize(JSGlobalObject* globalObj
 
     JSValue baseValue = JSValue::decode(encodedBase);
     JSValue subscript = JSValue::decode(encodedSubscript);
-    ReturnAddressPtr returnAddress = ReturnAddressPtr(OUR_RETURN_ADDRESS);
-    OptimizationResult result = tryGetByValOptimize(globalObject, callFrame, baseValue, subscript, byValInfo, returnAddress);
-    RETURN_IF_EXCEPTION(scope, { });
-    if (result == OptimizationResult::GiveUp) {
-        // Don't ever try to optimize.
-        byValInfo->tookSlowPath = true;
-        ctiPatchCallByReturnAddress(returnAddress, operationGetByValGeneric);
+
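+    // For int32 subscripts on cells, try to add an array-load case to the IC.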
+    if (baseValue.isCell() && subscript.isInt32()) {
+        if (stubInfo->considerCaching(vm, callFrame->codeBlock(), baseValue.structureOrNull()))
+            repatchArrayGetByVal(globalObject, callFrame->codeBlock(), baseValue, subscript, *stubInfo);
     }
 
-    RELEASE_AND_RETURN(scope, JSValue::encode(getByVal(globalObject, callFrame, baseValue, subscript, byValInfo, returnAddress)));
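+    // For string and symbol subscripts that are not array indices, cache as an identifier-based
+    // access keyed on the resolved property name.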
+    if (baseValue.isCell() && isStringOrSymbol(subscript)) {
+        const Identifier propertyName = subscript.toPropertyKey(globalObject);
+        RETURN_IF_EXCEPTION(scope, encodedJSValue());
+        if (subscript.isSymbol() || !parseIndex(propertyName)) {
+            scope.release();
+            return JSValue::encode(baseValue.getPropertySlot(globalObject, propertyName, [&] (bool found, PropertySlot& slot) -> JSValue {
+                LOG_IC((ICEvent::OperationGetByValOptimize, baseValue.classInfoOrNull(vm), propertyName, baseValue == slot.slotBase())); 
+                
+                CodeBlock* codeBlock = callFrame->codeBlock();
+                if (stubInfo->considerCaching(vm, codeBlock, baseValue.structureOrNull()))
+                    repatchGetBy(globalObject, codeBlock, baseValue, propertyName, slot, *stubInfo, GetByKind::NormalByVal);
+                return found ? slot.getValue(globalObject, propertyName) : jsUndefined();
+            }));
+        }
+    }
+
+    RELEASE_AND_RETURN(scope, JSValue::encode(getByVal(globalObject, callFrame, profile, baseValue, subscript)));
 }
 
 EncodedJSValue JIT_OPERATION operationHasIndexedPropertyDefault(JSGlobalObject* globalObject, EncodedJSValue encodedBase, EncodedJSValue encodedSubscript, ByValInfo* byValInfo)
@@ -2168,40 +2100,6 @@ EncodedJSValue JIT_OPERATION operationHasIndexedPropertyGeneric(JSGlobalObject*
     return JSValue::encode(jsBoolean(object->hasPropertyGeneric(globalObject, index, PropertySlot::InternalMethodType::GetOwnProperty)));
 }
     
-EncodedJSValue JIT_OPERATION operationGetByValString(JSGlobalObject* globalObject, EncodedJSValue encodedBase, EncodedJSValue encodedSubscript, ByValInfo* byValInfo)
-{
-    VM& vm = globalObject->vm();
-    CallFrame* callFrame = DECLARE_CALL_FRAME(vm);
-    JITOperationPrologueCallFrameTracer tracer(vm, callFrame);
-    auto scope = DECLARE_THROW_SCOPE(vm);
-    JSValue baseValue = JSValue::decode(encodedBase);
-    JSValue subscript = JSValue::decode(encodedSubscript);
-    
-    JSValue result;
-    if (LIKELY(subscript.isUInt32())) {
-        uint32_t i = subscript.asUInt32();
-        if (isJSString(baseValue) && asString(baseValue)->canGetIndex(i))
-            RELEASE_AND_RETURN(scope, JSValue::encode(asString(baseValue)->getIndex(globalObject, i)));
-
-        result = baseValue.get(globalObject, i);
-        RETURN_IF_EXCEPTION(scope, encodedJSValue());
-        if (!isJSString(baseValue)) {
-            ASSERT(callFrame->bytecodeIndex() != BytecodeIndex(0));
-            auto getByValFunction = byValInfo->stubRoutine ? operationGetByValGeneric : operationGetByValOptimize;
-            ctiPatchCallByReturnAddress(ReturnAddressPtr(OUR_RETURN_ADDRESS), getByValFunction);
-        }
-    } else {
-        baseValue.requireObjectCoercible(globalObject);
-        RETURN_IF_EXCEPTION(scope, encodedJSValue());
-        auto property = subscript.toPropertyKey(globalObject);
-        RETURN_IF_EXCEPTION(scope, encodedJSValue());
-        scope.release();
-        result = baseValue.get(globalObject, property);
-    }
-
-    return JSValue::encode(result);
-}
-
 static bool deleteById(JSGlobalObject* globalObject, CallFrame* callFrame, VM& vm, JSValue base, UniquedStringImpl* uid)
 {
     auto scope = DECLARE_THROW_SCOPE(vm);
index 595aec4..92e2b19 100644 (file)
@@ -245,9 +245,9 @@ void JIT_OPERATION operationPutGetterSetter(JSGlobalObject*, JSCell*, UniquedStr
 #endif
 void JIT_OPERATION operationPushFunctionNameScope(JSGlobalObject*, int32_t, SymbolTable*, EncodedJSValue) WTF_INTERNAL;
 void JIT_OPERATION operationPopScope(JSGlobalObject*, int32_t) WTF_INTERNAL;
-EncodedJSValue JIT_OPERATION operationGetByValOptimize(JSGlobalObject*, EncodedJSValue encodedBase, EncodedJSValue encodedSubscript, ByValInfo*) WTF_INTERNAL;
-EncodedJSValue JIT_OPERATION operationGetByValGeneric(JSGlobalObject*, EncodedJSValue encodedBase, EncodedJSValue encodedSubscript, ByValInfo*) WTF_INTERNAL;
-EncodedJSValue JIT_OPERATION operationGetByValString(JSGlobalObject*, EncodedJSValue encodedBase, EncodedJSValue encodedSubscript, ByValInfo*) WTF_INTERNAL;
+
+EncodedJSValue JIT_OPERATION operationGetByValOptimize(JSGlobalObject*, StructureStubInfo*, ArrayProfile*, EncodedJSValue encodedBase, EncodedJSValue encodedSubscript) WTF_INTERNAL;
+EncodedJSValue JIT_OPERATION operationGetByValGeneric(JSGlobalObject*, StructureStubInfo*, ArrayProfile*, EncodedJSValue encodedBase, EncodedJSValue encodedSubscript) WTF_INTERNAL;
 EncodedJSValue JIT_OPERATION operationHasIndexedPropertyDefault(JSGlobalObject*, EncodedJSValue encodedBase, EncodedJSValue encodedSubscript, ByValInfo*) WTF_INTERNAL;
 EncodedJSValue JIT_OPERATION operationHasIndexedPropertyGeneric(JSGlobalObject*, EncodedJSValue encodedBase, EncodedJSValue encodedSubscript, ByValInfo*) WTF_INTERNAL;
 EncodedJSValue JIT_OPERATION operationDeleteByIdJSResult(JSGlobalObject*, EncodedJSValue base, UniquedStringImpl*) WTF_INTERNAL;
index 4f867da..4263ee0 100644 (file)
@@ -61,140 +61,38 @@ void JIT::emit_op_get_by_val(const Instruction* currentInstruction)
     int base = bytecode.m_base.offset();
     int property = bytecode.m_property.offset();
     ArrayProfile* profile = &metadata.m_arrayProfile;
-    ByValInfo* byValInfo = m_codeBlock->addByValInfo();
 
     emitGetVirtualRegister(base, regT0);
-    bool propertyNameIsIntegerConstant = isOperandConstantInt(property);
-    if (propertyNameIsIntegerConstant)
-        move(Imm32(getOperandConstantInt(property)), regT1);
-    else
-        emitGetVirtualRegister(property, regT1);
-
+    emitGetVirtualRegister(property, regT1);
     emitJumpSlowCaseIfNotJSCell(regT0, base);
-
-    PatchableJump notIndex;
-    if (!propertyNameIsIntegerConstant) {
-        notIndex = emitPatchableJumpIfNotInt(regT1);
-        addSlowCase(notIndex);
-
-        // This is technically incorrect - we're zero-extending an int32. On the hot path this doesn't matter.
-        // We check the value as if it was a uint32 against the m_vectorLength - which will always fail if
-        // number was signed since m_vectorLength is always less than intmax (since the total allocation
-        // size is always less than 4Gb). As such zero extending will have been correct (and extending the value
-        // to 64-bits is necessary since it's used in the address calculation). We zero extend rather than sign
-        // extending since it makes it easier to re-tag the value in the slow case.
-        zeroExtend32ToPtr(regT1, regT1);
-    }
-
     emitArrayProfilingSiteWithCell(regT0, regT2, profile);
-    and32(TrustedImm32(IndexingShapeMask), regT2);
-
-    PatchableJump badType;
-    JumpList slowCases;
-
-    JITArrayMode mode = chooseArrayMode(profile);
-    switch (mode) {
-    case JITInt32:
-        slowCases = emitInt32GetByVal(currentInstruction, badType);
-        break;
-    case JITDouble:
-        slowCases = emitDoubleGetByVal(currentInstruction, badType);
-        break;
-    case JITContiguous:
-        slowCases = emitContiguousGetByVal(currentInstruction, badType);
-        break;
-    case JITArrayStorage:
-        slowCases = emitArrayStorageGetByVal(currentInstruction, badType);
-        break;
-    default:
-        CRASH();
-        break;
-    }
-    
-    addSlowCase(badType);
-    addSlowCase(slowCases);
-    
-    Label done = label();
-    
-    if (!ASSERT_DISABLED) {
-        Jump resultOK = branchIfNotEmpty(regT0);
-        abortWithReason(JITGetByValResultIsNotEmpty);
-        resultOK.link(this);
-    }
-
-    emitValueProfilingSite(metadata);
-    emitPutVirtualRegister(dst);
 
-    Label nextHotPath = label();
-
-    m_byValCompilationInfo.append(ByValCompilationInfo(byValInfo, m_bytecodeIndex, notIndex, badType, mode, profile, done, nextHotPath));
-}
-
-JITGetByIdGenerator JIT::emitGetByValWithCachedId(ByValInfo* byValInfo, OpGetByVal bytecode, const Identifier& propertyName, Jump& fastDoneCase, Jump& slowDoneCase, JumpList& slowCases)
-{
-    // base: regT0
-    // property: regT1
-    // scratch: regT3
-
-    int dst = bytecode.m_dst.offset();
-
-    slowCases.append(branchIfNotCell(regT1));
-    emitByValIdentifierCheck(byValInfo, regT1, regT3, propertyName, slowCases);
-
-    JITGetByIdGenerator gen(
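+    // Emit a get_by_val IC; its slow cases are linked to a call to operationGetByValOptimize in
+    // emitSlow_op_get_by_val below.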
+    JITGetByValGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeIndex), CallSiteIndex(m_bytecodeIndex), RegisterSet::stubUnavailableRegisters(),
-        propertyName.impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
+        JSValueRegs(regT0), JSValueRegs(regT1), JSValueRegs(regT0));
     gen.generateFastPath(*this);
+    addSlowCase(gen.slowPathJump());
+    m_getByVals.append(gen);
 
-    fastDoneCase = jump();
-
-    Label coldPathBegin = label();
-    gen.slowPathJump().link(this);
-
-    Call call = callOperationWithProfile(bytecode.metadata(m_codeBlock), operationGetByIdOptimize, dst, TrustedImmPtr(m_codeBlock->globalObject()), gen.stubInfo(), regT0, propertyName.impl());
-    gen.reportSlowPathCall(coldPathBegin, call);
-    slowDoneCase = jump();
-
-    return gen;
+    emitValueProfilingSite(bytecode.metadata(m_codeBlock));
+    emitPutVirtualRegister(dst);
 }
 
 void JIT::emitSlow_op_get_by_val(const Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
 {
     auto bytecode = currentInstruction->as<OpGetByVal>();
     int dst = bytecode.m_dst.offset();
-    int base = bytecode.m_base.offset();
-    int property = bytecode.m_property.offset();
-    ByValInfo* byValInfo = m_byValCompilationInfo[m_byValInstructionIndex].byValInfo;
-    
-    linkSlowCaseIfNotJSCell(iter, base); // base cell check
-
-    if (!isOperandConstantInt(property))
-        linkSlowCase(iter); // property int32 check
-    Jump nonCell = jump();
-    linkSlowCase(iter); // base array check
-    Jump notString = branchIfNotString(regT0);
-    emitNakedCall(CodeLocationLabel<NoPtrTag>(m_vm->getCTIStub(stringGetByValGenerator).retaggedCode<NoPtrTag>()));
-    Jump failed = branchTest64(Zero, regT0);
-    emitPutVirtualRegister(dst, regT0);
-    emitJumpSlowToHot(jump(), currentInstruction->size());
-    failed.link(this);
-    notString.link(this);
-    nonCell.link(this);
-    
-    linkSlowCase(iter); // vector length check
-    linkSlowCase(iter); // empty value
-    
-    Label slowPath = label();
-    
-    emitGetVirtualRegister(base, regT0);
-    emitGetVirtualRegister(property, regT1);
-    Call call = callOperation(operationGetByValOptimize, dst, TrustedImmPtr(m_codeBlock->globalObject()), regT0, regT1, byValInfo);
+    auto& metadata = bytecode.metadata(m_codeBlock);
+    ArrayProfile* profile = &metadata.m_arrayProfile;
 
-    m_byValCompilationInfo[m_byValInstructionIndex].slowPathTarget = slowPath;
-    m_byValCompilationInfo[m_byValInstructionIndex].returnAddress = call;
-    m_byValInstructionIndex++;
+    JITGetByValGenerator& gen = m_getByVals[m_getByValIndex];
+    ++m_getByValIndex;
 
-    emitValueProfilingSite(bytecode.metadata(m_codeBlock));
+    linkAllSlowCases(iter);
+
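+    // All slow cases funnel into a single call to operationGetByValOptimize, which may repatch the IC.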
+    Label coldPathBegin = label();
+    Call call = callOperationWithProfile(bytecode.metadata(m_codeBlock), operationGetByValOptimize, dst, TrustedImmPtr(m_codeBlock->globalObject()), gen.stubInfo(), profile, regT0, regT1);
+    gen.reportSlowPathCall(coldPathBegin, call);
 }
 
 void JIT::emit_op_put_by_val_direct(const Instruction* currentInstruction)
@@ -506,7 +404,7 @@ void JIT::emit_op_try_get_by_id(const Instruction* currentInstruction)
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeIndex), CallSiteIndex(m_bytecodeIndex), RegisterSet::stubUnavailableRegisters(),
-        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::TryGet);
+        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::TryGetById);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIds.append(gen);
@@ -545,7 +443,7 @@ void JIT::emit_op_get_by_id_direct(const Instruction* currentInstruction)
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeIndex), CallSiteIndex(m_bytecodeIndex), RegisterSet::stubUnavailableRegisters(),
-        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetDirect);
+        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetByIdDirect);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIds.append(gen);
@@ -591,7 +489,7 @@ void JIT::emit_op_get_by_id(const Instruction* currentInstruction)
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeIndex), CallSiteIndex(m_bytecodeIndex), RegisterSet::stubUnavailableRegisters(),
-        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
+        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetById);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIds.append(gen);
@@ -615,7 +513,7 @@ void JIT::emit_op_get_by_id_with_this(const Instruction* currentInstruction)
 
     JITGetByIdWithThisGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeIndex), CallSiteIndex(m_bytecodeIndex), RegisterSet::stubUnavailableRegisters(),
-        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), JSValueRegs(regT1), AccessType::GetWithThis);
+        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), JSValueRegs(regT1));
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIdsWithThis.append(gen);
@@ -1323,92 +1221,6 @@ void JIT::emitByValIdentifierCheck(ByValInfo* byValInfo, RegisterID cell, Regist
     }
 }
 
-void JIT::privateCompileGetByVal(const ConcurrentJSLocker&, ByValInfo* byValInfo, ReturnAddressPtr returnAddress, JITArrayMode arrayMode)
-{
-    const Instruction* currentInstruction = m_codeBlock->instructions().at(byValInfo->bytecodeIndex).ptr();
-    
-    PatchableJump badType;
-    JumpList slowCases;
-    
-    switch (arrayMode) {
-    case JITInt32:
-        slowCases = emitInt32GetByVal(currentInstruction, badType);
-        break;
-    case JITDouble:
-        slowCases = emitDoubleGetByVal(currentInstruction, badType);
-        break;
-    case JITContiguous:
-        slowCases = emitContiguousGetByVal(currentInstruction, badType);
-        break;
-    case JITArrayStorage:
-        slowCases = emitArrayStorageGetByVal(currentInstruction, badType);
-        break;
-    case JITDirectArguments:
-        slowCases = emitDirectArgumentsGetByVal(currentInstruction, badType);
-        break;
-    case JITScopedArguments:
-        slowCases = emitScopedArgumentsGetByVal(currentInstruction, badType);
-        break;
-    default:
-        TypedArrayType type = typedArrayTypeForJITArrayMode(arrayMode);
-        if (isInt(type))
-            slowCases = emitIntTypedArrayGetByVal(currentInstruction, badType, type);
-        else 
-            slowCases = emitFloatTypedArrayGetByVal(currentInstruction, badType, type);
-        break;
-    }
-    
-    Jump done = jump();
-
-    LinkBuffer patchBuffer(*this, m_codeBlock);
-
-    patchBuffer.link(badType, byValInfo->slowPathTarget);
-    patchBuffer.link(slowCases, byValInfo->slowPathTarget);
-
-    patchBuffer.link(done, byValInfo->badTypeDoneTarget);
-
-    byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB(
-        m_codeBlock, patchBuffer, JITStubRoutinePtrTag,
-        "Baseline get_by_val stub for %s, return point %p", toCString(*m_codeBlock).data(), returnAddress.value());
-    
-    MacroAssembler::repatchJump(byValInfo->badTypeJump, CodeLocationLabel<JITStubRoutinePtrTag>(byValInfo->stubRoutine->code().code()));
-    MacroAssembler::repatchCall(CodeLocationCall<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>(returnAddress)), FunctionPtr<OperationPtrTag>(operationGetByValGeneric));
-}
-
-void JIT::privateCompileGetByValWithCachedId(ByValInfo* byValInfo, ReturnAddressPtr returnAddress, const Identifier& propertyName)
-{
-    const Instruction* currentInstruction = m_codeBlock->instructions().at(byValInfo->bytecodeIndex).ptr();
-    auto bytecode = currentInstruction->as<OpGetByVal>();
-
-    Jump fastDoneCase;
-    Jump slowDoneCase;
-    JumpList slowCases;
-
-    JITGetByIdGenerator gen = emitGetByValWithCachedId(byValInfo, bytecode, propertyName, fastDoneCase, slowDoneCase, slowCases);
-
-    ConcurrentJSLocker locker(m_codeBlock->m_lock);
-    LinkBuffer patchBuffer(*this, m_codeBlock);
-    patchBuffer.link(slowCases, byValInfo->slowPathTarget);
-    patchBuffer.link(fastDoneCase, byValInfo->badTypeDoneTarget);
-    patchBuffer.link(slowDoneCase, byValInfo->badTypeNextHotPathTarget);
-    if (!m_exceptionChecks.empty())
-        patchBuffer.link(m_exceptionChecks, byValInfo->exceptionHandler);
-
-    for (const auto& callSite : m_calls) {
-        if (callSite.callee)
-            patchBuffer.link(callSite.from, callSite.callee);
-    }
-    gen.finalize(patchBuffer, patchBuffer);
-
-    byValInfo->stubRoutine = FINALIZE_CODE_FOR_STUB(
-        m_codeBlock, patchBuffer, JITStubRoutinePtrTag,
-        "Baseline get_by_val with cached property name '%s' stub for %s, return point %p", propertyName.impl()->utf8().data(), toCString(*m_codeBlock).data(), returnAddress.value());
-    byValInfo->stubInfo = gen.stubInfo();
-
-    MacroAssembler::repatchJump(byValInfo->notIndexJump, CodeLocationLabel<JITStubRoutinePtrTag>(byValInfo->stubRoutine->code().code()));
-    MacroAssembler::repatchCall(CodeLocationCall<NoPtrTag>(MacroAssemblerCodePtr<NoPtrTag>(returnAddress)), FunctionPtr<OperationPtrTag>(operationGetByValGeneric));
-}
-
 template<typename Op>
 void JIT::privateCompilePutByVal(const ConcurrentJSLocker&, ByValInfo* byValInfo, ReturnAddressPtr returnAddress, JITArrayMode arrayMode)
 {
@@ -1594,192 +1406,6 @@ JIT::JumpList JIT::emitArrayStorageLoad(const Instruction*, PatchableJump& badTy
     return slowCases;
 }
 
-JIT::JumpList JIT::emitDirectArgumentsGetByVal(const Instruction*, PatchableJump& badType)
-{
-    JumpList slowCases;
-    
-#if USE(JSVALUE64)
-    RegisterID base = regT0;
-    RegisterID property = regT1;
-    JSValueRegs result = JSValueRegs(regT0);
-    RegisterID scratch = regT3;
-    RegisterID scratch2 = regT4;
-#else
-    RegisterID base = regT0;
-    RegisterID property = regT2;
-    JSValueRegs result = JSValueRegs(regT1, regT0);
-    RegisterID scratch = regT3;
-    RegisterID scratch2 = regT4;
-#endif
-
-    load8(Address(base, JSCell::typeInfoTypeOffset()), scratch);
-    badType = patchableBranch32(NotEqual, scratch, TrustedImm32(DirectArgumentsType));
-    
-    load32(Address(base, DirectArguments::offsetOfLength()), scratch2);
-    slowCases.append(branch32(AboveOrEqual, property, scratch2));
-    slowCases.append(branchTestPtr(NonZero, Address(base, DirectArguments::offsetOfMappedArguments())));
-
-    loadValue(BaseIndex(base, property, TimesEight, DirectArguments::storageOffset()), result);
-    
-    return slowCases;
-}
-
-JIT::JumpList JIT::emitScopedArgumentsGetByVal(const Instruction*, PatchableJump& badType)
-{
-    JumpList slowCases;
-    
-#if USE(JSVALUE64)
-    RegisterID base = regT0;
-    RegisterID property = regT1;
-    JSValueRegs result = JSValueRegs(regT0);
-    RegisterID scratch = regT3;
-    RegisterID scratch2 = regT4;
-#else
-    RegisterID base = regT0;
-    RegisterID property = regT2;
-    JSValueRegs result = JSValueRegs(regT1, regT0);
-    RegisterID scratch = regT3;
-    RegisterID scratch2 = regT4;
-#endif
-
-    load8(Address(base, JSCell::typeInfoTypeOffset()), scratch);
-    badType = patchableBranch32(NotEqual, scratch, TrustedImm32(ScopedArgumentsType));
-    slowCases.append(branch32(AboveOrEqual, property, Address(base, ScopedArguments::offsetOfTotalLength())));
-    
-    loadPtr(Address(base, ScopedArguments::offsetOfTable()), scratch);
-    load32(Address(scratch, ScopedArgumentsTable::offsetOfLength()), scratch2);
-    Jump overflowCase = branch32(AboveOrEqual, property, scratch2);
-    loadPtr(Address(base, ScopedArguments::offsetOfScope()), scratch2);
-    loadPtr(Address(scratch, ScopedArgumentsTable::offsetOfArguments()), scratch);
-    load32(BaseIndex(scratch, property, TimesFour), scratch);
-    slowCases.append(branch32(Equal, scratch, TrustedImm32(ScopeOffset::invalidOffset)));
-    loadValue(BaseIndex(scratch2, scratch, TimesEight, JSLexicalEnvironment::offsetOfVariables()), result);
-    Jump done = jump();
-    overflowCase.link(this);
-    sub32(property, scratch2);
-    neg32(scratch2);
-    loadPtr(Address(base, ScopedArguments::offsetOfStorage()), scratch);
-    loadValue(BaseIndex(scratch, scratch2, TimesEight), result);
-    slowCases.append(branchIfEmpty(result));
-    done.link(this);
-    
-    return slowCases;
-}
-
-JIT::JumpList JIT::emitIntTypedArrayGetByVal(const Instruction*, PatchableJump& badType, TypedArrayType type)
-{
-    ASSERT(isInt(type));
-    
-    // The best way to test the array type is to use the classInfo. We need to do so without
-    // clobbering the register that holds the indexing type, base, and property.
-
-#if USE(JSVALUE64)
-    RegisterID base = regT0;
-    RegisterID property = regT1;
-    JSValueRegs result = JSValueRegs(regT0);
-    RegisterID scratch = regT3;
-    RegisterID scratch2 = regT4;
-#else
-    RegisterID base = regT0;
-    RegisterID property = regT2;
-    JSValueRegs result = JSValueRegs(regT1, regT0);
-    RegisterID scratch = regT3;
-    RegisterID scratch2 = regT4;
-#endif
-    RegisterID resultPayload = result.payloadGPR();
-    
-    JumpList slowCases;
-    
-    load8(Address(base, JSCell::typeInfoTypeOffset()), scratch);
-    badType = patchableBranch32(NotEqual, scratch, TrustedImm32(typeForTypedArrayType(type)));
-    load32(Address(base, JSArrayBufferView::offsetOfLength()), scratch2);
-    slowCases.append(branch32(AboveOrEqual, property, scratch2));
-    loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), scratch);
-    cageConditionally(Gigacage::Primitive, scratch, scratch2, scratch2);
-
-    switch (elementSize(type)) {
-    case 1:
-        if (JSC::isSigned(type))
-            load8SignedExtendTo32(BaseIndex(scratch, property, TimesOne), resultPayload);
-        else
-            load8(BaseIndex(scratch, property, TimesOne), resultPayload);
-        break;
-    case 2:
-        if (JSC::isSigned(type))
-            load16SignedExtendTo32(BaseIndex(scratch, property, TimesTwo), resultPayload);
-        else
-            load16(BaseIndex(scratch, property, TimesTwo), resultPayload);
-        break;
-    case 4:
-        load32(BaseIndex(scratch, property, TimesFour), resultPayload);
-        break;
-    default:
-        CRASH();
-    }
-    
-    Jump done;
-    if (type == TypeUint32) {
-        Jump canBeInt = branch32(GreaterThanOrEqual, resultPayload, TrustedImm32(0));
-        
-        convertInt32ToDouble(resultPayload, fpRegT0);
-        addDouble(AbsoluteAddress(&twoToThe32), fpRegT0);
-        boxDouble(fpRegT0, result);
-        done = jump();
-        canBeInt.link(this);
-    }
-
-    boxInt32(resultPayload, result);
-    if (done.isSet())
-        done.link(this);
-    return slowCases;
-}
-
-JIT::JumpList JIT::emitFloatTypedArrayGetByVal(const Instruction*, PatchableJump& badType, TypedArrayType type)
-{
-    ASSERT(isFloat(type));
-    
-#if USE(JSVALUE64)
-    RegisterID base = regT0;
-    RegisterID property = regT1;
-    JSValueRegs result = JSValueRegs(regT0);
-    RegisterID scratch = regT3;
-    RegisterID scratch2 = regT4;
-#else
-    RegisterID base = regT0;
-    RegisterID property = regT2;
-    JSValueRegs result = JSValueRegs(regT1, regT0);
-    RegisterID scratch = regT3;
-    RegisterID scratch2 = regT4;
-#endif
-    
-    JumpList slowCases;
-
-    load8(Address(base, JSCell::typeInfoTypeOffset()), scratch);
-    badType = patchableBranch32(NotEqual, scratch, TrustedImm32(typeForTypedArrayType(type)));
-    load32(Address(base, JSArrayBufferView::offsetOfLength()), scratch2);
-    slowCases.append(branch32(AboveOrEqual, property, scratch2));
-    loadPtr(Address(base, JSArrayBufferView::offsetOfVector()), scratch);
-    cageConditionally(Gigacage::Primitive, scratch, scratch2, scratch2);
-    
-    switch (elementSize(type)) {
-    case 4:
-        loadFloat(BaseIndex(scratch, property, TimesFour), fpRegT0);
-        convertFloatToDouble(fpRegT0, fpRegT0);
-        break;
-    case 8: {
-        loadDouble(BaseIndex(scratch, property, TimesEight), fpRegT0);
-        break;
-    }
-    default:
-        CRASH();
-    }
-    
-    purifyNaN(fpRegT0);
-    
-    boxDouble(fpRegT0, result);
-    return slowCases;    
-}
-
 template<typename Op>
 JIT::JumpList JIT::emitIntTypedArrayPutByVal(Op bytecode, PatchableJump& badType, TypedArrayType type)
 {
index 8bf7243..d0d4f07 100644 (file)
@@ -138,89 +138,15 @@ void JIT::emit_op_del_by_val(const Instruction* currentInstruction)
 
 void JIT::emit_op_get_by_val(const Instruction* currentInstruction)
 {
+    // FIXME: Implement IC here:
+    // https://bugs.webkit.org/show_bug.cgi?id=204082
     auto bytecode = currentInstruction->as<OpGetByVal>();
     auto& metadata = bytecode.metadata(m_codeBlock);
     int dst = bytecode.m_dst.offset();
     int base = bytecode.m_base.offset();
     int property = bytecode.m_property.offset();
-    ArrayProfile* profile = &metadata.m_arrayProfile;
-    ByValInfo* byValInfo = m_codeBlock->addByValInfo();
-
     emitLoad2(base, regT1, regT0, property, regT3, regT2);
-    
-    emitJumpSlowCaseIfNotJSCell(base, regT1);
-    PatchableJump notIndex = patchableBranch32(NotEqual, regT3, TrustedImm32(JSValue::Int32Tag));
-    addSlowCase(notIndex);
-    emitArrayProfilingSiteWithCell(regT0, regT1, profile);
-    and32(TrustedImm32(IndexingShapeMask), regT1);
-
-    PatchableJump badType;
-    JumpList slowCases;
-    
-    JITArrayMode mode = chooseArrayMode(profile);
-    switch (mode) {
-    case JITInt32:
-        slowCases = emitInt32GetByVal(currentInstruction, badType);
-        break;
-    case JITDouble:
-        slowCases = emitDoubleGetByVal(currentInstruction, badType);
-        break;
-    case JITContiguous:
-        slowCases = emitContiguousGetByVal(currentInstruction, badType);
-        break;
-    case JITArrayStorage:
-        slowCases = emitArrayStorageGetByVal(currentInstruction, badType);
-        break;
-    default:
-        CRASH();
-    }
-    
-    addSlowCase(badType);
-    addSlowCase(slowCases);
-    
-    Label done = label();
-
-    if (!ASSERT_DISABLED) {
-        Jump resultOK = branchIfNotEmpty(regT1);
-        abortWithReason(JITGetByValResultIsNotEmpty);
-        resultOK.link(this);
-    }
-
-    emitValueProfilingSite(bytecode.metadata(m_codeBlock));
-    emitStore(dst, regT1, regT0);
-
-    Label nextHotPath = label();
-    
-    m_byValCompilationInfo.append(ByValCompilationInfo(byValInfo, m_bytecodeIndex, notIndex, badType, mode, profile, done, nextHotPath));
-}
-
-JITGetByIdGenerator JIT::emitGetByValWithCachedId(ByValInfo* byValInfo, OpGetByVal bytecode, const Identifier& propertyName, Jump& fastDoneCase, Jump& slowDoneCase, JumpList& slowCases)
-{
-    // base: tag(regT1), payload(regT0)
-    // property: tag(regT3), payload(regT2)
-    // scratch: regT4
-
-    int dst = bytecode.m_dst.offset();
-
-    slowCases.append(branchIfNotCell(regT3));
-    emitByValIdentifierCheck(byValInfo, regT2, regT4, propertyName, slowCases);
-
-    const Instruction* currentInstruction = m_codeBlock->instructions().at(byValInfo->bytecodeIndex).ptr();
-    JITGetByIdGenerator gen(
-        m_codeBlock, CodeOrigin(m_bytecodeIndex), CallSiteIndex(BytecodeIndex(bitwise_cast<uint32_t>(currentInstruction))), RegisterSet::stubUnavailableRegisters(),
-        propertyName.impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
-    gen.generateFastPath(*this);
-
-    fastDoneCase = jump();
-
-    Label coldPathBegin = label();
-    gen.slowPathJump().link(this);
-
-    Call call = callOperationWithProfile(bytecode.metadata(m_codeBlock), operationGetByIdOptimize, dst, m_codeBlock->globalObject(), gen.stubInfo(), JSValueRegs(regT1, regT0), propertyName.impl());
-    gen.reportSlowPathCall(coldPathBegin, call);
-    slowDoneCase = jump();
-
-    return gen;
+    callOperation(operationGetByVal, dst, m_codeBlock->globalObject(), JSValueRegs(regT1, regT0), JSValueRegs(regT3, regT2));
 }
 
 void JIT::emitSlow_op_get_by_val(const Instruction* currentInstruction, Vector<SlowCaseEntry>::iterator& iter)
@@ -502,7 +428,7 @@ void JIT::emit_op_try_get_by_id(const Instruction* currentInstruction)
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeIndex), CallSiteIndex(BytecodeIndex(bitwise_cast<uint32_t>(currentInstruction))), RegisterSet::stubUnavailableRegisters(),
-        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::TryGet);
+        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::TryGetById);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIds.append(gen);
@@ -541,7 +467,7 @@ void JIT::emit_op_get_by_id_direct(const Instruction* currentInstruction)
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeIndex), CallSiteIndex(BytecodeIndex(bitwise_cast<uint32_t>(currentInstruction))), RegisterSet::stubUnavailableRegisters(),
-        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::GetDirect);
+        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::GetByIdDirect);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIds.append(gen);
@@ -587,7 +513,7 @@ void JIT::emit_op_get_by_id(const Instruction* currentInstruction)
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeIndex), CallSiteIndex(BytecodeIndex(bitwise_cast<uint32_t>(currentInstruction))), RegisterSet::stubUnavailableRegisters(),
-        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
+        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::GetById);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIds.append(gen);
@@ -628,7 +554,7 @@ void JIT::emit_op_get_by_id_with_this(const Instruction* currentInstruction)
 
     JITGetByIdWithThisGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeIndex), CallSiteIndex(BytecodeIndex(bitwise_cast<uint32_t>(currentInstruction))), RegisterSet::stubUnavailableRegisters(),
-        ident->impl(), JSValueRegs(regT1, regT0), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT4, regT3), AccessType::GetWithThis);
+        ident->impl(), JSValueRegs(regT1, regT0), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT4, regT3));
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIdsWithThis.append(gen);
index 7dc46b5..02a6e15 100644 (file)
@@ -145,39 +145,39 @@ ALWAYS_INLINE static void fireWatchpointsAndClearStubIfNeeded(VM& vm, StructureS
     }
 }
 
-inline FunctionPtr<CFunctionPtrTag> appropriateOptimizingGetByIdFunction(GetByIDKind kind)
+inline FunctionPtr<CFunctionPtrTag> appropriateOptimizingGetByFunction(GetByKind kind)
 {
     switch (kind) {
-    case GetByIDKind::Normal:
+    case GetByKind::Normal:
         return operationGetByIdOptimize;
-    case GetByIDKind::WithThis:
+    case GetByKind::WithThis:
         return operationGetByIdWithThisOptimize;
-    case GetByIDKind::Try:
+    case GetByKind::Try:
         return operationTryGetByIdOptimize;
-    case GetByIDKind::Direct:
+    case GetByKind::Direct:
         return operationGetByIdDirectOptimize;
+    case GetByKind::NormalByVal:
+        return operationGetByValOptimize;
     }
-    ASSERT_NOT_REACHED();
-    return operationGetById;
 }
 
-inline FunctionPtr<CFunctionPtrTag> appropriateGetByIdFunction(GetByIDKind kind)
+inline FunctionPtr<CFunctionPtrTag> appropriateGetByFunction(GetByKind kind)
 {
     switch (kind) {
-    case GetByIDKind::Normal:
+    case GetByKind::Normal:
         return operationGetById;
-    case GetByIDKind::WithThis:
+    case GetByKind::WithThis:
         return operationGetByIdWithThis;
-    case GetByIDKind::Try:
+    case GetByKind::Try:
         return operationTryGetById;
-    case GetByIDKind::Direct:
+    case GetByKind::Direct:
         return operationGetByIdDirect;
+    case GetByKind::NormalByVal:
+        return operationGetByValGeneric;
     }
-    ASSERT_NOT_REACHED();
-    return operationGetById;
 }
 
-static InlineCacheAction tryCacheGetByID(JSGlobalObject* globalObject, CodeBlock* codeBlock, JSValue baseValue, const Identifier& propertyName, const PropertySlot& slot, StructureStubInfo& stubInfo, GetByIDKind kind)
+static InlineCacheAction tryCacheGetBy(JSGlobalObject* globalObject, CodeBlock* codeBlock, JSValue baseValue, const Identifier& propertyName, const PropertySlot& slot, StructureStubInfo& stubInfo, GetByKind kind)
 {
     VM& vm = globalObject->vm();
     AccessGenerationResult result;
@@ -197,46 +197,45 @@ static InlineCacheAction tryCacheGetByID(JSGlobalObject* globalObject, CodeBlock
 
         if (propertyName == vm.propertyNames->length) {
             if (isJSArray(baseCell)) {
-                if (stubInfo.cacheType == CacheType::Unset
+                if (stubInfo.cacheType() == CacheType::Unset
                     && slot.slotBase() == baseCell
                     && InlineAccess::isCacheableArrayLength(stubInfo, jsCast<JSArray*>(baseCell))) {
 
                     bool generatedCodeInline = InlineAccess::generateArrayLength(stubInfo, jsCast<JSArray*>(baseCell));
                     if (generatedCodeInline) {
-                        ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
+                        ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByFunction(kind));
                         stubInfo.initArrayLength();
                         return RetryCacheLater;
                     }
                 }
 
-                newCase = AccessCase::create(vm, codeBlock, AccessCase::ArrayLength);
+                newCase = AccessCase::create(vm, codeBlock, AccessCase::ArrayLength, propertyName);
             } else if (isJSString(baseCell)) {
-                if (stubInfo.cacheType == CacheType::Unset && InlineAccess::isCacheableStringLength(stubInfo)) {
+                if (stubInfo.cacheType() == CacheType::Unset && InlineAccess::isCacheableStringLength(stubInfo)) {
                     bool generatedCodeInline = InlineAccess::generateStringLength(stubInfo);
                     if (generatedCodeInline) {
-                        ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
+                        ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByFunction(kind));
                         stubInfo.initStringLength();
                         return RetryCacheLater;
                     }
                 }
 
-                newCase = AccessCase::create(vm, codeBlock, AccessCase::StringLength);
-            }
-            else if (DirectArguments* arguments = jsDynamicCast<DirectArguments*>(vm, baseCell)) {
+                newCase = AccessCase::create(vm, codeBlock, AccessCase::StringLength, propertyName);
+            } else if (DirectArguments* arguments = jsDynamicCast<DirectArguments*>(vm, baseCell)) {
                 // If there were overrides, then we can handle this as a normal property load! Guarding
                 // this with such a check enables us to add an IC case for that load if needed.
                 if (!arguments->overrodeThings())
-                    newCase = AccessCase::create(vm, codeBlock, AccessCase::DirectArgumentsLength);
+                    newCase = AccessCase::create(vm, codeBlock, AccessCase::DirectArgumentsLength, propertyName);
             } else if (ScopedArguments* arguments = jsDynamicCast<ScopedArguments*>(vm, baseCell)) {
                 // Ditto.
                 if (!arguments->overrodeThings())
-                    newCase = AccessCase::create(vm, codeBlock, AccessCase::ScopedArgumentsLength);
+                    newCase = AccessCase::create(vm, codeBlock, AccessCase::ScopedArgumentsLength, propertyName);
             }
         }
 
         if (!propertyName.isSymbol() && baseCell->inherits<JSModuleNamespaceObject>(vm) && !slot.isUnset()) {
             if (auto moduleNamespaceSlot = slot.moduleNamespaceSlot())
-                newCase = ModuleNamespaceAccessCase::create(vm, codeBlock, jsCast<JSModuleNamespaceObject*>(baseCell), moduleNamespaceSlot->environment, ScopeOffset(moduleNamespaceSlot->scopeOffset));
+                newCase = ModuleNamespaceAccessCase::create(vm, codeBlock, propertyName, jsCast<JSModuleNamespaceObject*>(baseCell), moduleNamespaceSlot->environment, ScopeOffset(moduleNamespaceSlot->scopeOffset));
         }
         
         if (!newCase) {
@@ -259,7 +258,7 @@ static InlineCacheAction tryCacheGetByID(JSGlobalObject* globalObject, CodeBlock
                 return action;
 
             // Optimize self access.
-            if (stubInfo.cacheType == CacheType::Unset
+            if (stubInfo.cacheType() == CacheType::Unset
                 && slot.isCacheableValue()
                 && slot.slotBase() == baseValue
                 && !slot.watchpointSet()
@@ -268,10 +267,10 @@ static InlineCacheAction tryCacheGetByID(JSGlobalObject* globalObject, CodeBlock
 
                 bool generatedCodeInline = InlineAccess::generateSelfPropertyAccess(stubInfo, structure, slot.cachedOffset());
                 if (generatedCodeInline) {
-                    LOG_IC((ICEvent::GetByIdSelfPatch, structure->classInfo(), propertyName, slot.slotBase() == baseValue));
+                    LOG_IC((ICEvent::GetBySelfPatch, structure->classInfo(), propertyName, slot.slotBase() == baseValue));
                     structure->startWatchingPropertyForReplacements(vm, slot.cachedOffset());
-                    ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
-                    stubInfo.initGetByIdSelf(codeBlock, structure, slot.cachedOffset());
+                    ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByFunction(kind));
+                    stubInfo.initGetByIdSelf(codeBlock, structure, slot.cachedOffset(), propertyName);
                     return RetryCacheLater;
                 }
             }
@@ -294,9 +293,9 @@ static InlineCacheAction tryCacheGetByID(JSGlobalObject* globalObject, CodeBlock
                 if (slot.isUnset() && structure->typeInfo().getOwnPropertySlotIsImpureForPropertyAbsence())
                     return GiveUpOnCache;
 
-                // If a kind is GetByIDKind::Direct, we do not need to investigate prototype chains further.
+                // If the kind is GetByKind::Direct, we do not need to investigate prototype chains further.
                 // Cacheability just depends on the head structure.
-                if (kind != GetByIDKind::Direct) {
+                if (kind != GetByKind::Direct) {
                     auto cacheStatus = preparePrototypeChainForCaching(globalObject, baseCell, slot);
                     if (!cacheStatus)
                         return GiveUpOnCache;
@@ -345,7 +344,7 @@ static InlineCacheAction tryCacheGetByID(JSGlobalObject* globalObject, CodeBlock
             if (slot.isCacheableCustom() && slot.domAttribute())
                 domAttribute = slot.domAttribute();
 
-            if (kind == GetByIDKind::Try) {
+            if (kind == GetByKind::Try) {
                 AccessCase::AccessType type;
                 if (slot.isCacheableValue())
                     type = AccessCase::Load;
@@ -356,13 +355,13 @@ static InlineCacheAction tryCacheGetByID(JSGlobalObject* globalObject, CodeBlock
                 else
                     RELEASE_ASSERT_NOT_REACHED();
 
-                newCase = ProxyableAccessCase::create(vm, codeBlock, type, offset, structure, conditionSet, loadTargetFromProxy, slot.watchpointSet(), WTFMove(prototypeAccessChain));
+                newCase = ProxyableAccessCase::create(vm, codeBlock, type, propertyName, offset, structure, conditionSet, loadTargetFromProxy, slot.watchpointSet(), WTFMove(prototypeAccessChain));
             } else if (!loadTargetFromProxy && getter && IntrinsicGetterAccessCase::canEmitIntrinsicGetter(getter, structure))
-                newCase = IntrinsicGetterAccessCase::create(vm, codeBlock, slot.cachedOffset(), structure, conditionSet, getter, WTFMove(prototypeAccessChain));
+                newCase = IntrinsicGetterAccessCase::create(vm, codeBlock, propertyName, slot.cachedOffset(), structure, conditionSet, getter, WTFMove(prototypeAccessChain));
             else {
                 if (slot.isCacheableValue() || slot.isUnset()) {
                     newCase = ProxyableAccessCase::create(vm, codeBlock, slot.isUnset() ? AccessCase::Miss : AccessCase::Load,
-                        offset, structure, conditionSet, loadTargetFromProxy, slot.watchpointSet(), WTFMove(prototypeAccessChain));
+                        propertyName, offset, structure, conditionSet, loadTargetFromProxy, slot.watchpointSet(), WTFMove(prototypeAccessChain));
                 } else {
                     AccessCase::AccessType type;
                     if (slot.isCacheableGetter())
@@ -372,11 +371,11 @@ static InlineCacheAction tryCacheGetByID(JSGlobalObject* globalObject, CodeBlock
                     else
                         type = AccessCase::CustomValueGetter;
 
-                    if (kind == GetByIDKind::WithThis && type == AccessCase::CustomAccessorGetter && domAttribute)
+                    if (kind == GetByKind::WithThis && type == AccessCase::CustomAccessorGetter && domAttribute)
                         return GiveUpOnCache;
 
                     newCase = GetterSetterAccessCase::create(
-                        vm, codeBlock, type, offset, structure, conditionSet, loadTargetFromProxy,
+                        vm, codeBlock, type, propertyName, offset, structure, conditionSet, loadTargetFromProxy,
                         slot.watchpointSet(), slot.isCacheableCustom() ? slot.customGetter() : nullptr,
                         slot.isCacheableCustom() && slot.slotBase() != baseValue ? slot.slotBase() : nullptr,
                         domAttribute, WTFMove(prototypeAccessChain));
@@ -384,12 +383,12 @@ static InlineCacheAction tryCacheGetByID(JSGlobalObject* globalObject, CodeBlock
             }
         }
 
-        LOG_IC((ICEvent::GetByIdAddAccessCase, baseValue.classInfoOrNull(vm), propertyName, slot.slotBase() == baseValue));
+        LOG_IC((ICEvent::GetByAddAccessCase, baseValue.classInfoOrNull(vm), propertyName, slot.slotBase() == baseValue));
 
         result = stubInfo.addAccessCase(locker, codeBlock, propertyName, WTFMove(newCase));
 
         if (result.generatedSomeCode()) {
-            LOG_IC((ICEvent::GetByIdReplaceWithJump, baseValue.classInfoOrNull(vm), propertyName, slot.slotBase() == baseValue));
+            LOG_IC((ICEvent::GetByReplaceWithJump, baseValue.classInfoOrNull(vm), propertyName, slot.slotBase() == baseValue));
             
             RELEASE_ASSERT(result.code());
             InlineAccess::rewireStubAsJump(stubInfo, CodeLocationLabel<JITStubRoutinePtrTag>(result.code()));
@@ -401,12 +400,108 @@ static InlineCacheAction tryCacheGetByID(JSGlobalObject* globalObject, CodeBlock
     return result.shouldGiveUpNow() ? GiveUpOnCache : RetryCacheLater;
 }
 
-void repatchGetByID(JSGlobalObject* globalObject, CodeBlock* codeBlock, JSValue baseValue, const Identifier& propertyName, const PropertySlot& slot, StructureStubInfo& stubInfo, GetByIDKind kind)
+void repatchGetBy(JSGlobalObject* globalObject, CodeBlock* codeBlock, JSValue baseValue, const Identifier& propertyName, const PropertySlot& slot, StructureStubInfo& stubInfo, GetByKind kind)
 {
     SuperSamplerScope superSamplerScope(false);
     
-    if (tryCacheGetByID(globalObject, codeBlock, baseValue, propertyName, slot, stubInfo, kind) == GiveUpOnCache)
-        ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateGetByIdFunction(kind));
+    if (tryCacheGetBy(globalObject, codeBlock, baseValue, propertyName, slot, stubInfo, kind) == GiveUpOnCache)
+        ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateGetByFunction(kind));
+}
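
Neither appropriateGetByFunction nor appropriateOptimizingGetByFunction appears in these hunks; both are the renamed GetById helpers, extended with a case for the new kind. Roughly, the optimizing variant dispatches like the sketch below (the operation names and the return type are assumptions carried over from the pre-existing GetById helpers, not something this diff shows):

    static FunctionPtr<CFunctionPtrTag> appropriateOptimizingGetByFunction(GetByKind kind)
    {
        switch (kind) {
        case GetByKind::Normal:
            return operationGetByIdOptimize;
        case GetByKind::NormalByVal:
            return operationGetByValOptimize; // assumed name for the get_by_val optimizing slow path
        case GetByKind::Try:
            return operationTryGetByIdOptimize;
        case GetByKind::WithThis:
            return operationGetByIdWithThisOptimize;
        case GetByKind::Direct:
            return operationGetByIdDirectOptimize;
        }
        RELEASE_ASSERT_NOT_REACHED();
    }

The non-optimizing appropriateGetByFunction follows the same shape with the generic operations, which is what repatchGetBy above falls back to when tryCacheGetBy returns GiveUpOnCache.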
+
+
+static InlineCacheAction tryCacheArrayGetByVal(JSGlobalObject* globalObject, CodeBlock* codeBlock, JSValue baseValue, JSValue index, StructureStubInfo& stubInfo)
+{
+    if (!baseValue.isCell())
+        return GiveUpOnCache;
+
+    if (!index.isInt32())
+        return RetryCacheLater;
+
+    VM& vm = globalObject->vm();
+    AccessGenerationResult result;
+
+    {
+        GCSafeConcurrentJSLocker locker(codeBlock->m_lock, globalObject->vm().heap);
+
+        JSCell* base = baseValue.asCell();
+
+        AccessCase::AccessType accessType;
+        if (base->type() == DirectArgumentsType)
+            accessType = AccessCase::IndexedDirectArgumentsLoad;
+        else if (base->type() == ScopedArgumentsType)
+            accessType = AccessCase::IndexedScopedArgumentsLoad;
+        else if (base->type() == StringType)
+            accessType = AccessCase::IndexedStringLoad;
+        else if (isTypedView(base->classInfo(vm)->typedArrayStorageType)) {
+            switch (base->classInfo(vm)->typedArrayStorageType) {
+            case TypeInt8:
+                accessType = AccessCase::IndexedTypedArrayInt8Load;
+                break;
+            case TypeUint8:
+                accessType = AccessCase::IndexedTypedArrayUint8Load;
+                break;
+            case TypeUint8Clamped:
+                accessType = AccessCase::IndexedTypedArrayUint8ClampedLoad;
+                break;
+            case TypeInt16:
+                accessType = AccessCase::IndexedTypedArrayInt16Load;
+                break;
+            case TypeUint16:
+                accessType = AccessCase::IndexedTypedArrayUint16Load;
+                break;
+            case TypeInt32:
+                accessType = AccessCase::IndexedTypedArrayInt32Load;
+                break;
+            case TypeUint32:
+                accessType = AccessCase::IndexedTypedArrayUint32Load;
+                break;
+            case TypeFloat32:
+                accessType = AccessCase::IndexedTypedArrayFloat32Load;
+                break;
+            case TypeFloat64:
+                accessType = AccessCase::IndexedTypedArrayFloat64Load;
+                break;
+            default:
+                RELEASE_ASSERT_NOT_REACHED();
+            }
+        } else {
+            IndexingType indexingShape = base->indexingType() & IndexingShapeMask;
+            switch (indexingShape) {
+            case Int32Shape:
+                accessType = AccessCase::IndexedInt32Load;
+                break;
+            case DoubleShape:
+                accessType = AccessCase::IndexedDoubleLoad;
+                break;
+            case ContiguousShape:
+                accessType = AccessCase::IndexedContiguousLoad;
+                break;
+            case ArrayStorageShape:
+                accessType = AccessCase::IndexedArrayStorageLoad;
+                break;
+            default:
+                return GiveUpOnCache;
+            }
+        }
+
+        result = stubInfo.addAccessCase(locker, codeBlock, Identifier(), AccessCase::create(vm, codeBlock, accessType, Identifier()));
+
+        if (result.generatedSomeCode()) {
+            LOG_IC((ICEvent::GetByReplaceWithJump, baseValue.classInfoOrNull(vm), Identifier()));
+            
+            RELEASE_ASSERT(result.code());
+            InlineAccess::rewireStubAsJump(stubInfo, CodeLocationLabel<JITStubRoutinePtrTag>(result.code()));
+        }
+    }
+
+    fireWatchpointsAndClearStubIfNeeded(vm, stubInfo, codeBlock, result);
+    return result.shouldGiveUpNow() ? GiveUpOnCache : RetryCacheLater;
+}
+
+void repatchArrayGetByVal(JSGlobalObject* globalObject, CodeBlock* codeBlock, JSValue base, JSValue index, StructureStubInfo& stubInfo)
+{
+    if (tryCacheArrayGetByVal(globalObject, codeBlock, base, index, stubInfo) == GiveUpOnCache)
+        ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), operationGetByValGeneric);
 }
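
These hunks add the two repatch entry points but not the get_by_val slow path that chooses between them. A hypothetical caller, sketched below, relies only on the two signatures declared in Repatch.h later in this diff and does not exist in the tree in this form: identifier-like subscripts reuse the get_by_id machinery under the new NormalByVal kind, while int32 subscripts use the indexed access cases.

    // Hypothetical dispatch; assumes the caller has already resolved the property
    // (slot) and, for string/symbol subscripts, built the matching Identifier.
    static void cacheGetByValSketch(JSGlobalObject* globalObject, CodeBlock* codeBlock,
        JSValue base, JSValue subscript, const Identifier& propertyName,
        const PropertySlot& slot, StructureStubInfo& stubInfo)
    {
        if (subscript.isString() || subscript.isSymbol()) {
            // Each distinct identifier becomes its own AccessCase, guarded at runtime
            // by a check that the subscript matches the cached identifier.
            repatchGetBy(globalObject, codeBlock, base, propertyName, slot, stubInfo, GetByKind::NormalByVal);
            return;
        }
        if (subscript.isInt32()) {
            // Indexed loads are cached on the base cell's type or indexing shape,
            // which is why tryCacheArrayGetByVal keys them with an empty Identifier().
            repatchArrayGetByVal(globalObject, codeBlock, base, subscript, stubInfo);
        }
    }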
 
 static V_JITOperation_GSsiJJI appropriateGenericPutByIdFunction(const PutPropertySlot &slot, PutKind putKind)
@@ -473,7 +568,7 @@ static InlineCacheAction tryCachePutByID(JSGlobalObject* globalObject, CodeBlock
 
                 structure->didCachePropertyReplacement(vm, slot.cachedOffset());
             
-                if (stubInfo.cacheType == CacheType::Unset
+                if (stubInfo.cacheType() == CacheType::Unset
                     && InlineAccess::canGenerateSelfPropertyReplace(stubInfo, slot.cachedOffset())
                     && !structure->needImpurePropertyWatchpoint()) {
                     
@@ -486,7 +581,7 @@ static InlineCacheAction tryCachePutByID(JSGlobalObject* globalObject, CodeBlock
                     }
                 }
 
-                newCase = AccessCase::create(vm, codeBlock, AccessCase::Replace, slot.cachedOffset(), structure);
+                newCase = AccessCase::create(vm, codeBlock, AccessCase::Replace, ident, slot.cachedOffset(), structure);
             } else {
                 ASSERT(slot.type() == PutPropertySlot::NewProperty);
 
@@ -529,7 +624,7 @@ static InlineCacheAction tryCachePutByID(JSGlobalObject* globalObject, CodeBlock
                     }
                 }
 
-                newCase = AccessCase::create(vm, codeBlock, offset, structure, newStructure, conditionSet, WTFMove(prototypeAccessChain));
+                newCase = AccessCase::create(vm, codeBlock, ident, offset, structure, newStructure, conditionSet, WTFMove(prototypeAccessChain));
             }
         } else if (slot.isCacheableCustom() || slot.isCacheableSetter()) {
             if (slot.isCacheableCustom()) {
@@ -555,8 +650,8 @@ static InlineCacheAction tryCachePutByID(JSGlobalObject* globalObject, CodeBlock
                 }
 
                 newCase = GetterSetterAccessCase::create(
-                    vm, codeBlock, slot.isCustomAccessor() ? AccessCase::CustomAccessorSetter : AccessCase::CustomValueSetter, structure, invalidOffset,
-                    conditionSet, WTFMove(prototypeAccessChain), slot.customSetter(), slot.base() != baseValue ? slot.base() : nullptr);
+                    vm, codeBlock, slot.isCustomAccessor() ? AccessCase::CustomAccessorSetter : AccessCase::CustomValueSetter, structure, ident,
+                    invalidOffset, conditionSet, WTFMove(prototypeAccessChain), slot.customSetter(), slot.base() != baseValue ? slot.base() : nullptr);
             } else {
                 ObjectPropertyConditionSet conditionSet;
                 std::unique_ptr<PolyProtoAccessChain> prototypeAccessChain;
@@ -589,7 +684,7 @@ static InlineCacheAction tryCachePutByID(JSGlobalObject* globalObject, CodeBlock
                 }
 
                 newCase = GetterSetterAccessCase::create(
-                    vm, codeBlock, AccessCase::Setter, structure, offset, conditionSet, WTFMove(prototypeAccessChain));
+                    vm, codeBlock, AccessCase::Setter, structure, ident, offset, conditionSet, WTFMove(prototypeAccessChain));
             }
         }
 
@@ -649,7 +744,7 @@ static InlineCacheAction tryCacheInByID(
                 return action;
 
             // Optimize self access.
-            if (stubInfo.cacheType == CacheType::Unset
+            if (stubInfo.cacheType() == CacheType::Unset
                 && slot.isCacheableValue()
                 && slot.slotBase() == base
                 && !slot.watchpointSet()
@@ -706,7 +801,7 @@ static InlineCacheAction tryCacheInByID(
         LOG_IC((ICEvent::InAddAccessCase, structure->classInfo(), ident, slot.slotBase() == base));
 
         std::unique_ptr<AccessCase> newCase = AccessCase::create(
-            vm, codeBlock, wasFound ? AccessCase::InHit : AccessCase::InMiss, wasFound ? slot.cachedOffset() : invalidOffset, structure, conditionSet, WTFMove(prototypeAccessChain));
+            vm, codeBlock, wasFound ? AccessCase::InHit : AccessCase::InMiss, ident, wasFound ? slot.cachedOffset() : invalidOffset, structure, conditionSet, WTFMove(prototypeAccessChain));
 
         result = stubInfo.addAccessCase(locker, codeBlock, ident, WTFMove(newCase));
 
@@ -772,7 +867,7 @@ static InlineCacheAction tryCacheInstanceOf(
         }
         
         if (!newCase)
-            newCase = AccessCase::create(vm, codeBlock, AccessCase::InstanceOfGeneric);
+            newCase = AccessCase::create(vm, codeBlock, AccessCase::InstanceOfGeneric, Identifier());
         
         LOG_IC((ICEvent::InstanceOfAddAccessCase, structure->classInfo(), Identifier()));
         
@@ -1251,9 +1346,9 @@ void linkPolymorphicCall(JSGlobalObject* globalObject, CallFrame* callFrame, Cal
         callLinkInfo.remove();
 }
 
-void resetGetByID(CodeBlock* codeBlock, StructureStubInfo& stubInfo, GetByIDKind kind)
+void resetGetBy(CodeBlock* codeBlock, StructureStubInfo& stubInfo, GetByKind kind)
 {
-    ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
+    ftlThunkAwareRepatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByFunction(kind));
     InlineAccess::rewireStubAsJump(stubInfo, stubInfo.slowPathStartLocation());
 }
 
index 9a527cc..fdda8a7 100644
 
 namespace JSC {
 
-enum class GetByIDKind {
+enum class GetByKind {
     Normal,
+    NormalByVal,
     Try,
     WithThis,
     Direct
 };
 
-void repatchGetByID(JSGlobalObject*, CodeBlock*, JSValue, const Identifier&, const PropertySlot&, StructureStubInfo&, GetByIDKind);
+void repatchArrayGetByVal(JSGlobalObject*, CodeBlock*, JSValue base, JSValue index, StructureStubInfo&);
+void repatchGetBy(JSGlobalObject*, CodeBlock*, JSValue, const Identifier&, const PropertySlot&, StructureStubInfo&, GetByKind);
 void repatchPutByID(JSGlobalObject*, CodeBlock*, JSValue, Structure*, const Identifier&, const PutPropertySlot&, StructureStubInfo&, PutKind);
 void repatchInByID(JSGlobalObject*, CodeBlock*, JSObject*, const Identifier&, bool wasFound, const PropertySlot&, StructureStubInfo&);
 void repatchInstanceOf(JSGlobalObject*, CodeBlock*, JSValue value, JSValue prototype, StructureStubInfo&, bool wasFound);
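
NormalByVal is the only new kind in the enum above; the other entries are straight renames from GetByIDKind. The call sites that pick a kind live in the JITs and JIT operations and are not part of this diff; purely as an illustration, the kinds line up with the bytecodes roughly as in this sketch:

    // Illustrative mapping only; not a helper that exists in the tree.
    static GetByKind getByKindForOpcodeSketch(OpcodeID opcode)
    {
        switch (opcode) {
        case op_get_by_id:
            return GetByKind::Normal;
        case op_get_by_val:
            // Only when the subscript is cacheable as an identifier; plain int32
            // subscripts are handled by repatchArrayGetByVal with no kind at all.
            return GetByKind::NormalByVal;
        case op_try_get_by_id:
            return GetByKind::Try;
        case op_get_by_id_with_this:
            return GetByKind::WithThis;
        case op_get_by_id_direct:
            return GetByKind::Direct;
        default:
            RELEASE_ASSERT_NOT_REACHED();
            return GetByKind::Normal;
        }
    }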
@@ -48,7 +50,7 @@ void linkDirectFor(CallFrame*, CallLinkInfo&, CodeBlock*, MacroAssemblerCodePtr<
 void linkSlowFor(CallFrame*, CallLinkInfo&);
 void unlinkFor(VM&, CallLinkInfo&);
 void linkPolymorphicCall(JSGlobalObject*, CallFrame*, CallLinkInfo&, CallVariant);
-void resetGetByID(CodeBlock*, StructureStubInfo&, GetByIDKind);
+void resetGetBy(CodeBlock*, StructureStubInfo&, GetByKind);
 void resetPutByID(CodeBlock*, StructureStubInfo&);
 void resetInByID(CodeBlock*, StructureStubInfo&);
 void resetInstanceOf(StructureStubInfo&);
index e4c9074..7eea758 100644
@@ -32,7 +32,7 @@
 namespace JSC {
 
 // The following is a set of aliases for the opcode names. This is needed
-// because there is code (e.g. in GetByIdStatus.cpp and PutByIdStatus.cpp)
+// because there is code (e.g. in GetByStatus.cpp and PutByIdStatus.cpp)
 // which refers to the opcodes expecting them to be prefixed with "llint_".
 // In the CLoop implementation, the two are equivalent. Hence, we set up these
 // aliases here.