We should be able to generate more types of ICs inline
author    sbarati@apple.com <sbarati@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Sun, 19 Jun 2016 19:42:18 +0000 (19:42 +0000)
committer sbarati@apple.com <sbarati@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Sun, 19 Jun 2016 19:42:18 +0000 (19:42 +0000)
https://bugs.webkit.org/show_bug.cgi?id=158719
<rdar://problem/26825641>

Reviewed by Filip Pizlo.

This patch changes how we emit code for *byId ICs inline.
We no longer keep data labels to patch structure checks, etc.
Instead, we just regenerate the entire IC into a designated
region of code that the Baseline/DFG/FTL JIT will emit inline.
This makes it much simpler to patch inline ICs: all that's
needed is to memcpy freshly generated code from a macro assembler
buffer into the inline region using LinkBuffer. This architecture
will be easy to extend into other forms of ICs, such as one
for add, in the future.
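
Concretely, patching an inline IC now boils down to the following
(simplified from linkCodeInline() in the new InlineAccess.cpp):

    CCallHelpers jit(&vm);
    // ... emit the new IC body into jit, including branchToSlowPath ...
    if (jit.m_assembler.buffer().codeSize() <= stubInfo.patch.inlineSize) {
        bool needsBranchCompaction = false;
        LinkBuffer linkBuffer(jit, stubInfo.patch.start.dataLocation(),
            stubInfo.patch.inlineSize, JITCompilationMustSucceed, needsBranchCompaction);
        linkBuffer.link(branchToSlowPath, stubInfo.slowPathStartLocation());
        FINALIZE_CODE(linkBuffer, ("inline IC"));
    }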

To support this change, I've reworked the fields inside
StructureStubInfo. It now has one field that is the CodeLocationLabel
of the start of the inline IC. Then it has a few ints that track deltas
to other locations in the IC, such as the slow path start, the slow path call, and
the IC's 'done' location. We used to perform math on these ints in a bunch of different
places. I've consolidated that math into methods inside StructureStubInfo.
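
The accessors over those fields are then trivial, roughly like this
(the field names below are illustrative, not necessarily the exact
ones in StructureStubInfo.h):

    CodeLocationLabel start;      // start of the inline IC
    int32_t deltaToSlowPathStart; // illustrative names for the delta ints
    int32_t deltaToSlowPathCall;
    int32_t deltaToDone;

    CodeLocationLabel slowPathStartLocation() { return start.labelAtOffset(deltaToSlowPathStart); }
    CodeLocationCall slowPathCallLocation() { return start.callAtOffset(deltaToSlowPathCall); }
    CodeLocationLabel doneLocation() { return start.labelAtOffset(deltaToDone); }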

To generate inline ICs, I've implemented a new class called InlineAccess.
InlineAccess is stateless: it just has a bunch of static methods for
generating code into the inline region specified by StructureStubInfo.
Repatch will now decide when it wants to generate such an inline
IC, and it will ask InlineAccess to do so.
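
For example, the get_by_id repatching path can now do something like
this (abridged; the real decision logic lives in tryCacheGetByID() in
Repatch.cpp):

    // Try to service a monomorphic self load entirely inline.
    if (InlineAccess::generateSelfPropertyAccess(vm, stubInfo, structure, slot.cachedOffset())) {
        stubInfo.initGetByIdSelf(codeBlock, structure, slot.cachedOffset());
        return;
    }
    // Didn't fit (or isn't a self load): fall back to a PolymorphicAccess stub.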

I've implemented three types of inline ICs to begin with (extending
this in the future should be easy):
- Self property loads (both inline and out of line offsets).
- Self property replace (both inline and out of line offsets).
- Array length on specific array types.
(An easy extension would be to implement JSString length.)

To know how much inline space to reserve, I've implemented a
method that stubs out the various inline cache shapes and
dumps their size. This is used to determine how much space
to reserve inline. When InlineAccess ends up generating more
code than can fit inline, we will fall back to generating
code with PolymorphicAccess instead.
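
On the emitting side, the JIT fast path just reserves that many bytes
up front and lets Repatch fill them in later, roughly what the new
JITByIdGenerator::generateFastCommon() does (details simplified):

    m_start = jit.label();
    size_t startSize = jit.m_assembler.buffer().codeSize();
    m_slowPathJump = jit.jump();   // initially everything goes to the slow path
    size_t jumpSize = jit.m_assembler.buffer().codeSize() - startSize;
    jit.emitNops(InlineAccess::sizeForPropertyAccess() - jumpSize); // pad out the inline region
    m_done = jit.label();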

To make generating code into already allocated executable memory
efficient, I've made AssemblerData have 128 bytes of inline storage.
This saves us a malloc when splatting code into the inline region.
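
The gist of the AssemblerData change (the full version is in
AssemblerBuffer.h below):

    static const size_t InlineCapacity = 128;

    AssemblerData(size_t initialCapacity)
    {
        if (initialCapacity <= InlineCapacity) {
            m_capacity = InlineCapacity;
            m_buffer = m_inlineBuffer;   // small code: no malloc
        } else {
            m_capacity = initialCapacity;
            m_buffer = static_cast<char*>(fastMalloc(m_capacity));
        }
    }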

This patch also tidies up LinkBuffer's API for generating
into already allocated executable memory. Now, when generating
code that is smaller than the already allocated space, LinkBuffer
will fill the extra space with nops. Also, if branch compaction shrinks
the code, LinkBuffer will add a nop sled at the end of the shrunken
code to take up the entire allocated size.
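
Both cases reduce to filling a byte range with nops (taken from the
new LinkBuffer code in this patch):

    // Generating into pre-allocated memory that is larger than the code:
    size_t nopsToFillInBytes = m_size - initialSize;
    macroAssembler.emitNops(nopsToFillInBytes);

    // After branch compaction shrinks code destined for pre-allocated memory:
    size_t nopSizeInBytes = initialSize - compactSize;
    bool isCopyingToExecutableMemory = false;
    MacroAssembler::AssemblerType_T::fillNops(outData + compactSize, nopSizeInBytes, isCopyingToExecutableMemory);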

This looks like it could be a 1% Octane progression.

* CMakeLists.txt:
* JavaScriptCore.xcodeproj/project.pbxproj:
* assembler/ARM64Assembler.h:
(JSC::ARM64Assembler::nop):
(JSC::ARM64Assembler::fillNops):
* assembler/ARMv7Assembler.h:
(JSC::ARMv7Assembler::nopw):
(JSC::ARMv7Assembler::nopPseudo16):
(JSC::ARMv7Assembler::nopPseudo32):
(JSC::ARMv7Assembler::fillNops):
(JSC::ARMv7Assembler::dmbSY):
* assembler/AbstractMacroAssembler.h:
(JSC::AbstractMacroAssembler::addLinkTask):
(JSC::AbstractMacroAssembler::emitNops):
(JSC::AbstractMacroAssembler::AbstractMacroAssembler):
* assembler/AssemblerBuffer.h:
(JSC::AssemblerData::AssemblerData):
(JSC::AssemblerData::operator=):
(JSC::AssemblerData::~AssemblerData):
(JSC::AssemblerData::buffer):
(JSC::AssemblerData::grow):
(JSC::AssemblerData::isInlineBuffer):
(JSC::AssemblerBuffer::AssemblerBuffer):
(JSC::AssemblerBuffer::ensureSpace):
(JSC::AssemblerBuffer::codeSize):
(JSC::AssemblerBuffer::setCodeSize):
(JSC::AssemblerBuffer::label):
(JSC::AssemblerBuffer::debugOffset):
(JSC::AssemblerBuffer::releaseAssemblerData):
* assembler/LinkBuffer.cpp:
(JSC::LinkBuffer::copyCompactAndLinkCode):
(JSC::LinkBuffer::linkCode):
(JSC::LinkBuffer::allocate):
(JSC::LinkBuffer::performFinalization):
(JSC::LinkBuffer::shrink): Deleted.
* assembler/LinkBuffer.h:
(JSC::LinkBuffer::LinkBuffer):
(JSC::LinkBuffer::debugAddress):
(JSC::LinkBuffer::size):
(JSC::LinkBuffer::wasAlreadyDisassembled):
(JSC::LinkBuffer::didAlreadyDisassemble):
(JSC::LinkBuffer::applyOffset):
(JSC::LinkBuffer::code):
* assembler/MacroAssemblerARM64.h:
(JSC::MacroAssemblerARM64::patchableBranch32):
(JSC::MacroAssemblerARM64::patchableBranch64):
* assembler/MacroAssemblerARMv7.h:
(JSC::MacroAssemblerARMv7::patchableBranch32):
(JSC::MacroAssemblerARMv7::patchableBranchPtrWithPatch):
* assembler/X86Assembler.h:
(JSC::X86Assembler::nop):
(JSC::X86Assembler::fillNops):
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::printGetByIdCacheStatus):
* bytecode/InlineAccess.cpp: Added.
(JSC::InlineAccess::dumpCacheSizesAndCrash):
(JSC::linkCodeInline):
(JSC::InlineAccess::generateSelfPropertyAccess):
(JSC::getScratchRegister):
(JSC::hasFreeRegister):
(JSC::InlineAccess::canGenerateSelfPropertyReplace):
(JSC::InlineAccess::generateSelfPropertyReplace):
(JSC::InlineAccess::isCacheableArrayLength):
(JSC::InlineAccess::generateArrayLength):
(JSC::InlineAccess::rewireStubAsJump):
* bytecode/InlineAccess.h: Added.
(JSC::InlineAccess::sizeForPropertyAccess):
(JSC::InlineAccess::sizeForPropertyReplace):
(JSC::InlineAccess::sizeForLengthAccess):
* bytecode/PolymorphicAccess.cpp:
(JSC::PolymorphicAccess::regenerate):
* bytecode/StructureStubInfo.cpp:
(JSC::StructureStubInfo::initGetByIdSelf):
(JSC::StructureStubInfo::initArrayLength):
(JSC::StructureStubInfo::initPutByIdReplace):
(JSC::StructureStubInfo::deref):
(JSC::StructureStubInfo::aboutToDie):
(JSC::StructureStubInfo::propagateTransitions):
(JSC::StructureStubInfo::containsPC):
* bytecode/StructureStubInfo.h:
(JSC::StructureStubInfo::considerCaching):
(JSC::StructureStubInfo::slowPathCallLocation):
(JSC::StructureStubInfo::doneLocation):
(JSC::StructureStubInfo::slowPathStartLocation):
(JSC::StructureStubInfo::patchableJumpForIn):
(JSC::StructureStubInfo::valueRegs):
* dfg/DFGJITCompiler.cpp:
(JSC::DFG::JITCompiler::link):
* dfg/DFGOSRExitCompilerCommon.cpp:
(JSC::DFG::reifyInlinedCallFrames):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::cachedGetById):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::cachedGetById):
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::compileIn):
(JSC::FTL::DFG::LowerDFGToB3::getById):
* jit/JITInlineCacheGenerator.cpp:
(JSC::JITByIdGenerator::finalize):
(JSC::JITByIdGenerator::generateFastCommon):
(JSC::JITGetByIdGenerator::JITGetByIdGenerator):
(JSC::JITGetByIdGenerator::generateFastPath):
(JSC::JITPutByIdGenerator::JITPutByIdGenerator):
(JSC::JITPutByIdGenerator::generateFastPath):
(JSC::JITPutByIdGenerator::slowPathFunction):
(JSC::JITByIdGenerator::generateFastPathChecks): Deleted.
* jit/JITInlineCacheGenerator.h:
(JSC::JITByIdGenerator::reportSlowPathCall):
(JSC::JITByIdGenerator::slowPathBegin):
(JSC::JITByIdGenerator::slowPathJump):
(JSC::JITGetByIdGenerator::JITGetByIdGenerator):
* jit/JITPropertyAccess.cpp:
(JSC::JIT::emitGetByValWithCachedId):
(JSC::JIT::emit_op_try_get_by_id):
(JSC::JIT::emit_op_get_by_id):
* jit/JITPropertyAccess32_64.cpp:
(JSC::JIT::emitGetByValWithCachedId):
(JSC::JIT::emit_op_try_get_by_id):
(JSC::JIT::emit_op_get_by_id):
* jit/Repatch.cpp:
(JSC::repatchCall):
(JSC::tryCacheGetByID):
(JSC::repatchGetByID):
(JSC::appropriateGenericPutByIdFunction):
(JSC::tryCachePutByID):
(JSC::repatchPutByID):
(JSC::tryRepatchIn):
(JSC::repatchIn):
(JSC::linkSlowFor):
(JSC::resetGetByID):
(JSC::resetPutByID):
(JSC::resetIn):
(JSC::repatchByIdSelfAccess): Deleted.
(JSC::resetGetByIDCheckAndLoad): Deleted.
(JSC::resetPutByIDCheckAndLoad): Deleted.
(JSC::replaceWithJump): Deleted.

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@202214 268f45cc-cd09-0410-ab3c-d52691b4dbfc

28 files changed:
Source/JavaScriptCore/CMakeLists.txt
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/assembler/ARM64Assembler.h
Source/JavaScriptCore/assembler/ARMv7Assembler.h
Source/JavaScriptCore/assembler/AbstractMacroAssembler.h
Source/JavaScriptCore/assembler/AssemblerBuffer.h
Source/JavaScriptCore/assembler/LinkBuffer.cpp
Source/JavaScriptCore/assembler/LinkBuffer.h
Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
Source/JavaScriptCore/assembler/MacroAssemblerARMv7.h
Source/JavaScriptCore/assembler/X86Assembler.h
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/bytecode/InlineAccess.cpp [new file with mode: 0644]
Source/JavaScriptCore/bytecode/InlineAccess.h [new file with mode: 0644]
Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp
Source/JavaScriptCore/bytecode/StructureStubInfo.cpp
Source/JavaScriptCore/bytecode/StructureStubInfo.h
Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
Source/JavaScriptCore/jit/JITInlineCacheGenerator.cpp
Source/JavaScriptCore/jit/JITInlineCacheGenerator.h
Source/JavaScriptCore/jit/JITPropertyAccess.cpp
Source/JavaScriptCore/jit/JITPropertyAccess32_64.cpp
Source/JavaScriptCore/jit/Repatch.cpp

index 4d68a73..63656d2 100644 (file)
@@ -200,6 +200,7 @@ set(JavaScriptCore_SOURCES
     bytecode/ExitingJITType.cpp
     bytecode/GetByIdStatus.cpp
     bytecode/GetByIdVariant.cpp
+    bytecode/InlineAccess.cpp
     bytecode/InlineCallFrame.cpp
     bytecode/InlineCallFrameSet.cpp
     bytecode/JumpTable.cpp
index 028b5e1..7f1ea43 100644 (file)
@@ -1,3 +1,198 @@
+2016-06-19  Saam Barati  <sbarati@apple.com>
+
+        We should be able to generate more types of ICs inline
+        https://bugs.webkit.org/show_bug.cgi?id=158719
+        <rdar://problem/26825641>
+
+        Reviewed by Filip Pizlo.
+
+        This patch changes how we emit code for *byId ICs inline.
+        We no longer keep data labels to patch structure checks, etc.
+        Instead, we just regenerate the entire IC into a designated
+        region of code that the Baseline/DFG/FTL JIT will emit inline.
+        This makes it much simpler to patch inline ICs. All that's
+        needed to patch an inline IC is to memcpy the code from
+        a macro assembler inline using LinkBuffer. This architecture
+        will be easy to extend into other forms of ICs, such as one
+        for add, in the future.
+
+        To support this change, I've reworked the fields inside
+        StructureStubInfo. It now has one field that is the CodeLocationLabel 
+        of the start of the inline IC. Then it has a few ints that track deltas
+        to other locations in the IC such as the slow path start, slow path call, the
+        ICs 'done' location. We used to perform math on these ints in a bunch of different
+        places. I've consolidated that math into methods inside StructureStubInfo.
+
+        To generate inline ICs, I've implemented a new class called InlineAccess.
+        InlineAccess is stateless: it just has a bunch of static methods for
+        generating code into the inline region specified by StructureStubInfo.
+        Repatch will now decide when it wants to generate such an inline
+        IC, and it will ask InlineAccess to do so.
+
+        I've implemented three types of inline ICs to begin with (extending
+        this in the future should be easy):
+        - Self property loads (both inline and out of line offsets).
+        - Self property replace (both inline and out of line offsets).
+        - Array length on specific array types.
+        (An easy extension would be to implement JSString length.)
+
+        To know how much inline space to reserve, I've implemented a
+        method that stubs out the various inline cache shapes and 
+        dumps their size. This is used to determine how much space
+        to save inline. When InlineAccess ends up generating more
+        code than can fit inline, we will fall back to generating
+        code with PolymorphicAccess instead.
+
+        To make generating code into already allocated executable memory
+        efficient, I've made AssemblerData have 128 bytes of inline storage.
+        This saves us a malloc when splatting code into the inline region.
+
+        This patch also tidies up LinkBuffer's API for generating
+        into already allocated executable memory. Now, when generating
+        code that has less size than the already allocated space, LinkBuffer
+        will fill the extra space with nops. Also, if branch compaction shrinks
+        the code, LinkBuffer will add a nop sled at the end of the shrunken
+        code to take up the entire allocated size.
+
+        This looks like it could be a 1% octane progression.
+
+        * CMakeLists.txt:
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * assembler/ARM64Assembler.h:
+        (JSC::ARM64Assembler::nop):
+        (JSC::ARM64Assembler::fillNops):
+        * assembler/ARMv7Assembler.h:
+        (JSC::ARMv7Assembler::nopw):
+        (JSC::ARMv7Assembler::nopPseudo16):
+        (JSC::ARMv7Assembler::nopPseudo32):
+        (JSC::ARMv7Assembler::fillNops):
+        (JSC::ARMv7Assembler::dmbSY):
+        * assembler/AbstractMacroAssembler.h:
+        (JSC::AbstractMacroAssembler::addLinkTask):
+        (JSC::AbstractMacroAssembler::emitNops):
+        (JSC::AbstractMacroAssembler::AbstractMacroAssembler):
+        * assembler/AssemblerBuffer.h:
+        (JSC::AssemblerData::AssemblerData):
+        (JSC::AssemblerData::operator=):
+        (JSC::AssemblerData::~AssemblerData):
+        (JSC::AssemblerData::buffer):
+        (JSC::AssemblerData::grow):
+        (JSC::AssemblerData::isInlineBuffer):
+        (JSC::AssemblerBuffer::AssemblerBuffer):
+        (JSC::AssemblerBuffer::ensureSpace):
+        (JSC::AssemblerBuffer::codeSize):
+        (JSC::AssemblerBuffer::setCodeSize):
+        (JSC::AssemblerBuffer::label):
+        (JSC::AssemblerBuffer::debugOffset):
+        (JSC::AssemblerBuffer::releaseAssemblerData):
+        * assembler/LinkBuffer.cpp:
+        (JSC::LinkBuffer::copyCompactAndLinkCode):
+        (JSC::LinkBuffer::linkCode):
+        (JSC::LinkBuffer::allocate):
+        (JSC::LinkBuffer::performFinalization):
+        (JSC::LinkBuffer::shrink): Deleted.
+        * assembler/LinkBuffer.h:
+        (JSC::LinkBuffer::LinkBuffer):
+        (JSC::LinkBuffer::debugAddress):
+        (JSC::LinkBuffer::size):
+        (JSC::LinkBuffer::wasAlreadyDisassembled):
+        (JSC::LinkBuffer::didAlreadyDisassemble):
+        (JSC::LinkBuffer::applyOffset):
+        (JSC::LinkBuffer::code):
+        * assembler/MacroAssemblerARM64.h:
+        (JSC::MacroAssemblerARM64::patchableBranch32):
+        (JSC::MacroAssemblerARM64::patchableBranch64):
+        * assembler/MacroAssemblerARMv7.h:
+        (JSC::MacroAssemblerARMv7::patchableBranch32):
+        (JSC::MacroAssemblerARMv7::patchableBranchPtrWithPatch):
+        * assembler/X86Assembler.h:
+        (JSC::X86Assembler::nop):
+        (JSC::X86Assembler::fillNops):
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::printGetByIdCacheStatus):
+        * bytecode/InlineAccess.cpp: Added.
+        (JSC::InlineAccess::dumpCacheSizesAndCrash):
+        (JSC::linkCodeInline):
+        (JSC::InlineAccess::generateSelfPropertyAccess):
+        (JSC::getScratchRegister):
+        (JSC::hasFreeRegister):
+        (JSC::InlineAccess::canGenerateSelfPropertyReplace):
+        (JSC::InlineAccess::generateSelfPropertyReplace):
+        (JSC::InlineAccess::isCacheableArrayLength):
+        (JSC::InlineAccess::generateArrayLength):
+        (JSC::InlineAccess::rewireStubAsJump):
+        * bytecode/InlineAccess.h: Added.
+        (JSC::InlineAccess::sizeForPropertyAccess):
+        (JSC::InlineAccess::sizeForPropertyReplace):
+        (JSC::InlineAccess::sizeForLengthAccess):
+        * bytecode/PolymorphicAccess.cpp:
+        (JSC::PolymorphicAccess::regenerate):
+        * bytecode/StructureStubInfo.cpp:
+        (JSC::StructureStubInfo::initGetByIdSelf):
+        (JSC::StructureStubInfo::initArrayLength):
+        (JSC::StructureStubInfo::initPutByIdReplace):
+        (JSC::StructureStubInfo::deref):
+        (JSC::StructureStubInfo::aboutToDie):
+        (JSC::StructureStubInfo::propagateTransitions):
+        (JSC::StructureStubInfo::containsPC):
+        * bytecode/StructureStubInfo.h:
+        (JSC::StructureStubInfo::considerCaching):
+        (JSC::StructureStubInfo::slowPathCallLocation):
+        (JSC::StructureStubInfo::doneLocation):
+        (JSC::StructureStubInfo::slowPathStartLocation):
+        (JSC::StructureStubInfo::patchableJumpForIn):
+        (JSC::StructureStubInfo::valueRegs):
+        * dfg/DFGJITCompiler.cpp:
+        (JSC::DFG::JITCompiler::link):
+        * dfg/DFGOSRExitCompilerCommon.cpp:
+        (JSC::DFG::reifyInlinedCallFrames):
+        * dfg/DFGSpeculativeJIT32_64.cpp:
+        (JSC::DFG::SpeculativeJIT::cachedGetById):
+        * dfg/DFGSpeculativeJIT64.cpp:
+        (JSC::DFG::SpeculativeJIT::cachedGetById):
+        * ftl/FTLLowerDFGToB3.cpp:
+        (JSC::FTL::DFG::LowerDFGToB3::compileIn):
+        (JSC::FTL::DFG::LowerDFGToB3::getById):
+        * jit/JITInlineCacheGenerator.cpp:
+        (JSC::JITByIdGenerator::finalize):
+        (JSC::JITByIdGenerator::generateFastCommon):
+        (JSC::JITGetByIdGenerator::JITGetByIdGenerator):
+        (JSC::JITGetByIdGenerator::generateFastPath):
+        (JSC::JITPutByIdGenerator::JITPutByIdGenerator):
+        (JSC::JITPutByIdGenerator::generateFastPath):
+        (JSC::JITPutByIdGenerator::slowPathFunction):
+        (JSC::JITByIdGenerator::generateFastPathChecks): Deleted.
+        * jit/JITInlineCacheGenerator.h:
+        (JSC::JITByIdGenerator::reportSlowPathCall):
+        (JSC::JITByIdGenerator::slowPathBegin):
+        (JSC::JITByIdGenerator::slowPathJump):
+        (JSC::JITGetByIdGenerator::JITGetByIdGenerator):
+        * jit/JITPropertyAccess.cpp:
+        (JSC::JIT::emitGetByValWithCachedId):
+        (JSC::JIT::emit_op_try_get_by_id):
+        (JSC::JIT::emit_op_get_by_id):
+        * jit/JITPropertyAccess32_64.cpp:
+        (JSC::JIT::emitGetByValWithCachedId):
+        (JSC::JIT::emit_op_try_get_by_id):
+        (JSC::JIT::emit_op_get_by_id):
+        * jit/Repatch.cpp:
+        (JSC::repatchCall):
+        (JSC::tryCacheGetByID):
+        (JSC::repatchGetByID):
+        (JSC::appropriateGenericPutByIdFunction):
+        (JSC::tryCachePutByID):
+        (JSC::repatchPutByID):
+        (JSC::tryRepatchIn):
+        (JSC::repatchIn):
+        (JSC::linkSlowFor):
+        (JSC::resetGetByID):
+        (JSC::resetPutByID):
+        (JSC::resetIn):
+        (JSC::repatchByIdSelfAccess): Deleted.
+        (JSC::resetGetByIDCheckAndLoad): Deleted.
+        (JSC::resetPutByIDCheckAndLoad): Deleted.
+        (JSC::replaceWithJump): Deleted.
+
 2016-06-19  Filip Pizlo  <fpizlo@apple.com>
 
         REGRESSION(concurrent baseline JIT): Kraken/ai-astar runs 20% slower
index 4fa3b0b..65fffa8 100644 (file)
                70ECA6091AFDBEA200449739 /* TemplateRegistryKey.h in Headers */ = {isa = PBXBuildFile; fileRef = 70ECA6041AFDBEA200449739 /* TemplateRegistryKey.h */; settings = {ATTRIBUTES = (Private, ); }; };
                72AAF7CD1D0D31B3005E60BE /* JSCustomGetterSetterFunction.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 72AAF7CB1D0D318B005E60BE /* JSCustomGetterSetterFunction.cpp */; };
                72AAF7CE1D0D31B3005E60BE /* JSCustomGetterSetterFunction.h in Headers */ = {isa = PBXBuildFile; fileRef = 72AAF7CC1D0D318B005E60BE /* JSCustomGetterSetterFunction.h */; };
+               7905BB681D12050E0019FE57 /* InlineAccess.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 7905BB661D12050E0019FE57 /* InlineAccess.cpp */; };
+               7905BB691D12050E0019FE57 /* InlineAccess.h in Headers */ = {isa = PBXBuildFile; fileRef = 7905BB671D12050E0019FE57 /* InlineAccess.h */; settings = {ATTRIBUTES = (Private, ); }; };
                79160DBD1C8E3EC8008C085A /* ProxyRevoke.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 79160DBB1C8E3EC8008C085A /* ProxyRevoke.cpp */; };
                79160DBE1C8E3EC8008C085A /* ProxyRevoke.h in Headers */ = {isa = PBXBuildFile; fileRef = 79160DBC1C8E3EC8008C085A /* ProxyRevoke.h */; settings = {ATTRIBUTES = (Private, ); }; };
                792CB3491C4EED5C00D13AF3 /* PCToCodeOriginMap.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 792CB3471C4EED5C00D13AF3 /* PCToCodeOriginMap.cpp */; };
                70ECA6041AFDBEA200449739 /* TemplateRegistryKey.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = TemplateRegistryKey.h; sourceTree = "<group>"; };
                72AAF7CB1D0D318B005E60BE /* JSCustomGetterSetterFunction.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSCustomGetterSetterFunction.cpp; sourceTree = "<group>"; };
                72AAF7CC1D0D318B005E60BE /* JSCustomGetterSetterFunction.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSCustomGetterSetterFunction.h; sourceTree = "<group>"; };
+               7905BB661D12050E0019FE57 /* InlineAccess.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = InlineAccess.cpp; sourceTree = "<group>"; };
+               7905BB671D12050E0019FE57 /* InlineAccess.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = InlineAccess.h; sourceTree = "<group>"; };
                79160DBB1C8E3EC8008C085A /* ProxyRevoke.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ProxyRevoke.cpp; sourceTree = "<group>"; };
                79160DBC1C8E3EC8008C085A /* ProxyRevoke.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ProxyRevoke.h; sourceTree = "<group>"; };
                792CB3471C4EED5C00D13AF3 /* PCToCodeOriginMap.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = PCToCodeOriginMap.cpp; sourceTree = "<group>"; };
                                0F0332C118B01763005F979A /* GetByIdVariant.cpp */,
                                0F0332C218B01763005F979A /* GetByIdVariant.h */,
                                0F0B83A814BCF55E00885B4F /* HandlerInfo.h */,
+                               7905BB661D12050E0019FE57 /* InlineAccess.cpp */,
+                               7905BB671D12050E0019FE57 /* InlineAccess.h */,
                                148A7BED1B82975A002D9157 /* InlineCallFrame.cpp */,
                                148A7BEE1B82975A002D9157 /* InlineCallFrame.h */,
                                0F24E55317F0B71C00ABB217 /* InlineCallFrameSet.cpp */,
                                0F2B9CE919D0BA7D00B1D1B5 /* DFGObjectMaterializationData.h in Headers */,
                                43C392AB1C3BEB0500241F53 /* AssemblerCommon.h in Headers */,
                                86EC9DD01328DF82002B2AD7 /* DFGOperations.h in Headers */,
+                               7905BB691D12050E0019FE57 /* InlineAccess.h in Headers */,
                                A7D89CFE17A0B8CC00773AD8 /* DFGOSRAvailabilityAnalysisPhase.h in Headers */,
                                0FD82E57141DAF1000179C94 /* DFGOSREntry.h in Headers */,
                                0F40E4A71C497F7400A577FA /* AirOpcode.h in Headers */,
                                0F6B8AD81C4EDDA200969052 /* B3DuplicateTails.cpp in Sources */,
                                527773DE1AAF83AC00BDE7E8 /* RuntimeType.cpp in Sources */,
                                0F7700921402FF3C0078EB39 /* SamplingCounter.cpp in Sources */,
+                               7905BB681D12050E0019FE57 /* InlineAccess.cpp in Sources */,
                                0FE050271AA9095600D33B33 /* ScopedArguments.cpp in Sources */,
                                0FE0502F1AAA806900D33B33 /* ScopedArgumentsTable.cpp in Sources */,
                                992ABCF91BEA9BD2006403A0 /* RemoteAutomationTarget.cpp in Sources */,
index 0be7b41..86f13ed 100644 (file)
@@ -1484,13 +1484,16 @@ public:
         insn(nopPseudo());
     }
     
-    static void fillNops(void* base, size_t size)
+    static void fillNops(void* base, size_t size, bool isCopyingToExecutableMemory)
     {
         RELEASE_ASSERT(!(size % sizeof(int32_t)));
         size_t n = size / sizeof(int32_t);
         for (int32_t* ptr = static_cast<int32_t*>(base); n--;) {
             int insn = nopPseudo();
-            performJITMemcpy(ptr++, &insn, sizeof(int));
+            if (isCopyingToExecutableMemory)
+                performJITMemcpy(ptr++, &insn, sizeof(int));
+            else
+                memcpy(ptr++, &insn, sizeof(int));
         }
     }
     
index e54ecff..af076e8 100644 (file)
@@ -2004,6 +2004,43 @@ public:
         m_formatter.twoWordOp16Op16(OP_NOP_T2a, OP_NOP_T2b);
     }
     
+    static constexpr int16_t nopPseudo16()
+    {
+        return OP_NOP_T1;
+    }
+
+    static constexpr int32_t nopPseudo32()
+    {
+        return OP_NOP_T2a | (OP_NOP_T2b << 16);
+    }
+
+    static void fillNops(void* base, size_t size, bool isCopyingToExecutableMemory)
+    {
+        RELEASE_ASSERT(!(size % sizeof(int16_t)));
+
+        char* ptr = static_cast<char*>(base);
+        const size_t num32s = size / sizeof(int32_t);
+        for (size_t i = 0; i < num32s; i++) {
+            const int32_t insn = nopPseudo32();
+            if (isCopyingToExecutableMemory)
+                performJITMemcpy(ptr, &insn, sizeof(int32_t));
+            else
+                memcpy(ptr, &insn, sizeof(int32_t));
+            ptr += sizeof(int32_t);
+        }
+
+        const size_t num16s = (size % sizeof(int32_t)) / sizeof(int16_t);
+        ASSERT(num16s == 0 || num16s == 1);
+        ASSERT(num16s * sizeof(int16_t) + num32s * sizeof(int32_t) == size);
+        if (num16s) {
+            const int16_t insn = nopPseudo16();
+            if (isCopyingToExecutableMemory)
+                performJITMemcpy(ptr, &insn, sizeof(int16_t));
+            else
+                memcpy(ptr, &insn, sizeof(int16_t));
+        }
+    }
+
     void dmbSY()
     {
         m_formatter.twoWordOp16Op16(OP_DMB_SY_T2a, OP_DMB_SY_T2b);
index cc231fc..50bc264 100644 (file)
@@ -27,7 +27,6 @@
 #define AbstractMacroAssembler_h
 
 #include "AbortReason.h"
-#include "AssemblerBuffer.h"
 #include "CodeLocation.h"
 #include "MacroAssemblerCodeRef.h"
 #include "Options.h"
@@ -1040,6 +1039,17 @@ public:
         m_linkTasks.append(createSharedTask<void(LinkBuffer&)>(functor));
     }
 
+    void emitNops(size_t memoryToFillWithNopsInBytes)
+    {
+        AssemblerBuffer& buffer = m_assembler.buffer();
+        size_t startCodeSize = buffer.codeSize();
+        size_t targetCodeSize = startCodeSize + memoryToFillWithNopsInBytes;
+        buffer.ensureSpace(memoryToFillWithNopsInBytes);
+        bool isCopyingToExecutableMemory = false;
+        AssemblerType::fillNops(static_cast<char*>(buffer.data()) + startCodeSize, memoryToFillWithNopsInBytes, isCopyingToExecutableMemory);
+        buffer.setCodeSize(targetCodeSize);
+    }
+
 protected:
     AbstractMacroAssembler()
         : m_randomSource(0)
index 9d1f4ef..d616b00 100644 (file)
@@ -62,39 +62,62 @@ namespace JSC {
     };
 
     class AssemblerData {
+        WTF_MAKE_NONCOPYABLE(AssemblerData);
+        static const size_t InlineCapacity = 128;
     public:
         AssemblerData()
-            : m_buffer(nullptr)
-            , m_capacity(0)
+            : m_buffer(m_inlineBuffer)
+            , m_capacity(InlineCapacity)
         {
         }
 
-        AssemblerData(unsigned initialCapacity)
+        AssemblerData(size_t initialCapacity)
         {
-            m_capacity = initialCapacity;
-            m_buffer = static_cast<char*>(fastMalloc(m_capacity));
+            if (initialCapacity <= InlineCapacity) {
+                m_capacity = InlineCapacity;
+                m_buffer = m_inlineBuffer;
+            } else {
+                m_capacity = initialCapacity;
+                m_buffer = static_cast<char*>(fastMalloc(m_capacity));
+            }
         }
 
         AssemblerData(AssemblerData&& other)
         {
-            m_buffer = other.m_buffer;
-            other.m_buffer = nullptr;
+            if (other.isInlineBuffer()) {
+                ASSERT(other.m_capacity == InlineCapacity);
+                memcpy(m_inlineBuffer, other.m_inlineBuffer, InlineCapacity);
+                m_buffer = m_inlineBuffer;
+            } else
+                m_buffer = other.m_buffer;
             m_capacity = other.m_capacity;
+
+            other.m_buffer = nullptr;
             other.m_capacity = 0;
         }
 
         AssemblerData& operator=(AssemblerData&& other)
         {
-            m_buffer = other.m_buffer;
-            other.m_buffer = nullptr;
+            if (m_buffer && !isInlineBuffer())
+                fastFree(m_buffer);
+
+            if (other.isInlineBuffer()) {
+                ASSERT(other.m_capacity == InlineCapacity);
+                memcpy(m_inlineBuffer, other.m_inlineBuffer, InlineCapacity);
+                m_buffer = m_inlineBuffer;
+            } else
+                m_buffer = other.m_buffer;
             m_capacity = other.m_capacity;
+
+            other.m_buffer = nullptr;
             other.m_capacity = 0;
             return *this;
         }
 
         ~AssemblerData()
         {
-            fastFree(m_buffer);
+            if (m_buffer && !isInlineBuffer())
+                fastFree(m_buffer);
         }
 
         char* buffer() const { return m_buffer; }
@@ -104,19 +127,24 @@ namespace JSC {
         void grow(unsigned extraCapacity = 0)
         {
             m_capacity = m_capacity + m_capacity / 2 + extraCapacity;
-            m_buffer = static_cast<char*>(fastRealloc(m_buffer, m_capacity));
+            if (isInlineBuffer()) {
+                m_buffer = static_cast<char*>(fastMalloc(m_capacity));
+                memcpy(m_buffer, m_inlineBuffer, InlineCapacity);
+            } else
+                m_buffer = static_cast<char*>(fastRealloc(m_buffer, m_capacity));
         }
 
     private:
+        bool isInlineBuffer() const { return m_buffer == m_inlineBuffer; }
         char* m_buffer;
+        char m_inlineBuffer[InlineCapacity];
         unsigned m_capacity;
     };
 
     class AssemblerBuffer {
-        static const int initialCapacity = 128;
     public:
         AssemblerBuffer()
-            : m_storage(initialCapacity)
+            : m_storage()
             , m_index(0)
         {
         }
@@ -128,7 +156,7 @@ namespace JSC {
 
         void ensureSpace(unsigned space)
         {
-            if (!isAvailable(space))
+            while (!isAvailable(space))
                 outOfLineGrow();
         }
 
@@ -156,6 +184,15 @@ namespace JSC {
             return m_index;
         }
 
+        void setCodeSize(size_t index)
+        {
+            // Warning: Only use this if you know exactly what you are doing.
+            // For example, say you want 40 bytes of nops, it's ok to grow
+            // and then fill 40 bytes of nops using bigger instructions.
+            m_index = index;
+            ASSERT(m_index <= m_storage.capacity());
+        }
+
         AssemblerLabel label() const
         {
             return AssemblerLabel(m_index);
@@ -163,7 +200,7 @@ namespace JSC {
 
         unsigned debugOffset() { return m_index; }
 
-        AssemblerData releaseAssemblerData() { return WTFMove(m_storage); }
+        AssemblerData&& releaseAssemblerData() { return WTFMove(m_storage); }
 
         // LocalWriter is a trick to keep the storage buffer and the index
         // in memory while issuing multiple Stores.
index 1259c8f..326d20d 100644 (file)
@@ -98,62 +98,71 @@ static ALWAYS_INLINE void recordLinkOffsets(AssemblerData& assemblerData, int32_
 template <typename InstructionType>
 void LinkBuffer::copyCompactAndLinkCode(MacroAssembler& macroAssembler, void* ownerUID, JITCompilationEffort effort)
 {
-    m_initialSize = macroAssembler.m_assembler.codeSize();
-    allocate(m_initialSize, ownerUID, effort);
+    allocate(macroAssembler, ownerUID, effort);
+    const size_t initialSize = macroAssembler.m_assembler.codeSize();
     if (didFailToAllocate())
         return;
+
     Vector<LinkRecord, 0, UnsafeVectorOverflow>& jumpsToLink = macroAssembler.jumpsToLink();
     m_assemblerStorage = macroAssembler.m_assembler.buffer().releaseAssemblerData();
     uint8_t* inData = reinterpret_cast<uint8_t*>(m_assemblerStorage.buffer());
 
     AssemblerData outBuffer(m_size);
+
     uint8_t* outData = reinterpret_cast<uint8_t*>(outBuffer.buffer());
     uint8_t* codeOutData = reinterpret_cast<uint8_t*>(m_code);
 
     int readPtr = 0;
     int writePtr = 0;
     unsigned jumpCount = jumpsToLink.size();
-    for (unsigned i = 0; i < jumpCount; ++i) {
-        int offset = readPtr - writePtr;
-        ASSERT(!(offset & 1));
-            
-        // Copy the instructions from the last jump to the current one.
-        size_t regionSize = jumpsToLink[i].from() - readPtr;
-        InstructionType* copySource = reinterpret_cast_ptr<InstructionType*>(inData + readPtr);
-        InstructionType* copyEnd = reinterpret_cast_ptr<InstructionType*>(inData + readPtr + regionSize);
-        InstructionType* copyDst = reinterpret_cast_ptr<InstructionType*>(outData + writePtr);
-        ASSERT(!(regionSize % 2));
-        ASSERT(!(readPtr % 2));
-        ASSERT(!(writePtr % 2));
-        while (copySource != copyEnd)
-            *copyDst++ = *copySource++;
-        recordLinkOffsets(m_assemblerStorage, readPtr, jumpsToLink[i].from(), offset);
-        readPtr += regionSize;
-        writePtr += regionSize;
-            
-        // Calculate absolute address of the jump target, in the case of backwards
-        // branches we need to be precise, forward branches we are pessimistic
-        const uint8_t* target;
-        if (jumpsToLink[i].to() >= jumpsToLink[i].from())
-            target = codeOutData + jumpsToLink[i].to() - offset; // Compensate for what we have collapsed so far
-        else
-            target = codeOutData + jumpsToLink[i].to() - executableOffsetFor(jumpsToLink[i].to());
-            
-        JumpLinkType jumpLinkType = MacroAssembler::computeJumpType(jumpsToLink[i], codeOutData + writePtr, target);
-        // Compact branch if we can...
-        if (MacroAssembler::canCompact(jumpsToLink[i].type())) {
-            // Step back in the write stream
-            int32_t delta = MacroAssembler::jumpSizeDelta(jumpsToLink[i].type(), jumpLinkType);
-            if (delta) {
-                writePtr -= delta;
-                recordLinkOffsets(m_assemblerStorage, jumpsToLink[i].from() - delta, readPtr, readPtr - writePtr);
+    if (m_shouldPerformBranchCompaction) {
+        for (unsigned i = 0; i < jumpCount; ++i) {
+            int offset = readPtr - writePtr;
+            ASSERT(!(offset & 1));
+                
+            // Copy the instructions from the last jump to the current one.
+            size_t regionSize = jumpsToLink[i].from() - readPtr;
+            InstructionType* copySource = reinterpret_cast_ptr<InstructionType*>(inData + readPtr);
+            InstructionType* copyEnd = reinterpret_cast_ptr<InstructionType*>(inData + readPtr + regionSize);
+            InstructionType* copyDst = reinterpret_cast_ptr<InstructionType*>(outData + writePtr);
+            ASSERT(!(regionSize % 2));
+            ASSERT(!(readPtr % 2));
+            ASSERT(!(writePtr % 2));
+            while (copySource != copyEnd)
+                *copyDst++ = *copySource++;
+            recordLinkOffsets(m_assemblerStorage, readPtr, jumpsToLink[i].from(), offset);
+            readPtr += regionSize;
+            writePtr += regionSize;
+                
+            // Calculate absolute address of the jump target, in the case of backwards
+            // branches we need to be precise, forward branches we are pessimistic
+            const uint8_t* target;
+            if (jumpsToLink[i].to() >= jumpsToLink[i].from())
+                target = codeOutData + jumpsToLink[i].to() - offset; // Compensate for what we have collapsed so far
+            else
+                target = codeOutData + jumpsToLink[i].to() - executableOffsetFor(jumpsToLink[i].to());
+                
+            JumpLinkType jumpLinkType = MacroAssembler::computeJumpType(jumpsToLink[i], codeOutData + writePtr, target);
+            // Compact branch if we can...
+            if (MacroAssembler::canCompact(jumpsToLink[i].type())) {
+                // Step back in the write stream
+                int32_t delta = MacroAssembler::jumpSizeDelta(jumpsToLink[i].type(), jumpLinkType);
+                if (delta) {
+                    writePtr -= delta;
+                    recordLinkOffsets(m_assemblerStorage, jumpsToLink[i].from() - delta, readPtr, readPtr - writePtr);
+                }
             }
+            jumpsToLink[i].setFrom(writePtr);
+        }
+    } else {
+        if (!ASSERT_DISABLED) {
+            for (unsigned i = 0; i < jumpCount; ++i)
+                ASSERT(!MacroAssembler::canCompact(jumpsToLink[i].type()));
         }
-        jumpsToLink[i].setFrom(writePtr);
     }
     // Copy everything after the last jump
-    memcpy(outData + writePtr, inData + readPtr, m_initialSize - readPtr);
-    recordLinkOffsets(m_assemblerStorage, readPtr, m_initialSize, readPtr - writePtr);
+    memcpy(outData + writePtr, inData + readPtr, initialSize - readPtr);
+    recordLinkOffsets(m_assemblerStorage, readPtr, initialSize, readPtr - writePtr);
         
     for (unsigned i = 0; i < jumpCount; ++i) {
         uint8_t* location = codeOutData + jumpsToLink[i].from();
@@ -162,12 +171,21 @@ void LinkBuffer::copyCompactAndLinkCode(MacroAssembler& macroAssembler, void* ow
     }
 
     jumpsToLink.clear();
-    shrink(writePtr + m_initialSize - readPtr);
 
-    performJITMemcpy(m_code, outBuffer.buffer(), m_size);
+    size_t compactSize = writePtr + initialSize - readPtr;
+    if (m_executableMemory) {
+        m_size = compactSize;
+        m_executableMemory->shrink(m_size);
+    } else {
+        size_t nopSizeInBytes = initialSize - compactSize;
+        bool isCopyingToExecutableMemory = false;
+        MacroAssembler::AssemblerType_T::fillNops(outData + compactSize, nopSizeInBytes, isCopyingToExecutableMemory);
+    }
+
+    performJITMemcpy(m_code, outData, m_size);
 
 #if DUMP_LINK_STATISTICS
-    dumpLinkStatistics(m_code, m_initialSize, m_size);
+    dumpLinkStatistics(m_code, initialSize, m_size);
 #endif
 #if DUMP_CODE
     dumpCode(m_code, m_size);
@@ -182,11 +200,11 @@ void LinkBuffer::linkCode(MacroAssembler& macroAssembler, void* ownerUID, JITCom
 #if defined(ASSEMBLER_HAS_CONSTANT_POOL) && ASSEMBLER_HAS_CONSTANT_POOL
     macroAssembler.m_assembler.buffer().flushConstantPool(false);
 #endif
-    AssemblerBuffer& buffer = macroAssembler.m_assembler.buffer();
-    allocate(buffer.codeSize(), ownerUID, effort);
+    allocate(macroAssembler, ownerUID, effort);
     if (!m_didAllocate)
         return;
     ASSERT(m_code);
+    AssemblerBuffer& buffer = macroAssembler.m_assembler.buffer();
 #if CPU(ARM_TRADITIONAL)
     macroAssembler.m_assembler.prepareExecutableCopy(m_code);
 #endif
@@ -198,19 +216,21 @@ void LinkBuffer::linkCode(MacroAssembler& macroAssembler, void* ownerUID, JITCom
     copyCompactAndLinkCode<uint16_t>(macroAssembler, ownerUID, effort);
 #elif CPU(ARM64)
     copyCompactAndLinkCode<uint32_t>(macroAssembler, ownerUID, effort);
-#endif
+#endif // !ENABLE(BRANCH_COMPACTION)
 
     m_linkTasks = WTFMove(macroAssembler.m_linkTasks);
 }
 
-void LinkBuffer::allocate(size_t initialSize, void* ownerUID, JITCompilationEffort effort)
+void LinkBuffer::allocate(MacroAssembler& macroAssembler, void* ownerUID, JITCompilationEffort effort)
 {
+    size_t initialSize = macroAssembler.m_assembler.codeSize();
     if (m_code) {
         if (initialSize > m_size)
             return;
         
+        size_t nopsToFillInBytes = m_size - initialSize;
+        macroAssembler.emitNops(nopsToFillInBytes);
         m_didAllocate = true;
-        m_size = initialSize;
         return;
     }
     
@@ -223,14 +243,6 @@ void LinkBuffer::allocate(size_t initialSize, void* ownerUID, JITCompilationEffo
     m_didAllocate = true;
 }
 
-void LinkBuffer::shrink(size_t newSize)
-{
-    if (!m_executableMemory)
-        return;
-    m_size = newSize;
-    m_executableMemory->shrink(m_size);
-}
-
 void LinkBuffer::performFinalization()
 {
     for (auto& task : m_linkTasks)
index 1cc647a..9422ce5 100644 (file)
@@ -82,9 +82,6 @@ class LinkBuffer {
 public:
     LinkBuffer(VM& vm, MacroAssembler& macroAssembler, void* ownerUID, JITCompilationEffort effort = JITCompilationMustSucceed)
         : m_size(0)
-#if ENABLE(BRANCH_COMPACTION)
-        , m_initialSize(0)
-#endif
         , m_didAllocate(false)
         , m_code(0)
         , m_vm(&vm)
@@ -95,11 +92,8 @@ public:
         linkCode(macroAssembler, ownerUID, effort);
     }
 
-    LinkBuffer(MacroAssembler& macroAssembler, void* code, size_t size, JITCompilationEffort effort = JITCompilationMustSucceed)
+    LinkBuffer(MacroAssembler& macroAssembler, void* code, size_t size, JITCompilationEffort effort = JITCompilationMustSucceed, bool shouldPerformBranchCompaction = true)
         : m_size(size)
-#if ENABLE(BRANCH_COMPACTION)
-        , m_initialSize(0)
-#endif
         , m_didAllocate(false)
         , m_code(code)
         , m_vm(0)
@@ -107,6 +101,11 @@ public:
         , m_completed(false)
 #endif
     {
+#if ENABLE(BRANCH_COMPACTION)
+        m_shouldPerformBranchCompaction = shouldPerformBranchCompaction;
+#else
+        UNUSED_PARAM(shouldPerformBranchCompaction);
+#endif
         linkCode(macroAssembler, 0, effort);
     }
 
@@ -250,11 +249,7 @@ public:
         return m_code;
     }
 
-    // FIXME: this does not account for the AssemblerData size!
-    size_t size()
-    {
-        return m_size;
-    }
+    size_t size() const { return m_size; }
     
     bool wasAlreadyDisassembled() const { return m_alreadyDisassembled; }
     void didAlreadyDisassemble() { m_alreadyDisassembled = true; }
@@ -278,15 +273,14 @@ private:
 #endif
         return src;
     }
-    
+
     // Keep this private! - the underlying code should only be obtained externally via finalizeCode().
     void* code()
     {
         return m_code;
     }
     
-    void allocate(size_t initialSize, void* ownerUID, JITCompilationEffort);
-    void shrink(size_t newSize);
+    void allocate(MacroAssembler&, void* ownerUID, JITCompilationEffort);
 
     JS_EXPORT_PRIVATE void linkCode(MacroAssembler&, void* ownerUID, JITCompilationEffort);
 #if ENABLE(BRANCH_COMPACTION)
@@ -307,8 +301,8 @@ private:
     RefPtr<ExecutableMemoryHandle> m_executableMemory;
     size_t m_size;
 #if ENABLE(BRANCH_COMPACTION)
-    size_t m_initialSize;
     AssemblerData m_assemblerStorage;
+    bool m_shouldPerformBranchCompaction { true };
 #endif
     bool m_didAllocate;
     void* m_code;
index 7d94674..768c45f 100644 (file)
@@ -3087,6 +3087,14 @@ public:
         return PatchableJump(result);
     }
 
+    PatchableJump patchableBranch32(RelationalCondition cond, Address left, TrustedImm32 imm)
+    {
+        m_makeJumpPatchable = true;
+        Jump result = branch32(cond, left, imm);
+        m_makeJumpPatchable = false;
+        return PatchableJump(result);
+    }
+
     PatchableJump patchableBranch64(RelationalCondition cond, RegisterID reg, TrustedImm64 imm)
     {
         m_makeJumpPatchable = true;
index 6586c29..ef6a542 100644 (file)
@@ -1868,6 +1868,14 @@ public:
         return PatchableJump(result);
     }
 
+    PatchableJump patchableBranch32(RelationalCondition cond, Address left, TrustedImm32 imm)
+    {
+        m_makeJumpPatchable = true;
+        Jump result = branch32(cond, left, imm);
+        m_makeJumpPatchable = false;
+        return PatchableJump(result);
+    }
+
     PatchableJump patchableBranchPtrWithPatch(RelationalCondition cond, Address left, DataLabelPtr& dataLabel, TrustedImmPtr initialRightValue = TrustedImmPtr(0))
     {
         m_makeJumpPatchable = true;
index 2e48f71..f68d5e5 100644 (file)
@@ -2946,8 +2946,9 @@ public:
         m_formatter.oneByteOp(OP_NOP);
     }
 
-    static void fillNops(void* base, size_t size)
+    static void fillNops(void* base, size_t size, bool isCopyingToExecutableMemory)
     {
+        UNUSED_PARAM(isCopyingToExecutableMemory);
 #if CPU(X86_64)
         static const uint8_t nops[10][10] = {
             // nop
index ec0007d..1b1cde2 100644 (file)
@@ -441,6 +441,9 @@ void CodeBlock::printGetByIdCacheStatus(PrintStream& out, ExecState* exec, int l
         case CacheType::Unset:
             out.printf("unset");
             break;
+        case CacheType::ArrayLength:
+            out.printf("ArrayLength");
+            break;
         default:
             RELEASE_ASSERT_NOT_REACHED();
             break;
diff --git a/Source/JavaScriptCore/bytecode/InlineAccess.cpp b/Source/JavaScriptCore/bytecode/InlineAccess.cpp
new file mode 100644 (file)
index 0000000..54bf7b7
--- /dev/null
@@ -0,0 +1,299 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "InlineAccess.h"
+
+#if ENABLE(JIT)
+
+#include "CCallHelpers.h"
+#include "JSArray.h"
+#include "JSCellInlines.h"
+#include "LinkBuffer.h"
+#include "ScratchRegisterAllocator.h"
+#include "Structure.h"
+#include "StructureStubInfo.h"
+#include "VM.h"
+
+namespace JSC {
+
+void InlineAccess::dumpCacheSizesAndCrash(VM& vm)
+{
+    GPRReg base = GPRInfo::regT0;
+    GPRReg value = GPRInfo::regT1;
+#if USE(JSVALUE32_64)
+    JSValueRegs regs(base, value);
+#else
+    JSValueRegs regs(base);
+#endif
+
+    {
+        CCallHelpers jit(&vm);
+
+        GPRReg scratchGPR = value;
+        jit.load8(CCallHelpers::Address(base, JSCell::indexingTypeOffset()), value);
+        jit.and32(CCallHelpers::TrustedImm32(IsArray | IndexingShapeMask), value);
+        jit.patchableBranch32(
+            CCallHelpers::NotEqual, value, CCallHelpers::TrustedImm32(IsArray | ContiguousShape));
+        jit.loadPtr(CCallHelpers::Address(base, JSObject::butterflyOffset()), value);
+        jit.load32(CCallHelpers::Address(value, ArrayStorage::lengthOffset()), value);
+        jit.boxInt32(scratchGPR, regs);
+
+        dataLog("array length size: ", jit.m_assembler.buffer().codeSize(), "\n");
+    }
+
+    {
+        CCallHelpers jit(&vm);
+
+        jit.patchableBranch32(
+            MacroAssembler::NotEqual,
+            MacroAssembler::Address(base, JSCell::structureIDOffset()),
+            MacroAssembler::TrustedImm32(0x000ab21ca));
+        jit.loadPtr(
+            CCallHelpers::Address(base, JSObject::butterflyOffset()),
+            value);
+        GPRReg storageGPR = value;
+        jit.loadValue(
+            CCallHelpers::Address(storageGPR, 0x000ab21ca), regs);
+
+        dataLog("out of line offset cache size: ", jit.m_assembler.buffer().codeSize(), "\n");
+    }
+
+    {
+        CCallHelpers jit(&vm);
+
+        jit.patchableBranch32(
+            MacroAssembler::NotEqual,
+            MacroAssembler::Address(base, JSCell::structureIDOffset()),
+            MacroAssembler::TrustedImm32(0x000ab21ca));
+        jit.loadValue(
+            MacroAssembler::Address(base, 0x000ab21ca), regs);
+
+        dataLog("inline offset cache size: ", jit.m_assembler.buffer().codeSize(), "\n");
+    }
+
+    {
+        CCallHelpers jit(&vm);
+
+        jit.patchableBranch32(
+            MacroAssembler::NotEqual,
+            MacroAssembler::Address(base, JSCell::structureIDOffset()),
+            MacroAssembler::TrustedImm32(0x000ab21ca));
+
+        jit.storeValue(
+            regs, MacroAssembler::Address(base, 0x000ab21ca));
+
+        dataLog("replace cache size: ", jit.m_assembler.buffer().codeSize(), "\n");
+    }
+
+    {
+        CCallHelpers jit(&vm);
+
+        jit.patchableBranch32(
+            MacroAssembler::NotEqual,
+            MacroAssembler::Address(base, JSCell::structureIDOffset()),
+            MacroAssembler::TrustedImm32(0x000ab21ca));
+
+        jit.loadPtr(MacroAssembler::Address(base, JSObject::butterflyOffset()), value);
+        jit.storeValue(
+            regs,
+            MacroAssembler::Address(base, 120342));
+
+        dataLog("replace out of line cache size: ", jit.m_assembler.buffer().codeSize(), "\n");
+    }
+
+    CRASH();
+}
+
+
+template <typename Function>
+ALWAYS_INLINE static bool linkCodeInline(const char* name, CCallHelpers& jit, StructureStubInfo& stubInfo, const Function& function)
+{
+    if (jit.m_assembler.buffer().codeSize() <= stubInfo.patch.inlineSize) {
+        bool needsBranchCompaction = false;
+        LinkBuffer linkBuffer(jit, stubInfo.patch.start.dataLocation(), stubInfo.patch.inlineSize, JITCompilationMustSucceed, needsBranchCompaction);
+        ASSERT(linkBuffer.isValid());
+        function(linkBuffer);
+        FINALIZE_CODE(linkBuffer, ("InlineAccessType: '%s'", name));
+        return true;
+    }
+
+    // This is helpful when determining the size for inline ICs on various
+    // platforms. You want to choose a size that usually succeeds, but sometimes
+    // there may be variability in the length of the code we generate just because
+    // of randomness. It's helpful to flip this on when running tests or browsing
+    // the web just to see how often it fails. You don't want an IC size that always fails.
+    const bool failIfCantInline = false;
+    if (failIfCantInline) {
+        dataLog("Failure for: ", name, "\n");
+        dataLog("real size: ", jit.m_assembler.buffer().codeSize(), " inline size:", stubInfo.patch.inlineSize, "\n");
+        CRASH();
+    }
+
+    return false;
+}
+
+bool InlineAccess::generateSelfPropertyAccess(VM& vm, StructureStubInfo& stubInfo, Structure* structure, PropertyOffset offset)
+{
+    CCallHelpers jit(&vm);
+
+    GPRReg base = static_cast<GPRReg>(stubInfo.patch.baseGPR);
+    JSValueRegs value = stubInfo.valueRegs();
+
+    auto branchToSlowPath = jit.patchableBranch32(
+        MacroAssembler::NotEqual,
+        MacroAssembler::Address(base, JSCell::structureIDOffset()),
+        MacroAssembler::TrustedImm32(bitwise_cast<uint32_t>(structure->id())));
+    GPRReg storage;
+    if (isInlineOffset(offset))
+        storage = base;
+    else {
+        jit.loadPtr(CCallHelpers::Address(base, JSObject::butterflyOffset()), value.payloadGPR());
+        storage = value.payloadGPR();
+    }
+
+    jit.loadValue(
+        MacroAssembler::Address(storage, offsetRelativeToBase(offset)), value);
+
+    bool linkedCodeInline = linkCodeInline("property access", jit, stubInfo, [&] (LinkBuffer& linkBuffer) {
+        linkBuffer.link(branchToSlowPath, stubInfo.slowPathStartLocation());
+    });
+    return linkedCodeInline;
+}
+
+ALWAYS_INLINE static GPRReg getScratchRegister(StructureStubInfo& stubInfo)
+{
+    ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
+    allocator.lock(static_cast<GPRReg>(stubInfo.patch.baseGPR));
+    allocator.lock(static_cast<GPRReg>(stubInfo.patch.valueGPR));
+#if USE(JSVALUE32_64)
+    allocator.lock(static_cast<GPRReg>(stubInfo.patch.baseTagGPR));
+    allocator.lock(static_cast<GPRReg>(stubInfo.patch.valueTagGPR));
+#endif
+    GPRReg scratch = allocator.allocateScratchGPR();
+    if (allocator.didReuseRegisters())
+        return InvalidGPRReg;
+    return scratch;
+}
+
+ALWAYS_INLINE static bool hasFreeRegister(StructureStubInfo& stubInfo)
+{
+    return getScratchRegister(stubInfo) != InvalidGPRReg;
+}
+
+bool InlineAccess::canGenerateSelfPropertyReplace(StructureStubInfo& stubInfo, PropertyOffset offset)
+{
+    if (isInlineOffset(offset))
+        return true;
+
+    return hasFreeRegister(stubInfo);
+}
+
+bool InlineAccess::generateSelfPropertyReplace(VM& vm, StructureStubInfo& stubInfo, Structure* structure, PropertyOffset offset)
+{
+    ASSERT(canGenerateSelfPropertyReplace(stubInfo, offset));
+
+    CCallHelpers jit(&vm);
+
+    GPRReg base = static_cast<GPRReg>(stubInfo.patch.baseGPR);
+    JSValueRegs value = stubInfo.valueRegs();
+
+    auto branchToSlowPath = jit.patchableBranch32(
+        MacroAssembler::NotEqual,
+        MacroAssembler::Address(base, JSCell::structureIDOffset()),
+        MacroAssembler::TrustedImm32(bitwise_cast<uint32_t>(structure->id())));
+
+    GPRReg storage;
+    if (isInlineOffset(offset))
+        storage = base;
+    else {
+        storage = getScratchRegister(stubInfo);
+        ASSERT(storage != InvalidGPRReg);
+        jit.loadPtr(CCallHelpers::Address(base, JSObject::butterflyOffset()), storage);
+    }
+
+    jit.storeValue(
+        value, MacroAssembler::Address(storage, offsetRelativeToBase(offset)));
+
+    bool linkedCodeInline = linkCodeInline("property replace", jit, stubInfo, [&] (LinkBuffer& linkBuffer) {
+        linkBuffer.link(branchToSlowPath, stubInfo.slowPathStartLocation());
+    });
+    return linkedCodeInline;
+}
+
+bool InlineAccess::isCacheableArrayLength(StructureStubInfo& stubInfo, JSArray* array)
+{
+    ASSERT(array->indexingType() & IsArray);
+
+    if (!hasFreeRegister(stubInfo))
+        return false;
+
+    return array->indexingType() == ArrayWithInt32
+        || array->indexingType() == ArrayWithDouble
+        || array->indexingType() == ArrayWithContiguous;
+}
+
+bool InlineAccess::generateArrayLength(VM& vm, StructureStubInfo& stubInfo, JSArray* array)
+{
+    ASSERT(isCacheableArrayLength(stubInfo, array));
+
+    CCallHelpers jit(&vm);
+
+    GPRReg base = static_cast<GPRReg>(stubInfo.patch.baseGPR);
+    JSValueRegs value = stubInfo.valueRegs();
+    GPRReg scratch = getScratchRegister(stubInfo);
+
+    jit.load8(CCallHelpers::Address(base, JSCell::indexingTypeOffset()), scratch);
+    jit.and32(CCallHelpers::TrustedImm32(IsArray | IndexingShapeMask), scratch);
+    auto branchToSlowPath = jit.patchableBranch32(
+        CCallHelpers::NotEqual, scratch, CCallHelpers::TrustedImm32(array->indexingType()));
+    jit.loadPtr(CCallHelpers::Address(base, JSObject::butterflyOffset()), value.payloadGPR());
+    jit.load32(CCallHelpers::Address(value.payloadGPR(), ArrayStorage::lengthOffset()), value.payloadGPR());
+    jit.boxInt32(value.payloadGPR(), value);
+
+    bool linkedCodeInline = linkCodeInline("array length", jit, stubInfo, [&] (LinkBuffer& linkBuffer) {
+        linkBuffer.link(branchToSlowPath, stubInfo.slowPathStartLocation());
+    });
+    return linkedCodeInline;
+}
+
+void InlineAccess::rewireStubAsJump(VM& vm, StructureStubInfo& stubInfo, CodeLocationLabel target)
+{
+    CCallHelpers jit(&vm);
+
+    auto jump = jit.jump();
+
+    // We don't need a nop sled here because nobody should be jumping into the middle of an IC.
+    bool needsBranchCompaction = false;
+    LinkBuffer linkBuffer(jit, stubInfo.patch.start.dataLocation(), jit.m_assembler.buffer().codeSize(), JITCompilationMustSucceed, needsBranchCompaction);
+    RELEASE_ASSERT(linkBuffer.isValid());
+    linkBuffer.link(jump, target);
+
+    FINALIZE_CODE(linkBuffer, ("InlineAccess: linking constant jump"));
+}
+
+} // namespace JSC
+
+#endif // ENABLE(JIT)
diff --git a/Source/JavaScriptCore/bytecode/InlineAccess.h b/Source/JavaScriptCore/bytecode/InlineAccess.h
new file mode 100644 (file)
index 0000000..b6e6271
--- /dev/null
@@ -0,0 +1,119 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef InlineAccess_h
+#define InlineAccess_h
+
+#if ENABLE(JIT)
+
+#include "CodeLocation.h"
+#include "PropertyOffset.h"
+
+namespace JSC {
+
+class JSArray;
+class Structure;
+class StructureStubInfo;
+class VM;
+
+class InlineAccess {
+public:
+
+    static constexpr size_t sizeForPropertyAccess()
+    {
+#if CPU(X86_64)
+        return 23;
+#elif CPU(X86)
+        return 27;
+#elif CPU(ARM64)
+        return 40;
+#elif CPU(ARM)
+#if CPU(ARM_THUMB2)
+        return 48;
+#else
+        return 50;
+#endif
+#else
+#error "unsupported platform"
+#endif
+    }
+
+    static constexpr size_t sizeForPropertyReplace()
+    {
+#if CPU(X86_64)
+        return 23;
+#elif CPU(X86)
+        return 27;
+#elif CPU(ARM64)
+        return 40;
+#elif CPU(ARM)
+#if CPU(ARM_THUMB2)
+        return 48;
+#else
+        return 50;
+#endif
+#else
+#error "unsupported platform"
+#endif
+    }
+
+    static constexpr size_t sizeForLengthAccess()
+    {
+#if CPU(X86_64)
+        return 26;
+#elif CPU(X86)
+        return 27;
+#elif CPU(ARM64)
+        return 32;
+#elif CPU(ARM)
+#if CPU(ARM_THUMB2)
+        return 30;
+#else
+        return 50;
+#endif
+#else
+#error "unsupported platform"
+#endif
+    }
+
+    static bool generateSelfPropertyAccess(VM&, StructureStubInfo&, Structure*, PropertyOffset);
+    static bool canGenerateSelfPropertyReplace(StructureStubInfo&, PropertyOffset);
+    static bool generateSelfPropertyReplace(VM&, StructureStubInfo&, Structure*, PropertyOffset);
+    static bool isCacheableArrayLength(StructureStubInfo&, JSArray*);
+    static bool generateArrayLength(VM&, StructureStubInfo&, JSArray*);
+    static void rewireStubAsJump(VM&, StructureStubInfo&, CodeLocationLabel);
+
+    // This is helpful when determining the size of an IC on
+    // various platforms. When adding a new type of IC, implement
+    // its placeholder code here, and log the size. That way we
+    // can intelligently choose sizes on various platforms.
+    NO_RETURN_DUE_TO_CRASH static void dumpCacheSizesAndCrash(VM&);
+};
+
+} // namespace JSC
+
+#endif // ENABLE(JIT)
+
+#endif // InlineAccess_h
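
These per-architecture constants reserve a fixed number of bytes inline; InlineAccess either fits a freshly assembled snippet into that budget or reports failure so the caller can fall back to an out-of-line stub. A minimal, self-contained sketch of that contract (not JSC code; the nop byte is x86-specific here purely for illustration):

    #include <cstdint>
    #include <cstring>
    #include <vector>

    constexpr uint8_t kNop = 0x90; // x86 nop; the real code asks the assembler to fill nops.

    struct InlineRegion {
        uint8_t* start;      // corresponds to StructureStubInfo::patch.start
        size_t reservedSize; // one of the sizeFor*() constants above
    };

    // Copy a freshly assembled snippet over the reserved region and pad the
    // remainder with nops. Returns false when the snippet does not fit, which
    // is the cue to fall back to an out-of-line (PolymorphicAccess) stub.
    bool splatIntoRegion(const std::vector<uint8_t>& snippet, InlineRegion region)
    {
        if (snippet.size() > region.reservedSize)
            return false;
        std::memcpy(region.start, snippet.data(), snippet.size());
        std::memset(region.start + snippet.size(), kNop, region.reservedSize - snippet.size());
        return true;
    }
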
index 9b3e7a1..1dc926f 100644 (file)
@@ -1551,11 +1551,7 @@ AccessGenerationResult PolymorphicAccess::regenerate(
     state.ident = &ident;
     
     state.baseGPR = static_cast<GPRReg>(stubInfo.patch.baseGPR);
-    state.valueRegs = JSValueRegs(
-#if USE(JSVALUE32_64)
-        static_cast<GPRReg>(stubInfo.patch.valueTagGPR),
-#endif
-        static_cast<GPRReg>(stubInfo.patch.valueGPR));
+    state.valueRegs = stubInfo.valueRegs();
 
     ScratchRegisterAllocator allocator(stubInfo.patch.usedRegisters);
     state.allocator = &allocator;
@@ -1753,14 +1749,11 @@ AccessGenerationResult PolymorphicAccess::regenerate(
         return AccessGenerationResult::GaveUp;
     }
 
-    CodeLocationLabel successLabel =
-        stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToDone);
+    CodeLocationLabel successLabel = stubInfo.doneLocation();
         
     linkBuffer.link(state.success, successLabel);
 
-    linkBuffer.link(
-        failure,
-        stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
+    linkBuffer.link(failure, stubInfo.slowPathStartLocation());
     
     if (verbose)
         dataLog(*codeBlock, " ", stubInfo.codeOrigin, ": Generating polymorphic access stub for ", listDump(cases), "\n");
index 4133f3a..30dff72 100644 (file)
@@ -63,6 +63,11 @@ void StructureStubInfo::initGetByIdSelf(CodeBlock* codeBlock, Structure* baseObj
     u.byIdSelf.offset = offset;
 }
 
+void StructureStubInfo::initArrayLength()
+{
+    cacheType = CacheType::ArrayLength;
+}
+
 void StructureStubInfo::initPutByIdReplace(CodeBlock* codeBlock, Structure* baseObjectStructure, PropertyOffset offset)
 {
     cacheType = CacheType::PutByIdReplace;
@@ -87,6 +92,7 @@ void StructureStubInfo::deref()
     case CacheType::Unset:
     case CacheType::GetByIdSelf:
     case CacheType::PutByIdReplace:
+    case CacheType::ArrayLength:
         return;
     }
 
@@ -102,6 +108,7 @@ void StructureStubInfo::aboutToDie()
     case CacheType::Unset:
     case CacheType::GetByIdSelf:
     case CacheType::PutByIdReplace:
+    case CacheType::ArrayLength:
         return;
     }
 
@@ -257,6 +264,7 @@ bool StructureStubInfo::propagateTransitions(SlotVisitor& visitor)
 {
     switch (cacheType) {
     case CacheType::Unset:
+    case CacheType::ArrayLength:
         return true;
     case CacheType::GetByIdSelf:
     case CacheType::PutByIdReplace:
@@ -275,6 +283,7 @@ bool StructureStubInfo::containsPC(void* pc) const
         return false;
     return u.stub->containsPC(pc);
 }
-#endif
+
+#endif // ENABLE(JIT)
 
 } // namespace JSC
index d10b5f5..0138466 100644 (file)
@@ -57,7 +57,8 @@ enum class CacheType : int8_t {
     Unset,
     GetByIdSelf,
     PutByIdReplace,
-    Stub
+    Stub,
+    ArrayLength
 };
 
 class StructureStubInfo {
@@ -68,6 +69,7 @@ public:
     ~StructureStubInfo();
 
     void initGetByIdSelf(CodeBlock*, Structure* baseObjectStructure, PropertyOffset);
+    void initArrayLength();
     void initPutByIdReplace(CodeBlock*, Structure* baseObjectStructure, PropertyOffset);
     void initStub(CodeBlock*, std::unique_ptr<PolymorphicAccess>);
 
@@ -143,13 +145,11 @@ public:
         return false;
     }
 
-    CodeLocationCall callReturnLocation;
+    bool containsPC(void* pc) const;
 
     CodeOrigin codeOrigin;
     CallSiteIndex callSiteIndex;
 
-    bool containsPC(void* pc) const;
-
     union {
         struct {
             WriteBarrierBase<Structure> baseObjectStructure;
@@ -165,25 +165,39 @@ public:
     StructureSet bufferedStructures;
     
     struct {
+        CodeLocationLabel start; // This is either the start of the inline IC for *byId caches, or the location of the patchable jump for 'in' caches.
+        RegisterSet usedRegisters;
+        uint32_t inlineSize;
+        int32_t deltaFromStartToSlowPathCallLocation;
+        int32_t deltaFromStartToSlowPathStart;
+
         int8_t baseGPR;
+        int8_t valueGPR;
 #if USE(JSVALUE32_64)
         int8_t valueTagGPR;
         int8_t baseTagGPR;
 #endif
-        int8_t valueGPR;
-        RegisterSet usedRegisters;
-        int32_t deltaCallToDone;
-        int32_t deltaCallToJump;
-        int32_t deltaCallToSlowCase;
-        int32_t deltaCheckImmToCall;
-#if USE(JSVALUE64)
-        int32_t deltaCallToLoadOrStore;
-#else
-        int32_t deltaCallToTagLoadOrStore;
-        int32_t deltaCallToPayloadLoadOrStore;
-#endif
     } patch;
 
+    CodeLocationCall slowPathCallLocation() { return patch.start.callAtOffset(patch.deltaFromStartToSlowPathCallLocation); }
+    CodeLocationLabel doneLocation() { return patch.start.labelAtOffset(patch.inlineSize); }
+    CodeLocationLabel slowPathStartLocation() { return patch.start.labelAtOffset(patch.deltaFromStartToSlowPathStart); }
+    CodeLocationJump patchableJumpForIn()
+    { 
+        ASSERT(accessType == AccessType::In);
+        return patch.start.jumpAtOffset(0);
+    }
+
+    JSValueRegs valueRegs() const
+    {
+        return JSValueRegs(
+#if USE(JSVALUE32_64)
+            static_cast<GPRReg>(patch.valueTagGPR),
+#endif
+            static_cast<GPRReg>(patch.valueGPR));
+    }
+
+
     AccessType accessType;
     CacheType cacheType;
     uint8_t countdown; // We repatch only when this is zero. If not zero, we decrement.
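
The new patch fields collapse what used to be several independent code locations into one absolute start plus small integer deltas, and the accessor methods rebuild the interesting locations on demand. A self-contained sketch of the same arithmetic using raw byte pointers instead of JSC's CodeLocation types:

    #include <cstdint>

    struct InlineICLayout {
        uint8_t* start;                               // start of the inline region
        uint32_t inlineSize;                          // bytes reserved inline
        int32_t deltaFromStartToSlowPathCallLocation; // the slow-path C-function call (repatched on the slow path)
        int32_t deltaFromStartToSlowPathStart;        // first slow-path instruction

        uint8_t* doneLocation() const { return start + inlineSize; }
        uint8_t* slowPathCallLocation() const { return start + deltaFromStartToSlowPathCallLocation; }
        uint8_t* slowPathStartLocation() const { return start + deltaFromStartToSlowPathStart; }
    };
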
index 17c6a0b..4c89650 100644 (file)
@@ -258,11 +258,20 @@ void JITCompiler::link(LinkBuffer& linkBuffer)
 
     for (unsigned i = 0; i < m_ins.size(); ++i) {
         StructureStubInfo& info = *m_ins[i].m_stubInfo;
-        CodeLocationCall callReturnLocation = linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->call());
-        info.patch.deltaCallToDone = differenceBetweenCodePtr(callReturnLocation, linkBuffer.locationOf(m_ins[i].m_done));
-        info.patch.deltaCallToJump = differenceBetweenCodePtr(callReturnLocation, linkBuffer.locationOf(m_ins[i].m_jump));
-        info.callReturnLocation = callReturnLocation;
-        info.patch.deltaCallToSlowCase = differenceBetweenCodePtr(callReturnLocation, linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->label()));
+
+        CodeLocationLabel start = linkBuffer.locationOf(m_ins[i].m_jump);
+        info.patch.start = start;
+
+        ptrdiff_t inlineSize = MacroAssembler::differenceBetweenCodePtr(
+            start, linkBuffer.locationOf(m_ins[i].m_done));
+        RELEASE_ASSERT(inlineSize >= 0);
+        info.patch.inlineSize = inlineSize;
+
+        info.patch.deltaFromStartToSlowPathCallLocation = MacroAssembler::differenceBetweenCodePtr(
+            start, linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->call()));
+
+        info.patch.deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
+            start, linkBuffer.locationOf(m_ins[i].m_slowPathGenerator->label()));
     }
     
     for (unsigned i = 0; i < m_jsCalls.size(); ++i) {
index c5ca417..197a71c 100644 (file)
@@ -186,8 +186,7 @@ void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
                     baselineCodeBlockForCaller->findStubInfo(CodeOrigin(callBytecodeIndex));
                 RELEASE_ASSERT(stubInfo);
 
-                jumpTarget = stubInfo->callReturnLocation.labelAtOffset(
-                    stubInfo->patch.deltaCallToDone).executableAddress();
+                jumpTarget = stubInfo->doneLocation().executableAddress();
                 break;
             }
 
index cfc205c..a03589a 100644 (file)
@@ -197,9 +197,8 @@ void SpeculativeJIT::cachedGetById(
     
     CallSiteIndex callSite = m_jit.recordCallSiteAndGenerateExceptionHandlingOSRExitIfNeeded(codeOrigin, m_stream->size());
     JITGetByIdGenerator gen(
-        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters,
-        JSValueRegs(baseTagGPROrNone, basePayloadGPR),
-        JSValueRegs(resultTagGPR, resultPayloadGPR), type);
+        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, identifierUID(identifierNumber),
+        JSValueRegs(baseTagGPROrNone, basePayloadGPR), JSValueRegs(resultTagGPR, resultPayloadGPR), type);
     
     gen.generateFastPath(m_jit);
     
index 5348492..c8b4e94 100644 (file)
@@ -168,8 +168,8 @@ void SpeculativeJIT::cachedGetById(CodeOrigin codeOrigin, GPRReg baseGPR, GPRReg
         usedRegisters.set(resultGPR, false);
     }
     JITGetByIdGenerator gen(
-        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, JSValueRegs(baseGPR),
-        JSValueRegs(resultGPR), type);
+        m_jit.codeBlock(), codeOrigin, callSite, usedRegisters, identifierUID(identifierNumber),
+        JSValueRegs(baseGPR), JSValueRegs(resultGPR), type);
     gen.generateFastPath(m_jit);
     
     JITCompiler::JumpList slowCases;
index 47b9e18..15bfe91 100644 (file)
@@ -6183,21 +6183,19 @@ private:
 
                                 jit.addLinkTask(
                                     [=] (LinkBuffer& linkBuffer) {
-                                        CodeLocationCall callReturnLocation =
-                                            linkBuffer.locationOf(slowPathCall);
-                                        stubInfo->patch.deltaCallToDone =
-                                            CCallHelpers::differenceBetweenCodePtr(
-                                                callReturnLocation,
-                                                linkBuffer.locationOf(done));
-                                        stubInfo->patch.deltaCallToJump =
-                                            CCallHelpers::differenceBetweenCodePtr(
-                                                callReturnLocation,
-                                                linkBuffer.locationOf(jump));
-                                        stubInfo->callReturnLocation = callReturnLocation;
-                                        stubInfo->patch.deltaCallToSlowCase =
-                                            CCallHelpers::differenceBetweenCodePtr(
-                                                callReturnLocation,
-                                                linkBuffer.locationOf(slowPathBegin));
+                                        CodeLocationLabel start = linkBuffer.locationOf(jump);
+                                        stubInfo->patch.start = start;
+                                        ptrdiff_t inlineSize = MacroAssembler::differenceBetweenCodePtr(
+                                            start, linkBuffer.locationOf(done));
+                                        RELEASE_ASSERT(inlineSize >= 0);
+                                        stubInfo->patch.inlineSize = inlineSize;
+
+                                        stubInfo->patch.deltaFromStartToSlowPathCallLocation = MacroAssembler::differenceBetweenCodePtr(
+                                            start, linkBuffer.locationOf(slowPathCall));
+
+                                        stubInfo->patch.deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
+                                            start, linkBuffer.locationOf(slowPathBegin));
+
                                     });
                             });
                     });
@@ -7616,7 +7614,7 @@ private:
 
                 auto generator = Box<JITGetByIdGenerator>::create(
                     jit.codeBlock(), node->origin.semantic, callSiteIndex,
-                    params.unavailableRegisters(), JSValueRegs(params[1].gpr()),
+                    params.unavailableRegisters(), uid, JSValueRegs(params[1].gpr()),
                     JSValueRegs(params[0].gpr()), type);
 
                 generator->generateFastPath(jit);
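
Both the DFG and FTL link paths above do the same bookkeeping once the LinkBuffer knows final addresses: record the start label and turn the other labels into deltas relative to it. A self-contained sketch of that computation with raw pointers (not JSC types):

    #include <cstdint>

    struct RecordedLayout {
        uint8_t* start;
        uint32_t inlineSize;
        int32_t deltaFromStartToSlowPathCallLocation;
        int32_t deltaFromStartToSlowPathStart;
    };

    // 'done', 'slowPathCall', and 'slowPathStart' stand in for
    // linkBuffer.locationOf(...) results in the real code.
    RecordedLayout recordLayout(uint8_t* start, uint8_t* done, uint8_t* slowPathCall, uint8_t* slowPathStart)
    {
        RecordedLayout layout;
        layout.start = start;
        layout.inlineSize = static_cast<uint32_t>(done - start); // asserted non-negative in the real code
        layout.deltaFromStartToSlowPathCallLocation = static_cast<int32_t>(slowPathCall - start);
        layout.deltaFromStartToSlowPathStart = static_cast<int32_t>(slowPathStart - start);
        return layout;
    }
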
index 7f56a0f..0417939 100644 (file)
@@ -29,8 +29,9 @@
 #if ENABLE(JIT)
 
 #include "CodeBlock.h"
-#include "LinkBuffer.h"
+#include "InlineAccess.h"
 #include "JSCInlines.h"
+#include "LinkBuffer.h"
 #include "StructureStubInfo.h"
 
 namespace JSC {
@@ -69,25 +70,19 @@ JITByIdGenerator::JITByIdGenerator(
 
 void JITByIdGenerator::finalize(LinkBuffer& fastPath, LinkBuffer& slowPath)
 {
-    CodeLocationCall callReturnLocation = slowPath.locationOf(m_call);
-    m_stubInfo->callReturnLocation = callReturnLocation;
-    m_stubInfo->patch.deltaCheckImmToCall = MacroAssembler::differenceBetweenCodePtr(
-        fastPath.locationOf(m_structureImm), callReturnLocation);
-    m_stubInfo->patch.deltaCallToJump = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_structureCheck));
-#if USE(JSVALUE64)
-    m_stubInfo->patch.deltaCallToLoadOrStore = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_loadOrStore));
-#else
-    m_stubInfo->patch.deltaCallToTagLoadOrStore = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_tagLoadOrStore));
-    m_stubInfo->patch.deltaCallToPayloadLoadOrStore = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_loadOrStore));
-#endif
-    m_stubInfo->patch.deltaCallToSlowCase = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, slowPath.locationOf(m_slowPathBegin));
-    m_stubInfo->patch.deltaCallToDone = MacroAssembler::differenceBetweenCodePtr(
-        callReturnLocation, fastPath.locationOf(m_done));
+    ASSERT(m_start.isSet());
+    CodeLocationLabel start = fastPath.locationOf(m_start);
+    m_stubInfo->patch.start = start;
+
+    int32_t inlineSize = MacroAssembler::differenceBetweenCodePtr(
+        start, fastPath.locationOf(m_done));
+    ASSERT(inlineSize > 0);
+    m_stubInfo->patch.inlineSize = inlineSize;
+
+    m_stubInfo->patch.deltaFromStartToSlowPathCallLocation = MacroAssembler::differenceBetweenCodePtr(
+        start, slowPath.locationOf(m_slowPathCall));
+    m_stubInfo->patch.deltaFromStartToSlowPathStart = MacroAssembler::differenceBetweenCodePtr(
+        start, slowPath.locationOf(m_slowPathBegin));
 }
 
 void JITByIdGenerator::finalize(LinkBuffer& linkBuffer)
@@ -95,38 +90,30 @@ void JITByIdGenerator::finalize(LinkBuffer& linkBuffer)
     finalize(linkBuffer, linkBuffer);
 }
 
-void JITByIdGenerator::generateFastPathChecks(MacroAssembler& jit)
+void JITByIdGenerator::generateFastCommon(MacroAssembler& jit, size_t inlineICSize)
 {
-    m_structureCheck = jit.patchableBranch32WithPatch(
-        MacroAssembler::NotEqual,
-        MacroAssembler::Address(m_base.payloadGPR(), JSCell::structureIDOffset()),
-        m_structureImm, MacroAssembler::TrustedImm32(0));
+    m_start = jit.label();
+    size_t startSize = jit.m_assembler.buffer().codeSize();
+    m_slowPathJump = jit.jump();
+    size_t jumpSize = jit.m_assembler.buffer().codeSize() - startSize;
+    size_t nopsToEmitInBytes = inlineICSize - jumpSize;
+    jit.emitNops(nopsToEmitInBytes);
+    ASSERT(jit.m_assembler.buffer().codeSize() - startSize == inlineICSize);
+    m_done = jit.label();
 }
 
 JITGetByIdGenerator::JITGetByIdGenerator(
     CodeBlock* codeBlock, CodeOrigin codeOrigin, CallSiteIndex callSite, const RegisterSet& usedRegisters,
-    JSValueRegs base, JSValueRegs value, AccessType accessType)
-    : JITByIdGenerator(
-        codeBlock, codeOrigin, callSite, accessType, usedRegisters, base, value)
+    UniquedStringImpl* propertyName, JSValueRegs base, JSValueRegs value, AccessType accessType)
+    : JITByIdGenerator(codeBlock, codeOrigin, callSite, accessType, usedRegisters, base, value)
+    , m_isLengthAccess(propertyName == codeBlock->vm()->propertyNames->length.impl())
 {
     RELEASE_ASSERT(base.payloadGPR() != value.tagGPR());
 }
 
 void JITGetByIdGenerator::generateFastPath(MacroAssembler& jit)
 {
-    generateFastPathChecks(jit);
-    
-#if USE(JSVALUE64)
-    m_loadOrStore = jit.load64WithCompactAddressOffsetPatch(
-        MacroAssembler::Address(m_base.payloadGPR(), 0), m_value.payloadGPR()).label();
-#else
-    m_tagLoadOrStore = jit.load32WithCompactAddressOffsetPatch(
-        MacroAssembler::Address(m_base.payloadGPR(), 0), m_value.tagGPR()).label();
-    m_loadOrStore = jit.load32WithCompactAddressOffsetPatch(
-        MacroAssembler::Address(m_base.payloadGPR(), 0), m_value.payloadGPR()).label();
-#endif
-    
-    m_done = jit.label();
+    generateFastCommon(jit, m_isLengthAccess ? InlineAccess::sizeForLengthAccess() : InlineAccess::sizeForPropertyAccess());
 }
 
 JITPutByIdGenerator::JITPutByIdGenerator(
@@ -143,19 +130,7 @@ JITPutByIdGenerator::JITPutByIdGenerator(
 
 void JITPutByIdGenerator::generateFastPath(MacroAssembler& jit)
 {
-    generateFastPathChecks(jit);
-    
-#if USE(JSVALUE64)
-    m_loadOrStore = jit.store64WithAddressOffsetPatch(
-        m_value.payloadGPR(), MacroAssembler::Address(m_base.payloadGPR(), 0)).label();
-#else
-    m_tagLoadOrStore = jit.store32WithAddressOffsetPatch(
-        m_value.tagGPR(), MacroAssembler::Address(m_base.payloadGPR(), 0)).label();
-    m_loadOrStore = jit.store32WithAddressOffsetPatch(
-        m_value.payloadGPR(), MacroAssembler::Address(m_base.payloadGPR(), 0)).label();
-#endif
-    
-    m_done = jit.label();
+    generateFastCommon(jit, InlineAccess::sizeForPropertyReplace());
 }
 
 V_JITOperation_ESsiJJI JITPutByIdGenerator::slowPathFunction()
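
At compile time, generateFastCommon() above reserves the inline region as nothing but a jump to the slow path followed by nops that pad it out to the chosen InlineAccess size; repatching later overwrites that region in place. A self-contained sketch of the initial shape of such a region (x86-flavored bytes, purely illustrative):

    #include <cstdint>
    #include <vector>

    constexpr uint8_t kJmpRel32 = 0xe9; // x86 'jmp rel32', for illustration only
    constexpr uint8_t kNop = 0x90;

    // Assumes reservedSize covers at least the 5 bytes the jump needs, which
    // the per-platform sizeFor*() constants are chosen to guarantee.
    std::vector<uint8_t> initialInlineRegion(size_t reservedSize, int32_t relOffsetToSlowPath)
    {
        std::vector<uint8_t> code;
        code.push_back(kJmpRel32);
        for (int shift = 0; shift < 32; shift += 8)
            code.push_back(static_cast<uint8_t>((relOffsetToSlowPath >> shift) & 0xff));
        code.resize(reservedSize, kNop); // nop padding up to the reserved inline size
        return code;
    }
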
index ff15a18..01ce068 100644 (file)
@@ -68,30 +68,30 @@ public:
     void reportSlowPathCall(MacroAssembler::Label slowPathBegin, MacroAssembler::Call call)
     {
         m_slowPathBegin = slowPathBegin;
-        m_call = call;
+        m_slowPathCall = call;
     }
     
     MacroAssembler::Label slowPathBegin() const { return m_slowPathBegin; }
-    MacroAssembler::Jump slowPathJump() const { return m_structureCheck.m_jump; }
+    MacroAssembler::Jump slowPathJump() const
+    {
+        ASSERT(m_slowPathJump.isSet());
+        return m_slowPathJump;
+    }
 
     void finalize(LinkBuffer& fastPathLinkBuffer, LinkBuffer& slowPathLinkBuffer);
     void finalize(LinkBuffer&);
     
 protected:
-    void generateFastPathChecks(MacroAssembler&);
+    void generateFastCommon(MacroAssembler&, size_t size);
     
     JSValueRegs m_base;
     JSValueRegs m_value;
     
-    MacroAssembler::DataLabel32 m_structureImm;
-    MacroAssembler::PatchableJump m_structureCheck;
-    AssemblerLabel m_loadOrStore;
-#if USE(JSVALUE32_64)
-    AssemblerLabel m_tagLoadOrStore;
-#endif
+    MacroAssembler::Label m_start;
     MacroAssembler::Label m_done;
     MacroAssembler::Label m_slowPathBegin;
-    MacroAssembler::Call m_call;
+    MacroAssembler::Call m_slowPathCall;
+    MacroAssembler::Jump m_slowPathJump;
 };
 
 class JITGetByIdGenerator : public JITByIdGenerator {
@@ -99,10 +99,13 @@ public:
     JITGetByIdGenerator() { }
 
     JITGetByIdGenerator(
-        CodeBlock*, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters, JSValueRegs base,
-        JSValueRegs value, AccessType);
+        CodeBlock*, CodeOrigin, CallSiteIndex, const RegisterSet& usedRegisters, UniquedStringImpl* propertyName,
+        JSValueRegs base, JSValueRegs value, AccessType);
     
     void generateFastPath(MacroAssembler&);
+
+private:
+    bool m_isLengthAccess;
 };
 
 class JITPutByIdGenerator : public JITByIdGenerator {
index 786e118..4383a13 100644 (file)
@@ -222,7 +222,7 @@ JITGetByIdGenerator JIT::emitGetByValWithCachedId(Instruction* currentInstructio
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
+        propertyName.impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
     gen.generateFastPath(*this);
 
     fastDoneCase = jump();
@@ -571,6 +571,7 @@ void JIT::emit_op_try_get_by_id(Instruction* currentInstruction)
 {
     int resultVReg = currentInstruction[1].u.operand;
     int baseVReg = currentInstruction[2].u.operand;
+    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));
 
     emitGetVirtualRegister(baseVReg, regT0);
 
@@ -578,7 +579,7 @@ void JIT::emit_op_try_get_by_id(Instruction* currentInstruction)
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetPure);
+        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::GetPure);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIds.append(gen);
@@ -619,7 +620,7 @@ void JIT::emit_op_get_by_id(Instruction* currentInstruction)
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(m_bytecodeOffset), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
+        ident->impl(), JSValueRegs(regT0), JSValueRegs(regT0), AccessType::Get);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIds.append(gen);
index 7a933ef..1d65624 100644 (file)
@@ -292,7 +292,7 @@ JITGetByIdGenerator JIT::emitGetByValWithCachedId(Instruction* currentInstructio
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
+        propertyName.impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
     gen.generateFastPath(*this);
 
     fastDoneCase = jump();
@@ -587,13 +587,14 @@ void JIT::emit_op_try_get_by_id(Instruction* currentInstruction)
 {
     int dst = currentInstruction[1].u.operand;
     int base = currentInstruction[2].u.operand;
+    const Identifier* ident = &(m_codeBlock->identifier(currentInstruction[3].u.operand));
 
     emitLoad(base, regT1, regT0);
     emitJumpSlowCaseIfNotJSCell(base, regT1);
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::GetPure);
+        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::GetPure);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIds.append(gen);
@@ -634,7 +635,7 @@ void JIT::emit_op_get_by_id(Instruction* currentInstruction)
 
     JITGetByIdGenerator gen(
         m_codeBlock, CodeOrigin(m_bytecodeOffset), CallSiteIndex(currentInstruction), RegisterSet::stubUnavailableRegisters(),
-        JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
+        ident->impl(), JSValueRegs::payloadOnly(regT0), JSValueRegs(regT1, regT0), AccessType::Get);
     gen.generateFastPath(*this);
     addSlowCase(gen.slowPathJump());
     m_getByIds.append(gen);
index 8298453..131663d 100644 (file)
@@ -38,6 +38,7 @@
 #include "GCAwareJITStubRoutine.h"
 #include "GetterSetter.h"
 #include "ICStats.h"
+#include "InlineAccess.h"
 #include "JIT.h"
 #include "JITInlines.h"
 #include "LinkBuffer.h"
@@ -90,95 +91,6 @@ static void repatchCall(CodeBlock* codeBlock, CodeLocationCall call, FunctionPtr
     MacroAssembler::repatchCall(call, newCalleeFunction);
 }
 
-static void repatchByIdSelfAccess(
-    CodeBlock* codeBlock, StructureStubInfo& stubInfo, Structure* structure,
-    PropertyOffset offset, const FunctionPtr& slowPathFunction,
-    bool compact)
-{
-    // Only optimize once!
-    repatchCall(codeBlock, stubInfo.callReturnLocation, slowPathFunction);
-
-    // Patch the structure check & the offset of the load.
-    MacroAssembler::repatchInt32(
-        stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall),
-        bitwise_cast<int32_t>(structure->id()));
-#if USE(JSVALUE64)
-    if (compact)
-        MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToLoadOrStore), offsetRelativeToBase(offset));
-    else
-        MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToLoadOrStore), offsetRelativeToBase(offset));
-#elif USE(JSVALUE32_64)
-    if (compact) {
-        MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag));
-        MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload));
-    } else {
-        MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag));
-        MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), offsetRelativeToBase(offset) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload));
-    }
-#endif
-}
-
-static void resetGetByIDCheckAndLoad(StructureStubInfo& stubInfo)
-{
-    CodeLocationDataLabel32 structureLabel = stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall);
-    if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
-        MacroAssembler::revertJumpReplacementToPatchableBranch32WithPatch(
-            MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(structureLabel),
-            MacroAssembler::Address(
-                static_cast<MacroAssembler::RegisterID>(stubInfo.patch.baseGPR),
-                JSCell::structureIDOffset()),
-            static_cast<int32_t>(unusedPointer));
-    }
-    MacroAssembler::repatchInt32(structureLabel, static_cast<int32_t>(unusedPointer));
-#if USE(JSVALUE64)
-    MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToLoadOrStore), 0);
-#else
-    MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), 0);
-    MacroAssembler::repatchCompact(stubInfo.callReturnLocation.dataLabelCompactAtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), 0);
-#endif
-}
-
-static void resetPutByIDCheckAndLoad(StructureStubInfo& stubInfo)
-{
-    CodeLocationDataLabel32 structureLabel = stubInfo.callReturnLocation.dataLabel32AtOffset(-(intptr_t)stubInfo.patch.deltaCheckImmToCall);
-    if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
-        MacroAssembler::revertJumpReplacementToPatchableBranch32WithPatch(
-            MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(structureLabel),
-            MacroAssembler::Address(
-                static_cast<MacroAssembler::RegisterID>(stubInfo.patch.baseGPR),
-                JSCell::structureIDOffset()),
-            static_cast<int32_t>(unusedPointer));
-    }
-    MacroAssembler::repatchInt32(structureLabel, static_cast<int32_t>(unusedPointer));
-#if USE(JSVALUE64)
-    MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToLoadOrStore), 0);
-#else
-    MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToTagLoadOrStore), 0);
-    MacroAssembler::repatchInt32(stubInfo.callReturnLocation.dataLabel32AtOffset(stubInfo.patch.deltaCallToPayloadLoadOrStore), 0);
-#endif
-}
-
-static void replaceWithJump(StructureStubInfo& stubInfo, const MacroAssemblerCodePtr target)
-{
-    RELEASE_ASSERT(target);
-    
-    if (MacroAssembler::canJumpReplacePatchableBranch32WithPatch()) {
-        MacroAssembler::replaceWithJump(
-            MacroAssembler::startOfPatchableBranch32WithPatchOnAddress(
-                stubInfo.callReturnLocation.dataLabel32AtOffset(
-                    -(intptr_t)stubInfo.patch.deltaCheckImmToCall)),
-            CodeLocationLabel(target));
-        return;
-    }
-
-    resetGetByIDCheckAndLoad(stubInfo);
-    
-    MacroAssembler::repatchJump(
-        stubInfo.callReturnLocation.jumpAtOffset(
-            stubInfo.patch.deltaCallToJump),
-        CodeLocationLabel(target));
-}
-
 enum InlineCacheAction {
     GiveUpOnCache,
     RetryCacheLater,
@@ -241,9 +153,21 @@ static InlineCacheAction tryCacheGetByID(ExecState* exec, JSValue baseValue, con
     std::unique_ptr<AccessCase> newCase;
 
     if (propertyName == vm.propertyNames->length) {
-        if (isJSArray(baseValue))
+        if (isJSArray(baseValue)) {
+            if (stubInfo.cacheType == CacheType::Unset
+                && slot.slotBase() == baseValue
+                && InlineAccess::isCacheableArrayLength(stubInfo, jsCast<JSArray*>(baseValue))) {
+
+                bool generatedCodeInline = InlineAccess::generateArrayLength(*codeBlock->vm(), stubInfo, jsCast<JSArray*>(baseValue));
+                if (generatedCodeInline) {
+                    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
+                    stubInfo.initArrayLength();
+                    return RetryCacheLater;
+                }
+            }
+
             newCase = AccessCase::getLength(vm, codeBlock, AccessCase::ArrayLength);
-        else if (isJSString(baseValue))
+        } else if (isJSString(baseValue))
             newCase = AccessCase::getLength(vm, codeBlock, AccessCase::StringLength);
         else if (DirectArguments* arguments = jsDynamicCast<DirectArguments*>(baseValue)) {
             // If there were overrides, then we can handle this as a normal property load! Guarding
@@ -276,22 +200,23 @@ static InlineCacheAction tryCacheGetByID(ExecState* exec, JSValue baseValue, con
         InlineCacheAction action = actionForCell(vm, baseCell);
         if (action != AttemptToCache)
             return action;
-        
+
         // Optimize self access.
         if (stubInfo.cacheType == CacheType::Unset
             && slot.isCacheableValue()
             && slot.slotBase() == baseValue
             && !slot.watchpointSet()
-            && isInlineOffset(slot.cachedOffset())
-            && MacroAssembler::isCompactPtrAlignedAddressOffset(maxOffsetRelativeToBase(slot.cachedOffset()))
-            && action == AttemptToCache
             && !structure->needImpurePropertyWatchpoint()
             && !loadTargetFromProxy) {
-            LOG_IC((ICEvent::GetByIdSelfPatch, structure->classInfo(), propertyName));
-            structure->startWatchingPropertyForReplacements(vm, slot.cachedOffset());
-            repatchByIdSelfAccess(codeBlock, stubInfo, structure, slot.cachedOffset(), appropriateOptimizingGetByIdFunction(kind), true);
-            stubInfo.initGetByIdSelf(codeBlock, structure, slot.cachedOffset());
-            return RetryCacheLater;
+
+            bool generatedCodeInline = InlineAccess::generateSelfPropertyAccess(*codeBlock->vm(), stubInfo, structure, slot.cachedOffset());
+            if (generatedCodeInline) {
+                LOG_IC((ICEvent::GetByIdSelfPatch, structure->classInfo(), propertyName));
+                structure->startWatchingPropertyForReplacements(vm, slot.cachedOffset());
+                repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
+                stubInfo.initGetByIdSelf(codeBlock, structure, slot.cachedOffset());
+                return RetryCacheLater;
+            }
         }
 
         PropertyOffset offset = slot.isUnset() ? invalidOffset : slot.cachedOffset();
@@ -370,7 +295,7 @@ static InlineCacheAction tryCacheGetByID(ExecState* exec, JSValue baseValue, con
         LOG_IC((ICEvent::GetByIdReplaceWithJump, baseValue.classInfoOrNull(), propertyName));
         
         RELEASE_ASSERT(result.code());
-        replaceWithJump(stubInfo, result.code());
+        InlineAccess::rewireStubAsJump(exec->vm(), stubInfo, CodeLocationLabel(result.code()));
     }
     
     return result.shouldGiveUpNow() ? GiveUpOnCache : RetryCacheLater;
@@ -382,7 +307,7 @@ void repatchGetByID(ExecState* exec, JSValue baseValue, const Identifier& proper
     GCSafeConcurrentJITLocker locker(exec->codeBlock()->m_lock, exec->vm().heap);
     
     if (tryCacheGetByID(exec, baseValue, propertyName, slot, stubInfo, kind) == GiveUpOnCache)
-        repatchCall(exec->codeBlock(), stubInfo.callReturnLocation, appropriateGenericGetByIdFunction(kind));
+        repatchCall(exec->codeBlock(), stubInfo.slowPathCallLocation(), appropriateGenericGetByIdFunction(kind));
 }
 
 static V_JITOperation_ESsiJJI appropriateGenericPutByIdFunction(const PutPropertySlot &slot, PutKind putKind)
@@ -433,18 +358,17 @@ static InlineCacheAction tryCachePutByID(ExecState* exec, JSValue baseValue, Str
             structure->didCachePropertyReplacement(vm, slot.cachedOffset());
         
             if (stubInfo.cacheType == CacheType::Unset
-                && isInlineOffset(slot.cachedOffset())
-                && MacroAssembler::isPtrAlignedAddressOffset(maxOffsetRelativeToBase(slot.cachedOffset()))
+                && InlineAccess::canGenerateSelfPropertyReplace(stubInfo, slot.cachedOffset())
                 && !structure->needImpurePropertyWatchpoint()
                 && !structure->inferredTypeFor(ident.impl())) {
                 
-                LOG_IC((ICEvent::PutByIdSelfPatch, structure->classInfo(), ident));
-                
-                repatchByIdSelfAccess(
-                    codeBlock, stubInfo, structure, slot.cachedOffset(),
-                    appropriateOptimizingPutByIdFunction(slot, putKind), false);
-                stubInfo.initPutByIdReplace(codeBlock, structure, slot.cachedOffset());
-                return RetryCacheLater;
+                bool generatedCodeInline = InlineAccess::generateSelfPropertyReplace(vm, stubInfo, structure, slot.cachedOffset());
+                if (generatedCodeInline) {
+                    LOG_IC((ICEvent::PutByIdSelfPatch, structure->classInfo(), ident));
+                    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingPutByIdFunction(slot, putKind));
+                    stubInfo.initPutByIdReplace(codeBlock, structure, slot.cachedOffset());
+                    return RetryCacheLater;
+                }
             }
 
             newCase = AccessCase::replace(vm, codeBlock, structure, slot.cachedOffset());
@@ -524,11 +448,8 @@ static InlineCacheAction tryCachePutByID(ExecState* exec, JSValue baseValue, Str
         LOG_IC((ICEvent::PutByIdReplaceWithJump, structure->classInfo(), ident));
         
         RELEASE_ASSERT(result.code());
-        resetPutByIDCheckAndLoad(stubInfo);
-        MacroAssembler::repatchJump(
-            stubInfo.callReturnLocation.jumpAtOffset(
-                stubInfo.patch.deltaCallToJump),
-            CodeLocationLabel(result.code()));
+
+        InlineAccess::rewireStubAsJump(vm, stubInfo, CodeLocationLabel(result.code()));
     }
     
     return result.shouldGiveUpNow() ? GiveUpOnCache : RetryCacheLater;
@@ -540,7 +461,7 @@ void repatchPutByID(ExecState* exec, JSValue baseValue, Structure* structure, co
     GCSafeConcurrentJITLocker locker(exec->codeBlock()->m_lock, exec->vm().heap);
     
     if (tryCachePutByID(exec, baseValue, structure, propertyName, slot, stubInfo, putKind) == GiveUpOnCache)
-        repatchCall(exec->codeBlock(), stubInfo.callReturnLocation, appropriateGenericPutByIdFunction(slot, putKind));
+        repatchCall(exec->codeBlock(), stubInfo.slowPathCallLocation(), appropriateGenericPutByIdFunction(slot, putKind));
 }
 
 static InlineCacheAction tryRepatchIn(
@@ -586,8 +507,9 @@ static InlineCacheAction tryRepatchIn(
         LOG_IC((ICEvent::InReplaceWithJump, structure->classInfo(), ident));
         
         RELEASE_ASSERT(result.code());
+
         MacroAssembler::repatchJump(
-            stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump),
+            stubInfo.patchableJumpForIn(),
             CodeLocationLabel(result.code()));
     }
     
@@ -600,7 +522,7 @@ void repatchIn(
 {
     SuperSamplerScope superSamplerScope(false);
     if (tryRepatchIn(exec, base, ident, wasFound, slot, stubInfo) == GiveUpOnCache)
-        repatchCall(exec->codeBlock(), stubInfo.callReturnLocation, operationIn);
+        repatchCall(exec->codeBlock(), stubInfo.slowPathCallLocation(), operationIn);
 }
 
 static void linkSlowFor(VM*, CallLinkInfo& callLinkInfo, MacroAssemblerCodeRef codeRef)
@@ -972,14 +894,13 @@ void linkPolymorphicCall(
 
 void resetGetByID(CodeBlock* codeBlock, StructureStubInfo& stubInfo, GetByIDKind kind)
 {
-    repatchCall(codeBlock, stubInfo.callReturnLocation, appropriateOptimizingGetByIdFunction(kind));
-    resetGetByIDCheckAndLoad(stubInfo);
-    MacroAssembler::repatchJump(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump), stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
+    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), appropriateOptimizingGetByIdFunction(kind));
+    InlineAccess::rewireStubAsJump(*codeBlock->vm(), stubInfo, stubInfo.slowPathStartLocation());
 }
 
 void resetPutByID(CodeBlock* codeBlock, StructureStubInfo& stubInfo)
 {
-    V_JITOperation_ESsiJJI unoptimizedFunction = bitwise_cast<V_JITOperation_ESsiJJI>(readCallTarget(codeBlock, stubInfo.callReturnLocation).executableAddress());
+    V_JITOperation_ESsiJJI unoptimizedFunction = bitwise_cast<V_JITOperation_ESsiJJI>(readCallTarget(codeBlock, stubInfo.slowPathCallLocation()).executableAddress());
     V_JITOperation_ESsiJJI optimizedFunction;
     if (unoptimizedFunction == operationPutByIdStrict || unoptimizedFunction == operationPutByIdStrictOptimize)
         optimizedFunction = operationPutByIdStrictOptimize;
@@ -991,14 +912,14 @@ void resetPutByID(CodeBlock* codeBlock, StructureStubInfo& stubInfo)
         ASSERT(unoptimizedFunction == operationPutByIdDirectNonStrict || unoptimizedFunction == operationPutByIdDirectNonStrictOptimize);
         optimizedFunction = operationPutByIdDirectNonStrictOptimize;
     }
-    repatchCall(codeBlock, stubInfo.callReturnLocation, optimizedFunction);
-    resetPutByIDCheckAndLoad(stubInfo);
-    MacroAssembler::repatchJump(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump), stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
+
+    repatchCall(codeBlock, stubInfo.slowPathCallLocation(), optimizedFunction);
+    InlineAccess::rewireStubAsJump(*codeBlock->vm(), stubInfo, stubInfo.slowPathStartLocation());
 }
 
 void resetIn(CodeBlock*, StructureStubInfo& stubInfo)
 {
-    MacroAssembler::repatchJump(stubInfo.callReturnLocation.jumpAtOffset(stubInfo.patch.deltaCallToJump), stubInfo.callReturnLocation.labelAtOffset(stubInfo.patch.deltaCallToSlowCase));
+    MacroAssembler::repatchJump(stubInfo.patchableJumpForIn(), stubInfo.slowPathStartLocation());
 }
 
 } // namespace JSC
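
The Repatch.cpp changes above all follow one policy: while the stub is unset, try to regenerate the IC directly into the inline region; once that is no longer possible, or a polymorphic stub has been built, rewire the inline region into a single jump to the out-of-line code, and resetting points that jump back at the slow path. A self-contained sketch of that policy with placeholder hooks (none of these names are JSC APIs):

    enum class CacheOutcome { GeneratedInline, RewiredToStub, GaveUp };

    struct FakeStubInfo { bool unset = true; };

    // Placeholders standing in for InlineAccess::generate*(), PolymorphicAccess
    // regeneration, and InlineAccess::rewireStubAsJump() respectively.
    bool tryGenerateInline(FakeStubInfo&) { return false; }
    void* buildOutOfLineStub(FakeStubInfo&) { return nullptr; }
    void rewireAsJump(FakeStubInfo&, void*) { }

    CacheOutcome repatchProperty(FakeStubInfo& stubInfo)
    {
        if (stubInfo.unset && tryGenerateInline(stubInfo)) {
            stubInfo.unset = false;       // e.g. initGetByIdSelf() / initPutByIdReplace()
            return CacheOutcome::GeneratedInline;
        }
        if (void* stub = buildOutOfLineStub(stubInfo)) {
            rewireAsJump(stubInfo, stub); // inline region becomes one jump to the stub
            return CacheOutcome::RewiredToStub;
        }
        return CacheOutcome::GaveUp;
    }
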