Support arm64 CPUs with a 32-bit address space
author: keith_miller@apple.com <keith_miller@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Tue, 16 Oct 2018 07:19:13 +0000 (07:19 +0000)
committer: keith_miller@apple.com <keith_miller@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Tue, 16 Oct 2018 07:19:13 +0000 (07:19 +0000)
https://bugs.webkit.org/show_bug.cgi?id=190273

Reviewed by Michael Saboff.

Source/JavaScriptCore:

This patch adds support for arm64_32 in the LLInt. To make this
work, we needed to add a new type that reflects the size of a CPU
register. This type is called CPURegister, or UCPURegister for the
unsigned version. Most places that used void* or intptr_t to refer
to a register have been changed to use this new type.
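
For reference, here is the new type as it is defined in
assembler/CPU.h, together with a hypothetical call site. The
typedefs are verbatim from the diff below; copyRegisters is only an
illustrative sketch, modeled on the copyMemory loop in
heap/MachineStackMarker.cpp:

    // Sketch; assumes <cstdint>, <cstddef>, and WebKit's USE() macro.
    // arm64_32 is a 64-bit CPU with a 32-bit address space, so it
    // takes the JSVALUE64 branch and gets 64-bit register types even
    // though sizeof(void*) == 4.
    #if USE(JSVALUE64)
    using CPURegister = int64_t;
    using UCPURegister = uint64_t;
    #else
    using CPURegister = int32_t;
    using UCPURegister = uint32_t;
    #endif

    // Hypothetical call site: copy register-width words. Written in
    // terms of intptr_t, this loop would move only 4 bytes per word
    // on arm64_32 and drop the high half of each saved register.
    static void copyRegisters(CPURegister* dst, const CPURegister* src, size_t count)
    {
        while (count--)
            *dst++ = *src++;
    }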

* JavaScriptCore.xcodeproj/project.pbxproj:
* assembler/ARM64Assembler.h:
(JSC::isInt):
(JSC::is4ByteAligned):
(JSC::PairPostIndex::PairPostIndex):
(JSC::PairPreIndex::PairPreIndex):
(JSC::ARM64Assembler::readPointer):
(JSC::ARM64Assembler::readCallTarget):
(JSC::ARM64Assembler::computeJumpType):
(JSC::ARM64Assembler::linkCompareAndBranch):
(JSC::ARM64Assembler::linkConditionalBranch):
(JSC::ARM64Assembler::linkTestAndBranch):
(JSC::ARM64Assembler::loadRegisterLiteral):
(JSC::ARM64Assembler::loadStoreRegisterPairPostIndex):
(JSC::ARM64Assembler::loadStoreRegisterPairPreIndex):
(JSC::ARM64Assembler::loadStoreRegisterPairOffset):
(JSC::ARM64Assembler::loadStoreRegisterPairNonTemporal):
(JSC::isInt7): Deleted.
(JSC::isInt11): Deleted.
* assembler/CPU.h:
(JSC::isAddress64Bit):
(JSC::isAddress32Bit):
* assembler/MacroAssembler.h:
(JSC::MacroAssembler::shouldBlind):
* assembler/MacroAssemblerARM64.cpp:
(JSC::MacroAssemblerARM64::collectCPUFeatures):
* assembler/MacroAssemblerARM64.h:
(JSC::MacroAssemblerARM64::load):
(JSC::MacroAssemblerARM64::store):
(JSC::MacroAssemblerARM64::isInIntRange): Deleted.
* assembler/Printer.h:
* assembler/ProbeContext.h:
(JSC::Probe::CPUState::gpr):
(JSC::Probe::CPUState::spr):
(JSC::Probe::Context::gpr):
(JSC::Probe::Context::spr):
* b3/B3ConstPtrValue.h:
* b3/B3StackmapSpecial.cpp:
(JSC::B3::StackmapSpecial::isArgValidForRep):
* b3/air/AirArg.h:
(JSC::B3::Air::Arg::stackSlot const):
(JSC::B3::Air::Arg::special const):
* b3/air/testair.cpp:
* b3/testb3.cpp:
(JSC::B3::testStoreConstantPtr):
(JSC::B3::testInterpreter):
(JSC::B3::testAddShl32):
(JSC::B3::testLoadBaseIndexShift32):
* bindings/ScriptFunctionCall.cpp:
(Deprecated::ScriptCallArgumentHandler::appendArgument):
* bindings/ScriptFunctionCall.h:
* bytecode/CodeBlock.cpp:
(JSC::roundCalleeSaveSpaceAsVirtualRegisters):
* dfg/DFGOSRExit.cpp:
(JSC::DFG::restoreCalleeSavesFor):
(JSC::DFG::saveCalleeSavesFor):
(JSC::DFG::restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer):
(JSC::DFG::copyCalleeSavesToVMEntryFrameCalleeSavesBuffer):
* dfg/DFGOSRExitCompilerCommon.cpp:
(JSC::DFG::reifyInlinedCallFrames):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* disassembler/UDis86Disassembler.cpp:
(JSC::tryToDisassembleWithUDis86):
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::compileWeakMapGet):
* heap/MachineStackMarker.cpp:
(JSC::copyMemory):
* interpreter/CallFrame.h:
(JSC::ExecState::returnPC const):
(JSC::ExecState::hasReturnPC const):
(JSC::ExecState::clearReturnPC):
(JSC::ExecState::returnPCOffset):
(JSC::ExecState::isGlobalExec const):
(JSC::ExecState::setReturnPC):
* interpreter/CalleeBits.h:
(JSC::CalleeBits::boxWasm):
(JSC::CalleeBits::isWasm const):
(JSC::CalleeBits::asWasmCallee const):
* interpreter/Interpreter.cpp:
(JSC::UnwindFunctor::copyCalleeSavesToEntryFrameCalleeSavesBuffer const):
* interpreter/VMEntryRecord.h:
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::clearStackFrame):
* jit/RegisterAtOffset.h:
(JSC::RegisterAtOffset::offsetAsIndex const):
* jit/RegisterAtOffsetList.cpp:
(JSC::RegisterAtOffsetList::RegisterAtOffsetList):
* llint/LLIntData.cpp:
(JSC::LLInt::Data::performAssertions):
* llint/LLIntOfflineAsmConfig.h:
* llint/LowLevelInterpreter.asm:
* llint/LowLevelInterpreter64.asm:
* offlineasm/arm64.rb:
* offlineasm/asm.rb:
* offlineasm/ast.rb:
* offlineasm/backends.rb:
* offlineasm/parser.rb:
* offlineasm/x86.rb:
* runtime/BasicBlockLocation.cpp:
(JSC::BasicBlockLocation::dumpData const):
(JSC::BasicBlockLocation::emitExecuteCode const):
* runtime/BasicBlockLocation.h:
* runtime/HasOwnPropertyCache.h:
* runtime/JSBigInt.cpp:
(JSC::JSBigInt::inplaceMultiplyAdd):
(JSC::JSBigInt::digitDiv):
* runtime/JSBigInt.h:
* runtime/JSObject.h:
* runtime/Options.cpp:
(JSC::jitEnabledByDefault):
* runtime/Options.h:
* runtime/RegExp.cpp:
(JSC::RegExp::printTraceData):
* runtime/SamplingProfiler.cpp:
(JSC::CFrameWalker::walk):
* runtime/SlowPathReturnType.h:
(JSC::encodeResult):
(JSC::decodeResult):
* tools/SigillCrashAnalyzer.cpp:
(JSC::SigillCrashAnalyzer::dumpCodeBlock):

Source/WebCore:

Fix missing namespace annotation.

* cssjit/SelectorCompiler.cpp:
(WebCore::SelectorCompiler::SelectorCodeGenerator::generateAddStyleRelation):

Source/WTF:

Use WTF_CPU_ADDRESS64/32 to decide whether the system is running
with a 64-bit or 32-bit address space (as on arm64_32); see the
sketch after the file list below.

* wtf/MathExtras.h:
(getLSBSet):
* wtf/Platform.h:
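
A minimal sketch of the address-width detection. The
WTF_CPU_ADDRESS64/WTF_CPU_ADDRESS32 names and the CPU(ADDRESS64)
tests come from this patch; the exact preprocessor condition below
is an assumption based on pointer size:

    /* Assumed sketch for wtf/Platform.h: derive the address-space
       width from the pointer size, which is 4 bytes on arm64_32 even
       though the CPU is 64-bit. */
    #include <stdint.h>
    #if UINTPTR_MAX > UINT32_MAX
    #define WTF_CPU_ADDRESS64 1
    #else
    #define WTF_CPU_ADDRESS32 1
    #endif

    /* Consumers test these through WTF's CPU() macro, e.g.: */
    #if CPU(ADDRESS64)
    /* pointers are 8 bytes */
    #endif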

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@237173 268f45cc-cd09-0410-ab3c-d52691b4dbfc

57 files changed:
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/assembler/ARM64Assembler.h
Source/JavaScriptCore/assembler/CPU.h
Source/JavaScriptCore/assembler/MacroAssembler.h
Source/JavaScriptCore/assembler/MacroAssemblerARM64.cpp
Source/JavaScriptCore/assembler/MacroAssemblerARM64.h
Source/JavaScriptCore/assembler/Printer.h
Source/JavaScriptCore/assembler/ProbeContext.h
Source/JavaScriptCore/b3/B3ConstPtrValue.h
Source/JavaScriptCore/b3/B3StackmapSpecial.cpp
Source/JavaScriptCore/b3/air/AirArg.h
Source/JavaScriptCore/b3/air/testair.cpp
Source/JavaScriptCore/b3/testb3.cpp
Source/JavaScriptCore/bindings/ScriptFunctionCall.cpp
Source/JavaScriptCore/bindings/ScriptFunctionCall.h
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/dfg/DFGOSRExit.cpp
Source/JavaScriptCore/dfg/DFGOSRExitCompilerCommon.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
Source/JavaScriptCore/disassembler/UDis86Disassembler.cpp
Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
Source/JavaScriptCore/heap/MachineStackMarker.cpp
Source/JavaScriptCore/interpreter/CallFrame.h
Source/JavaScriptCore/interpreter/CalleeBits.h
Source/JavaScriptCore/interpreter/Interpreter.cpp
Source/JavaScriptCore/interpreter/VMEntryRecord.h
Source/JavaScriptCore/jit/AssemblyHelpers.h
Source/JavaScriptCore/jit/RegisterAtOffset.h
Source/JavaScriptCore/jit/RegisterAtOffsetList.cpp
Source/JavaScriptCore/llint/LLIntData.cpp
Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h
Source/JavaScriptCore/llint/LowLevelInterpreter.asm
Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
Source/JavaScriptCore/offlineasm/arm64.rb
Source/JavaScriptCore/offlineasm/asm.rb
Source/JavaScriptCore/offlineasm/ast.rb
Source/JavaScriptCore/offlineasm/backends.rb
Source/JavaScriptCore/offlineasm/parser.rb
Source/JavaScriptCore/offlineasm/x86.rb
Source/JavaScriptCore/runtime/BasicBlockLocation.cpp
Source/JavaScriptCore/runtime/BasicBlockLocation.h
Source/JavaScriptCore/runtime/HasOwnPropertyCache.h
Source/JavaScriptCore/runtime/JSBigInt.cpp
Source/JavaScriptCore/runtime/JSBigInt.h
Source/JavaScriptCore/runtime/JSObject.h
Source/JavaScriptCore/runtime/Options.cpp
Source/JavaScriptCore/runtime/Options.h
Source/JavaScriptCore/runtime/RegExp.cpp
Source/JavaScriptCore/runtime/SamplingProfiler.cpp
Source/JavaScriptCore/runtime/SlowPathReturnType.h
Source/JavaScriptCore/tools/SigillCrashAnalyzer.cpp
Source/WTF/ChangeLog
Source/WTF/wtf/MathExtras.h
Source/WTF/wtf/Platform.h
Source/WebCore/ChangeLog
Source/WebCore/cssjit/SelectorCompiler.cpp

index 2e37e93..8d50dc3 100644 (file)
                        );
                        runOnlyForDeploymentPostprocessing = 0;
                        shellPath = /bin/sh;
-                       shellScript = "if [[ \"${ACTION}\" == \"installhdrs\" ]]; then\n    exit 0\nfi\n\ncd \"${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore\"\n\n/usr/bin/env ruby JavaScriptCore/offlineasm/asm.rb \"-I${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore\" JavaScriptCore/llint/LowLevelInterpreter.asm \"${BUILT_PRODUCTS_DIR}/JSCLLIntOffsetsExtractor\" LLIntAssembly.h || exit 1";
+                       shellScript = "if [[ \"${ACTION}\" == \"installhdrs\" ]]; then\n    exit 0\nfi\n\ncd \"${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore\"\n\n/usr/bin/env ruby JavaScriptCore/offlineasm/asm.rb \"-I${BUILT_PRODUCTS_DIR}/DerivedSources/JavaScriptCore\" JavaScriptCore/llint/LowLevelInterpreter.asm \"${BUILT_PRODUCTS_DIR}/JSCLLIntOffsetsExtractor\" LLIntAssembly.h || exit 1\n";
                };
                65FB3F6509D11E9100F49DEB /* Generate Derived Sources */ = {
                        isa = PBXShellScriptBuildPhase;
index 886cff5..6a00757 100644 (file)
@@ -29,6 +29,7 @@
 
 #include "AssemblerBuffer.h"
 #include "AssemblerCommon.h"
+#include "CPU.h"
 #include "JSCPtrTag.h"
 #include <limits.h>
 #include <wtf/Assertions.h>
 
 namespace JSC {
 
-ALWAYS_INLINE bool isInt7(int32_t value)
+template<size_t bits, typename Type>
+ALWAYS_INLINE constexpr bool isInt(Type t)
 {
-    return value == ((value << 25) >> 25);
+    constexpr size_t shift = sizeof(Type) * CHAR_BIT - bits;
+    static_assert(sizeof(Type) * CHAR_BIT > shift, "shift is larger than the size of the value");
+    return ((t << shift) >> shift) == t;
 }
 
-ALWAYS_INLINE bool isInt11(int32_t value)
+static ALWAYS_INLINE bool is4ByteAligned(const void* ptr)
 {
-    return value == ((value << 21) >> 21);
+    return !(reinterpret_cast<intptr_t>(ptr) & 0x3);
 }
 
 ALWAYS_INLINE bool isUInt5(int32_t value)
@@ -130,7 +134,7 @@ public:
     explicit PairPostIndex(int value)
         : m_value(value)
     {
-        ASSERT(isInt11(value));
+        ASSERT(isInt<11>(value));
     }
 
     operator int() { return m_value; }
@@ -144,7 +148,7 @@ public:
     explicit PairPreIndex(int value)
         : m_value(value)
     {
-        ASSERT(isInt11(value));
+        ASSERT(isInt<11>(value));
     }
 
     operator int() { return m_value; }
@@ -460,8 +464,8 @@ public:
     private:
         union {
             struct RealTypes {
-                intptr_t m_from : 48;
-                intptr_t m_to : 48;
+                int64_t m_from;
+                int64_t m_to;
                 JumpType m_type : 8;
                 JumpLinkType m_linkType : 8;
                 Condition m_condition : 4;
@@ -2805,16 +2809,18 @@ public:
         ASSERT_UNUSED(expected, expected && sf && opc == MoveWideOp_K && hw == 1 && rd == rdFirst);
         result |= static_cast<uintptr_t>(imm16) << 16;
 
+#if CPU(ADDRESS64)
         expected = disassembleMoveWideImediate(address + 2, sf, opc, hw, imm16, rd);
         ASSERT_UNUSED(expected, expected && sf && opc == MoveWideOp_K && hw == 2 && rd == rdFirst);
         result |= static_cast<uintptr_t>(imm16) << 32;
+#endif
 
         return reinterpret_cast<void*>(result);
     }
 
     static void* readCallTarget(void* from)
     {
-        return readPointer(reinterpret_cast<int*>(from) - 4);
+        return readPointer(reinterpret_cast<int*>(from) - (isAddress64Bit() ? 4 : 3));
     }
 
     // The static relink, repatch, and replace methods can use can
@@ -2931,31 +2937,31 @@ public:
         case JumpNoCondition:
             return LinkJumpNoCondition;
         case JumpCondition: {
-            ASSERT(!(reinterpret_cast<intptr_t>(from) & 0x3));
-            ASSERT(!(reinterpret_cast<intptr_t>(to) & 0x3));
+            ASSERT(is4ByteAligned(from));
+            ASSERT(is4ByteAligned(to));
             intptr_t relative = reinterpret_cast<intptr_t>(to) - (reinterpret_cast<intptr_t>(from));
 
-            if (((relative << 43) >> 43) == relative)
+            if (isInt<21>(relative))
                 return LinkJumpConditionDirect;
 
             return LinkJumpCondition;
             }
         case JumpCompareAndBranch:  {
-            ASSERT(!(reinterpret_cast<intptr_t>(from) & 0x3));
-            ASSERT(!(reinterpret_cast<intptr_t>(to) & 0x3));
+            ASSERT(is4ByteAligned(from));
+            ASSERT(is4ByteAligned(to));
             intptr_t relative = reinterpret_cast<intptr_t>(to) - (reinterpret_cast<intptr_t>(from));
 
-            if (((relative << 43) >> 43) == relative)
+            if (isInt<21>(relative))
                 return LinkJumpCompareAndBranchDirect;
 
             return LinkJumpCompareAndBranch;
         }
         case JumpTestBit:   {
-            ASSERT(!(reinterpret_cast<intptr_t>(from) & 0x3));
-            ASSERT(!(reinterpret_cast<intptr_t>(to) & 0x3));
+            ASSERT(is4ByteAligned(from));
+            ASSERT(is4ByteAligned(to));
             intptr_t relative = reinterpret_cast<intptr_t>(to) - (reinterpret_cast<intptr_t>(from));
 
-            if (((relative << 50) >> 50) == relative)
+            if (isInt<14>(relative))
                 return LinkJumpTestBitDirect;
 
             return LinkJumpTestBit;
@@ -3073,9 +3079,9 @@ protected:
         ASSERT(!(reinterpret_cast<intptr_t>(from) & 3));
         ASSERT(!(reinterpret_cast<intptr_t>(to) & 3));
         intptr_t offset = (reinterpret_cast<intptr_t>(to) - reinterpret_cast<intptr_t>(fromInstruction)) >> 2;
-        ASSERT(((offset << 38) >> 38) == offset);
+        ASSERT(isInt<26>(offset));
 
-        bool useDirect = ((offset << 45) >> 45) == offset; // Fits in 19 bits
+        bool useDirect = isInt<19>(offset);
         ASSERT(!isDirect || useDirect);
 
         if (useDirect || isDirect) {
@@ -3101,9 +3107,9 @@ protected:
         ASSERT(!(reinterpret_cast<intptr_t>(from) & 3));
         ASSERT(!(reinterpret_cast<intptr_t>(to) & 3));
         intptr_t offset = (reinterpret_cast<intptr_t>(to) - reinterpret_cast<intptr_t>(fromInstruction)) >> 2;
-        ASSERT(((offset << 38) >> 38) == offset);
+        ASSERT(isInt<26>(offset));
 
-        bool useDirect = ((offset << 45) >> 45) == offset; // Fits in 19 bits
+        bool useDirect = isInt<19>(offset);
         ASSERT(!isDirect || useDirect);
 
         if (useDirect || isDirect) {
@@ -3130,9 +3136,9 @@ protected:
         ASSERT(!(reinterpret_cast<intptr_t>(to) & 3));
         intptr_t offset = (reinterpret_cast<intptr_t>(to) - reinterpret_cast<intptr_t>(fromInstruction)) >> 2;
         ASSERT(static_cast<int>(offset) == offset);
-        ASSERT(((offset << 38) >> 38) == offset);
+        ASSERT(isInt<26>(offset));
 
-        bool useDirect = ((offset << 50) >> 50) == offset; // Fits in 14 bits
+        bool useDirect = isInt<14>(offset);
         ASSERT(!isDirect || useDirect);
 
         if (useDirect || isDirect) {
@@ -3511,7 +3517,7 @@ protected:
     // 'V' means vector
     ALWAYS_INLINE static int loadRegisterLiteral(LdrLiteralOp opc, bool V, int imm19, FPRegisterID rt)
     {
-        ASSERT(((imm19 << 13) >> 13) == imm19);
+        ASSERT(isInt<19>(imm19));
         return (0x18000000 | opc << 30 | V << 26 | (imm19 & 0x7ffff) << 5 | rt);
     }
 
@@ -3542,7 +3548,7 @@ protected:
         ASSERT(V || (size != MemPairOp_LoadSigned_32) || (opc == MemOp_LOAD)); // There isn't an integer store signed.
         unsigned immedShiftAmount = memPairOffsetShift(V, size);
         int imm7 = immediate >> immedShiftAmount;
-        ASSERT((imm7 << immedShiftAmount) == immediate && isInt7(imm7));
+        ASSERT((imm7 << immedShiftAmount) == immediate && isInt<7>(imm7));
         return (0x28800000 | size << 30 | V << 26 | opc << 22 | (imm7 & 0x7f) << 15 | rt2 << 10 | xOrSp(rn) << 5 | rt);
     }
 
@@ -3573,7 +3579,7 @@ protected:
         ASSERT(V || (size != MemPairOp_LoadSigned_32) || (opc == MemOp_LOAD)); // There isn't an integer store signed.
         unsigned immedShiftAmount = memPairOffsetShift(V, size);
         int imm7 = immediate >> immedShiftAmount;
-        ASSERT((imm7 << immedShiftAmount) == immediate && isInt7(imm7));
+        ASSERT((imm7 << immedShiftAmount) == immediate && isInt<7>(imm7));
         return (0x29800000 | size << 30 | V << 26 | opc << 22 | (imm7 & 0x7f) << 15 | rt2 << 10 | xOrSp(rn) << 5 | rt);
     }
 
@@ -3590,7 +3596,7 @@ protected:
         ASSERT(V || (size != MemPairOp_LoadSigned_32) || (opc == MemOp_LOAD)); // There isn't an integer store signed.
         unsigned immedShiftAmount = memPairOffsetShift(V, size);
         int imm7 = immediate >> immedShiftAmount;
-        ASSERT((imm7 << immedShiftAmount) == immediate && isInt7(imm7));
+        ASSERT((imm7 << immedShiftAmount) == immediate && isInt<7>(imm7));
         return (0x29000000 | size << 30 | V << 26 | opc << 22 | (imm7 & 0x7f) << 15 | rt2 << 10 | xOrSp(rn) << 5 | rt);
     }
 
@@ -3607,7 +3613,7 @@ protected:
         ASSERT(V || (size != MemPairOp_LoadSigned_32) || (opc == MemOp_LOAD)); // There isn't an integer store signed.
         unsigned immedShiftAmount = memPairOffsetShift(V, size);
         int imm7 = immediate >> immedShiftAmount;
-        ASSERT((imm7 << immedShiftAmount) == immediate && isInt7(imm7));
+        ASSERT((imm7 << immedShiftAmount) == immediate && isInt<7>(imm7));
         return (0x28000000 | size << 30 | V << 26 | opc << 22 | (imm7 & 0x7f) << 15 | rt2 << 10 | xOrSp(rn) << 5 | rt);
     }
 
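The isInt<bits> helper introduced above replaces hand-rolled shift
pairs such as ((relative << 43) >> 43) == relative with a named
range check. A worked illustration of the sign-extension trick; the
template is essentially the one from the diff, while the includes,
assertions, and main are illustrative additions:

    #include <cassert>
    #include <climits>
    #include <cstddef>

    // Shift the value so its low 'bits' bits occupy the top of the
    // word, then arithmetic-shift back down. Sign extension restores
    // the original value iff it fit in a signed field of that width.
    template<std::size_t bits, typename Type>
    constexpr bool isInt(Type t)
    {
        constexpr std::size_t shift = sizeof(Type) * CHAR_BIT - bits;
        return ((t << shift) >> shift) == t;
    }

    int main()
    {
        assert(isInt<7>(63));   // largest 7-bit signed value
        assert(isInt<7>(-64));  // smallest 7-bit signed value
        assert(!isInt<7>(64));  // needs 8 bits once signed
        assert(!isInt<7>(-65));
        return 0;
    }
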
index c16f0d8..617993f 100644 (file)
 
 namespace JSC {
 
+#if USE(JSVALUE64)
+using CPURegister = int64_t;
+using UCPURegister = uint64_t;
+#else
+using CPURegister = int32_t;
+using UCPURegister = uint32_t;
+#endif
+
 constexpr bool isARMv7IDIVSupported()
 {
 #if HAVE(ARM_IDIV_INSTRUCTIONS)
@@ -79,6 +87,16 @@ constexpr bool is32Bit()
     return !is64Bit();
 }
 
+constexpr bool isAddress64Bit()
+{
+    return sizeof(void*) == 8;
+}
+
+constexpr bool isAddress32Bit()
+{
+    return !isAddress64Bit();
+}
+
 constexpr bool isMIPS()
 {
 #if CPU(MIPS)
index e89cb14..cd1b545 100644 (file)
@@ -1272,7 +1272,7 @@ public:
 
         // First off we'll special case common, "safe" values to avoid hurting
         // performance too much
-        uintptr_t value = imm.asTrustedImmPtr().asIntptr();
+        uint64_t value = imm.asTrustedImmPtr().asIntptr();
         switch (value) {
         case 0xffff:
         case 0xffffff:
@@ -1293,7 +1293,7 @@ public:
         if (!shouldConsiderBlinding())
             return false;
 
-        return shouldBlindPointerForSpecificArch(value);
+        return shouldBlindPointerForSpecificArch(static_cast<uintptr_t>(value));
     }
 
     uint8_t generateRotationSeed(size_t widthInBits)
index b2948c0..f15b5b0 100644 (file)
@@ -48,7 +48,12 @@ using namespace ARM64Registers;
 
 // The following are offsets for Probe::State fields accessed
 // by the ctiMasmProbeTrampoline stub.
+#if CPU(ADDRESS64)
 #define PTR_SIZE 8
+#else
+#define PTR_SIZE 4
+#endif
+
 #define PROBE_PROBE_FUNCTION_OFFSET (0 * PTR_SIZE)
 #define PROBE_ARG_OFFSET (1 * PTR_SIZE)
 #define PROBE_INIT_STACK_FUNCTION_OFFSET (2 * PTR_SIZE)
@@ -131,8 +136,8 @@ using namespace ARM64Registers;
 #define PROBE_CPU_Q31_OFFSET (PROBE_FIRST_FPREG_OFFSET + (31 * FPREG_SIZE))
 #define PROBE_SIZE (PROBE_FIRST_FPREG_OFFSET + (32 * FPREG_SIZE))
 
-#define SAVED_PROBE_RETURN_PC_OFFSET        (PROBE_SIZE + (0 * PTR_SIZE))
-#define PROBE_SIZE_PLUS_EXTRAS              (PROBE_SIZE + (3 * PTR_SIZE))
+#define SAVED_PROBE_RETURN_PC_OFFSET        (PROBE_SIZE + (0 * GPREG_SIZE))
+#define PROBE_SIZE_PLUS_EXTRAS              (PROBE_SIZE + (3 * GPREG_SIZE))
 
 // These ASSERTs remind you that if you change the layout of Probe::State,
 // you need to change ctiMasmProbeTrampoline offsets above to match.
@@ -221,7 +226,7 @@ static_assert(PROBE_OFFSETOF(cpu.fprs[ARM64Registers::q31]) == PROBE_CPU_Q31_OFF
 static_assert(sizeof(Probe::State) == PROBE_SIZE, "Probe::State's size matches ctiMasmProbeTrampoline");
 
 // Conditions for using ldp and stp.
-static_assert(PROBE_CPU_PC_OFFSET == PROBE_CPU_SP_OFFSET + PTR_SIZE, "PROBE_CPU_SP_OFFSET and PROBE_CPU_PC_OFFSET must be adjacent");
+static_assert(PROBE_CPU_PC_OFFSET == PROBE_CPU_SP_OFFSET + GPREG_SIZE, "PROBE_CPU_SP_OFFSET and PROBE_CPU_PC_OFFSET must be adjacent");
 static_assert(!(PROBE_SIZE_PLUS_EXTRAS & 0xf), "PROBE_SIZE_PLUS_EXTRAS should be 16 byte aligned"); // the Probe::State copying code relies on this.
 
 #undef PROBE_OFFSETOF
@@ -229,21 +234,21 @@ static_assert(!(PROBE_SIZE_PLUS_EXTRAS & 0xf), "PROBE_SIZE_PLUS_EXTRAS should be
 #define FPR_OFFSET(fpr) (PROBE_CPU_##fpr##_OFFSET - PROBE_CPU_Q0_OFFSET)
 
 struct IncomingProbeRecord {
-    uintptr_t x24;
-    uintptr_t x25;
-    uintptr_t x26;
-    uintptr_t x27;
-    uintptr_t x28;
-    uintptr_t x30; // lr
+    UCPURegister x24;
+    UCPURegister x25;
+    UCPURegister x26;
+    UCPURegister x27;
+    UCPURegister x28;
+    UCPURegister x30; // lr
 };
 
-#define IN_X24_OFFSET (0 * PTR_SIZE)
-#define IN_X25_OFFSET (1 * PTR_SIZE)
-#define IN_X26_OFFSET (2 * PTR_SIZE)
-#define IN_X27_OFFSET (3 * PTR_SIZE)
-#define IN_X28_OFFSET (4 * PTR_SIZE)
-#define IN_X30_OFFSET (5 * PTR_SIZE)
-#define IN_SIZE       (6 * PTR_SIZE)
+#define IN_X24_OFFSET (0 * GPREG_SIZE)
+#define IN_X25_OFFSET (1 * GPREG_SIZE)
+#define IN_X26_OFFSET (2 * GPREG_SIZE)
+#define IN_X27_OFFSET (3 * GPREG_SIZE)
+#define IN_X28_OFFSET (4 * GPREG_SIZE)
+#define IN_X30_OFFSET (5 * GPREG_SIZE)
+#define IN_SIZE       (6 * GPREG_SIZE)
 
 static_assert(IN_X24_OFFSET == offsetof(IncomingProbeRecord, x24), "IN_X24_OFFSET is incorrect");
 static_assert(IN_X25_OFFSET == offsetof(IncomingProbeRecord, x25), "IN_X25_OFFSET is incorrect");
@@ -255,21 +260,21 @@ static_assert(IN_SIZE == sizeof(IncomingProbeRecord), "IN_SIZE is incorrect");
 static_assert(!(sizeof(IncomingProbeRecord) & 0xf), "IncomingProbeStack must be 16-byte aligned");
 
 struct OutgoingProbeRecord {
-    uintptr_t nzcv;
-    uintptr_t fpsr;
-    uintptr_t x27;
-    uintptr_t x28;
-    uintptr_t fp;
-    uintptr_t lr;
+    UCPURegister nzcv;
+    UCPURegister fpsr;
+    UCPURegister x27;
+    UCPURegister x28;
+    UCPURegister fp;
+    UCPURegister lr;
 };
 
-#define OUT_NZCV_OFFSET (0 * PTR_SIZE)
-#define OUT_FPSR_OFFSET (1 * PTR_SIZE)
-#define OUT_X27_OFFSET  (2 * PTR_SIZE)
-#define OUT_X28_OFFSET  (3 * PTR_SIZE)
-#define OUT_FP_OFFSET   (4 * PTR_SIZE)
-#define OUT_LR_OFFSET   (5 * PTR_SIZE)
-#define OUT_SIZE        (6 * PTR_SIZE)
+#define OUT_NZCV_OFFSET (0 * GPREG_SIZE)
+#define OUT_FPSR_OFFSET (1 * GPREG_SIZE)
+#define OUT_X27_OFFSET  (2 * GPREG_SIZE)
+#define OUT_X28_OFFSET  (3 * GPREG_SIZE)
+#define OUT_FP_OFFSET   (4 * GPREG_SIZE)
+#define OUT_LR_OFFSET   (5 * GPREG_SIZE)
+#define OUT_SIZE        (6 * GPREG_SIZE)
 
 static_assert(OUT_NZCV_OFFSET == offsetof(OutgoingProbeRecord, nzcv), "OUT_NZCV_OFFSET is incorrect");
 static_assert(OUT_FPSR_OFFSET == offsetof(OutgoingProbeRecord, fpsr), "OUT_FPSR_OFFSET is incorrect");
@@ -281,12 +286,12 @@ static_assert(OUT_SIZE == sizeof(OutgoingProbeRecord), "OUT_SIZE is incorrect");
 static_assert(!(sizeof(OutgoingProbeRecord) & 0xf), "OutgoingProbeStack must be 16-byte aligned");
 
 struct LRRestorationRecord {
-    uintptr_t lr;
-    uintptr_t unusedDummyToEnsureSizeIs16ByteAligned;
+    UCPURegister lr;
+    UCPURegister unusedDummyToEnsureSizeIs16ByteAligned;
 };
 
-#define LR_RESTORATION_LR_OFFSET (0 * PTR_SIZE)
-#define LR_RESTORATION_SIZE      (2 * PTR_SIZE)
+#define LR_RESTORATION_LR_OFFSET (0 * GPREG_SIZE)
+#define LR_RESTORATION_SIZE      (2 * GPREG_SIZE)
 
 static_assert(LR_RESTORATION_LR_OFFSET == offsetof(LRRestorationRecord, lr), "LR_RESTORATION_LR_OFFSET is incorrect");
 static_assert(LR_RESTORATION_SIZE == sizeof(LRRestorationRecord), "LR_RESTORATION_SIZE is incorrect");
@@ -348,7 +353,7 @@ asm (
 
     "str       x30, [sp, #" STRINGIZE_VALUE_OF(SAVED_PROBE_RETURN_PC_OFFSET) "]" "\n" // Save a duplicate copy of return pc (in lr).
 
-    "add       x30, x30, #" STRINGIZE_VALUE_OF(2 * PTR_SIZE) "\n" // The PC after the probe is at 2 instructions past the return point.
+    "add       x30, x30, #" STRINGIZE_VALUE_OF(2 * GPREG_SIZE) "\n" // The PC after the probe is at 2 instructions past the return point.
     "str       x30, [sp, #" STRINGIZE_VALUE_OF(PROBE_CPU_PC_OFFSET) "]" "\n"
 
     "stp       x0, x1, [sp, #" STRINGIZE_VALUE_OF(PROBE_CPU_NZCV_OFFSET) "]" "\n" // Store nzcv and fpsr (preloaded into x0 and x1 above).
@@ -472,7 +477,7 @@ asm (
     "ldr       x30, [sp, #" STRINGIZE_VALUE_OF(PROBE_CPU_SP_OFFSET) "]" "\n" // preload the target sp.
     "ldr       x27, [sp, #" STRINGIZE_VALUE_OF(SAVED_PROBE_RETURN_PC_OFFSET) "]" "\n"
     "ldr       x28, [sp, #" STRINGIZE_VALUE_OF(PROBE_CPU_PC_OFFSET) "]" "\n"
-    "add       x27, x27, #" STRINGIZE_VALUE_OF(2 * PTR_SIZE) "\n"
+    "add       x27, x27, #" STRINGIZE_VALUE_OF(2 * GPREG_SIZE) "\n"
     "cmp       x27, x28" "\n"
     "bne     " LOCAL_LABEL_STRING(ctiMasmProbeTrampolineEnd) "\n"
 
@@ -502,11 +507,11 @@ asm (
     "mov       sp, x30" "\n"
 
     // Restore the remaining registers and pop the OutgoingProbeRecord.
-    "ldp       x27, x28, [sp], #" STRINGIZE_VALUE_OF(2 * PTR_SIZE) "\n"
+    "ldp       x27, x28, [sp], #" STRINGIZE_VALUE_OF(2 * GPREG_SIZE) "\n"
     "msr       nzcv, x27" "\n"
     "msr       fpsr, x28" "\n"
-    "ldp       x27, x28, [sp], #" STRINGIZE_VALUE_OF(2 * PTR_SIZE) "\n"
-    "ldp       x29, x30, [sp], #" STRINGIZE_VALUE_OF(2 * PTR_SIZE) "\n"
+    "ldp       x27, x28, [sp], #" STRINGIZE_VALUE_OF(2 * GPREG_SIZE) "\n"
+    "ldp       x29, x30, [sp], #" STRINGIZE_VALUE_OF(2 * GPREG_SIZE) "\n"
     "ret" "\n"
 );
 #endif // COMPILER(GCC_COMPATIBLE)
@@ -544,7 +549,7 @@ void MacroAssemblerARM64::collectCPUFeatures()
         // is shipped and implemented in some CPUs. In that case, even if the CPU has
         // that feature, the kernel does not tell it to users.), it is a stable approach.
         // https://www.kernel.org/doc/Documentation/arm64/elf_hwcaps.txt
-        unsigned long hwcaps = getauxval(AT_HWCAP);
+        uint64_t hwcaps = getauxval(AT_HWCAP);
 
 #if !defined(HWCAP_JSCVT)
 #define HWCAP_JSCVT (1 << 13)
index b0eb48a..d0f26d1 100644 (file)
@@ -53,9 +53,9 @@ public:
 protected:
     static const ARM64Registers::FPRegisterID fpTempRegister = ARM64Registers::q31;
     static const Assembler::SetFlags S = Assembler::S;
-    static const intptr_t maskHalfWord0 = 0xffffl;
-    static const intptr_t maskHalfWord1 = 0xffff0000l;
-    static const intptr_t maskUpperWord = 0xffffffff00000000l;
+    static const int64_t maskHalfWord0 = 0xffffl;
+    static const int64_t maskHalfWord1 = 0xffff0000l;
+    static const int64_t maskUpperWord = 0xffffffff00000000l;
 
     static constexpr size_t INSTRUCTION_SIZE = 4;
 
@@ -4009,11 +4009,6 @@ protected:
         return m_cachedMemoryTempRegister;
     }
 
-    ALWAYS_INLINE bool isInIntRange(intptr_t value)
-    {
-        return value == ((value << 32) >> 32);
-    }
-
     template<typename ImmediateType, typename rawType>
     void moveInternal(ImmediateType imm, RegisterID dest)
     {
@@ -4148,7 +4143,7 @@ protected:
             if (dest == memoryTempRegister)
                 cachedMemoryTempRegister().invalidate();
 
-            if (isInIntRange(addressDelta)) {
+            if (isInt<32>(addressDelta)) {
                 if (Assembler::canEncodeSImmOffset(addressDelta)) {
                     m_assembler.ldur<datasize>(dest,  memoryTempRegister, addressDelta);
                     return;
@@ -4185,7 +4180,7 @@ protected:
             intptr_t addressAsInt = reinterpret_cast<intptr_t>(address);
             intptr_t addressDelta = addressAsInt - currentRegisterContents;
 
-            if (isInIntRange(addressDelta)) {
+            if (isInt<32>(addressDelta)) {
                 if (Assembler::canEncodeSImmOffset(addressDelta)) {
                     m_assembler.stur<datasize>(src, memoryTempRegister, addressDelta);
                     return;
index 641deae..6d46cd9 100644 (file)
@@ -79,9 +79,9 @@ union Data {
     uintptr_t value;
     const void* pointer;
 #if USE(JSVALUE64)
-    uintptr_t buffer[4];
+    UCPURegister buffer[4];
 #elif USE(JSVALUE32_64)
-    uintptr_t buffer[6];
+    UCPURegister buffer[6];
 #endif
 };
 
index 932eae2..4ee6eea 100644 (file)
@@ -41,8 +41,8 @@ struct CPUState {
     static inline const char* gprName(RegisterID id) { return MacroAssembler::gprName(id); }
     static inline const char* sprName(SPRegisterID id) { return MacroAssembler::sprName(id); }
     static inline const char* fprName(FPRegisterID id) { return MacroAssembler::fprName(id); }
-    inline uintptr_t& gpr(RegisterID);
-    inline uintptr_t& spr(SPRegisterID);
+    inline UCPURegister& gpr(RegisterID);
+    inline UCPURegister& spr(SPRegisterID);
     inline double& fpr(FPRegisterID);
 
     template<typename T> T gpr(RegisterID) const;
@@ -56,18 +56,18 @@ struct CPUState {
     template<typename T> T fp() const;
     template<typename T> T sp() const;
 
-    uintptr_t gprs[MacroAssembler::numberOfRegisters()];
-    uintptr_t sprs[MacroAssembler::numberOfSPRegisters()];
+    UCPURegister gprs[MacroAssembler::numberOfRegisters()];
+    UCPURegister sprs[MacroAssembler::numberOfSPRegisters()];
     double fprs[MacroAssembler::numberOfFPRegisters()];
 };
 
-inline uintptr_t& CPUState::gpr(RegisterID id)
+inline UCPURegister& CPUState::gpr(RegisterID id)
 {
     ASSERT(id >= MacroAssembler::firstRegister() && id <= MacroAssembler::lastRegister());
     return gprs[id];
 }
 
-inline uintptr_t& CPUState::spr(SPRegisterID id)
+inline UCPURegister& CPUState::spr(SPRegisterID id)
 {
     ASSERT(id >= MacroAssembler::firstSPRegister() && id <= MacroAssembler::lastSPRegister());
     return sprs[id];
@@ -198,8 +198,8 @@ public:
     template<typename T>
     T arg() { return reinterpret_cast<T>(m_state->arg); }
 
-    uintptr_t& gpr(RegisterID id) { return cpu.gpr(id); }
-    uintptr_t& spr(SPRegisterID id) { return cpu.spr(id); }
+    UCPURegister& gpr(RegisterID id) { return cpu.gpr(id); }
+    UCPURegister& spr(SPRegisterID id) { return cpu.spr(id); }
     double& fpr(FPRegisterID id) { return cpu.fpr(id); }
     const char* gprName(RegisterID id) { return cpu.gprName(id); }
     const char* sprName(SPRegisterID id) { return cpu.sprName(id); }
index 78bcba3..9c993fc 100644 (file)
@@ -36,7 +36,7 @@ namespace JSC { namespace B3 {
 // platform-agnostic code. Note that a ConstPtrValue will behave like either a Const32Value or
 // Const64Value depending on platform.
 
-#if USE(JSVALUE64)
+#if CPU(ADDRESS64)
 typedef Const64Value ConstPtrValueBase;
 #else
 typedef Const32Value ConstPtrValueBase;
index 620d6f8..cfd1a1e 100644 (file)
@@ -263,7 +263,7 @@ bool StackmapSpecial::isArgValidForRep(Air::Code& code, const Air::Arg& arg, con
             return true;
         if ((arg.isAddr() || arg.isExtendedOffsetAddr()) && code.frameSize()) {
             if (arg.base() == Tmp(GPRInfo::callFrameRegister)
-                && arg.offset() == rep.offsetFromSP() - code.frameSize())
+                && arg.offset() == static_cast<int64_t>(rep.offsetFromSP()) - code.frameSize())
                 return true;
             if (arg.base() == Tmp(MacroAssembler::stackPointerRegister)
                 && arg.offset() == rep.offsetFromSP())
index cd2162d..964548b 100644 (file)
@@ -973,7 +973,7 @@ public:
     StackSlot* stackSlot() const
     {
         ASSERT(kind() == Stack);
-        return bitwise_cast<StackSlot*>(m_offset);
+        return bitwise_cast<StackSlot*>(static_cast<uintptr_t>(m_offset));
     }
 
     Air::Tmp index() const
@@ -996,7 +996,7 @@ public:
     Air::Special* special() const
     {
         ASSERT(kind() == Special);
-        return bitwise_cast<Air::Special*>(m_offset);
+        return bitwise_cast<Air::Special*>(static_cast<uintptr_t>(m_offset));
     }
 
     Width width() const
index 08b6393..912194f 100644 (file)
@@ -138,9 +138,10 @@ void loadConstantImpl(BasicBlock* block, T value, B3::Air::Opcode move, Tmp tmp,
     block->append(move, nullptr, Arg::addr(scratch), tmp);
 }
 
-void loadConstant(BasicBlock* block, intptr_t value, Tmp tmp)
+template<typename T>
+void loadConstant(BasicBlock* block, T value, Tmp tmp)
 {
-    loadConstantImpl<intptr_t>(block, value, Move, tmp, tmp);
+    loadConstantImpl(block, value, Move, tmp, tmp);
 }
 
 void loadDoubleConstant(BasicBlock* block, double value, Tmp tmp, Tmp scratch)
index 523f10e..b338979 100644 (file)
@@ -5473,10 +5473,11 @@ void testStoreConstantPtr(intptr_t value)
     Procedure proc;
     BasicBlock* root = proc.addBlock();
     intptr_t slot;
-    if (is64Bit())
-        slot = (static_cast<intptr_t>(0xbaadbeef) << 32) + static_cast<intptr_t>(0xbaadbeef);
-    else
-        slot = 0xbaadbeef;
+#if CPU(ADDRESS64)
+    slot = (static_cast<intptr_t>(0xbaadbeef) << 32) + static_cast<intptr_t>(0xbaadbeef);
+#else
+    slot = 0xbaadbeef;
+#endif
     root->appendNew<MemoryValue>(
         proc, Store, Origin(),
         root->appendNew<ConstPtrValue>(proc, Origin(), value),
@@ -13194,9 +13195,9 @@ void testInterpreter()
     
     auto interpreter = compileProc(proc);
     
-    Vector<intptr_t> data;
-    Vector<intptr_t> code;
-    Vector<intptr_t> stream;
+    Vector<uintptr_t> data;
+    Vector<uintptr_t> code;
+    Vector<uintptr_t> stream;
     
     data.append(1);
     data.append(0);
@@ -14497,7 +14498,7 @@ void testAddShl32()
     root->appendNew<Value>(proc, Return, Origin(), result);
     
     auto code = compileProc(proc);
-    CHECK_EQ(invoke<intptr_t>(*code, 1, 2), 1 + (static_cast<intptr_t>(2) << static_cast<intptr_t>(32)));
+    CHECK_EQ(invoke<int64_t>(*code, 1, 2), 1 + (static_cast<int64_t>(2) << static_cast<int64_t>(32)));
 }
 
 void testAddShl64()
@@ -14607,6 +14608,7 @@ void testLoadBaseIndexShift2()
 
 void testLoadBaseIndexShift32()
 {
+#if CPU(ADDRESS64)
     Procedure proc;
     BasicBlock* root = proc.addBlock();
     root->appendNew<Value>(
@@ -14625,6 +14627,7 @@ void testLoadBaseIndexShift32()
     char* ptr = bitwise_cast<char*>(&value);
     for (unsigned i = 0; i < 10; ++i)
         CHECK_EQ(invoke<int32_t>(*code, ptr - (static_cast<intptr_t>(1) << static_cast<intptr_t>(32)) * i, i), 12341234);
+#endif
 }
 
 void testOptimizeMaterialization()
index 44bff54..99e5f5c 100644 (file)
@@ -75,7 +75,7 @@ void ScriptCallArgumentHandler::appendArgument(unsigned int argument)
     m_arguments.append(jsNumber(argument));
 }
 
-void ScriptCallArgumentHandler::appendArgument(unsigned long argument)
+void ScriptCallArgumentHandler::appendArgument(uint64_t argument)
 {
     JSLockHolder lock(m_exec);
     m_arguments.append(jsNumber(argument));
index 6978414..1a7a537 100644 (file)
@@ -51,7 +51,7 @@ public:
     void appendArgument(long);
     void appendArgument(long long);
     void appendArgument(unsigned int);
-    void appendArgument(unsigned long);
+    void appendArgument(uint64_t);
     void appendArgument(int);
     void appendArgument(bool);
 
index 3e974bf..4976662 100644 (file)
@@ -2160,8 +2160,8 @@ void CodeBlock::setCalleeSaveRegisters(std::unique_ptr<RegisterAtOffsetList> reg
     
 static size_t roundCalleeSaveSpaceAsVirtualRegisters(size_t calleeSaveRegisters)
 {
-    static const unsigned cpuRegisterSize = sizeof(void*);
-    return (WTF::roundUpToMultipleOf(sizeof(Register), calleeSaveRegisters * cpuRegisterSize) / sizeof(Register));
+
+    return (WTF::roundUpToMultipleOf(sizeof(Register), calleeSaveRegisters * sizeof(CPURegister)) / sizeof(Register));
 
 }
 
index f921abf..a8a2339 100644 (file)
@@ -86,7 +86,7 @@ static void restoreCalleeSavesFor(Context& context, CodeBlock* codeBlock)
     RegisterSet dontRestoreRegisters = RegisterSet(RegisterSet::stackRegisters(), RegisterSet::allFPRs());
     unsigned registerCount = calleeSaves->size();
 
-    uintptr_t* physicalStackFrame = context.fp<uintptr_t*>();
+    UCPURegister* physicalStackFrame = context.fp<UCPURegister*>();
     for (unsigned i = 0; i < registerCount; i++) {
         RegisterAtOffset entry = calleeSaves->at(i);
         if (dontRestoreRegisters.get(entry.reg()))
@@ -94,8 +94,8 @@ static void restoreCalleeSavesFor(Context& context, CodeBlock* codeBlock)
         // The callee saved values come from the original stack, not the recovered stack.
         // Hence, we read the values directly from the physical stack memory instead of
         // going through context.stack().
-        ASSERT(!(entry.offset() % sizeof(uintptr_t)));
-        context.gpr(entry.reg().gpr()) = physicalStackFrame[entry.offset() / sizeof(uintptr_t)];
+        ASSERT(!(entry.offset() % sizeof(UCPURegister)));
+        context.gpr(entry.reg().gpr()) = physicalStackFrame[entry.offset() / sizeof(UCPURegister)];
     }
 }
 
@@ -113,7 +113,7 @@ static void saveCalleeSavesFor(Context& context, CodeBlock* codeBlock)
         RegisterAtOffset entry = calleeSaves->at(i);
         if (dontSaveRegisters.get(entry.reg()))
             continue;
-        stack.set(context.fp(), entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
+        stack.set(context.fp(), entry.offset(), context.gpr<UCPURegister>(entry.reg().gpr()));
     }
 }
 
@@ -127,14 +127,14 @@ static void restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(Context& context
     unsigned registerCount = allCalleeSaves->size();
 
     VMEntryRecord* entryRecord = vmEntryRecord(vm.topEntryFrame);
-    uintptr_t* calleeSaveBuffer = reinterpret_cast<uintptr_t*>(entryRecord->calleeSaveRegistersBuffer);
+    UCPURegister* calleeSaveBuffer = reinterpret_cast<UCPURegister*>(entryRecord->calleeSaveRegistersBuffer);
 
     // Restore all callee saves.
     for (unsigned i = 0; i < registerCount; i++) {
         RegisterAtOffset entry = allCalleeSaves->at(i);
         if (dontRestoreRegisters.get(entry.reg()))
             continue;
-        size_t uintptrOffset = entry.offset() / sizeof(uintptr_t);
+        size_t uintptrOffset = entry.offset() / sizeof(UCPURegister);
         if (entry.reg().isGPR())
             context.gpr(entry.reg().gpr()) = calleeSaveBuffer[uintptrOffset];
         else
@@ -160,9 +160,9 @@ static void copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(Context& context)
         if (dontCopyRegisters.get(entry.reg()))
             continue;
         if (entry.reg().isGPR())
-            stack.set(calleeSaveBuffer, entry.offset(), context.gpr<uintptr_t>(entry.reg().gpr()));
+            stack.set(calleeSaveBuffer, entry.offset(), context.gpr<UCPURegister>(entry.reg().gpr()));
         else
-            stack.set(calleeSaveBuffer, entry.offset(), context.fpr<uintptr_t>(entry.reg().fpr()));
+            stack.set(calleeSaveBuffer, entry.offset(), context.fpr<UCPURegister>(entry.reg().fpr()));
     }
 }
 
index 76ca3b8..111ebd3 100644 (file)
@@ -230,7 +230,7 @@ void reifyInlinedCallFrames(CCallHelpers& jit, const OSRExitBase& exit)
         if (!inlineCallFrame->isVarargs())
             jit.store32(AssemblyHelpers::TrustedImm32(inlineCallFrame->argumentCountIncludingThis), AssemblyHelpers::payloadFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount)));
 #if USE(JSVALUE64)
-        jit.store64(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
+        jit.storePtr(callerFrameGPR, AssemblyHelpers::addressForByteOffset(inlineCallFrame->callerFrameOffset()));
         uint32_t locationBits = CallSiteIndex(codeOrigin->bytecodeIndex).bits();
         jit.store32(AssemblyHelpers::TrustedImm32(locationBits), AssemblyHelpers::tagFor((VirtualRegister)(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount)));
         if (!inlineCallFrame->isClosureCall)
index 41c2cdc..2c7db12 100644 (file)
@@ -4407,8 +4407,10 @@ void SpeculativeJIT::compile(Node* node)
         m_jit.load32(MacroAssembler::Address(objectGPR, JSCell::structureIDOffset()), structureIDGPR);
         m_jit.add32(structureIDGPR, hashGPR);
         m_jit.and32(TrustedImm32(HasOwnPropertyCache::mask), hashGPR);
-        static_assert(sizeof(HasOwnPropertyCache::Entry) == 16, "Strong assumption of that here.");
-        m_jit.lshift32(TrustedImm32(4), hashGPR);
+        if (hasOneBitSet(sizeof(HasOwnPropertyCache::Entry))) // is a power of 2
+            m_jit.lshift32(TrustedImm32(getLSBSet(sizeof(HasOwnPropertyCache::Entry))), hashGPR);
+        else
+            m_jit.mul32(TrustedImm32(sizeof(HasOwnPropertyCache::Entry)), hashGPR, hashGPR);
         ASSERT(m_jit.vm()->hasOwnPropertyCache());
         m_jit.move(TrustedImmPtr(m_jit.vm()->hasOwnPropertyCache()), tempGPR);
         slowPath.append(m_jit.branchPtr(MacroAssembler::NotEqual, 
index 1ff8155..2c167f2 100644 (file)
@@ -49,7 +49,7 @@ bool tryToDisassembleWithUDis86(const MacroAssemblerCodePtr<DisassemblyPtrTag>&
     uint64_t currentPC = disassembler.pc;
     while (ud_disassemble(&disassembler)) {
         char pcString[20];
-        snprintf(pcString, sizeof(pcString), "0x%lx", static_cast<unsigned long>(currentPC));
+        snprintf(pcString, sizeof(pcString), "0x%lx", static_cast<uintptr_t>(currentPC));
         out.printf("%s%16s: %s\n", prefix, pcString, ud_insn_asm(&disassembler));
         currentPC = disassembler.pc;
     }
index 40e39f9..5f04663 100644 (file)
@@ -9663,12 +9663,13 @@ private:
         LValue index = m_out.bitAnd(mask, unmaskedIndex);
 
         LValue bucket;
+
         if (m_node->child1().useKind() == WeakMapObjectUse) {
-            static_assert(sizeof(WeakMapBucket<WeakMapBucketDataKeyValue>) == 16, "");
-            bucket = m_out.add(buffer, m_out.shl(m_out.zeroExt(index, Int64), m_out.constInt32(4)));
+            static_assert(hasOneBitSet(sizeof(WeakMapBucket<WeakMapBucketDataKeyValue>)), "Should be a power of 2");
+            bucket = m_out.add(buffer, m_out.shl(m_out.zeroExt(index, Int64), m_out.constInt32(getLSBSet(sizeof(WeakMapBucket<WeakMapBucketDataKeyValue>)))));
         } else {
-            static_assert(sizeof(WeakMapBucket<WeakMapBucketDataKey>) == 8, "");
-            bucket = m_out.add(buffer, m_out.shl(m_out.zeroExt(index, Int64), m_out.constInt32(3)));
+            static_assert(hasOneBitSet(sizeof(WeakMapBucket<WeakMapBucketDataKey>)), "Should be a power of 2");
+            bucket = m_out.add(buffer, m_out.shl(m_out.zeroExt(index, Int64), m_out.constInt32(getLSBSet(sizeof(WeakMapBucket<WeakMapBucketDataKey>)))));
         }
 
         LValue bucketKey = m_out.load64(bucket, m_heaps.WeakMapBucket_key);
index 8f6c620..0bc0b8e 100644 (file)
@@ -87,13 +87,13 @@ static void copyMemory(void* dst, const void* src, size_t size)
 {
     size_t dstAsSize = reinterpret_cast<size_t>(dst);
     size_t srcAsSize = reinterpret_cast<size_t>(src);
-    RELEASE_ASSERT(dstAsSize == WTF::roundUpToMultipleOf<sizeof(intptr_t)>(dstAsSize));
-    RELEASE_ASSERT(srcAsSize == WTF::roundUpToMultipleOf<sizeof(intptr_t)>(srcAsSize));
-    RELEASE_ASSERT(size == WTF::roundUpToMultipleOf<sizeof(intptr_t)>(size));
+    RELEASE_ASSERT(dstAsSize == WTF::roundUpToMultipleOf<sizeof(CPURegister)>(dstAsSize));
+    RELEASE_ASSERT(srcAsSize == WTF::roundUpToMultipleOf<sizeof(CPURegister)>(srcAsSize));
+    RELEASE_ASSERT(size == WTF::roundUpToMultipleOf<sizeof(CPURegister)>(size));
 
-    intptr_t* dstPtr = reinterpret_cast<intptr_t*>(dst);
-    const intptr_t* srcPtr = reinterpret_cast<const intptr_t*>(src);
-    size /= sizeof(intptr_t);
+    CPURegister* dstPtr = reinterpret_cast<CPURegister*>(dst);
+    const CPURegister* srcPtr = reinterpret_cast<const CPURegister*>(src);
+    size /= sizeof(CPURegister);
     while (size--)
         *dstPtr++ = *srcPtr++;
 }
index 63aa9aa..ff49a23 100644 (file)
@@ -67,10 +67,11 @@ namespace JSC  {
         uint32_t m_bits;
     };
 
+    // arm64_32 expects the caller frame and return PC to each occupy 8 bytes.
     struct CallerFrameAndPC {
-        CallFrame* callerFrame;
-        Instruction* pc;
-        static const int sizeInRegisters = 2 * sizeof(void*) / sizeof(Register);
+        alignas(CPURegister) CallFrame* callerFrame;
+        alignas(CPURegister) Instruction* returnPC;
+        static const int sizeInRegisters = 2 * sizeof(CPURegister) / sizeof(Register);
     };
     static_assert(CallerFrameAndPC::sizeInRegisters == sizeof(CallerFrameAndPC) / sizeof(Register), "CallerFrameAndPC::sizeInRegisters is incorrect.");
 
@@ -147,10 +148,10 @@ namespace JSC  {
 
         static ptrdiff_t callerFrameOffset() { return OBJECT_OFFSETOF(CallerFrameAndPC, callerFrame); }
 
-        ReturnAddressPtr returnPC() const { return ReturnAddressPtr(callerFrameAndPC().pc); }
-        bool hasReturnPC() const { return !!callerFrameAndPC().pc; }
-        void clearReturnPC() { callerFrameAndPC().pc = 0; }
-        static ptrdiff_t returnPCOffset() { return OBJECT_OFFSETOF(CallerFrameAndPC, pc); }
+        ReturnAddressPtr returnPC() const { return ReturnAddressPtr(callerFrameAndPC().returnPC); }
+        bool hasReturnPC() const { return !!callerFrameAndPC().returnPC; }
+        void clearReturnPC() { callerFrameAndPC().returnPC = 0; }
+        static ptrdiff_t returnPCOffset() { return OBJECT_OFFSETOF(CallerFrameAndPC, returnPC); }
         AbstractPC abstractReturnPC(VM& vm) { return AbstractPC(vm, this); }
 
         bool callSiteBitsAreBytecodeOffset() const;
@@ -253,7 +254,7 @@ namespace JSC  {
         static CallFrame* noCaller() { return nullptr; }
         bool isGlobalExec() const
         {
-            return callerFrameAndPC().callerFrame == noCaller() && callerFrameAndPC().pc == nullptr;
+            return callerFrameAndPC().callerFrame == noCaller() && callerFrameAndPC().returnPC == nullptr;
         }
 
         void convertToStackOverflowFrame(VM&);
@@ -263,7 +264,7 @@ namespace JSC  {
         void setArgumentCountIncludingThis(int count) { static_cast<Register*>(this)[CallFrameSlot::argumentCount].payload() = count; }
         void setCallee(JSObject* callee) { static_cast<Register*>(this)[CallFrameSlot::callee] = callee; }
         void setCodeBlock(CodeBlock* codeBlock) { static_cast<Register*>(this)[CallFrameSlot::codeBlock] = codeBlock; }
-        void setReturnPC(void* value) { callerFrameAndPC().pc = reinterpret_cast<Instruction*>(value); }
+        void setReturnPC(void* value) { callerFrameAndPC().returnPC = reinterpret_cast<Instruction*>(value); }
 
         String friendlyFunctionName();
 
index e4a984d..85c3ed8 100644 (file)
@@ -51,7 +51,7 @@ public:
 #if ENABLE(WEBASSEMBLY)
     static void* boxWasm(Wasm::Callee* callee)
     {
-        CalleeBits result(bitwise_cast<void*>(bitwise_cast<uintptr_t>(callee) | TagBitsWasm));
+        CalleeBits result(reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(callee) | TagBitsWasm));
         ASSERT(result.isWasm());
         return result.rawPtr();
     }
@@ -60,7 +60,7 @@ public:
     bool isWasm() const
     {
 #if ENABLE(WEBASSEMBLY)
-        return (bitwise_cast<uintptr_t>(m_ptr) & TagWasmMask) == TagBitsWasm;
+        return (reinterpret_cast<uintptr_t>(m_ptr) & TagWasmMask) == TagBitsWasm;
 #else
         return false;
 #endif
@@ -77,7 +77,7 @@ public:
     Wasm::Callee* asWasmCallee() const
     {
         ASSERT(isWasm());
-        return bitwise_cast<Wasm::Callee*>(bitwise_cast<uintptr_t>(m_ptr) & ~TagBitsWasm);
+        return reinterpret_cast<Wasm::Callee*>(reinterpret_cast<uintptr_t>(m_ptr) & ~TagBitsWasm);
     }
 #endif
 
index 6e668c7..b45d890 100644 (file)
@@ -569,7 +569,7 @@ private:
 
         RegisterAtOffsetList* allCalleeSaves = RegisterSet::vmCalleeSaveRegisterOffsets();
         RegisterSet dontCopyRegisters = RegisterSet::stackRegisters();
-        intptr_t* frame = reinterpret_cast<intptr_t*>(m_callFrame->registers());
+        CPURegister* frame = reinterpret_cast<CPURegister*>(m_callFrame->registers());
 
         unsigned registerCount = currentCalleeSaves->size();
         VMEntryRecord* record = vmEntryRecord(m_vm.topEntryFrame);
index 21ae35b..6b6eb73 100644 (file)
@@ -47,7 +47,7 @@ struct VMEntryRecord {
     JSObject* callee() const { return m_callee; }
 
 #if !ENABLE(C_LOOP) && NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
-    intptr_t calleeSaveRegistersBuffer[NUMBER_OF_CALLEE_SAVES_REGISTERS];
+    CPURegister calleeSaveRegistersBuffer[NUMBER_OF_CALLEE_SAVES_REGISTERS];
 #endif
 
     ExecState* prevTopCallFrame() { return m_prevTopCallFrame; }
index 9da132a..9a352e1 100644 (file)
@@ -461,11 +461,11 @@ public:
     {
         ASSERT(frameSize % stackAlignmentBytes() == 0);
         if (frameSize <= 128) {
-            for (unsigned offset = 0; offset < frameSize; offset += sizeof(intptr_t))
+            for (unsigned offset = 0; offset < frameSize; offset += sizeof(CPURegister))
                 storePtr(TrustedImm32(0), Address(currentTop, -8 - offset));
         } else {
             constexpr unsigned storeBytesPerIteration = stackAlignmentBytes();
-            constexpr unsigned storesPerIteration = storeBytesPerIteration / sizeof(intptr_t);
+            constexpr unsigned storesPerIteration = storeBytesPerIteration / sizeof(CPURegister);
 
             move(currentTop, temp);
             Label zeroLoop = label();
@@ -475,7 +475,7 @@ public:
             storePair64(ARM64Registers::zr, ARM64Registers::zr, temp);
 #else
             for (unsigned i = storesPerIteration; i-- != 0;)
-                storePtr(TrustedImm32(0), Address(temp, sizeof(intptr_t) * i));
+                storePtr(TrustedImm32(0), Address(temp, sizeof(CPURegister) * i));
 #endif
             branchPtr(NotEqual, temp, newTop).linkTo(zeroLoop, this);
         }
index cd415b6..8e59746 100644 (file)
@@ -49,7 +49,7 @@ public:
     
     Reg reg() const { return m_reg; }
     ptrdiff_t offset() const { return m_offset; }
-    int offsetAsIndex() const { return offset() / sizeof(void*); }
+    int offsetAsIndex() const { ASSERT(!(offset() % sizeof(CPURegister))); return offset() / static_cast<int>(sizeof(CPURegister)); }
     
     bool operator==(const RegisterAtOffset& other) const
     {
@@ -69,7 +69,7 @@ public:
 
 private:
     Reg m_reg;
-    ptrdiff_t m_offset : sizeof(ptrdiff_t) * 8 - sizeof(Reg) * 8;
+    ptrdiff_t m_offset : (sizeof(ptrdiff_t) - sizeof(Reg)) * CHAR_BIT;
 };
 
 } // namespace JSC
index 6fe06e7..49252fc 100644 (file)
@@ -40,12 +40,12 @@ RegisterAtOffsetList::RegisterAtOffsetList(RegisterSet registerSet, OffsetBaseTy
     ptrdiff_t offset = 0;
     
     if (offsetBaseType == FramePointerBased)
-        offset = -(static_cast<ptrdiff_t>(numberOfRegisters) * sizeof(void*));
+        offset = -(static_cast<ptrdiff_t>(numberOfRegisters) * sizeof(CPURegister));
 
     m_registers.reserveInitialCapacity(numberOfRegisters);
     registerSet.forEach([&] (Reg reg) {
         m_registers.append(RegisterAtOffset(reg, offset));
-        offset += sizeof(void*);
+        offset += sizeof(CPURegister);
     });
 }
 
index 0dd0454..d061b63 100644 (file)
@@ -76,22 +76,20 @@ void Data::performAssertions(VM& vm)
     // prepared to change LowLevelInterpreter.asm as well!!
 
 #if USE(JSVALUE64)
-    const ptrdiff_t PtrSize = 8;
     const ptrdiff_t CallFrameHeaderSlots = 5;
 #else // USE(JSVALUE64) // i.e. 32-bit version
-    const ptrdiff_t PtrSize = 4;
     const ptrdiff_t CallFrameHeaderSlots = 4;
 #endif
+    const ptrdiff_t MachineRegisterSize = sizeof(CPURegister);
     const ptrdiff_t SlotSize = 8;
 
-    STATIC_ASSERT(sizeof(void*) == PtrSize);
     STATIC_ASSERT(sizeof(Register) == SlotSize);
     STATIC_ASSERT(CallFrame::headerSizeInRegisters == CallFrameHeaderSlots);
 
     ASSERT(!CallFrame::callerFrameOffset());
-    STATIC_ASSERT(CallerFrameAndPC::sizeInRegisters == (PtrSize * 2) / SlotSize);
-    ASSERT(CallFrame::returnPCOffset() == CallFrame::callerFrameOffset() + PtrSize);
-    ASSERT(CallFrameSlot::codeBlock * sizeof(Register) == CallFrame::returnPCOffset() + PtrSize);
+    STATIC_ASSERT(CallerFrameAndPC::sizeInRegisters == (MachineRegisterSize * 2) / SlotSize);
+    ASSERT(CallFrame::returnPCOffset() == CallFrame::callerFrameOffset() + MachineRegisterSize);
+    ASSERT(CallFrameSlot::codeBlock * sizeof(Register) == CallFrame::returnPCOffset() + MachineRegisterSize);
     STATIC_ASSERT(CallFrameSlot::callee * sizeof(Register) == CallFrameSlot::codeBlock * sizeof(Register) + SlotSize);
     STATIC_ASSERT(CallFrameSlot::argumentCount * sizeof(Register) == CallFrameSlot::callee * sizeof(Register) + SlotSize);
     STATIC_ASSERT(CallFrameSlot::thisArgument * sizeof(Register) == CallFrameSlot::argumentCount * sizeof(Register) + SlotSize);
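
For readers tracking the offset arithmetic these assertions pin down, a hypothetical mirror of the header layout, assuming a JSVALUE64 target (arm64 or arm64_32) where machine registers and Register slots are both 8 bytes:

    constexpr long MachineRegisterSize = 8; // sizeof(CPURegister)
    constexpr long SlotSize = 8;            // sizeof(Register)
    constexpr long CallerFrame = 0;
    constexpr long ReturnPC = CallerFrame + MachineRegisterSize; // 8
    constexpr long CodeBlock = ReturnPC + MachineRegisterSize;   // 16
    constexpr long Callee = CodeBlock + SlotSize;                // 24
    static_assert((MachineRegisterSize * 2) / SlotSize == 2,
                  "CallerFrameAndPC occupies exactly two slots");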
index ba6d516..6b9b798 100644
 #define OFFLINE_ASM_JSVALUE64 0
 #endif
 
+#if CPU(ADDRESS64)
+#define OFFLINE_ASM_ADDRESS64 1
+#else
+#define OFFLINE_ASM_ADDRESS64 0
+#endif
+
 #if ENABLE(POISON)
 #define OFFLINE_ASM_POISON 1
 #else
index 2b4350c..e8aa300 100644
@@ -157,9 +157,11 @@ const PtrSize = constexpr (sizeof(void*))
 
 if JSVALUE64
     const CallFrameHeaderSlots = 5
+    const MachineRegisterSize = 8
 else
     const CallFrameHeaderSlots = 4
     const CallFrameAlignSlots = 1
+    const MachineRegisterSize = 4
 end
 const SlotSize = 8
 
@@ -170,11 +172,11 @@ const StackAlignment = 16
 const StackAlignmentSlots = 2
 const StackAlignmentMask = StackAlignment - 1
 
-const CallerFrameAndPCSize = 2 * PtrSize
+const CallerFrameAndPCSize = constexpr (sizeof(CallerFrameAndPC))
 
 const CallerFrame = 0
-const ReturnPC = CallerFrame + PtrSize
-const CodeBlock = ReturnPC + PtrSize
+const ReturnPC = CallerFrame + MachineRegisterSize
+const CodeBlock = ReturnPC + MachineRegisterSize
 const Callee = CodeBlock + SlotSize
 const ArgumentCount = Callee + SlotSize
 const ThisArgumentOffset = ArgumentCount + SlotSize
@@ -294,35 +296,35 @@ if JSVALUE64
     end
 
     macro loadisFromInstruction(offset, dest)
-        loadis offset * 8[PB, PC, 8], dest
+        loadis offset * PtrSize[PB, PC, PtrSize], dest
     end
     
     macro loadpFromInstruction(offset, dest)
-        loadp offset * 8[PB, PC, 8], dest
+        loadp offset * PtrSize[PB, PC, PtrSize], dest
     end
 
     macro loadisFromStruct(offset, dest)
-        loadis offset[PB, PC, 8], dest
+        loadis offset[PB, PC, PtrSize], dest
     end
 
     macro loadpFromStruct(offset, dest)
-        loadp offset[PB, PC, 8], dest
+        loadp offset[PB, PC, PtrSize], dest
     end
 
     macro storeisToInstruction(value, offset)
-        storei value, offset * 8[PB, PC, 8]
+        storei value, offset * PtrSize[PB, PC, PtrSize]
     end
 
     macro storepToInstruction(value, offset)
-        storep value, offset * 8[PB, PC, 8]
+        storep value, offset * PtrSize[PB, PC, PtrSize]
     end
 
     macro storeisFromStruct(value, offset)
-        storei value, offset[PB, PC, 8]
+        storei value, offset[PB, PC, PtrSize]
     end
 
     macro storepFromStruct(value, offset)
-        storep value, offset[PB, PC, 8]
+        storep value, offset[PB, PC, PtrSize]
     end
 
 else
@@ -574,7 +576,7 @@ elsif X86 or X86_WIN
     const CalleeSaveRegisterCount = 3
 end
 
-const CalleeRegisterSaveSize = CalleeSaveRegisterCount * PtrSize
+const CalleeRegisterSaveSize = CalleeSaveRegisterCount * MachineRegisterSize
 
 # VMEntryTotalFrameSize includes the space for struct VMEntryRecord and the
 # callee save registers rounded up to keep the stack aligned
@@ -697,16 +699,16 @@ macro copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(vm, temp)
         vmEntryRecord(temp, temp)
         leap VMEntryRecord::calleeSaveRegistersBuffer[temp], temp
         if ARM64 or ARM64E
-            storep csr0, [temp]
-            storep csr1, 8[temp]
-            storep csr2, 16[temp]
-            storep csr3, 24[temp]
-            storep csr4, 32[temp]
-            storep csr5, 40[temp]
-            storep csr6, 48[temp]
-            storep csr7, 56[temp]
-            storep csr8, 64[temp]
-            storep csr9, 72[temp]
+            storeq csr0, [temp]
+            storeq csr1, 8[temp]
+            storeq csr2, 16[temp]
+            storeq csr3, 24[temp]
+            storeq csr4, 32[temp]
+            storeq csr5, 40[temp]
+            storeq csr6, 48[temp]
+            storeq csr7, 56[temp]
+            storeq csr8, 64[temp]
+            storeq csr9, 72[temp]
             stored csfr0, 80[temp]
             stored csfr1, 88[temp]
             stored csfr2, 96[temp]
@@ -716,19 +718,19 @@ macro copyCalleeSavesToVMEntryFrameCalleeSavesBuffer(vm, temp)
             stored csfr6, 128[temp]
             stored csfr7, 136[temp]
         elsif X86_64
-            storep csr0, [temp]
-            storep csr1, 8[temp]
-            storep csr2, 16[temp]
-            storep csr3, 24[temp]
-            storep csr4, 32[temp]
+            storeq csr0, [temp]
+            storeq csr1, 8[temp]
+            storeq csr2, 16[temp]
+            storeq csr3, 24[temp]
+            storeq csr4, 32[temp]
         elsif X86_64_WIN
-            storep csr0, [temp]
-            storep csr1, 8[temp]
-            storep csr2, 16[temp]
-            storep csr3, 24[temp]
-            storep csr4, 32[temp]
-            storep csr5, 40[temp]
-            storep csr6, 48[temp]
+            storeq csr0, [temp]
+            storeq csr1, 8[temp]
+            storeq csr2, 16[temp]
+            storeq csr3, 24[temp]
+            storeq csr4, 32[temp]
+            storeq csr5, 40[temp]
+            storeq csr6, 48[temp]
         end
     end
 end
@@ -739,16 +741,16 @@ macro restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(vm, temp)
         vmEntryRecord(temp, temp)
         leap VMEntryRecord::calleeSaveRegistersBuffer[temp], temp
         if ARM64 or ARM64E
-            loadp [temp], csr0
-            loadp 8[temp], csr1
-            loadp 16[temp], csr2
-            loadp 24[temp], csr3
-            loadp 32[temp], csr4
-            loadp 40[temp], csr5
-            loadp 48[temp], csr6
-            loadp 56[temp], csr7
-            loadp 64[temp], csr8
-            loadp 72[temp], csr9
+            loadq [temp], csr0
+            loadq 8[temp], csr1
+            loadq 16[temp], csr2
+            loadq 24[temp], csr3
+            loadq 32[temp], csr4
+            loadq 40[temp], csr5
+            loadq 48[temp], csr6
+            loadq 56[temp], csr7
+            loadq 64[temp], csr8
+            loadq 72[temp], csr9
             loadd 80[temp], csfr0
             loadd 88[temp], csfr1
             loadd 96[temp], csfr2
@@ -758,19 +760,19 @@ macro restoreCalleeSavesFromVMEntryFrameCalleeSavesBuffer(vm, temp)
             loadd 128[temp], csfr6
             loadd 136[temp], csfr7
         elsif X86_64
-            loadp [temp], csr0
-            loadp 8[temp], csr1
-            loadp 16[temp], csr2
-            loadp 24[temp], csr3
-            loadp 32[temp], csr4
+            loadq [temp], csr0
+            loadq 8[temp], csr1
+            loadq 16[temp], csr2
+            loadq 24[temp], csr3
+            loadq 32[temp], csr4
         elsif X86_64_WIN
-            loadp [temp], csr0
-            loadp 8[temp], csr1
-            loadp 16[temp], csr2
-            loadp 24[temp], csr3
-            loadp 32[temp], csr4
-            loadp 40[temp], csr5
-            loadp 48[temp], csr6
+            loadq [temp], csr0
+            loadq 8[temp], csr1
+            loadq 16[temp], csr2
+            loadq 24[temp], csr3
+            loadq 32[temp], csr4
+            loadq 40[temp], csr5
+            loadq 48[temp], csr6
         end
     end
 end
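
The storep/loadp to storeq/loadq switch in these two macros matters on arm64_32: a pointer-width access would move only 32 bits, while the callee-save registers are full 64-bit values and the buffer slots (now CPURegister-sized, per the VMEntryRecord hunk above) are 8 bytes apart. A sketch of the layout the fixed offsets assume:

    // Hypothetical mirror of calleeSaveRegistersBuffer on ARM64/ARM64E:
    // ten 8-byte GPR slots at byte offsets 0..72, then eight 8-byte FPR
    // slots at 80..136.
    #include <cstdint>
    uint64_t calleeSaveBufferSketch[10 + 8];
    static_assert(sizeof(calleeSaveBufferSketch) == 144,
                  "last slot begins at byte offset 136");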
@@ -884,9 +886,9 @@ macro prepareForTailCall(callee, temp1, temp2, temp3, callPtrTag)
     andi ~StackAlignmentMask, temp2
 
     if ARM or ARMv7_TRADITIONAL or ARMv7 or ARM64 or ARM64E or C_LOOP or MIPS
-        addp 2 * PtrSize, sp
-        subi 2 * PtrSize, temp2
-        loadp PtrSize[cfr], lr
+        addp CallerFrameAndPCSize, sp
+        subi CallerFrameAndPCSize, temp2
+        loadp CallerFrameAndPC::returnPC[cfr], lr
     else
         addp PtrSize, sp
         subi PtrSize, temp2
@@ -903,10 +905,17 @@ macro prepareForTailCall(callee, temp1, temp2, temp3, callPtrTag)
     loadp [cfr], cfr
 
 .copyLoop:
-    subi PtrSize, temp2
-    loadp [sp, temp2, 1], temp3
-    storep temp3, [temp1, temp2, 1]
-    btinz temp2, .copyLoop
+    if ARM64 and not ADDRESS64
+        subi MachineRegisterSize, temp2
+        loadq [sp, temp2, 1], temp3
+        storeq temp3, [temp1, temp2, 1]
+        btinz temp2, .copyLoop
+    else
+        subi PtrSize, temp2
+        loadp [sp, temp2, 1], temp3
+        storep temp3, [temp1, temp2, 1]
+        btinz temp2, .copyLoop
+    end
 
     move temp1, sp
     jmp callee, callPtrTag
@@ -1109,7 +1118,7 @@ macro prologue(codeBlockGetter, codeBlockSetter, osrSlowPath, traceSlowPath)
 
     if JSVALUE64
         move TagTypeNumber, tagTypeNumber
-        addp TagBitTypeOther, tagTypeNumber, tagMask
+        addq TagBitTypeOther, tagTypeNumber, tagMask
     end
 end
 
@@ -1263,7 +1272,7 @@ macro setEntryAddress(index, label)
     elsif ARM64 or ARM64E
         pcrtoaddr label, t1
         move index, t4
-        storep t1, [a0, t4, 8]
+        storep t1, [a0, t4, PtrSize]
     elsif ARM or ARMv7 or ARMv7_TRADITIONAL
         mvlbl (label - _relativePCBase), t4
         addp t4, t1, t4
index 5db450b..a7ad2c1 100644
@@ -24,7 +24,7 @@
 
 # Utilities.
 macro jumpToInstruction()
-    jmp [PB, PC, 8], BytecodePtrTag
+    jmp [PB, PC, PtrSize], BytecodePtrTag
 end
 
 macro dispatch(advance)
@@ -38,7 +38,7 @@ macro dispatchInt(advance)
 end
 
 macro dispatchIntIndirect(offset)
-    dispatchInt(offset * 8[PB, PC, 8])
+    dispatchInt(offset * PtrSize[PB, PC, PtrSize])
 end
 
 macro dispatchAfterCall()
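
The `offset * PtrSize[PB, PC, PtrSize]` operands above compute PB + (PC + offset) * PtrSize. In C++ terms, a sketch (treating the bytecode stream as an array of pointer-sized words based at PB and indexed by PC; instructionSlotAddress is illustrative, not a real helper):

    #include <cstddef>
    inline void* instructionSlotAddress(char* pb, std::size_t pc, std::size_t offset)
    {
        return pb + (pc + offset) * sizeof(void*);
    }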
@@ -300,13 +300,13 @@ _handleUncaughtException:
 
 
 macro prepareStateForCCall()
-    leap [PB, PC, 8], PC
+    leap [PB, PC, PtrSize], PC
 end
 
 macro restoreStateAfterCCall()
     move r0, PC
     subp PB, PC
-    rshiftp 3, PC
+    rshiftp constexpr (getLSBSet(sizeof(void*))), PC
 end
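
restoreStateAfterCCall now derives its shift from the pointer size instead of hard-coding 3: the C call returns PC as a byte offset from PB, and getLSBSet of a power of two is its log2, so the shift turns the offset back into a word index on both 8-byte and 4-byte pointer targets. A stand-in sketch (not WTF's actual getLSBSet):

    constexpr int lsbSetSketch(unsigned long v) // v must be nonzero
    {
        int i = 0;
        while (!(v & 1)) {
            v >>= 1;
            ++i;
        }
        return i;
    }
    static_assert(lsbSetSketch(8) == 3, "8-byte pointers: rshiftp 3");
    static_assert(lsbSetSketch(4) == 2, "4-byte pointers: rshiftp 2");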
 
 macro callSlowPath(slowPath)
@@ -487,7 +487,7 @@ macro structureIDToStructureWithScratch(structureIDThenStructure, scratch, scrat
     loadp CodeBlock::m_poisonedVM[scratch], scratch
     unpoison(_g_CodeBlockPoison, scratch, scratch2)
     loadp VM::heap + Heap::m_structureIDTable + StructureIDTable::m_table[scratch], scratch
-    loadp [scratch, structureIDThenStructure, 8], structureIDThenStructure
+    loadp [scratch, structureIDThenStructure, PtrSize], structureIDThenStructure
 end
 
 macro loadStructureWithScratch(cell, structure, scratch, scratch2)
@@ -549,7 +549,8 @@ macro functionArityCheck(doneLabel, slowPath)
     subp CalleeSaveSpaceAsVirtualRegisters * 8, t3
     addi CalleeSaveSpaceAsVirtualRegisters, t2
     move t1, t0
-    lshiftp 3, t0
+    # Adds to sp are always 64-bit on arm64, so we need to maintain t0's high bits.
+    lshiftq 3, t0
     addp t0, cfr
     addp t0, sp
 .copyLoop:
@@ -588,7 +589,7 @@ macro branchIfException(label)
     loadp Callee[cfr], t3
     andp MarkedBlockMask, t3
     loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
-    btqz VM::m_exception[t3], .noException
+    btpz VM::m_exception[t3], .noException
     jmp label
 .noException:
 end
@@ -1547,7 +1548,7 @@ _llint_op_put_by_id:
     loadi JSCell::m_structureID[t2], t2
     # Now, t1 has the Structure* and t2 has the StructureID that we want that Structure* to have.
     bineq t2, Structure::m_blob + StructureIDBlob::u.fields.structureID[t1], .opPutByIdSlow
-    addp 8, t3
+    addp PtrSize, t3
     loadq Structure::m_prototype[t1], t2
     bqneq t2, ValueNull, .opPutByIdTransitionChainLoop
 
@@ -1746,7 +1747,7 @@ macro contiguousPutByVal(storeCallback)
 
 .outOfBounds:
     biaeq t3, -sizeof IndexingHeader + IndexingHeader::u.lengths.vectorLength[t0], .opPutByValOutOfBounds
-    loadp 32[PB, PC, 8], t2
+    loadpFromInstruction(4, t2)
     storeb 1, ArrayProfile::m_mayStoreToHole[t2]
     addi 1, t3, t2
     storei t2, -sizeof IndexingHeader + IndexingHeader::u.lengths.publicLength[t0]
@@ -1770,8 +1771,8 @@ macro putByVal(slowPath)
     contiguousPutByVal(
         macro (operand, scratch, address)
             loadConstantOrVariable(operand, scratch)
-            bpb scratch, tagTypeNumber, .opPutByValSlow
-            storep scratch, address
+            bqb scratch, tagTypeNumber, .opPutByValSlow
+            storeq scratch, address
             writeBarrierOnOperands(1, 3)
         end)
 
@@ -1784,7 +1785,7 @@ macro putByVal(slowPath)
             ci2d scratch, ft0
             jmp .ready
         .notInt:
-            addp tagTypeNumber, scratch
+            addq tagTypeNumber, scratch
             fq2d scratch, ft0
             bdnequn ft0, ft0, .opPutByValSlow
         .ready:
@@ -1797,7 +1798,7 @@ macro putByVal(slowPath)
     contiguousPutByVal(
         macro (operand, scratch, address)
             loadConstantOrVariable(operand, scratch)
-            storep scratch, address
+            storeq scratch, address
             writeBarrierOnOperands(1, 3)
         end)
 
@@ -1906,12 +1907,12 @@ _llint_op_jneq_ptr:
     loadisFromInstruction(2, t1)
     loadp CodeBlock[cfr], t2
     loadp CodeBlock::m_globalObject[t2], t2
-    loadp JSGlobalObject::m_specialPointers[t2, t1, 8], t1
+    loadp JSGlobalObject::m_specialPointers[t2, t1, PtrSize], t1
     bpneq t1, [cfr, t0, 8], .opJneqPtrTarget
     dispatch(5)
 
 .opJneqPtrTarget:
-    storei 1, 32[PB, PC, 8]
+    storeisToInstruction(1, 4)
     dispatchIntIndirect(3)
 
 
@@ -2134,7 +2135,7 @@ _llint_op_catch:
     unpoison(_g_CodeBlockPoison, PB, t2)
     loadp VM::targetInterpreterPCForThrow[t3], PC
     subp PB, PC
-    rshiftp 3, PC
+    rshiftp constexpr (getLSBSet(sizeof(void*))), PC
 
     callSlowPath(_llint_slow_path_check_if_exception_is_uncatchable_and_notify_profiler)
     bpeq r1, 0, .isCatchableException
@@ -2145,8 +2146,8 @@ _llint_op_catch:
     andp MarkedBlockMask, t3
     loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
 
-    loadq VM::m_exception[t3], t0
-    storeq 0, VM::m_exception[t3]
+    loadp VM::m_exception[t3], t0
+    storep 0, VM::m_exception[t3]
     loadisFromInstruction(1, t2)
     storeq t0, [cfr, t2, 8]
 
@@ -2228,7 +2229,7 @@ macro nativeCallTrampoline(executableOffsetToFunction)
     andp MarkedBlockMask, t3
     loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
 
-    btqnz VM::m_exception[t3], .handleException
+    btpnz VM::m_exception[t3], .handleException
 
     functionEpilogue()
     ret
@@ -2271,7 +2272,7 @@ macro internalFunctionCallTrampoline(offsetOfFunction)
     andp MarkedBlockMask, t3
     loadp MarkedBlockFooterOffset + MarkedBlock::Footer::m_vm[t3], t3
 
-    btqnz VM::m_exception[t3], .handleException
+    btpnz VM::m_exception[t3], .handleException
 
     functionEpilogue()
     ret
@@ -2297,7 +2298,7 @@ end
 macro resolveScope()
     loadisFromInstruction(5, t2)
     loadisFromInstruction(2, t0)
-    loadp [cfr, t0, 8], t0
+    loadq [cfr, t0, 8], t0
     btiz t2, .resolveScopeLoopEnd
 
 .resolveScopeLoop:
@@ -2593,7 +2594,7 @@ _llint_op_put_to_scope:
 _llint_op_get_from_arguments:
     traceExecution()
     loadVariable(2, t0)
-    loadi 24[PB, PC, 8], t1
+    loadi 3 * PtrSize[PB, PC, PtrSize], t1
     loadq DirectArguments_storage[t0, t1, 8], t0
     valueProfile(t0, 4, t1)
     loadisFromInstruction(1, t1)
@@ -2604,7 +2605,7 @@ _llint_op_get_from_arguments:
 _llint_op_put_to_arguments:
     traceExecution()
     loadVariable(1, t0)
-    loadi 16[PB, PC, 8], t1
+    loadi 2 * PtrSize[PB, PC, PtrSize], t1
     loadisFromInstruction(3, t3)
     loadConstantOrVariable(t3, t2)
     storeq t2, DirectArguments_storage[t0, t1, 8]
index 61102c9..eeb90ed 100644
@@ -75,9 +75,12 @@ def arm64GPRName(name, kind)
     raise "bad GPR name #{name}" unless name =~ /^x/
     number = name[1..-1]
     case kind
-    when :int
+    when :word
         "w" + number
     when :ptr
+        prefix = $currentSettings["ADDRESS64"] ? "x" : "w"
+        prefix + number
+    when :quad
         "x" + number
     else
         raise "Wrong kind: #{kind}"
@@ -199,7 +202,7 @@ end
 class Address
     def arm64Operand(kind)
         raise "Invalid offset #{offset.value} at #{codeOriginString}" if offset.value < -255 or offset.value > 4095
-        "[#{base.arm64Operand(:ptr)}, \##{offset.value}]"
+        "[#{base.arm64Operand(:quad)}, \##{offset.value}]"
     end
     
     def arm64EmitLea(destination, kind)
@@ -210,7 +213,7 @@ end
 class BaseIndex
     def arm64Operand(kind)
         raise "Invalid offset #{offset.value} at #{codeOriginString}" if offset.value != 0
-        "[#{base.arm64Operand(:ptr)}, #{index.arm64Operand(:ptr)}, lsl \##{scaleShift}]"
+        "[#{base.arm64Operand(:quad)}, #{index.arm64Operand(:quad)}, lsl \##{scaleShift}]"
     end
 
     def arm64EmitLea(destination, kind)
@@ -233,23 +236,30 @@ end
 def arm64LowerMalformedLoadStoreAddresses(list)
     newList = []
 
-    def isAddressMalformed(operand)
-        operand.is_a? Address and not (-255..4095).include? operand.offset.value
+    def isAddressMalformed(opcode, operand)
+        malformed = false
+        if operand.is_a? Address
+            malformed ||= (not (-255..4095).include? operand.offset.value)
+            if opcode =~ /q$/ and $currentSettings["ADDRESS64"]
+                malformed ||= (operand.offset.value % 8 != 0)
+            end
+        end
+        malformed
     end
 
     list.each {
         | node |
         if node.is_a? Instruction
-            if node.opcode =~ /^store/ and isAddressMalformed(node.operands[1])
+            if node.opcode =~ /^store/ and isAddressMalformed(node.opcode, node.operands[1])
                 address = node.operands[1]
                 tmp = Tmp.new(codeOrigin, :gpr)
                 newList << Instruction.new(node.codeOrigin, "move", [address.offset, tmp])
-                newList << Instruction.new(node.codeOrigin, node.opcode, [node.operands[0], BaseIndex.new(node.codeOrigin, address.base, tmp, 1, Immediate.new(codeOrigin, 0))], node.annotation)
-            elsif node.opcode =~ /^load/ and isAddressMalformed(node.operands[0])
+                newList << Instruction.new(node.codeOrigin, node.opcode, [node.operands[0], BaseIndex.new(node.codeOrigin, address.base, tmp, Immediate.new(codeOrigin, 1), Immediate.new(codeOrigin, 0))], node.annotation)
+            elsif node.opcode =~ /^load/ and isAddressMalformed(node.opcode, node.operands[0])
                 address = node.operands[0]
                 tmp = Tmp.new(codeOrigin, :gpr)
                 newList << Instruction.new(node.codeOrigin, "move", [address.offset, tmp])
-                newList << Instruction.new(node.codeOrigin, node.opcode, [BaseIndex.new(node.codeOrigin, address.base, tmp, 1, Immediate.new(codeOrigin, 0)), node.operands[1]], node.annotation)
+                newList << Instruction.new(node.codeOrigin, node.opcode, [BaseIndex.new(node.codeOrigin, address.base, tmp, Immediate.new(codeOrigin, 1), Immediate.new(codeOrigin, 0)), node.operands[1]], node.annotation)
             else
                 newList << node
             end
@@ -285,6 +295,43 @@ def arm64LowerLabelReferences(list)
     newList
 end
 
+def arm64FixSpecialRegisterArithmeticMode(list)
+    newList = []
+    def usesSpecialRegister(node)
+        node.children.any? {
+            |operand|
+            if operand.is_a? RegisterID and operand.name =~ /sp/
+                true
+            elsif operand.is_a? Address or operand.is_a? BaseIndex
+                usesSpecialRegister(operand)
+            else
+                false
+            end
+        }
+    end
+
+    list.each {
+        | node |
+        if node.is_a? Instruction
+            case node.opcode
+            when "addp", "subp", "mulp", "divp", "leap"
+                if not $currentSettings["ADDRESS64"] and usesSpecialRegister(node)
+                    newOpcode = node.opcode.sub(/(.*)p/, '\1q')
+                    node = Instruction.new(node.codeOrigin, newOpcode, node.operands, node.annotation)
+                end
+            when /^bp/
+                if not $currentSettings["ADDRESS64"] and usesSpecialRegister(node)
+                    newOpcode = node.opcode.sub(/^bp(.*)/, 'bq\1')
+                    node = Instruction.new(node.codeOrigin, newOpcode, node.operands, node.annotation)
+                end
+            end
+        end
+        newList << node
+    }
+    newList
+end
+
 # Workaround for Cortex-A53 erratum (835769)
 def arm64CortexA53Fix835769(list)
     newList = []
@@ -318,7 +365,8 @@ class Sequence
         result = @list
         result = riscLowerNot(result)
         result = riscLowerSimpleBranchOps(result)
-        result = riscLowerHardBranchOps64(result)
+
+        result = $currentSettings["ADDRESS64"] ? riscLowerHardBranchOps64(result) : riscLowerHardBranchOps(result)
         result = riscLowerShiftOps(result)
         result = arm64LowerMalformedLoadStoreAddresses(result)
         result = arm64LowerLabelReferences(result)
@@ -337,7 +385,7 @@ class Sequence
                 "urshiftp", "urshiftq", "addp", "addq", "mulp", "mulq", "andp", "andq", "orp", "orq", "subp", "subq", "xorp", "xorq", "addd",
                 "divd", "subd", "muld", "sqrtd", /^bp/, /^bq/, /^btp/, /^btq/, /^cp/, /^cq/, /^tp/, /^tq/, /^bd/,
                 "jmp", "call", "leap", "leaq"
-                size = 8
+                size = $currentSettings["ADDRESS64"] ? 8 : 4
             else
                 raise "Bad instruction #{node.opcode} for heap access at #{node.codeOriginString}"
             end
@@ -368,6 +416,7 @@ class Sequence
             end
         }
         result = riscLowerTest(result)
+        result = arm64FixSpecialRegisterArithmeticMode(result)
         result = assignRegistersToTemporaries(result, :gpr, ARM64_EXTRA_GPRS)
         result = assignRegistersToTemporaries(result, :fpr, ARM64_EXTRA_FPRS)
         result = arm64CortexA53Fix835769(result)
@@ -449,10 +498,11 @@ end
 
 def emitARM64Access(opcode, opcodeNegativeOffset, register, memory, kind)
     if memory.is_a? Address and memory.offset.value < 0
+        raise unless -256 <= memory.offset.value
         $asm.puts "#{opcodeNegativeOffset} #{register.arm64Operand(kind)}, #{memory.arm64Operand(kind)}"
         return
     end
-    
+
     $asm.puts "#{opcode} #{register.arm64Operand(kind)}, #{memory.arm64Operand(kind)}"
 end
 
@@ -479,7 +529,7 @@ end
 
 def emitARM64Compare(operands, kind, compareCode)
     emitARM64Unflipped("subs #{arm64GPRName('xzr', kind)}, ", operands[0..-2], kind)
-    $asm.puts "csinc #{operands[-1].arm64Operand(:int)}, wzr, wzr, #{compareCode}"
+    $asm.puts "csinc #{operands[-1].arm64Operand(:word)}, wzr, wzr, #{compareCode}"
 end
 
 def emitARM64MoveImmediate(value, target)
@@ -491,13 +541,13 @@ def emitARM64MoveImmediate(value, target)
         next if currentValue == (isNegative ? 0xffff : 0) and (shift != 0 or !first)
         if first
             if isNegative
-                $asm.puts "movn #{target.arm64Operand(:ptr)}, \##{(~currentValue) & 0xffff}, lsl \##{shift}"
+                $asm.puts "movn #{target.arm64Operand(:quad)}, \##{(~currentValue) & 0xffff}, lsl \##{shift}"
             else
-                $asm.puts "movz #{target.arm64Operand(:ptr)}, \##{currentValue}, lsl \##{shift}"
+                $asm.puts "movz #{target.arm64Operand(:quad)}, \##{currentValue}, lsl \##{shift}"
             end
             first = false
         else
-            $asm.puts "movk #{target.arm64Operand(:ptr)}, \##{currentValue}, lsl \##{shift}"
+            $asm.puts "movk #{target.arm64Operand(:quad)}, \##{currentValue}, lsl \##{shift}"
         end
     }
 end
@@ -506,124 +556,127 @@ class Instruction
     def lowerARM64
         case opcode
         when 'addi'
-            emitARM64Add("add", operands, :int)
+            emitARM64Add("add", operands, :word)
         when 'addis'
-            emitARM64Add("adds", operands, :int)
+            emitARM64Add("adds", operands, :word)
         when 'addp'
             emitARM64Add("add", operands, :ptr)
         when 'addps'
             emitARM64Add("adds", operands, :ptr)
         when 'addq'
-            emitARM64Add("add", operands, :ptr)
+            emitARM64Add("add", operands, :quad)
         when "andi"
-            emitARM64TAC("and", operands, :int)
+            emitARM64TAC("and", operands, :word)
         when "andp"
             emitARM64TAC("and", operands, :ptr)
         when "andq"
-            emitARM64TAC("and", operands, :ptr)
+            emitARM64TAC("and", operands, :quad)
         when "ori"
-            emitARM64TAC("orr", operands, :int)
+            emitARM64TAC("orr", operands, :word)
         when "orp"
             emitARM64TAC("orr", operands, :ptr)
         when "orq"
-            emitARM64TAC("orr", operands, :ptr)
+            emitARM64TAC("orr", operands, :quad)
         when "xori"
-            emitARM64TAC("eor", operands, :int)
+            emitARM64TAC("eor", operands, :word)
         when "xorp"
             emitARM64TAC("eor", operands, :ptr)
         when "xorq"
-            emitARM64TAC("eor", operands, :ptr)
+            emitARM64TAC("eor", operands, :quad)
         when "lshifti"
-            emitARM64Shift("lslv", "ubfm", operands, :int) {
+            emitARM64Shift("lslv", "ubfm", operands, :word) {
                 | value |
                 [32 - value, 31 - value]
             }
         when "lshiftp"
             emitARM64Shift("lslv", "ubfm", operands, :ptr) {
                 | value |
-                [64 - value, 63 - value]
+                bitSize = $currentSettings["ADDRESS64"] ? 64 : 32
+                [bitSize - value, bitSize - 1 - value]
             }
         when "lshiftq"
-            emitARM64Shift("lslv", "ubfm", operands, :ptr) {
+            emitARM64Shift("lslv", "ubfm", operands, :quad) {
                 | value |
                 [64 - value, 63 - value]
             }
         when "rshifti"
-            emitARM64Shift("asrv", "sbfm", operands, :int) {
+            emitARM64Shift("asrv", "sbfm", operands, :word) {
                 | value |
                 [value, 31]
             }
         when "rshiftp"
             emitARM64Shift("asrv", "sbfm", operands, :ptr) {
                 | value |
-                [value, 63]
+                bitSize = $currentSettings["ADDRESS64"] ? 64 : 32
+                [value, bitSize - 1]
             }
         when "rshiftq"
-            emitARM64Shift("asrv", "sbfm", operands, :ptr) {
+            emitARM64Shift("asrv", "sbfm", operands, :quad) {
                 | value |
                 [value, 63]
             }
         when "urshifti"
-            emitARM64Shift("lsrv", "ubfm", operands, :int) {
+            emitARM64Shift("lsrv", "ubfm", operands, :word) {
                 | value |
                 [value, 31]
             }
         when "urshiftp"
             emitARM64Shift("lsrv", "ubfm", operands, :ptr) {
                 | value |
-                [value, 63]
+                bitSize = $currentSettings["ADDRESS64"] ? 64 : 32
+                [value, bitSize - 1]
             }
         when "urshiftq"
-            emitARM64Shift("lsrv", "ubfm", operands, :ptr) {
+            emitARM64Shift("lsrv", "ubfm", operands, :quad) {
                 | value |
                 [value, 63]
             }
         when "muli"
-            $asm.puts "madd #{arm64TACOperands(operands, :int)}, wzr"
+            $asm.puts "madd #{arm64TACOperands(operands, :word)}, wzr"
         when "mulp"
-            $asm.puts "madd #{arm64TACOperands(operands, :ptr)}, xzr"
+            $asm.puts "madd #{arm64TACOperands(operands, :ptr)}, #{arm64GPRName('xzr', :ptr)}"
         when "mulq"
-            $asm.puts "madd #{arm64TACOperands(operands, :ptr)}, xzr"
+            $asm.puts "madd #{arm64TACOperands(operands, :quad)}, xzr"
         when "subi"
-            emitARM64TAC("sub", operands, :int)
+            emitARM64TAC("sub", operands, :word)
         when "subp"
             emitARM64TAC("sub", operands, :ptr)
         when "subq"
-            emitARM64TAC("sub", operands, :ptr)
+            emitARM64TAC("sub", operands, :quad)
         when "subis"
-            emitARM64TAC("subs", operands, :int)
+            emitARM64TAC("subs", operands, :word)
         when "negi"
-            $asm.puts "sub #{operands[0].arm64Operand(:int)}, wzr, #{operands[0].arm64Operand(:int)}"
+            $asm.puts "sub #{operands[0].arm64Operand(:word)}, wzr, #{operands[0].arm64Operand(:word)}"
         when "negp"
-            $asm.puts "sub #{operands[0].arm64Operand(:ptr)}, xzr, #{operands[0].arm64Operand(:ptr)}"
+            $asm.puts "sub #{operands[0].arm64Operand(:ptr)}, #{arm64GPRName('xzr', :ptr)}, #{operands[0].arm64Operand(:ptr)}"
         when "negq"
-            $asm.puts "sub #{operands[0].arm64Operand(:ptr)}, xzr, #{operands[0].arm64Operand(:ptr)}"
+            $asm.puts "sub #{operands[0].arm64Operand(:quad)}, xzr, #{operands[0].arm64Operand(:quad)}"
         when "loadi"
-            emitARM64Access("ldr", "ldur", operands[1], operands[0], :int)
+            emitARM64Access("ldr", "ldur", operands[1], operands[0], :word)
         when "loadis"
-            emitARM64Access("ldrsw", "ldursw", operands[1], operands[0], :ptr)
+            emitARM64Access("ldrsw", "ldursw", operands[1], operands[0], :quad)
         when "loadp"
             emitARM64Access("ldr", "ldur", operands[1], operands[0], :ptr)
         when "loadq"
-            emitARM64Access("ldr", "ldur", operands[1], operands[0], :ptr)
+            emitARM64Access("ldr", "ldur", operands[1], operands[0], :quad)
         when "storei"
-            emitARM64Unflipped("str", operands, :int)
+            emitARM64Unflipped("str", operands, :word)
         when "storep"
             emitARM64Unflipped("str", operands, :ptr)
         when "storeq"
-            emitARM64Unflipped("str", operands, :ptr)
+            emitARM64Unflipped("str", operands, :quad)
         when "loadb"
-            emitARM64Access("ldrb", "ldurb", operands[1], operands[0], :int)
+            emitARM64Access("ldrb", "ldurb", operands[1], operands[0], :word)
         when "loadbs"
-            emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :int)
+            emitARM64Access("ldrsb", "ldursb", operands[1], operands[0], :word)
         when "storeb"
-            emitARM64Unflipped("strb", operands, :int)
+            emitARM64Unflipped("strb", operands, :word)
         when "loadh"
-            emitARM64Access("ldrh", "ldurh", operands[1], operands[0], :int)
+            emitARM64Access("ldrh", "ldurh", operands[1], operands[0], :word)
         when "loadhs"
-            emitARM64Access("ldrsh", "ldursh", operands[1], operands[0], :int)
+            emitARM64Access("ldrsh", "ldursh", operands[1], operands[0], :word)
         when "storeh"
-            emitARM64Unflipped("strh", operands, :int)
+            emitARM64Unflipped("strh", operands, :word)
         when "loadd"
             emitARM64Access("ldr", "ldur", operands[1], operands[0], :double)
         when "stored"
@@ -639,7 +692,7 @@ class Instruction
         when "sqrtd"
             emitARM64("fsqrt", operands, :double)
         when "ci2d"
-            emitARM64("scvtf", operands, [:int, :double])
+            emitARM64("scvtf", operands, [:word, :double])
         when "bdeq"
             emitARM64Branch("fcmp", operands, :double, "b.eq")
         when "bdneq"
@@ -675,7 +728,7 @@ class Instruction
             # currently does not use it.
             raise "ARM64 does not support this opcode yet, #{codeOriginString}"
         when "td2i"
-            emitARM64("fcvtzs", operands, [:double, :int])
+            emitARM64("fcvtzs", operands, [:double, :word])
         when "bcd2i"
             # FIXME: Remove this instruction, or use it and implement it. Currently it's not
             # used.
@@ -695,36 +748,36 @@ class Instruction
                 # So for example, if we did push(A, B, C, D), we would then pop(D, C, B, A).
                 # But since the ordering of arguments doesn't change on arm64 between the stp and ldp 
                 # instructions we need to flip flop the argument positions that were passed to us.
-                $asm.puts "ldp #{ops[1].arm64Operand(:ptr)}, #{ops[0].arm64Operand(:ptr)}, [sp], #16"
+                $asm.puts "ldp #{ops[1].arm64Operand(:quad)}, #{ops[0].arm64Operand(:quad)}, [sp], #16"
             }
         when "push"
             operands.each_slice(2) {
                 | ops |
-                $asm.puts "stp #{ops[0].arm64Operand(:ptr)}, #{ops[1].arm64Operand(:ptr)}, [sp, #-16]!"
+                $asm.puts "stp #{ops[0].arm64Operand(:quad)}, #{ops[1].arm64Operand(:quad)}, [sp, #-16]!"
             }
         when "move"
             if operands[0].immediate?
                 emitARM64MoveImmediate(operands[0].value, operands[1])
             else
-                emitARM64("mov", operands, :ptr)
+                emitARM64("mov", operands, :quad)
             end
         when "sxi2p"
-            emitARM64("sxtw", operands, [:int, :ptr])
+            emitARM64("sxtw", operands, [:word, :ptr])
         when "sxi2q"
-            emitARM64("sxtw", operands, [:int, :ptr])
+            emitARM64("sxtw", operands, [:word, :quad])
         when "zxi2p"
-            emitARM64("uxtw", operands, [:int, :ptr])
+            emitARM64("uxtw", operands, [:word, :ptr])
         when "zxi2q"
-            emitARM64("uxtw", operands, [:int, :ptr])
+            emitARM64("uxtw", operands, [:word, :quad])
         when "nop"
             $asm.puts "nop"
         when "bieq", "bbeq"
             if operands[0].immediate? and operands[0].value == 0
-                $asm.puts "cbz #{operands[1].arm64Operand(:int)}, #{operands[2].asmLabel}"
+                $asm.puts "cbz #{operands[1].arm64Operand(:word)}, #{operands[2].asmLabel}"
             elsif operands[1].immediate? and operands[1].value == 0
-                $asm.puts "cbz #{operands[0].arm64Operand(:int)}, #{operands[2].asmLabel}"
+                $asm.puts "cbz #{operands[0].arm64Operand(:word)}, #{operands[2].asmLabel}"
             else
-                emitARM64Branch("subs wzr, ", operands, :int, "b.eq")
+                emitARM64Branch("subs wzr, ", operands, :word, "b.eq")
             end
         when "bpeq"
             if operands[0].immediate? and operands[0].value == 0
@@ -732,23 +785,23 @@ class Instruction
             elsif operands[1].immediate? and operands[1].value == 0
                 $asm.puts "cbz #{operands[0].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
             else
-                emitARM64Branch("subs xzr, ", operands, :ptr, "b.eq")
+                emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.eq")
             end
         when "bqeq"
             if operands[0].immediate? and operands[0].value == 0
-                $asm.puts "cbz #{operands[1].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
+                $asm.puts "cbz #{operands[1].arm64Operand(:quad)}, #{operands[2].asmLabel}"
             elsif operands[1].immediate? and operands[1].value == 0
-                $asm.puts "cbz #{operands[0].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
+                $asm.puts "cbz #{operands[0].arm64Operand(:quad)}, #{operands[2].asmLabel}"
             else
-                emitARM64Branch("subs xzr, ", operands, :ptr, "b.eq")
+                emitARM64Branch("subs xzr, ", operands, :quad, "b.eq")
             end
         when "bineq", "bbneq"
             if operands[0].immediate? and operands[0].value == 0
-                $asm.puts "cbnz #{operands[1].arm64Operand(:int)}, #{operands[2].asmLabel}"
+                $asm.puts "cbnz #{operands[1].arm64Operand(:word)}, #{operands[2].asmLabel}"
             elsif operands[1].immediate? and operands[1].value == 0
-                $asm.puts "cbnz #{operands[0].arm64Operand(:int)}, #{operands[2].asmLabel}"
+                $asm.puts "cbnz #{operands[0].arm64Operand(:word)}, #{operands[2].asmLabel}"
             else
-                emitARM64Branch("subs wzr, ", operands, :int, "b.ne")
+                emitARM64Branch("subs wzr, ", operands, :word, "b.ne")
             end
         when "bpneq"
             if operands[0].immediate? and operands[0].value == 0
@@ -756,152 +809,152 @@ class Instruction
             elsif operands[1].immediate? and operands[1].value == 0
                 $asm.puts "cbnz #{operands[0].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
             else
-                emitARM64Branch("subs xzr, ", operands, :ptr, "b.ne")
+                emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.ne")
             end
         when "bqneq"
             if operands[0].immediate? and operands[0].value == 0
-                $asm.puts "cbnz #{operands[1].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
+                $asm.puts "cbnz #{operands[1].arm64Operand(:quad)}, #{operands[2].asmLabel}"
             elsif operands[1].immediate? and operands[1].value == 0
-                $asm.puts "cbnz #{operands[0].arm64Operand(:ptr)}, #{operands[2].asmLabel}"
+                $asm.puts "cbnz #{operands[0].arm64Operand(:quad)}, #{operands[2].asmLabel}"
             else
-                emitARM64Branch("subs xzr, ", operands, :ptr, "b.ne")
+                emitARM64Branch("subs xzr, ", operands, :quad, "b.ne")
             end
         when "bia", "bba"
-            emitARM64Branch("subs wzr, ", operands, :int, "b.hi")
+            emitARM64Branch("subs wzr, ", operands, :word, "b.hi")
         when "bpa"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.hi")
+            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.hi")
         when "bqa"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.hi")
+            emitARM64Branch("subs xzr, ", operands, :quad, "b.hi")
         when "biaeq", "bbaeq"
-            emitARM64Branch("subs wzr, ", operands, :int, "b.hs")
+            emitARM64Branch("subs wzr, ", operands, :word, "b.hs")
         when "bpaeq"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.hs")
+            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.hs")
         when "bqaeq"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.hs")
+            emitARM64Branch("subs xzr, ", operands, :quad, "b.hs")
         when "bib", "bbb"
-            emitARM64Branch("subs wzr, ", operands, :int, "b.lo")
+            emitARM64Branch("subs wzr, ", operands, :word, "b.lo")
         when "bpb"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.lo")
+            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.lo")
         when "bqb"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.lo")
+            emitARM64Branch("subs xzr, ", operands, :quad, "b.lo")
         when "bibeq", "bbbeq"
-            emitARM64Branch("subs wzr, ", operands, :int, "b.ls")
+            emitARM64Branch("subs wzr, ", operands, :word, "b.ls")
         when "bpbeq"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.ls")
+            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.ls")
         when "bqbeq"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.ls")
+            emitARM64Branch("subs xzr, ", operands, :quad, "b.ls")
         when "bigt", "bbgt"
-            emitARM64Branch("subs wzr, ", operands, :int, "b.gt")
+            emitARM64Branch("subs wzr, ", operands, :word, "b.gt")
         when "bpgt"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.gt")
+            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.gt")
         when "bqgt"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.gt")
+            emitARM64Branch("subs xzr, ", operands, :quad, "b.gt")
         when "bigteq", "bbgteq"
-            emitARM64Branch("subs wzr, ", operands, :int, "b.ge")
+            emitARM64Branch("subs wzr, ", operands, :word, "b.ge")
         when "bpgteq"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.ge")
+            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.ge")
         when "bqgteq"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.ge")
+            emitARM64Branch("subs xzr, ", operands, :quad, "b.ge")
         when "bilt", "bblt"
-            emitARM64Branch("subs wzr, ", operands, :int, "b.lt")
+            emitARM64Branch("subs wzr, ", operands, :word, "b.lt")
         when "bplt"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.lt")
+            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.lt")
         when "bqlt"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.lt")
+            emitARM64Branch("subs xzr, ", operands, :quad, "b.lt")
         when "bilteq", "bblteq"
-            emitARM64Branch("subs wzr, ", operands, :int, "b.le")
+            emitARM64Branch("subs wzr, ", operands, :word, "b.le")
         when "bplteq"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.le")
+            emitARM64Branch("subs #{arm64GPRName('xzr', :ptr)}, ", operands, :ptr, "b.le")
         when "bqlteq"
-            emitARM64Branch("subs xzr, ", operands, :ptr, "b.le")
+            emitARM64Branch("subs xzr, ", operands, :quad, "b.le")
         when "jmp"
             if operands[0].label?
                 $asm.puts "b #{operands[0].asmLabel}"
             else
-                emitARM64Unflipped("br", operands, :ptr)
+                emitARM64Unflipped("br", operands, :quad)
             end
         when "call"
             if operands[0].label?
                 $asm.puts "bl #{operands[0].asmLabel}"
             else
-                emitARM64Unflipped("blr", operands, :ptr)
+                emitARM64Unflipped("blr", operands, :quad)
             end
         when "break"
             $asm.puts "brk \#0"
         when "ret"
             $asm.puts "ret"
         when "cieq", "cbeq"
-            emitARM64Compare(operands, :int, "ne")
+            emitARM64Compare(operands, :word, "ne")
         when "cpeq"
             emitARM64Compare(operands, :ptr, "ne")
         when "cqeq"
-            emitARM64Compare(operands, :ptr, "ne")
+            emitARM64Compare(operands, :quad, "ne")
         when "cineq", "cbneq"
-            emitARM64Compare(operands, :int, "eq")
+            emitARM64Compare(operands, :word, "eq")
         when "cpneq"
             emitARM64Compare(operands, :ptr, "eq")
         when "cqneq"
-            emitARM64Compare(operands, :ptr, "eq")
+            emitARM64Compare(operands, :quad, "eq")
         when "cia", "cba"
-            emitARM64Compare(operands, :int, "ls")
+            emitARM64Compare(operands, :word, "ls")
         when "cpa"
             emitARM64Compare(operands, :ptr, "ls")
         when "cqa"
-            emitARM64Compare(operands, :ptr, "ls")
+            emitARM64Compare(operands, :quad, "ls")
         when "ciaeq", "cbaeq"
-            emitARM64Compare(operands, :int, "lo")
+            emitARM64Compare(operands, :word, "lo")
         when "cpaeq"
             emitARM64Compare(operands, :ptr, "lo")
         when "cqaeq"
-            emitARM64Compare(operands, :ptr, "lo")
+            emitARM64Compare(operands, :quad, "lo")
         when "cib", "cbb"
-            emitARM64Compare(operands, :int, "hs")
+            emitARM64Compare(operands, :word, "hs")
         when "cpb"
             emitARM64Compare(operands, :ptr, "hs")
         when "cqb"
-            emitARM64Compare(operands, :ptr, "hs")
+            emitARM64Compare(operands, :quad, "hs")
         when "cibeq", "cbbeq"
-            emitARM64Compare(operands, :int, "hi")
+            emitARM64Compare(operands, :word, "hi")
         when "cpbeq"
             emitARM64Compare(operands, :ptr, "hi")
         when "cqbeq"
-            emitARM64Compare(operands, :ptr, "hi")
+            emitARM64Compare(operands, :quad, "hi")
         when "cilt", "cblt"
-            emitARM64Compare(operands, :int, "ge")
+            emitARM64Compare(operands, :word, "ge")
         when "cplt"
             emitARM64Compare(operands, :ptr, "ge")
         when "cqlt"
-            emitARM64Compare(operands, :ptr, "ge")
+            emitARM64Compare(operands, :quad, "ge")
         when "cilteq", "cblteq"
-            emitARM64Compare(operands, :int, "gt")
+            emitARM64Compare(operands, :word, "gt")
         when "cplteq"
             emitARM64Compare(operands, :ptr, "gt")
         when "cqlteq"
-            emitARM64Compare(operands, :ptr, "gt")
+            emitARM64Compare(operands, :quad, "gt")
         when "cigt", "cbgt"
-            emitARM64Compare(operands, :int, "le")
+            emitARM64Compare(operands, :word, "le")
         when "cpgt"
             emitARM64Compare(operands, :ptr, "le")
         when "cqgt"
-            emitARM64Compare(operands, :ptr, "le")
+            emitARM64Compare(operands, :quad, "le")
         when "cigteq", "cbgteq"
-            emitARM64Compare(operands, :int, "lt")
+            emitARM64Compare(operands, :word, "lt")
         when "cpgteq"
             emitARM64Compare(operands, :ptr, "lt")
         when "cqgteq"
-            emitARM64Compare(operands, :ptr, "lt")
+            emitARM64Compare(operands, :quad, "lt")
         when "peek"
-            $asm.puts "ldr #{operands[1].arm64Operand(:ptr)}, [sp, \##{operands[0].value * 8}]"
+            $asm.puts "ldr #{operands[1].arm64Operand(:quad)}, [sp, \##{operands[0].value * 8}]"
         when "poke"
-            $asm.puts "str #{operands[1].arm64Operand(:ptr)}, [sp, \##{operands[0].value * 8}]"
+            $asm.puts "str #{operands[1].arm64Operand(:quad)}, [sp, \##{operands[0].value * 8}]"
         when "fp2d"
             emitARM64("fmov", operands, [:ptr, :double])
         when "fq2d"
-            emitARM64("fmov", operands, [:ptr, :double])
+            emitARM64("fmov", operands, [:quad, :double])
         when "fd2p"
             emitARM64("fmov", operands, [:double, :ptr])
         when "fd2q"
-            emitARM64("fmov", operands, [:double, :ptr])
+            emitARM64("fmov", operands, [:double, :quad])
         when "bo"
             $asm.puts "b.vs #{operands[0].asmLabel}"
         when "bs"
@@ -911,17 +964,17 @@ class Instruction
         when "bnz"
             $asm.puts "b.ne #{operands[0].asmLabel}"
         when "leai"
-            operands[0].arm64EmitLea(operands[1], :int)
+            operands[0].arm64EmitLea(operands[1], :word)
         when "leap"
             operands[0].arm64EmitLea(operands[1], :ptr)
         when "leaq"
-            operands[0].arm64EmitLea(operands[1], :ptr)
+            operands[0].arm64EmitLea(operands[1], :quad)
         when "smulli"
-            $asm.puts "smaddl #{operands[2].arm64Operand(:ptr)}, #{operands[0].arm64Operand(:int)}, #{operands[1].arm64Operand(:int)}, xzr"
+            $asm.puts "smaddl #{operands[2].arm64Operand(:quad)}, #{operands[0].arm64Operand(:word)}, #{operands[1].arm64Operand(:word)}, xzr"
         when "memfence"
             $asm.puts "dmb sy"
         when "pcrtoaddr"
-            $asm.puts "adr #{operands[1].arm64Operand(:ptr)}, #{operands[0].value}"
+            $asm.puts "adr #{operands[1].arm64Operand(:quad)}, #{operands[0].value}"
         when "nopCortexA53Fix835769"
             $asm.putStr("#if CPU(ARM64_CORTEXA53)")
             $asm.puts "nop"
@@ -933,14 +986,14 @@ class Instruction
             # the labels required for the .loh directive.
             $asm.putStr("#if OS(DARWIN)")
             $asm.puts "L_offlineasm_loh_adrp_#{uid}:"
-            $asm.puts "adrp #{operands[1].arm64Operand(:ptr)}, #{operands[0].asmLabel}@GOTPAGE"
+            $asm.puts "adrp #{operands[1].arm64Operand(:quad)}, #{operands[0].asmLabel}@GOTPAGE"
             $asm.puts "L_offlineasm_loh_ldr_#{uid}:"
-            $asm.puts "ldr #{operands[1].arm64Operand(:ptr)}, [#{operands[1].arm64Operand(:ptr)}, #{operands[0].asmLabel}@GOTPAGEOFF]"
+            $asm.puts "ldr #{operands[1].arm64Operand(:quad)}, [#{operands[1].arm64Operand(:quad)}, #{operands[0].asmLabel}@GOTPAGEOFF]"
 
             # On Linux, use ELF GOT relocation specifiers.
             $asm.putStr("#elif OS(LINUX)")
-            $asm.puts "adrp #{operands[1].arm64Operand(:ptr)}, :got:#{operands[0].asmLabel}"
-            $asm.puts "ldr #{operands[1].arm64Operand(:ptr)}, [#{operands[1].arm64Operand(:ptr)}, :got_lo12:#{operands[0].asmLabel}]"
+            $asm.puts "adrp #{operands[1].arm64Operand(:quad)}, :got:#{operands[0].asmLabel}"
+            $asm.puts "ldr #{operands[1].arm64Operand(:quad)}, [#{operands[1].arm64Operand(:quad)}, :got_lo12:#{operands[0].asmLabel}]"
 
             # Throw a compiler error everywhere else.
             $asm.putStr("#else")
index 0604149..bfae515 100644
@@ -389,6 +389,7 @@ File.open(outputFlnm, "w") {
             lowLevelAST = lowLevelAST.resolve(buildOffsetsMap(lowLevelAST, offsetsList))
             lowLevelAST.validate
             emitCodeInConfiguration(concreteSettings, lowLevelAST, backend) {
+                $currentSettings = concreteSettings
                 $asm.inAsm {
                     lowLevelAST.lower(backend)
                 }
index bfe866b..f4f9c3e 100644
@@ -811,12 +811,16 @@ class BaseIndex < Node
         @base = base
         @index = index
         @scale = scale
-        raise unless [1, 2, 4, 8].member? @scale
         @offset = offset
     end
-    
+
+    def scaleValue
+        raise unless [1, 2, 4, 8].member? scale.value
+        scale.value
+    end
+
     def scaleShift
-        case scale
+        case scaleValue
         when 1
             0
         when 2
@@ -826,7 +830,7 @@ class BaseIndex < Node
         when 8
             3
         else
-            raise "Bad scale at #{codeOriginString}"
+            raise "Bad scale: #{scale.value} at #{codeOriginString}"
         end
     end
     
@@ -839,11 +843,11 @@ class BaseIndex < Node
     end
     
     def mapChildren
-        BaseIndex.new(codeOrigin, (yield @base), (yield @index), @scale, (yield @offset))
+        BaseIndex.new(codeOrigin, (yield @base), (yield @index), (yield @scale), (yield @offset))
     end
     
     def dump
-        "#{offset.dump}[#{base.dump}, #{index.dump}, #{scale}]"
+        "#{offset.dump}[#{base.dump}, #{index.dump}, #{scale.value}]"
     end
     
     def address?
index 4549ab1..ff065d5 100644
@@ -86,6 +86,7 @@ def canonicalizeBackendNames(backendNames)
         backendName = backendName.upcase
         if backendName =~ /ARM.*/
             backendName.sub!(/ARMV7(S?)(.*)/) { | _ | 'ARMv7' + $1.downcase + $2 }
+            backendName = "ARM64" if backendName == "ARM64_32"
         end
         backendName = "X86" if backendName == "I386"
         newBackendNames << backendName
index cf8a107..c1138ef 100644
@@ -363,6 +363,23 @@ class Parser
         @idx += 1
         result
     end
+
+    def parseConstExpr
+        if @tokens[@idx] == "constexpr"
+            @idx += 1
+            skipNewLine
+            if @tokens[@idx] == "("
+                codeOrigin, text = parseTextInParens
+                text = text.join
+            else
+                codeOrigin, text = parseColonColon
+                text = text.join("::")
+            end
+            ConstExpr.forName(codeOrigin, text)
+        else
+            parseError
+        end
+    end
     
     def parseAddress(offset)
         parseError unless @tokens[@idx] == "["
@@ -387,13 +404,18 @@ class Parser
             @idx += 1
             b = parseVariable
             if @tokens[@idx] == "]"
-                result = BaseIndex.new(codeOrigin, a, b, 1, offset)
+                result = BaseIndex.new(codeOrigin, a, b, Immediate.new(codeOrigin, 1), offset)
             else
                 parseError unless @tokens[@idx] == ","
                 @idx += 1
-                parseError unless ["1", "2", "4", "8"].member? @tokens[@idx].string
-                c = @tokens[@idx].string.to_i
-                @idx += 1
+                if ["1", "2", "4", "8"].member? @tokens[@idx].string
+                    c = Immediate.new(codeOrigin, @tokens[@idx].string.to_i)
+                    @idx += 1
+                elsif @tokens[@idx] == "constexpr"
+                    c = parseConstExpr
+                else
+                    c = parseVariable
+                end
                 parseError unless @tokens[@idx] == "]"
                 result = BaseIndex.new(codeOrigin, a, b, c, offset)
             end
@@ -478,16 +500,7 @@ class Parser
             codeOrigin, names = parseColonColon
             Sizeof.forName(codeOrigin, names.join('::'))
         elsif @tokens[@idx] == "constexpr"
-            @idx += 1
-            skipNewLine
-            if @tokens[@idx] == "("
-                codeOrigin, text = parseTextInParens
-                text = text.join
-            else
-                codeOrigin, text = parseColonColon
-                text = text.join("::")
-            end
-            ConstExpr.forName(codeOrigin, text)
+            parseConstExpr
         elsif isLabel @tokens[@idx]
             result = LabelReference.new(@tokens[@idx].codeOrigin, Label.forName(@tokens[@idx].codeOrigin, @tokens[@idx].string))
             @idx += 1
index a059bc7..99ffb8a 100644
@@ -421,9 +421,9 @@ class BaseIndex
     
     def x86AddressOperand(addressKind)
         if !isIntelSyntax
-            "#{offset.value}(#{base.x86Operand(addressKind)}, #{index.x86Operand(addressKind)}, #{scale})"
+            "#{offset.value}(#{base.x86Operand(addressKind)}, #{index.x86Operand(addressKind)}, #{scaleValue})"
         else
-            "#{getSizeString(addressKind)}[#{offset.value} + #{base.x86Operand(addressKind)} + #{index.x86Operand(addressKind)} * #{scale}]"
+            "#{getSizeString(addressKind)}[#{offset.value} + #{base.x86Operand(addressKind)} + #{index.x86Operand(addressKind)} * #{scaleValue}]"
         end
     end
     
@@ -431,7 +431,7 @@ class BaseIndex
         if !isIntelSyntax
             x86AddressOperand(:ptr)
         else
-            "#{getSizeString(kind)}[#{offset.value} + #{base.x86Operand(:ptr)} + #{index.x86Operand(:ptr)} * #{scale}]"
+            "#{getSizeString(kind)}[#{offset.value} + #{base.x86Operand(:ptr)} + #{index.x86Operand(:ptr)} * #{scaleValue}]"
         end
     end
 
index d97b037..7dabf73 100644
@@ -74,15 +74,17 @@ Vector<std::pair<int, int>> BasicBlockLocation::getExecutedRanges() const
 void BasicBlockLocation::dumpData() const
 {
     Vector<Gap> executedRanges = getExecutedRanges();
-    for (Gap gap : executedRanges)
-        dataLogF("\tBasicBlock: [%d, %d] hasExecuted: %s, executionCount:%zu\n", gap.first, gap.second, hasExecuted() ? "true" : "false", m_executionCount);
+    for (Gap gap : executedRanges) {
+        dataLogF("\tBasicBlock: [%d, %d] hasExecuted: %s, executionCount:", gap.first, gap.second, hasExecuted() ? "true" : "false");
+        dataLogLn(m_executionCount);
+    }
 }
 
 #if ENABLE(JIT)
 #if USE(JSVALUE64)
 void BasicBlockLocation::emitExecuteCode(CCallHelpers& jit) const
 {
-    static_assert(sizeof(size_t) == 8, "Assuming size_t is 64 bits on 64 bit platforms.");
+    static_assert(sizeof(UCPURegister) == 8, "Assuming UCPURegister is 64 bits on 64-bit platforms.");
     jit.add64(CCallHelpers::TrustedImm32(1), CCallHelpers::AbsoluteAddress(&m_executionCount));
 }
 #else
index 12fb476..e70bea6 100644
@@ -62,8 +62,8 @@ private:
 
     int m_startOffset;
     int m_endOffset;
-    size_t m_executionCount;
     Vector<Gap> m_gaps;
+    UCPURegister m_executionCount;
 };
 
 } // namespace JSC
index adadc89..0612eb7 100644
@@ -33,7 +33,7 @@ namespace JSC {
 
 class HasOwnPropertyCache {
     static const uint32_t size = 2 * 1024;
-    static_assert(!(size & (size - 1)), "size should be a power of two.");
+    static_assert(hasOneBitSet(size), "size should be a power of two.");
 public:
     static const uint32_t mask = size - 1;
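
The rewritten assertion relies on WTF's hasOneBitSet; assuming it is the usual power-of-two predicate, a sketch of its equivalence with the old bit trick:

    #include <cstdint>
    constexpr bool hasOneBitSetSketch(uint32_t v) { return v && !(v & (v - 1)); }
    static_assert(hasOneBitSetSketch(2 * 1024), "size is a power of two");
    static_assert(!hasOneBitSetSketch(2 * 1024 + 1), "non-powers are rejected");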
 
index 3a6c1b7..0f9a4f4 100644
@@ -226,11 +226,8 @@ inline bool JSBigInt::isZero()
 }
 
 // Multiplies {this} by {factor} and adds {summand} to the result.
-inline void JSBigInt::inplaceMultiplyAdd(uintptr_t factor, uintptr_t summand)
+void JSBigInt::inplaceMultiplyAdd(Digit factor, Digit summand)
 {
-    STATIC_ASSERT(sizeof(factor) == sizeof(Digit));
-    STATIC_ASSERT(sizeof(summand) == sizeof(Digit));
-
     internalMultiplyAdd(this, factor, summand, length(), this);
 }
 
@@ -563,8 +560,8 @@ inline JSBigInt::Digit JSBigInt::digitDiv(Digit high, Digit low, Digit divisor,
     // left operand". We mask the right operand of the shift by {shiftMask} (`digitBits - 1`), which makes `digitBits - 0` zero.
     // This shifting produces a value covering the cases 0 < {s} <= (digitBits - 1). {s} == digitBits never happens, as asserted above.
     // Since {sZeroMask} clears the value when {s} == 0, the {s} == 0 case is also covered.
-    STATIC_ASSERT(sizeof(intptr_t) == sizeof(Digit));
-    Digit sZeroMask = static_cast<Digit>((-static_cast<intptr_t>(s)) >> (digitBits - 1));
+    STATIC_ASSERT(sizeof(CPURegister) == sizeof(Digit));
+    Digit sZeroMask = static_cast<Digit>((-static_cast<CPURegister>(s)) >> (digitBits - 1));
     static constexpr unsigned shiftMask = digitBits - 1;
     Digit un32 = (high << s) | ((low >> ((digitBits - s) & shiftMask)) & sZeroMask);
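
The sZeroMask line is the subtle part of this hunk: negating {s} and arithmetic-shifting right by digitBits - 1 smears the sign bit into an all-ones mask whenever {s} is nonzero. A worked sketch, assuming 64-bit digits and the usual arithmetic behavior for signed right shifts:

    #include <cstdint>

    using Digit = uint64_t;
    constexpr unsigned digitBits = 64;

    constexpr Digit sZeroMask(unsigned s)
    {
        // s != 0: -(int64_t)s is negative, so shifting right by 63 yields all ones.
        // s == 0: the shift leaves zero, masking out the {low} contribution above.
        return static_cast<Digit>(-static_cast<int64_t>(s) >> (digitBits - 1));
    }

    static_assert(sZeroMask(0) == 0, "s == 0 clears the low half");
    static_assert(sZeroMask(13) == ~Digit(0), "s != 0 keeps it");

Switching the STATIC_ASSERT from intptr_t to CPURegister is what keeps this correct on arm64_32, where Digit is a full 8-byte register but intptr_t is only 4 bytes.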
 
index b40f239..805a115 100644 (file)
@@ -25,6 +25,7 @@
 
 #pragma once
 
+#include "CPU.h"
 #include "ExceptionHelpers.h"
 #include "JSObject.h"
 #include "ParseInt.h"
@@ -118,7 +119,7 @@ public:
 
 private:
 
-    using Digit = uintptr_t;
+    using Digit = UCPURegister;
     static constexpr unsigned bitsPerByte = 8;
     static constexpr unsigned digitBits = sizeof(Digit) * bitsPerByte;
     static constexpr unsigned halfDigitBits = digitBits / 2;
index 2523b79..4d5fc83 100644 (file)
@@ -1074,7 +1074,7 @@ private:
     PropertyOffset prepareToPutDirectWithoutTransition(VM&, PropertyName, unsigned attributes, StructureID, Structure*);
 
     AuxiliaryBarrier<Butterfly*> m_butterfly;
-#if USE(JSVALUE32_64)
+#if CPU(ADDRESS32)
     unsigned m_32BitPadding;
 #endif
 };
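
Note the condition change: with this patch arm64_32 takes the USE(JSVALUE64) path (see the Platform.h hunk below) while still having 4-byte pointers, so keying the padding off USE(JSVALUE32_64) would now miss it. A compile-time sketch of why the padding exists, assuming a 32-bit address space and illustrative field names:

    #include <cstdint>

    struct HeaderSketch {
        uint64_t cellHeader;  // stand-in for the 8-byte JSCell header
        uint32_t butterfly;   // AuxiliaryBarrier<Butterfly*> is 4 bytes on CPU(ADDRESS32)
        uint32_t padding32;   // the m_32BitPadding slot above
    };

    // Without padding32 the header could end at 12 bytes, and the 8-byte
    // inline JSValue slots appended after it would lose their alignment.
    static_assert(sizeof(HeaderSketch) == 16, "inline storage stays 8-byte aligned");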
index 132818c..d2852b0 100644 (file)
@@ -219,6 +219,11 @@ static int32_t computePriorityDeltaOfWorkerThreads(int32_t twoCorePriorityDelta,
     return multiCorePriorityDelta;
 }
 
+static bool jitEnabledByDefault()
+{
+    return is32Bit() || isAddress64Bit();
+}
+
 static unsigned computeNumberOfGCMarkers(unsigned maxNumberOfGCMarkers)
 {
     return computeNumberOfWorkerThreads(maxNumberOfGCMarkers);
index 9865bfb..7c8f7b8 100644 (file)
@@ -130,7 +130,7 @@ constexpr bool enableWebAssemblyStreamingApi = false;
     v(optionString, configFile, nullptr, Normal, "file to configure JSC options and logging location") \
     \
     v(bool, useLLInt,  true, Normal, "allows the LLINT to be used if true") \
-    v(bool, useJIT,    true, Normal, "allows the executable pages to be allocated for JIT and thunks if true") \
+    v(bool, useJIT, jitEnabledByDefault(), Normal, "allows the executable pages to be allocated for JIT and thunks if true") \
     v(bool, useBaselineJIT, true, Normal, "allows the baseline JIT to be used if true") \
     v(bool, useDFGJIT, true, Normal, "allows the DFG JIT to be used if true") \
     v(bool, useRegExpJIT, true, Normal, "allows the RegExp JIT to be used if true") \
index 47d0d32..e1ba09c 100644 (file)
@@ -494,10 +494,10 @@ void RegExp::matchCompareWithInterpreter(const String& s, int startOffset, int*
             snprintf(jit8BitMatchAddr, jitAddrSize, "fallback    ");
             snprintf(jit16BitMatchAddr, jitAddrSize, "----      ");
         } else {
-            snprintf(jit8BitMatchOnlyAddr, jitAddrSize, "0x%014lx", reinterpret_cast<unsigned long int>(codeBlock.get8BitMatchOnlyAddr()));
-            snprintf(jit16BitMatchOnlyAddr, jitAddrSize, "0x%014lx", reinterpret_cast<unsigned long int>(codeBlock.get16BitMatchOnlyAddr()));
-            snprintf(jit8BitMatchAddr, jitAddrSize, "0x%014lx", reinterpret_cast<unsigned long int>(codeBlock.get8BitMatchAddr()));
-            snprintf(jit16BitMatchAddr, jitAddrSize, "0x%014lx", reinterpret_cast<unsigned long int>(codeBlock.get16BitMatchAddr()));
+            snprintf(jit8BitMatchOnlyAddr, jitAddrSize, "0x%014lx", reinterpret_cast<uintptr_t>(codeBlock.get8BitMatchOnlyAddr()));
+            snprintf(jit16BitMatchOnlyAddr, jitAddrSize, "0x%014lx", reinterpret_cast<uintptr_t>(codeBlock.get16BitMatchOnlyAddr()));
+            snprintf(jit8BitMatchAddr, jitAddrSize, "0x%014lx", reinterpret_cast<uintptr_t>(codeBlock.get8BitMatchAddr()));
+            snprintf(jit16BitMatchAddr, jitAddrSize, "0x%014lx", reinterpret_cast<uintptr_t>(codeBlock.get16BitMatchAddr()));
         }
 #else
         const char* jit8BitMatchOnlyAddr = "JIT Off";
index ff07dba..2848ae1 100644 (file)
@@ -228,7 +228,7 @@ public:
 
             if (isCFrame()) {
                 RELEASE_ASSERT(!LLInt::isLLIntPC(frame()->callerFrame));
-                stackTrace[m_depth] = UnprocessedStackFrame(frame()->pc);
+                stackTrace[m_depth] = UnprocessedStackFrame(frame()->returnPC);
                 m_depth++;
             } else
                 recordJSFrame(stackTrace);
index 4cc59a8..c07cc19 100644 (file)
@@ -25,6 +25,7 @@
 
 #pragma once
 
+#include "CPU.h"
 #include <wtf/StdLibExtras.h>
 
 namespace JSC {
@@ -34,22 +35,23 @@ namespace JSC {
 // 'extern "C"') needs to be POD; hence putting any constructors into it could cause either compiler
 // warnings, or worse, a change in the ABI used to return these types.
 struct SlowPathReturnType {
-    void* a;
-    void* b;
+    CPURegister a;
+    CPURegister b;
 };
+static_assert(sizeof(SlowPathReturnType) >= sizeof(void*) * 2, "SlowPathReturnType must be big enough to hold two pointers");
 
 inline SlowPathReturnType encodeResult(void* a, void* b)
 {
     SlowPathReturnType result;
-    result.a = a;
-    result.b = b;
+    result.a = reinterpret_cast<CPURegister>(a);
+    result.b = reinterpret_cast<CPURegister>(b);
     return result;
 }
 
 inline void decodeResult(SlowPathReturnType result, void*& a, void*& b)
 {
-    a = result.a;
-    b = result.b;
+    a = reinterpret_cast<void*>(result.a);
+    b = reinterpret_cast<void*>(result.b);
 }
 
 #else // USE(JSVALUE32_64)
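
SlowPathReturnType is how slow-path calls hand two values back to the LLInt in the two return registers. Widening the fields from void* to CPURegister keeps the struct exactly two registers wide on arm64_32, where a pointer is only half a register. A round-trip usage sketch under the definitions above (addresses are illustrative):

    void* pc = reinterpret_cast<void*>(0x1000);
    void* callee = reinterpret_cast<void*>(0x2000);

    SlowPathReturnType result = encodeResult(pc, callee); // widen into registers
    void* a;
    void* b;
    decodeResult(result, a, b);                           // narrow back to pointers
    // a == pc && b == callee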
index ef753f5..4165ea4 100644 (file)
@@ -321,10 +321,10 @@ void SigillCrashAnalyzer::dumpCodeBlock(CodeBlock* codeBlock, void* machinePC)
     while (byteCount) {
         char pcString[24];
         if (currentPC == machinePC) {
-            snprintf(pcString, sizeof(pcString), "* 0x%lx", reinterpret_cast<unsigned long>(currentPC));
+            snprintf(pcString, sizeof(pcString), "* 0x%lx", reinterpret_cast<uintptr_t>(currentPC));
             log("%20s: %s    <=========================", pcString, m_arm64Opcode.disassemble(currentPC));
         } else {
-            snprintf(pcString, sizeof(pcString), "0x%lx", reinterpret_cast<unsigned long>(currentPC));
+            snprintf(pcString, sizeof(pcString), "0x%lx", reinterpret_cast<uintptr_t>(currentPC));
             log("%20s: %s", pcString, m_arm64Opcode.disassemble(currentPC));
         }
         currentPC++;
index a45b6bc..787596b 100644 (file)
@@ -1,3 +1,16 @@
+2018-10-15  Keith Miller  <keith_miller@apple.com>
+
+        Support arm64 CPUs with a 32-bit address space
+        https://bugs.webkit.org/show_bug.cgi?id=190273
+
+        Reviewed by Michael Saboff.
+
+        Use WTF_CPU_ADDRESS64/32 to decide whether the system is running on arm64_32.
+
+        * wtf/MathExtras.h:
+        (getLSBSet):
+        * wtf/Platform.h:
+
 2018-10-15  Timothy Hatcher  <timothy@apple.com>
 
         Add support for prefers-color-scheme media query
index 3dfbbe2..1b6978d 100644 (file)
@@ -206,7 +206,7 @@ template<typename T> constexpr bool hasTwoOrMoreBitsSet(T value)
     return !hasZeroOrOneBitsSet(value);
 }
 
-template <typename T> inline unsigned getLSBSet(T value)
+template <typename T> constexpr unsigned getLSBSet(T value)
 {
     typedef typename std::make_unsigned<T>::type UnsignedT;
     unsigned result = 0;
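
getLSBSet becomes constexpr so compile-time callers, such as a static_assert pairing it with hasOneBitSet, can use it; the loop body shown above is otherwise unchanged. A self-contained sketch of the loop form, assuming the value has at least one bit set:

    #include <type_traits>

    template<typename T>
    constexpr unsigned getLSBSet(T value)
    {
        using UnsignedT = typename std::make_unsigned<T>::type;
        unsigned result = 0;
        for (UnsignedT v = static_cast<UnsignedT>(value); !(v & 1); v >>= 1)
            ++result;
        return result;
    }

    static_assert(getLSBSet(16) == 4, "16 == 1 << 4");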
index c424501..bc92823 100644 (file)
 #endif
 
 #if !defined(USE_JSVALUE64) && !defined(USE_JSVALUE32_64)
-#if CPU(ADDRESS64)
+#if CPU(ADDRESS64) || CPU(ARM64)
 #define USE_JSVALUE64 1
 #else
 #define USE_JSVALUE32_64 1
 
 /* The JIT is enabled by default on all x86, x86-64, ARM, ARM64 & MIPS platforms except ARMv7k. */
 #if !defined(ENABLE_JIT) \
-    && (CPU(X86) || CPU(X86_64) || CPU(ARM) || (CPU(ARM64) && !defined(__ILP32__)) || CPU(MIPS)) \
+    && (CPU(X86) || CPU(X86_64) || CPU(ARM) || CPU(ARM64) || CPU(MIPS)) \
     && !CPU(APPLE_ARMV7K)
 #define ENABLE_JIT 1
 #endif
 #endif
 
 #if !defined(ENABLE_WEBASSEMBLY)
-#if ENABLE(B3_JIT) && PLATFORM(COCOA)
+#if ENABLE(B3_JIT) && PLATFORM(COCOA) && CPU(ADDRESS64)
 #define ENABLE_WEBASSEMBLY 1
 #else
 #define ENABLE_WEBASSEMBLY 0
index 8052ee5..0c17bcf 100644 (file)
@@ -1,3 +1,15 @@
+2018-10-15  Keith Miller  <keith_miller@apple.com>
+
+        Support arm64 CPUs with a 32-bit address space
+        https://bugs.webkit.org/show_bug.cgi?id=190273
+
+        Reviewed by Michael Saboff.
+
+        Fix a missing namespace qualification.
+
+        * cssjit/SelectorCompiler.cpp:
+        (WebCore::SelectorCompiler::SelectorCodeGenerator::generateAddStyleRelation):
+
 2018-10-15  Justin Fan  <justin_fan@apple.com>
 
         Add WebGPU 2018 feature flag and experimental feature flag
index e3eb5f7..803d825 100644 (file)
@@ -2232,7 +2232,7 @@ void SelectorCodeGenerator::generateAddStyleRelation(Assembler::RegisterID check
         static_assert(1 << 4 == 16, "");
         m_assembler.lshiftPtr(Assembler::TrustedImm32(4), sizeAndTarget);
 #else
-        m_assembler.mul32(TrustedImm32(sizeof(Style::Relation)), sizeAndTarget, sizeAndTarget);
+        m_assembler.mul32(Assembler::TrustedImm32(sizeof(Style::Relation)), sizeAndTarget, sizeAndTarget);
 #endif
         m_assembler.addPtr(dataAddress, sizeAndTarget);
     };
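
The fix itself just adds the Assembler:: qualification; the surrounding logic is the interesting part. On the 64-bit path the multiply by sizeof(Style::Relation) is strength-reduced to lshiftPtr by 4, while this fallback emits a real mul32. A sketch of the equivalence being relied on, assuming sizeof(Style::Relation) == 16 as the static_assert above implies:

    #include <cstdint>

    constexpr uint32_t scaleByShift(uint32_t index) { return index << 4; } // lshiftPtr path
    constexpr uint32_t scaleByMul(uint32_t index) { return index * 16; }   // mul32 fallback

    static_assert(scaleByShift(3) == scaleByMul(3), "shift and multiply agree");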