[Win64] ASM LLINT is not enabled.
author    commit-queue@webkit.org <commit-queue@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Wed, 25 Jun 2014 16:37:19 +0000 (16:37 +0000)
committer commit-queue@webkit.org <commit-queue@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Wed, 25 Jun 2014 16:37:19 +0000 (16:37 +0000)
https://bugs.webkit.org/show_bug.cgi?id=130638

Source/JavaScriptCore:
This patch adds and implements a new LLINT assembler backend for Win64.
It adjusts the code to follow the Win64 ABI spec where needed.
Also, LLINT and JIT are enabled for Win64.

Patch by peavo@outlook.com <peavo@outlook.com> on 2014-06-25
Reviewed by Mark Lam.

* JavaScriptCore.vcxproj/JavaScriptCore.vcxproj: Added JITStubsMSVC64.asm.
* JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters: Ditto.
* JavaScriptCore.vcxproj/jsc/jscCommon.props: Increased stack size to avoid stack overflow in tests.
* JavaScriptCore.vcxproj/LLInt/LLIntAssembly/build-LLIntAssembly.sh: Generate assembler source file for Win64.
* assembler/MacroAssemblerX86_64.h:
(JSC::MacroAssemblerX86_64::call): Follow Win64 ABI spec.
* jit/JITStubsMSVC64.asm: Added.
* jit/Repatch.cpp:
(JSC::emitPutTransitionStub): Compile fix.
* jit/ThunkGenerators.cpp:
(JSC::nativeForGenerator): Follow Win64 ABI spec.
* llint/LLIntData.cpp:
(JSC::LLInt::Data::performAssertions): Ditto.
* llint/LLIntOfflineAsmConfig.h: Enable new llint backend for Win64.
* llint/LowLevelInterpreter.asm: Implement new Win64 backend, and follow Win64 ABI spec.
* llint/LowLevelInterpreter64.asm: Ditto.
* offlineasm/asm.rb: Compile fix.
* offlineasm/backends.rb: Add new llint backend for Win64.
* offlineasm/settings.rb: Compile fix.
* offlineasm/x86.rb: Implement new llint Win64 backend.

Source/WTF:
Patch by peavo@outlook.com <peavo@outlook.com> on 2014-06-25
Reviewed by Mark Lam.

* wtf/Platform.h: Enable LLINT and JIT for Win64.

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@170428 268f45cc-cd09-0410-ab3c-d52691b4dbfc

19 files changed:
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj
Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters
Source/JavaScriptCore/JavaScriptCore.vcxproj/LLInt/LLIntAssembly/build-LLIntAssembly.sh
Source/JavaScriptCore/JavaScriptCore.vcxproj/jsc/jscCommon.props
Source/JavaScriptCore/assembler/MacroAssemblerX86_64.h
Source/JavaScriptCore/jit/JITStubsMSVC64.asm [new file with mode: 0644]
Source/JavaScriptCore/jit/Repatch.cpp
Source/JavaScriptCore/jit/ThunkGenerators.cpp
Source/JavaScriptCore/llint/LLIntData.cpp
Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h
Source/JavaScriptCore/llint/LowLevelInterpreter.asm
Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
Source/JavaScriptCore/offlineasm/asm.rb
Source/JavaScriptCore/offlineasm/backends.rb
Source/JavaScriptCore/offlineasm/settings.rb
Source/JavaScriptCore/offlineasm/x86.rb
Source/WTF/ChangeLog
Source/WTF/wtf/Platform.h

index 6cbf5c4..ba335aa 100644 (file)
@@ -1,3 +1,35 @@
+2014-06-25  peavo@outlook.com  <peavo@outlook.com>
+
+        [Win64] ASM LLINT is not enabled.
+        https://bugs.webkit.org/show_bug.cgi?id=130638
+
+        This patch adds and implements a new LLINT assembler backend for Win64.
+        It adjusts the code to follow the Win64 ABI spec where needed.
+        Also, LLINT and JIT are enabled for Win64.
+
+        Reviewed by Mark Lam.
+
+        * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj: Added JITStubsMSVC64.asm.
+        * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj.filters: Ditto.
+        * JavaScriptCore.vcxproj/jsc/jscCommon.props: Increased stack size to avoid stack overflow in tests.
+        * JavaScriptCore.vcxproj/LLInt/LLIntAssembly/build-LLIntAssembly.sh: Generate assembler source file for Win64.
+        * assembler/MacroAssemblerX86_64.h: 
+        (JSC::MacroAssemblerX86_64::call): Follow Win64 ABI spec.
+        * jit/JITStubsMSVC64.asm: Added.
+        * jit/Repatch.cpp:
+        (JSC::emitPutTransitionStub): Compile fix.
+        * jit/ThunkGenerators.cpp:
+        (JSC::nativeForGenerator): Follow Win64 ABI spec.
+        * llint/LLIntData.cpp:
+        (JSC::LLInt::Data::performAssertions): Ditto.
+        * llint/LLIntOfflineAsmConfig.h: Enable new llint backend for Win64.
+        * llint/LowLevelInterpreter.asm: Implement new Win64 backend, and follow Win64 ABI spec.
+        * llint/LowLevelInterpreter64.asm: Ditto.
+        * offlineasm/asm.rb: Compile fix.
+        * offlineasm/backends.rb: Add new llint backend for Win64.
+        * offlineasm/settings.rb: Compile fix.
+        * offlineasm/x86.rb: Implement new llint Win64 backend.
+
 2014-06-25  Laszlo Gombos  <l.gombos@samsung.com>
 
         Remove build guard for progress element
index b551947..19e2bd4 100644 (file)
       <UseSafeExceptionHandlers Condition="'$(Configuration)|$(Platform)'=='Debug_WinCairo|Win32'">true</UseSafeExceptionHandlers>
       <UseSafeExceptionHandlers Condition="'$(Configuration)|$(Platform)'=='Release_WinCairo|Win32'">true</UseSafeExceptionHandlers>
     </MASM>
+    <MASM Include="..\jit\JITStubsMSVC64.asm">
+      <ExcludedFromBuild Condition="'$(Configuration)|$(Platform)'=='Debug_WinCairo|Win32'">true</ExcludedFromBuild>
+      <ExcludedFromBuild Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">true</ExcludedFromBuild>
+      <ExcludedFromBuild Condition="'$(Configuration)|$(Platform)'=='DebugSuffix|Win32'">true</ExcludedFromBuild>
+      <ExcludedFromBuild Condition="'$(Configuration)|$(Platform)'=='Release_WinCairo|Win32'">true</ExcludedFromBuild>
+      <ExcludedFromBuild Condition="'$(Configuration)|$(Platform)'=='Production|Win32'">true</ExcludedFromBuild>
+      <ExcludedFromBuild Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">true</ExcludedFromBuild>
+    </MASM>
   </ItemGroup>
   <Import Project="$(VCTargetsPath)\Microsoft.Cpp.targets" />
   <ImportGroup Label="ExtensionTargets">
     <Import Project="$(VCTargetsPath)\BuildCustomizations\masm.targets" />
   </ImportGroup>
-</Project>
+</Project>
\ No newline at end of file
index 9e24166..70ce26b 100644 (file)
   </ItemGroup>
   <ItemGroup>
     <MASM Include="$(ConfigurationBuildDir)\obj$(PlatformArchitecture)\$(ProjectName)\DerivedSources\LowLevelInterpreterWin.asm" />
+    <MASM Include="..\jit\JITStubsMSVC64.asm">
+      <Filter>jit</Filter>
+    </MASM>
   </ItemGroup>
-</Project>
+</Project>
\ No newline at end of file
index a35e2c7..747ef39 100644 (file)
@@ -25,13 +25,8 @@ cd "${BUILT_PRODUCTS_DIR}/JavaScriptCore/DerivedSources"
 
 printf "END" > LowLevelInterpreterWin.asm
 
-# Win32 is using the LLINT x86 backend, and should generate an assembler file.
-# Win64 is using the LLINT C backend, and should generate a header file.
-
-if [ "${PLATFORMARCHITECTURE}" == "32" ]; then
-    OUTPUTFILENAME="LowLevelInterpreterWin.asm"
-else
-    OUTPUTFILENAME="LLIntAssembly.h"
-fi
+# If you want to enable the LLINT C loop, set OUTPUTFILENAME to "LLIntAssembly.h"
+
+OUTPUTFILENAME="LowLevelInterpreterWin.asm"
 
 /usr/bin/env ruby "${SRCROOT}/offlineasm/asm.rb" "-I." "${SRCROOT}/llint/LowLevelInterpreter.asm" "${BUILT_PRODUCTS_DIR}/LLIntOffsetsExtractor/LLIntOffsetsExtractor${3}.exe" "${OUTPUTFILENAME}" || exit 1
index d4eec54..8545424 100644 (file)
@@ -16,6 +16,7 @@
       <ModuleDefinitionFile>
       </ModuleDefinitionFile>
       <SubSystem>Console</SubSystem>
+      <StackReserveSize>2097152</StackReserveSize>
     </Link>
   </ItemDefinitionGroup>
   <ItemGroup />
index bdfbc2c..a93a8f6 100644 (file)
@@ -153,8 +153,33 @@ public:
 
     Call call()
     {
+#if OS(WINDOWS)
+        // The JIT relies on the CallerFrame (frame pointer) being put on the stack.
+        // On Win64 we need to manually copy the frame pointer to the stack, since MSVC may not maintain a frame pointer on 64-bit.
+        // See http://msdn.microsoft.com/en-us/library/9z1stfyw.aspx where it's stated that rbp MAY be used as a frame pointer.
+        store64(X86Registers::ebp, Address(X86Registers::esp, -16));
+
+        // On Windows we need to copy the arguments that don't fit in registers to the stack locations where the callee expects to find them.
+        // We don't know the number of arguments at this point, so arguments 5, 6, ... must always be copied.
+
+        // Copy argument 5
+        load64(Address(X86Registers::esp, 4 * sizeof(int64_t)), scratchRegister);
+        store64(scratchRegister, Address(X86Registers::esp, -4 * sizeof(int64_t)));
+
+        // Copy argument 6
+        load64(Address(X86Registers::esp, 5 * sizeof(int64_t)), scratchRegister);
+        store64(scratchRegister, Address(X86Registers::esp, -3 * sizeof(int64_t)));
+
+        // We also need to allocate the shadow space on the stack for the 4 parameter registers.
+        // Also, we should allocate 16 bytes for the frame pointer and return address (not populated).
+        // In addition, we need to allocate 16 bytes for two more parameters, since the call can have up to 6 parameters.
+        sub64(TrustedImm32(8 * sizeof(int64_t)), X86Registers::esp);
+#endif
         DataLabelPtr label = moveWithPatch(TrustedImmPtr(0), scratchRegister);
         Call result = Call(m_assembler.call(scratchRegister), Call::Linkable);
+#if OS(WINDOWS)
+        add64(TrustedImm32(8 * sizeof(int64_t)), X86Registers::esp);
+#endif
         ASSERT_UNUSED(label, differenceBetween(label, result) == REPTACH_OFFSET_CALL_R11);
         return result;
     }
diff --git a/Source/JavaScriptCore/jit/JITStubsMSVC64.asm b/Source/JavaScriptCore/jit/JITStubsMSVC64.asm
new file mode 100644 (file)
index 0000000..d073a24
--- /dev/null
@@ -0,0 +1,44 @@
+;/*
+; Copyright (C) 2014 Apple Inc. All rights reserved.
+;
+; Redistribution and use in source and binary forms, with or without
+; modification, are permitted provided that the following conditions
+; are met:
+; 1. Redistributions of source code must retain the above copyright
+;    notice, this list of conditions and the following disclaimer.
+; 2. Redistributions in binary form must reproduce the above copyright
+;    notice, this list of conditions and the following disclaimer in the
+;    documentation and/or other materials provided with the distribution.
+;
+; THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+; EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+; IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+; PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+; CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+; EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+; PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+; PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+; OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+; (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+; OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+;*/
+
+EXTERN getHostCallReturnValueWithExecState : near
+
+PUBLIC getHostCallReturnValue
+
+_TEXT   SEGMENT
+
+getHostCallReturnValue PROC
+    mov rcx, rbp
+    ; Allocate space for all 4 parameter registers, and align the stack pointer to a 16-byte boundary by allocating another 8 bytes.
+    ; The stack alignment is needed to fix a crash in the CRT library on a floating point instruction.
+    sub rsp, 40
+    call getHostCallReturnValueWithExecState
+    add rsp, 40
+    ret
+getHostCallReturnValue ENDP
+
+_TEXT   ENDS
+
+END
index 379ee4c..96b18c4 100644 (file)
@@ -1082,7 +1082,12 @@ static void emitPutTransitionStub(
     ASSERT(oldStructure->typeInfo().type() == structure->typeInfo().type());
     ASSERT(oldStructure->typeInfo().inlineTypeFlags() == structure->typeInfo().inlineTypeFlags());
     ASSERT(oldStructure->indexingType() == structure->indexingType());
-    stubJit.store32(MacroAssembler::TrustedImm32(reinterpret_cast<uint32_t>(structure->id())), MacroAssembler::Address(baseGPR, JSCell::structureIDOffset()));
+#if USE(JSVALUE64)
+    uint32_t val = structure->id();
+#else
+    uint32_t val = reinterpret_cast<uint32_t>(structure->id());
+#endif
+    stubJit.store32(MacroAssembler::TrustedImm32(val), MacroAssembler::Address(baseGPR, JSCell::structureIDOffset()));
 #if USE(JSVALUE64)
     if (isInlineOffset(slot.cachedOffset()))
         stubJit.store64(valueGPR, MacroAssembler::Address(baseGPR, JSObject::offsetOfInlineStorage() + offsetInInlineStorage(slot.cachedOffset()) * sizeof(JSValue)));
index 98d11fb..317fc6b 100644 (file)
@@ -315,14 +315,15 @@ static MacroAssemblerCodeRef nativeForGenerator(VM* vm, CodeSpecializationKind k
     // Host function signature: f(ExecState*);
     jit.move(JSInterfaceJIT::callFrameRegister, X86Registers::ecx);
 
-    // Leave space for the callee parameter home addresses and align the stack.
-    jit.subPtr(JSInterfaceJIT::TrustedImm32(4 * sizeof(int64_t) + 16 - sizeof(int64_t)), JSInterfaceJIT::stackPointerRegister);
+    // Leave space for the callee parameter home addresses.
+    // At this point the stack is aligned to 16 bytes, but if this changes at some point, we need to emit code to align it.
+    jit.subPtr(JSInterfaceJIT::TrustedImm32(4 * sizeof(int64_t)), JSInterfaceJIT::stackPointerRegister);
 
     jit.emitGetFromCallFrameHeaderPtr(JSStack::Callee, X86Registers::edx);
     jit.loadPtr(JSInterfaceJIT::Address(X86Registers::edx, JSFunction::offsetOfExecutable()), X86Registers::r9);
     jit.call(JSInterfaceJIT::Address(X86Registers::r9, executableOffsetToFunction));
 
-    jit.addPtr(JSInterfaceJIT::TrustedImm32(4 * sizeof(int64_t) + 16 - sizeof(int64_t)), JSInterfaceJIT::stackPointerRegister);
+    jit.addPtr(JSInterfaceJIT::TrustedImm32(4 * sizeof(int64_t)), JSInterfaceJIT::stackPointerRegister);
 #endif
 
 #elif CPU(ARM64)
@@ -398,12 +399,18 @@ static MacroAssemblerCodeRef nativeForGenerator(VM* vm, CodeSpecializationKind k
     jit.loadPtr(JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister), JSInterfaceJIT::regT0);
     jit.push(JSInterfaceJIT::regT0);
 #else
+#if OS(WINDOWS)
+    // Allocate space on stack for the 4 parameter registers.
+    jit.subPtr(JSInterfaceJIT::TrustedImm32(4 * sizeof(int64_t)), JSInterfaceJIT::stackPointerRegister);
+#endif
     jit.loadPtr(JSInterfaceJIT::Address(JSInterfaceJIT::callFrameRegister), JSInterfaceJIT::argumentGPR0);
 #endif
     jit.move(JSInterfaceJIT::TrustedImmPtr(FunctionPtr(operationVMHandleException).value()), JSInterfaceJIT::regT3);
     jit.call(JSInterfaceJIT::regT3);
 #if CPU(X86) && USE(JSVALUE32_64)
     jit.addPtr(JSInterfaceJIT::TrustedImm32(16), JSInterfaceJIT::stackPointerRegister);
+#elif OS(WINDOWS)
+    jit.addPtr(JSInterfaceJIT::TrustedImm32(4 * sizeof(int64_t)), JSInterfaceJIT::stackPointerRegister);
 #endif
 
     jit.jumpToExceptionHandler();
index 7ed20da..5327813 100644 (file)
@@ -123,12 +123,14 @@ void Data::performAssertions(VM& vm)
     ASSERT(ValueUndefined == (TagBitTypeOther | TagBitUndefined));
     ASSERT(ValueNull == TagBitTypeOther);
 #endif
-#if CPU(X86_64) || CPU(ARM64) || !ENABLE(JIT)
+#if (CPU(X86_64) && !OS(WINDOWS)) || CPU(ARM64) || !ENABLE(JIT)
     ASSERT(!maxFrameExtentForSlowPathCall);
 #elif CPU(ARM) || CPU(SH4)
     ASSERT(maxFrameExtentForSlowPathCall == 24);
 #elif CPU(X86) || CPU(MIPS)
     ASSERT(maxFrameExtentForSlowPathCall == 40);
+#elif CPU(X86_64) && OS(WINDOWS)
+    ASSERT(maxFrameExtentForSlowPathCall == 64);
 #endif
     ASSERT(StringType == 5);
     ASSERT(ObjectType == 18);
index ba8bd13..da85879 100644 (file)
@@ -39,6 +39,7 @@
 #define OFFLINE_ASM_ARMv7_TRADITIONAL 0
 #define OFFLINE_ASM_ARM64 0
 #define OFFLINE_ASM_X86_64 0
+#define OFFLINE_ASM_X86_64_WIN 0
 #define OFFLINE_ASM_ARMv7s 0
 #define OFFLINE_ASM_MIPS 0
 #define OFFLINE_ASM_SH4 0
 #define OFFLINE_ASM_ARM 0
 #endif
 
-#if CPU(X86_64)
+#if CPU(X86_64) && !PLATFORM(WIN)
 #define OFFLINE_ASM_X86_64 1
 #else
 #define OFFLINE_ASM_X86_64 0
 #endif
 
+#if CPU(X86_64) && PLATFORM(WIN)
+#define OFFLINE_ASM_X86_64_WIN 1
+#else
+#define OFFLINE_ASM_X86_64_WIN 0
+#endif
+
 #if CPU(MIPS)
 #define OFFLINE_ASM_MIPS 1
 #else
index d28687a..45a604c 100644 (file)
@@ -84,6 +84,8 @@ elsif X86 or X86_WIN
 const maxFrameExtentForSlowPathCall = 40
 elsif MIPS
 const maxFrameExtentForSlowPathCall = 40
+elsif X86_64_WIN
+const maxFrameExtentForSlowPathCall = 64
 end
 
 # Watchpoint states
@@ -248,7 +250,7 @@ macro preserveCallerPCAndCFR()
     if C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS or SH4
         push lr
         push cfr
-    elsif X86 or X86_WIN or X86_64
+    elsif X86 or X86_WIN or X86_64 or X86_64_WIN
         push cfr
     elsif ARM64
         pushLRAndFP
@@ -263,7 +265,7 @@ macro restoreCallerPCAndCFR()
     if C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or MIPS or SH4
         pop cfr
         pop lr
-    elsif X86 or X86_WIN or X86_64
+    elsif X86 or X86_WIN or X86_64 or X86_64_WIN
         pop cfr
     elsif ARM64
         popLRAndFP
@@ -274,7 +276,7 @@ macro preserveReturnAddressAfterCall(destinationRegister)
     if C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or ARM64 or MIPS or SH4
         # In C_LOOP case, we're only preserving the bytecode vPC.
         move lr, destinationRegister
-    elsif X86 or X86_WIN or X86_64
+    elsif X86 or X86_WIN or X86_64 or X86_64_WIN
         pop destinationRegister
     else
         error
@@ -285,7 +287,7 @@ macro restoreReturnAddressBeforeReturn(sourceRegister)
     if C_LOOP or ARM or ARMv7 or ARMv7_TRADITIONAL or ARM64 or MIPS or SH4
         # In C_LOOP case, we're only restoring the bytecode vPC.
         move sourceRegister, lr
-    elsif X86 or X86_WIN or X86_64
+    elsif X86 or X86_WIN or X86_64 or X86_64_WIN
         push sourceRegister
     else
         error
@@ -293,7 +295,7 @@ macro restoreReturnAddressBeforeReturn(sourceRegister)
 end
 
 macro functionPrologue()
-    if X86 or X86_WIN or X86_64
+    if X86 or X86_WIN or X86_64 or X86_64_WIN
         push cfr
     elsif ARM64
         pushLRAndFP
@@ -305,7 +307,7 @@ macro functionPrologue()
 end
 
 macro functionEpilogue()
-    if X86 or X86_WIN or X86_64
+    if X86 or X86_WIN or X86_64 or X86_64_WIN
         pop cfr
     elsif ARM64
         popLRAndFP
@@ -316,7 +318,7 @@ macro functionEpilogue()
 end
 
 macro callToJavaScriptPrologue()
-    if X86_64
+    if X86_64 or X86_64_WIN
         push cfr
         push t0
     elsif X86 or X86_WIN
@@ -371,7 +373,7 @@ macro callToJavaScriptEpilogue()
     end
 
     popCalleeSaves
-    if X86_64
+    if X86_64 or X86_64_WIN
         pop t2
         pop cfr
     elsif X86 or X86_WIN
@@ -656,6 +658,10 @@ _sanitizeStackForVMImpl:
         const vm = t4
         const address = t1
         const zeroValue = t0
+    elsif X86_64_WIN
+        const vm = t2
+        const address = t1
+        const zeroValue = t0
     elsif X86 or X86_WIN
         const vm = t2
         const address = t1
@@ -692,7 +698,7 @@ _llint_entry:
     crash()
 else
 macro initPCRelative(pcBase)
-    if X86_64
+    if X86_64 or X86_64_WIN
         call _relativePCBase
     _relativePCBase:
         pop pcBase
@@ -725,6 +731,10 @@ macro setEntryAddress(index, label)
         leap (label - _relativePCBase)[t1], t0
         move index, t2
         storep t0, [t4, t2, 8]
+    elsif X86_64_WIN
+        leap (label - _relativePCBase)[t1], t0
+        move index, t4
+        storep t0, [t2, t4, 8]
     elsif X86 or X86_WIN
         leap (label - _relativePCBase)[t1], t0
         move index, t2
index 72e91af..486b7b3 100644 (file)
@@ -57,6 +57,24 @@ macro cCall2(function, arg1, arg2)
         move arg1, t4
         move arg2, t5
         call function
+    elsif X86_64_WIN
+        # Note: this implementation is only correct if the return type size is > 8 bytes.
+        # See macro cCall2Void for an implementation when the return type is <= 8 bytes.
+        # On Win64, when the return type is larger than 8 bytes, we need to allocate space on the stack for the return value.
+        # On entry, rcx (t2) should contain a pointer to this stack space. The other parameters are shifted to the right:
+        # rdx (t1) should contain the first argument, and r8 (t6) should contain the second argument.
+        # On return, rax contains a pointer to this stack value, and we then need to copy the 16-byte return value into rax (t0) and rdx (t1)
+        # since the return value is expected to be split between the two.
+        # See http://msdn.microsoft.com/en-us/library/7572ztz4.aspx
+        move arg1, t1
+        move arg2, t6
+        subp 48, sp
+        move sp, t2
+        addp 32, t2
+        call function
+        addp 48, sp
+        move 8[t0], t1
+        move [t0], t0
     elsif ARM64
         move arg1, t0
         move arg2, t1
@@ -71,6 +89,17 @@ end
 macro cCall2Void(function, arg1, arg2)
     if C_LOOP
         cloopCallSlowPathVoid function, arg1, arg2
+    elsif X86_64_WIN
+        # Note: we cannot use the cCall2 macro for Win64 in this case,
+        # as the Win64 cCall2 implementation is only correct when the return type size is > 8 bytes.
+        # On Win64, rcx and rdx are used for passing the first two parameters.
+        # We also need to make room on the stack for all four parameter registers.
+        # See http://msdn.microsoft.com/en-us/library/ms235286.aspx
+        move arg2, t1
+        move arg1, t2
+        subp 32, sp 
+        call function
+        addp 32, sp 
     else
         cCall2(function, arg1, arg2)
     end
@@ -85,6 +114,17 @@ macro cCall4(function, arg1, arg2, arg3, arg4)
         move arg3, t1
         move arg4, t2
         call function
+    elsif X86_64_WIN
+        # On Win64, rcx, rdx, r8, and r9 are used for passing the first four parameters.
+        # We also need to make room on the stack for all four parameter registers.
+        # See http://msdn.microsoft.com/en-us/library/ms235286.aspx
+        move arg1, t2
+        move arg2, t1
+        move arg3, t6
+        move arg4, t7
+        subp 32, sp 
+        call function
+        addp 32, sp 
     elsif ARM64
         move arg1, t0
         move arg2, t1
@@ -109,6 +149,16 @@ macro doCallToJavaScript(makeCall)
         const temp1 = t0
         const temp2 = t3
         const temp3 = t6
+    elsif X86_64_WIN
+        const entry = t2
+        const vm = t1
+        const protoCallFrame = t6
+
+        const previousCFR = t0
+        const previousPC = t4
+        const temp1 = t0
+        const temp2 = t3
+        const temp3 = t7
     elsif ARM64 or C_LOOP
         const entry = a0
         const vm = a1
@@ -126,6 +176,10 @@ macro doCallToJavaScript(makeCall)
     if X86_64
         loadp 7*8[sp], previousPC
         move 6*8[sp], previousCFR
+    elsif X86_64_WIN
+        # Win64 pushes two more registers
+        loadp 9*8[sp], previousPC
+        move 8*8[sp], previousCFR
     elsif ARM64
         move cfr, previousCFR
     end
@@ -142,10 +196,7 @@ macro doCallToJavaScript(makeCall)
     loadp VM::topCallFrame[vm], temp2
     storep temp2, ScopeChain[cfr]
     storep 1, CodeBlock[cfr]
-    if X86_64
-        loadp 7*8[sp], previousPC
-        loadp 6*8[sp], previousCFR
-    end
+
     storep previousPC, ReturnPC[cfr]
     storep previousCFR, CallerFrame[cfr]
 
@@ -238,7 +289,7 @@ macro doCallToJavaScript(makeCall)
 
     checkStackPointerAlignment(temp3, 0xbad0dc04)
 
-    if X86_64
+    if X86_64 or X86_64_WIN
         pop t5
     end
     callToJavaScriptEpilogue()
@@ -262,6 +313,8 @@ macro makeHostFunctionCall(entry, temp)
     move entry, temp
     if X86_64
         move sp, t4
+    elsif X86_64_WIN
+        move sp, t2
     elsif ARM64 or C_LOOP
         move sp, a0
     end
@@ -269,8 +322,18 @@ macro makeHostFunctionCall(entry, temp)
         storep cfr, [sp]
         storep lr, 8[sp]
         cloopCallNative temp
+    elsif X86_64_WIN
+        # For a host function call, the JIT relies on the CallerFrame (frame pointer) being put on the stack.
+        # On Win64 we need to manually copy the frame pointer to the stack, since MSVC may not maintain a frame pointer on 64-bit.
+        # See http://msdn.microsoft.com/en-us/library/9z1stfyw.aspx where it's stated that rbp MAY be used as a frame pointer.
+        storep cfr, [sp]
+
+        # We need to allocate 32 bytes on the stack for the shadow space.
+        subp 32, sp
+        call temp
+        addp 32, sp
     else
-        addp 16, sp 
+        addp 16, sp
         call temp
         subp 16, sp
     end
@@ -957,7 +1020,7 @@ _llint_op_sub:
 
 _llint_op_div:
     traceExecution()
-    if X86_64
+    if X86_64 or X86_64_WIN
         binaryOpCustomStore(
             macro (left, right, slow, index)
                 # Assume t3 is scratchable.
@@ -1968,7 +2031,16 @@ macro nativeCallTrampoline(executableOffsetToFunction)
 
     functionPrologue()
     storep 0, CodeBlock[cfr]
-    if X86_64
+    if X86_64 or X86_64_WIN
+        if X86_64
+            const arg1 = t4  # t4 = rdi
+            const arg2 = t5  # t5 = rsi
+            const temp = t1
+        elsif X86_64_WIN
+            const arg1 = t2  # t2 = rcx
+            const arg2 = t1  # t1 = rdx
+            const temp = t0
+        end
         loadp ScopeChain[cfr], t0
         andp MarkedBlockMask, t0
         loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t0], t0
@@ -1976,11 +2048,17 @@ macro nativeCallTrampoline(executableOffsetToFunction)
         loadp CallerFrame[cfr], t0
         loadq ScopeChain[t0], t1
         storeq t1, ScopeChain[cfr]
-        move cfr, t4  # t4 = rdi
-        loadp Callee[cfr], t5 # t5 = rsi
-        loadp JSFunction::m_executable[t5], t1
+        move cfr, arg1
+        loadp Callee[cfr], arg2
+        loadp JSFunction::m_executable[arg2], temp
         checkStackPointerAlignment(t3, 0xdead0001)
-        call executableOffsetToFunction[t1]
+        if X86_64_WIN
+            subp 32, sp
+        end
+        call executableOffsetToFunction[temp]
+        if X86_64_WIN
+            addp 32, sp
+        end
         loadp ScopeChain[cfr], t3
         andp MarkedBlockMask, t3
         loadp MarkedBlock::m_weakSet + WeakSet::m_vm[t3], t3
index e089fa0..88c7d7a 100644 (file)
@@ -262,7 +262,7 @@ class Assembler
             @codeOrigin = nil
             @commentState = :many
         when :many
-            @outp.puts "// #{text}" if $enableCodeOriginComments
+            @outp.puts $commentPrefix + " #{text}" if $enableCodeOriginComments
         else
             raise
         end
index 8d14abc..4fa3e19 100644 (file)
@@ -35,6 +35,7 @@ BACKENDS =
      "X86",
      "X86_WIN",
      "X86_64",
+     "X86_64_WIN",
      "ARM",
      "ARMv7",
      "ARMv7_TRADITIONAL",
@@ -54,6 +55,7 @@ WORKING_BACKENDS =
      "X86",
      "X86_WIN",
      "X86_64",
+     "X86_64_WIN",
      "ARM",
      "ARMv7",
      "ARMv7_TRADITIONAL",
index ec36e30..b35d3d0 100644 (file)
@@ -177,7 +177,9 @@ def emitCodeInConfiguration(concreteSettings, ast, backend)
     if !$emitWinAsm
         $output.puts cppSettingsTest(concreteSettings)
     else
-        $output.puts ".MODEL FLAT, C"
+        if backend == "X86_WIN"
+            $output.puts ".MODEL FLAT, C"
+        end
         $output.puts "INCLUDE #{File.basename($output.path)}.sym"
         $output.puts "_TEXT SEGMENT"
     end
index 1f9e4c4..3be1d26 100644 (file)
@@ -32,6 +32,8 @@ def isX64
         false
     when "X86_64"
         true
+    when "X86_64_WIN"
+        true
     else
         raise "bad value for $activeBackend: #{$activeBackend}"
     end
@@ -45,6 +47,8 @@ def useX87
         true
     when "X86_64"
         false
+    when "X86_64_WIN"
+        false
     else
         raise "bad value for $activeBackend: #{$activeBackend}"
     end
@@ -100,11 +104,13 @@ def getSizeString(kind)
     when :int
         size = "dword"
     when :ptr
-        size = "dword"
+        size =  isX64 ? "qword" : "dword"
     when :double
         size = "qword"
+    when :quad
+        size = "qword"
     else
-        raise
+        raise "Invalid kind #{kind}"
     end
 
     return size + " " + "ptr" + " ";
@@ -116,13 +122,13 @@ class SpecialRegister < NoChildren
         raise unless isX64
         case kind
         when :half
-            "%" + @name + "w"
+            register(@name + "w")
         when :int
-            "%" + @name + "d"
+            register(@name + "d")
         when :ptr
-            "%" + @name
+            register(@name)
         when :quad
-            "%" + @name
+            register(@name)
         else
             raise
         end
@@ -284,49 +290,49 @@ class RegisterID
             raise "Cannot use #{name} in 32-bit X86 at #{codeOriginString}" unless isX64
             case kind
             when :half
-                "%r8w"
+                register("r8w")
             when :int
-                "%r8d"
+                register("r8d")
             when :ptr
-                "%r8"
+                register("r8")
             when :quad
-                "%r8"
+                register("r8")
             end
         when "t7"
             raise "Cannot use #{name} in 32-bit X86 at #{codeOriginString}" unless isX64
             case kind
             when :half
-                "%r9w"
+                register("r9w")
             when :int
-                "%r9d"
+                register("r9d")
             when :ptr
-                "%r9"
+                register("r9")
             when :quad
-                "%r9"
+                register("r9")
             end
         when "csr1"
             raise "Cannot use #{name} in 32-bit X86 at #{codeOriginString}" unless isX64
             case kind
             when :half
-                "%r14w"
+                register("r14w")
             when :int
-                "%r14d"
+                register("r14d")
             when :ptr
-                "%r14"
+                register("r14")
             when :quad
-                "%r14"
+                register("r14")
             end
         when "csr2"
             raise "Cannot use #{name} in 32-bit X86 at #{codeOriginString}" unless isX64
             case kind
             when :half
-                "%r15w"
+                register("r15w")
             when :int
-                "%r15d"
+                register("r15d")
             when :ptr
-                "%r15"
+                register("r15")
             when :quad
-                "%r15"
+                register("r15")
             end
         else
             raise "Bad register #{name} for X86 at #{codeOriginString}"
@@ -343,17 +349,17 @@ class FPRegisterID
         raise if useX87
         case name
         when "ft0", "fa0", "fr"
-            "%xmm0"
+            register("xmm0")
         when "ft1", "fa1"
-            "%xmm1"
+            register("xmm1")
         when "ft2", "fa2"
-            "%xmm2"
+            register("xmm2")
         when "ft3", "fa3"
-            "%xmm3"
+            register("xmm3")
         when "ft4"
-            "%xmm4"
+            register("xmm4")
         when "ft5"
-            "%xmm5"
+            register("xmm5")
         else
             raise "Bad register #{name} for X86 at #{codeOriginString}"
         end
@@ -510,6 +516,9 @@ class Sequence
         
         return newList
     end
+    def getModifiedListX86_64_WIN
+        getModifiedListX86_64
+    end
 end
 
 class Instruction
@@ -604,9 +613,9 @@ class Instruction
         else
             case mode
             when :normal
-                $asm.puts "ucomisd #{operands[1].x86Operand(:double)}, #{operands[0].x86Operand(:double)}"
+                $asm.puts "ucomisd #{orderOperands(operands[1].x86Operand(:double), operands[0].x86Operand(:double))}"
             when :reverse
-                $asm.puts "ucomisd #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:double)}"
+                $asm.puts "ucomisd #{orderOperands(operands[0].x86Operand(:double), operands[1].x86Operand(:double))}"
             else
                 raise mode.inspect
             end
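The `orderOperands` calls introduced in this hunk exist because GAS (AT&T) syntax writes `op src, dst` while MASM (Intel) syntax writes `op dst, src`. A minimal Ruby sketch of that helper, assuming the real offlineasm drives syntax selection off the active backend (names here are illustrative, not the actual offlineasm definitions):

```ruby
# Assumed global set by the backend driver; the real offlineasm tracks
# the active backend similarly via $activeBackend.
$activeBackend = "X86_64"

# Intel (MASM) syntax is assumed for the new Win64 backend only.
def isIntelSyntax
    $activeBackend == "X86_64_WIN"
end

# AT&T lists the source first; Intel lists the destination first.
def orderOperands(src, dst)
    isIntelSyntax ? "#{dst}, #{src}" : "#{src}, #{dst}"
end

puts "ucomisd #{orderOperands('%xmm1', '%xmm0')}"  # → ucomisd %xmm1, %xmm0
$activeBackend = "X86_64_WIN"
puts "ucomisd #{orderOperands('xmm1', 'xmm0')}"    # → ucomisd xmm0, xmm1
```

The same swap explains the `movslq` vs `movsxd` split for `loadis` and `sxi2q`: the mnemonic differs between assemblers, and the Intel form also takes its operands in the reversed order.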
@@ -848,6 +857,11 @@ class Instruction
         lowerX86Common
     end
 
+    def lowerX86_64_WIN
+        raise unless $activeBackend == "X86_64_WIN"
+        lowerX86Common
+    end
+
     def lowerX86Common
         $asm.codeOrigin codeOriginString if $enableCodeOriginComments
         $asm.annotation annotation if $enableInstrAnnotations
@@ -919,7 +933,11 @@ class Instruction
             $asm.puts "mov#{x86Suffix(:int)} #{x86Operands(:int, :int)}"
         when "loadis"
             if isX64
-                $asm.puts "movslq #{x86Operands(:int, :quad)}"
+                if !isIntelSyntax
+                    $asm.puts "movslq #{x86Operands(:int, :quad)}"
+                else
+                    $asm.puts "movsxd #{x86Operands(:int, :quad)}"
+                end
             else
                 $asm.puts "mov#{x86Suffix(:int)} #{x86Operands(:int, :int)}"
             end
@@ -1021,13 +1039,13 @@ class Instruction
                 $asm.puts "fild#{x86Suffix(:ptr)} #{getSizeString(:ptr)}#{offsetRegister(-4, sp.x86Operand(:ptr))}"
                 $asm.puts "fstp #{operands[1].x87Operand(1)}"
             else
-                $asm.puts "cvtsi2sd #{operands[0].x86Operand(:int)}, #{operands[1].x86Operand(:double)}"
+                $asm.puts "cvtsi2sd #{orderOperands(operands[0].x86Operand(:int), operands[1].x86Operand(:double))}"
             end
         when "bdeq"
             if useX87
                 handleX87Compare(:normal)
             else
-                $asm.puts "ucomisd #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:double)}"
+                $asm.puts "ucomisd #{orderOperands(operands[0].x86Operand(:double), operands[1].x86Operand(:double))}"
             end
             if operands[0] == operands[1]
                 # This is just a jump ordered, which is a jnp.
@@ -1054,7 +1072,7 @@ class Instruction
             if useX87
                 handleX87Compare(:normal)
             else
-                $asm.puts "ucomisd #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:double)}"
+                $asm.puts "ucomisd #{orderOperands(operands[0].x86Operand(:double), operands[1].x86Operand(:double))}"
             end
             if operands[0] == operands[1]
                 # This is just a jump unordered, which is a jp.
@@ -1130,11 +1148,15 @@ class Instruction
             }
         when "popCalleeSaves"
             if isX64
-                $asm.puts "pop %rbx"
-                $asm.puts "pop %r15"
-                $asm.puts "pop %r14"
-                $asm.puts "pop %r13"
-                $asm.puts "pop %r12"
+                if isMSVC
+                    $asm.puts "pop " + register("rsi")
+                    $asm.puts "pop " + register("rdi")
+                end
+                $asm.puts "pop " + register("rbx")
+                $asm.puts "pop " + register("r15")
+                $asm.puts "pop " + register("r14")
+                $asm.puts "pop " + register("r13")
+                $asm.puts "pop " + register("r12")
             else
                 $asm.puts "pop " + register("ebx")
                 $asm.puts "pop " + register("edi")
@@ -1142,11 +1164,15 @@ class Instruction
             end
         when "pushCalleeSaves"
             if isX64
-                $asm.puts "push %r12"
-                $asm.puts "push %r13"
-                $asm.puts "push %r14"
-                $asm.puts "push %r15"
-                $asm.puts "push %rbx"
+                $asm.puts "push " + register("r12")
+                $asm.puts "push " + register("r13")
+                $asm.puts "push " + register("r14")
+                $asm.puts "push " + register("r15")
+                $asm.puts "push " + register("rbx")
+                if isMSVC
+                    $asm.puts "push " + register("rdi")
+                    $asm.puts "push " + register("rsi")
+                end
             else
                 $asm.puts "push " + register("esi")
                 $asm.puts "push " + register("edi")
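The extra `rdi`/`rsi` pushes and pops above follow from the Win64 ABI, which treats `rdi` and `rsi` as callee-saved, unlike the System V AMD64 ABI used elsewhere. A sketch of that logic, with pops mirroring pushes in reverse order (register lists and function names here are illustrative, not the offlineasm API):

```ruby
# Callee-saved GPRs common to both ABIs in this prologue.
SYSV_CALLEE_SAVES = %w(r12 r13 r14 r15 rbx)
# Win64 additionally requires preserving rdi and rsi.
WIN64_EXTRA_SAVES = %w(rdi rsi)

def pushCalleeSaves(isMSVC)
    regs = SYSV_CALLEE_SAVES.dup
    regs += WIN64_EXTRA_SAVES if isMSVC
    regs.map { |r| "push #{r}" }
end

def popCalleeSaves(isMSVC)
    # The epilogue must restore in exactly the reverse push order.
    pushCalleeSaves(isMSVC).reverse.map { |insn| insn.sub("push", "pop") }
end

puts popCalleeSaves(true)  # rsi and rdi come off the stack first
```

This reversal is why the patch pops `rsi`/`rdi` before `rbx` but pushes them after it.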
@@ -1155,7 +1181,11 @@ class Instruction
         when "move"
             handleMove
         when "sxi2q"
-            $asm.puts "movslq #{operands[0].x86Operand(:int)}, #{operands[1].x86Operand(:quad)}"
+            if !isIntelSyntax
+                $asm.puts "movslq #{operands[0].x86Operand(:int)}, #{operands[1].x86Operand(:quad)}"
+            else
+                $asm.puts "movsxd #{orderOperands(operands[0].x86Operand(:int), operands[1].x86Operand(:quad))}"
+            end
         when "zxi2q"
             $asm.puts "mov#{x86Suffix(:int)} #{orderOperands(operands[0].x86Operand(:int), operands[1].x86Operand(:int))}"
         when "nop"
@@ -1441,7 +1471,7 @@ class Instruction
         when "cdqi"
             $asm.puts "cdq"
         when "idivi"
-            $asm.puts "idivl #{operands[0].x86Operand(:int)}"
+            $asm.puts "idiv#{x86Suffix(:int)} #{operands[0].x86Operand(:int)}"
         when "fii2d"
             if useX87
                 sp = RegisterID.new(nil, "sp")
@@ -1479,7 +1509,13 @@ class Instruction
                 $asm.puts "fldl -8(#{sp.x86Operand(:ptr)})"
                 $asm.puts "fstp #{operands[1].x87Operand(1)}"
             else
-                $asm.puts "movq #{operands[0].x86Operand(:quad)}, #{operands[1].x86Operand(:double)}"
+                if !isIntelSyntax
+                    $asm.puts "movq #{operands[0].x86Operand(:quad)}, #{operands[1].x86Operand(:double)}"
+                else
+                    # MASM does not accept register operands with movq.
+                    # Debugging shows that movd actually moves a qword when using MASM.
+                    $asm.puts "movd #{operands[1].x86Operand(:double)}, #{operands[0].x86Operand(:quad)}"
+                end
             end
         when "fd2q"
             if useX87
@@ -1492,7 +1528,13 @@ class Instruction
                 end
                 $asm.puts "movq -8(#{sp.x86Operand(:ptr)}), #{operands[1].x86Operand(:quad)}"
             else
-                $asm.puts "movq #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:quad)}"
+                if !isIntelSyntax
+                    $asm.puts "movq #{operands[0].x86Operand(:double)}, #{operands[1].x86Operand(:quad)}"
+                else
+                    # MASM does not accept register operands with movq.
+                    # Debugging shows that movd actually moves a qword when using MASM.
+                    $asm.puts "movd #{operands[1].x86Operand(:quad)}, #{operands[0].x86Operand(:double)}"
+                end
             end
         when "bo"
             $asm.puts "jo #{operands[0].asmLabel}"
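The `fq2d`/`fd2q` hunks above work around a MASM quirk: GAS accepts `movq` between a general-purpose and an XMM register, but MASM rejects register operands with `movq`, and per the comment in the patch its `movd` empirically moves a full qword with 64-bit operands. A hedged Ruby sketch of that lowering choice (the function name is hypothetical, not part of offlineasm):

```ruby
# Emit a GPR -> XMM bitwise move for either assembler syntax.
# GAS: "movq src, dst" (source first). MASM: "movd dst, src"
# (destination first), relying on the observed 64-bit movd behavior.
def emitGprToXmm(gpr, xmm, intelSyntax)
    if intelSyntax
        "movd #{xmm}, #{gpr}"
    else
        "movq #{gpr}, #{xmm}"
    end
end

puts emitGprToXmm("%rax", "%xmm0", false)  # → movq %rax, %xmm0
puts emitGprToXmm("rax", "xmm0", true)     # → movd xmm0, rax
```

The reverse direction (`fd2q`) applies the same swap with the XMM register as the source.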
index c7cefbd..3d645a2 100644
@@ -1,3 +1,12 @@
+2014-06-25  peavo@outlook.com  <peavo@outlook.com>
+
+        [Win64] ASM LLINT is not enabled.
+        https://bugs.webkit.org/show_bug.cgi?id=130638
+
+        Reviewed by Mark Lam.
+
+        * wtf/Platform.h: Enable LLINT and JIT for Win64.
+
 2014-06-25  Laszlo Gombos  <l.gombos@samsung.com>
 
         Remove build guard for progress element
index 6a284b9..5de8d95 100644
 #if !defined(ENABLE_JIT) \
     && (CPU(X86) || CPU(X86_64) || CPU(ARM) || CPU(ARM64) || CPU(MIPS)) \
     && !CPU(APPLE_ARMV7K)                                                           \
-    && !OS(WINCE) \
-    && !(OS(WINDOWS) && CPU(X86_64))
+    && !OS(WINCE)
 #define ENABLE_JIT 1
 #endif