[JSC] Enable LLInt ASM interpreter on X64 and ARM64 in non JIT configuration
author	yusukesuzuki@slowstart.org <yusukesuzuki@slowstart.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Sat, 22 Sep 2018 05:26:44 +0000 (05:26 +0000)
committer	yusukesuzuki@slowstart.org <yusukesuzuki@slowstart.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Sat, 22 Sep 2018 05:26:44 +0000 (05:26 +0000)
https://bugs.webkit.org/show_bug.cgi?id=189778

Reviewed by Keith Miller.

.:

ENABLE_SAMPLING_PROFILER no longer depends on ENABLE_JIT, since the sampling
profiler can also be used with the LLInt ASM interpreter.

* Source/cmake/WebKitFeatures.cmake:

Source/JavaScriptCore:

The LLInt ASM interpreter is 2x faster than the CLoop interpreter on Linux and
15% faster on macOS. We would like to enable it for non-JIT configurations on
X86_64 and ARM64.

This patch enables the LLInt ASM interpreter for non-JIT builds on the X86_64 and
ARM64 architectures. Previously, we switched between the LLInt ASM interpreter and
CLoop based on the ENABLE(JIT) configuration. That is no longer correct, because we
now have a build configuration that uses the LLInt ASM interpreter while the JIT is
disabled. We introduce the ENABLE(C_LOOP) option, which indicates that CLoop is in
use, and we replace ENABLE(JIT) guards with C_LOOP-based guards wherever the old
guard was really about the LLInt ASM interpreter rather than the JIT.
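
Concretely, the guard mapping works out as in the following snippets, condensed
from the sanitizeStackForVM and optimizeNextInvocation hunks in this patch
(surrounding context omitted):

    // CLoop-only code is now guarded by ENABLE(C_LOOP); code shared by the
    // LLInt ASM interpreter and the JITs uses the !ENABLE(C_LOOP) side.
    #if ENABLE(C_LOOP)
        vm->interpreter->cloopStack().sanitizeStack();
    #else
        sanitizeStackForVMImpl(vm);
    #endif

    // Code that really requires the JITs keeps its ENABLE(JIT) guard.
    #if ENABLE(JIT)
        if (CodeBlock* baselineCodeBlock = getSomeBaselineCodeBlockForFunction(theFunctionValue))
            baselineCodeBlock->optimizeNextInvocation();
    #endif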

We also replace some ENABLE(JIT) guards with ENABLE(ASSEMBLER). ENABLE(ASSEMBLER)
is now enabled even when the JIT is disabled, since the MacroAssembler carries the
machine register information that the LLInt ASM interpreter uses.
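
For instance, register pretty-printing now keys off ENABLE(ASSEMBLER) instead of
ENABLE(JIT), condensed here from the GPRInfo.h hunk in this patch:

    inline void printInternal(PrintStream& out, JSC::GPRReg reg)
    {
    #if ENABLE(ASSEMBLER)
        // GPRInfo's register metadata is available even without the JIT.
        out.print("%", JSC::GPRInfo::debugName(reg));
    #else
        out.printf("%%r%d", reg);
    #endif
    }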

* API/tests/PingPongStackOverflowTest.cpp:
(testPingPongStackOverflow):
* CMakeLists.txt:
* JavaScriptCore.xcodeproj/project.pbxproj:
* assembler/MaxFrameExtentForSlowPathCall.h:
* bytecode/CallReturnOffsetToBytecodeOffset.h: Removed. It is no longer used.
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::finishCreation):
* bytecode/CodeBlock.h:
(JSC::CodeBlock::calleeSaveRegisters const):
(JSC::CodeBlock::numberOfLLIntBaselineCalleeSaveRegisters):
(JSC::CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters):
(JSC::CodeBlock::calleeSaveSpaceAsVirtualRegisters):
* bytecode/Opcode.h:
(JSC::padOpcodeName):
* heap/Heap.cpp:
(JSC::Heap::gatherJSStackRoots):
(JSC::Heap::stopThePeriphery):
* interpreter/CLoopStack.cpp:
* interpreter/CLoopStack.h:
* interpreter/CLoopStackInlines.h:
* interpreter/EntryFrame.h:
* interpreter/Interpreter.cpp:
(JSC::Interpreter::Interpreter):
(JSC::UnwindFunctor::copyCalleeSavesToEntryFrameCalleeSavesBuffer const):
* interpreter/Interpreter.h:
* interpreter/StackVisitor.cpp:
(JSC::StackVisitor::Frame::calleeSaveRegisters):
* interpreter/VMEntryRecord.h:
* jit/ExecutableAllocator.h:
* jit/FPRInfo.h:
(WTF::printInternal):
* jit/GPRInfo.cpp:
* jit/GPRInfo.h:
(WTF::printInternal):
* jit/HostCallReturnValue.cpp:
(JSC::getHostCallReturnValueWithExecState): Moved here from JITOperations.cpp since
it is used by the LLInt ASM interpreter too (see the sketch after this list).
* jit/HostCallReturnValue.h:
* jit/JITOperations.cpp:
(JSC::getHostCallReturnValueWithExecState): Deleted.
* jit/JITOperationsMSVC64.cpp:
* jit/Reg.cpp:
* jit/Reg.h:
* jit/RegisterAtOffset.cpp:
* jit/RegisterAtOffset.h:
* jit/RegisterAtOffsetList.cpp:
* jit/RegisterAtOffsetList.h:
* jit/RegisterMap.h:
* jit/RegisterSet.cpp:
* jit/RegisterSet.h:
* jit/TempRegisterSet.cpp:
* jit/TempRegisterSet.h:
* llint/LLIntCLoop.cpp:
* llint/LLIntCLoop.h:
* llint/LLIntData.cpp:
(JSC::LLInt::initialize):
(JSC::LLInt::Data::performAssertions):
* llint/LLIntData.h:
* llint/LLIntOfflineAsmConfig.h:
* llint/LLIntOpcode.h:
* llint/LLIntPCRanges.h:
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::LLINT_SLOW_PATH_DECL):
* llint/LLIntSlowPaths.h:
* llint/LLIntThunks.cpp:
* llint/LowLevelInterpreter.cpp:
* llint/LowLevelInterpreter.h:
* runtime/JSCJSValue.h:
* runtime/MachineContext.h:
* runtime/SamplingProfiler.cpp:
(JSC::SamplingProfiler::processUnverifiedStackTraces): Enable the SamplingProfiler
for the LLInt ASM interpreter in non-JIT configurations.
* runtime/TestRunnerUtils.cpp:
(JSC::optimizeNextInvocation):
* runtime/VM.cpp:
(JSC::VM::VM):
(JSC::VM::getHostFunction):
(JSC::VM::updateSoftReservedZoneSize):
(JSC::sanitizeStackForVM):
(JSC::VM::committedStackByteCount):
* runtime/VM.h:
* runtime/VMInlines.h:
(JSC::VM::ensureStackCapacityFor):
(JSC::VM::isSafeToRecurseSoft const):
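
For reference, the host-call return-value glue now lives in HostCallReturnValue.cpp
and is compiled for every non-CLoop build; condensed from that hunk in this patch:

    #if !ENABLE(C_LOOP)
    // Needed by the LLInt ASM interpreter as well as the JITs.
    extern "C" EncodedJSValue HOST_CALL_RETURN_VALUE_OPTION getHostCallReturnValueWithExecState(ExecState* exec)
    {
        if (!exec)
            return JSValue::encode(JSValue());
        return JSValue::encode(exec->vm().hostCallReturnValue);
    }
    #endif // !ENABLE(C_LOOP)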

Source/WTF:

This patch adds ENABLE(C_LOOP), which indicates that CLoop is used as the interpreter.
Previously, we used !ENABLE(JIT) for this configuration, but we now have a build
configuration that uses the LLInt ASM interpreter (not CLoop) with !ENABLE(JIT).

We enable the LLInt ASM interpreter for non-JIT environments on the X86_64 and ARM64
architectures, and we enable ENABLE(ASSEMBLER) for non-JIT environments since it
provides the machine register information used by the LLInt and the SamplingProfiler.
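
The resulting default in wtf/Platform.h, condensed from the hunk at the end of this
patch:

    #if !defined(ENABLE_C_LOOP)
    #if ENABLE(JIT) || CPU(X86_64) || (CPU(ARM64) && !defined(__ILP32__))
    #define ENABLE_C_LOOP 0
    #else
    #define ENABLE_C_LOOP 1
    #endif
    #endif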

* wtf/Platform.h:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@236381 268f45cc-cd09-0410-ab3c-d52691b4dbfc

60 files changed:
ChangeLog
Source/JavaScriptCore/API/tests/PingPongStackOverflowTest.cpp
Source/JavaScriptCore/CMakeLists.txt
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/assembler/MaxFrameExtentForSlowPathCall.h
Source/JavaScriptCore/bytecode/CallReturnOffsetToBytecodeOffset.h [deleted file]
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/bytecode/CodeBlock.h
Source/JavaScriptCore/bytecode/Opcode.h
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/interpreter/CLoopStack.cpp
Source/JavaScriptCore/interpreter/CLoopStack.h
Source/JavaScriptCore/interpreter/CLoopStackInlines.h
Source/JavaScriptCore/interpreter/EntryFrame.h
Source/JavaScriptCore/interpreter/Interpreter.cpp
Source/JavaScriptCore/interpreter/Interpreter.h
Source/JavaScriptCore/interpreter/StackVisitor.cpp
Source/JavaScriptCore/interpreter/VMEntryRecord.h
Source/JavaScriptCore/jit/ExecutableAllocator.h
Source/JavaScriptCore/jit/FPRInfo.h
Source/JavaScriptCore/jit/GPRInfo.cpp
Source/JavaScriptCore/jit/GPRInfo.h
Source/JavaScriptCore/jit/HostCallReturnValue.cpp
Source/JavaScriptCore/jit/HostCallReturnValue.h
Source/JavaScriptCore/jit/JITOperations.cpp
Source/JavaScriptCore/jit/JITOperationsMSVC64.cpp
Source/JavaScriptCore/jit/Reg.cpp
Source/JavaScriptCore/jit/Reg.h
Source/JavaScriptCore/jit/RegisterAtOffset.cpp
Source/JavaScriptCore/jit/RegisterAtOffset.h
Source/JavaScriptCore/jit/RegisterAtOffsetList.cpp
Source/JavaScriptCore/jit/RegisterAtOffsetList.h
Source/JavaScriptCore/jit/RegisterMap.h
Source/JavaScriptCore/jit/RegisterSet.cpp
Source/JavaScriptCore/jit/RegisterSet.h
Source/JavaScriptCore/jit/TempRegisterSet.cpp
Source/JavaScriptCore/jit/TempRegisterSet.h
Source/JavaScriptCore/llint/LLIntCLoop.cpp
Source/JavaScriptCore/llint/LLIntCLoop.h
Source/JavaScriptCore/llint/LLIntData.cpp
Source/JavaScriptCore/llint/LLIntData.h
Source/JavaScriptCore/llint/LLIntOfflineAsmConfig.h
Source/JavaScriptCore/llint/LLIntOpcode.h
Source/JavaScriptCore/llint/LLIntPCRanges.h
Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
Source/JavaScriptCore/llint/LLIntSlowPaths.h
Source/JavaScriptCore/llint/LLIntThunks.cpp
Source/JavaScriptCore/llint/LowLevelInterpreter.cpp
Source/JavaScriptCore/llint/LowLevelInterpreter.h
Source/JavaScriptCore/runtime/JSCJSValue.h
Source/JavaScriptCore/runtime/MachineContext.h
Source/JavaScriptCore/runtime/SamplingProfiler.cpp
Source/JavaScriptCore/runtime/TestRunnerUtils.cpp
Source/JavaScriptCore/runtime/VM.cpp
Source/JavaScriptCore/runtime/VM.h
Source/JavaScriptCore/runtime/VMInlines.h
Source/WTF/ChangeLog
Source/WTF/wtf/Platform.h
Source/cmake/WebKitFeatures.cmake

index 532b326..8e8693b 100644 (file)
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,3 +1,15 @@
+2018-09-21  Yusuke Suzuki  <yusukesuzuki@slowstart.org>
+
+        [JSC] Enable LLInt ASM interpreter on X64 and ARM64 in non JIT configuration
+        https://bugs.webkit.org/show_bug.cgi?id=189778
+
+        Reviewed by Keith Miller.
+
+        ENABLE_SAMPLING_PROFILER does not depend on ENABLE_JIT now since it can be
+        used with LLInt ASM interpreter.
+
+        * Source/cmake/WebKitFeatures.cmake:
+
 2018-09-21  Mike Gorse  <mgorse@suse.com>
 
         Build tools should work when the /usr/bin/python is python3
index fbe4687..832bd14 100644 (file)
@@ -127,7 +127,7 @@ int testPingPongStackOverflow()
 
     Options::softReservedZoneSize() = 128 * KB;
     Options::reservedZoneSize() = 64 * KB;
-#if ENABLE(JIT)
+#if ENABLE(C_LOOP)
     // Normally, we want to disable the LLINT to force the use of JITted code which is necessary for
     // reproducing the regression in https://bugs.webkit.org/show_bug.cgi?id=148749. However, we only
     // want to do this if the LLINT isn't the only available execution engine.
index 96e33a2..33609a5 100644 (file)
@@ -231,7 +231,9 @@ else ()
     endif ()
 
     if (NOT ENABLE_JIT)
-        set(OFFLINE_ASM_BACKEND "C_LOOP")
+        if (ENABLE_C_LOOP)
+            set(OFFLINE_ASM_BACKEND "C_LOOP")
+        endif ()
     endif ()
 endif ()
 
@@ -264,7 +266,7 @@ add_dependencies(LLIntOffsetsExtractor JavaScriptCoreForwardingHeaders)
 # LLIntOffsetsExtractor matches, no output is generated. To make this target consistent and avoid
 # running this command for every build, we artificially update LLIntAssembly.h's mtime (using touch)
 # after every asm.rb run.
-if (MSVC AND ENABLE_JIT)
+if (MSVC AND NOT ENABLE_C_LOOP)
     set(LLIntOutput LowLevelInterpreterWin.asm)
     set(OFFLINE_ASM_ARGS --assembler=MASM)
 else ()
@@ -284,7 +286,7 @@ add_custom_command(
 # the .cpp files below is similar to the one in the previous comment. However, since these .cpp
 # files are used to build JavaScriptCore itself, we can just add LLIntAssembly.h to JSC_HEADERS
 # since it is used in the add_library() call at the end of this file.
-if (MSVC AND ENABLE_JIT)
+if (MSVC AND NOT ENABLE_C_LOOP)
     enable_language(ASM_MASM)
     if (CMAKE_SIZEOF_VOID_P EQUAL 4)
         # Win32 needs /safeseh with assembly, but Win64 does not.
@@ -1182,7 +1184,7 @@ add_custom_command(
 list(APPEND JavaScriptCore_HEADERS ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/InjectedScriptSource.h)
 
 if (WTF_CPU_X86_64)
-    if (MSVC AND ENABLE_JIT)
+    if (MSVC AND NOT ENABLE_C_LOOP)
         add_custom_command(
             OUTPUT ${DERIVED_SOURCES_DIR}/JITStubsMSVC64.obj
             MAIN_DEPENDENCY ${JAVASCRIPTCORE_DIR}/jit/JITStubsMSVC64.asm
index b87568f..848628c 100644 (file)
@@ -1,3 +1,111 @@
+2018-09-21  Yusuke Suzuki  <yusukesuzuki@slowstart.org>
+
+        [JSC] Enable LLInt ASM interpreter on X64 and ARM64 in non JIT configuration
+        https://bugs.webkit.org/show_bug.cgi?id=189778
+
+        Reviewed by Keith Miller.
+
+        LLInt ASM interpreter is 2x and 15% faster than CLoop interpreter on
+        Linux and macOS respectively. We would like to enable it for non JIT
+        configurations in X86_64 and ARM64.
+
+        This patch enables LLInt for non JIT builds in X86_64 and ARM64 architectures.
+        Previously, we switch LLInt ASM interpreter and CLoop by using ENABLE(JIT)
+        configuration. But it is wrong in the new scenario since we have a build
+        configuration that uses LLInt ASM interpreter and JIT is disabled. We introduce
+        ENABLE(C_LOOP) option, which represents that we use CLoop. And we replace
+        ENABLE(JIT) with ENABLE(C_LOOP) if the previous ENABLE(JIT) is essentially just
+        related to LLInt ASM interpreter and not related to JIT.
+
+        We also replace some ENABLE(JIT) configurations with ENABLE(ASSEMBLER).
+        ENABLE(ASSEMBLER) is now enabled even if we disable JIT since MacroAssembler
+        has machine register information that is used in LLInt ASM interpreter.
+
+        * API/tests/PingPongStackOverflowTest.cpp:
+        (testPingPongStackOverflow):
+        * CMakeLists.txt:
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * assembler/MaxFrameExtentForSlowPathCall.h:
+        * bytecode/CallReturnOffsetToBytecodeOffset.h: Removed. It is no longer used.
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::finishCreation):
+        * bytecode/CodeBlock.h:
+        (JSC::CodeBlock::calleeSaveRegisters const):
+        (JSC::CodeBlock::numberOfLLIntBaselineCalleeSaveRegisters):
+        (JSC::CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters):
+        (JSC::CodeBlock::calleeSaveSpaceAsVirtualRegisters):
+        * bytecode/Opcode.h:
+        (JSC::padOpcodeName):
+        * heap/Heap.cpp:
+        (JSC::Heap::gatherJSStackRoots):
+        (JSC::Heap::stopThePeriphery):
+        * interpreter/CLoopStack.cpp:
+        * interpreter/CLoopStack.h:
+        * interpreter/CLoopStackInlines.h:
+        * interpreter/EntryFrame.h:
+        * interpreter/Interpreter.cpp:
+        (JSC::Interpreter::Interpreter):
+        (JSC::UnwindFunctor::copyCalleeSavesToEntryFrameCalleeSavesBuffer const):
+        * interpreter/Interpreter.h:
+        * interpreter/StackVisitor.cpp:
+        (JSC::StackVisitor::Frame::calleeSaveRegisters):
+        * interpreter/VMEntryRecord.h:
+        * jit/ExecutableAllocator.h:
+        * jit/FPRInfo.h:
+        (WTF::printInternal):
+        * jit/GPRInfo.cpp:
+        * jit/GPRInfo.h:
+        (WTF::printInternal):
+        * jit/HostCallReturnValue.cpp:
+        (JSC::getHostCallReturnValueWithExecState): Moved. They are used in LLInt ASM interpreter too.
+        * jit/HostCallReturnValue.h:
+        * jit/JITOperations.cpp:
+        (JSC::getHostCallReturnValueWithExecState): Deleted.
+        * jit/JITOperationsMSVC64.cpp:
+        * jit/Reg.cpp:
+        * jit/Reg.h:
+        * jit/RegisterAtOffset.cpp:
+        * jit/RegisterAtOffset.h:
+        * jit/RegisterAtOffsetList.cpp:
+        * jit/RegisterAtOffsetList.h:
+        * jit/RegisterMap.h:
+        * jit/RegisterSet.cpp:
+        * jit/RegisterSet.h:
+        * jit/TempRegisterSet.cpp:
+        * jit/TempRegisterSet.h:
+        * llint/LLIntCLoop.cpp:
+        * llint/LLIntCLoop.h:
+        * llint/LLIntData.cpp:
+        (JSC::LLInt::initialize):
+        (JSC::LLInt::Data::performAssertions):
+        * llint/LLIntData.h:
+        * llint/LLIntOfflineAsmConfig.h:
+        * llint/LLIntOpcode.h:
+        * llint/LLIntPCRanges.h:
+        * llint/LLIntSlowPaths.cpp:
+        (JSC::LLInt::LLINT_SLOW_PATH_DECL):
+        * llint/LLIntSlowPaths.h:
+        * llint/LLIntThunks.cpp:
+        * llint/LowLevelInterpreter.cpp:
+        * llint/LowLevelInterpreter.h:
+        * runtime/JSCJSValue.h:
+        * runtime/MachineContext.h:
+        * runtime/SamplingProfiler.cpp:
+        (JSC::SamplingProfiler::processUnverifiedStackTraces): Enable SamplingProfiler
+        for LLInt ASM interpreter with non JIT configuration.
+        * runtime/TestRunnerUtils.cpp:
+        (JSC::optimizeNextInvocation):
+        * runtime/VM.cpp:
+        (JSC::VM::VM):
+        (JSC::VM::getHostFunction):
+        (JSC::VM::updateSoftReservedZoneSize):
+        (JSC::sanitizeStackForVM):
+        (JSC::VM::committedStackByteCount):
+        * runtime/VM.h:
+        * runtime/VMInlines.h:
+        (JSC::VM::ensureStackCapacityFor):
+        (JSC::VM::isSafeToRecurseSoft const):
+
 2018-09-21  Keith Miller  <keith_miller@apple.com>
 
         Add Promise SPI
index 37787f2..774d977 100644 (file)
                0F0B83A914BCF56200885B4F /* HandlerInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0B83A814BCF55E00885B4F /* HandlerInfo.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F0B83AB14BCF5BB00885B4F /* ExpressionRangeInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0B83AA14BCF5B900885B4F /* ExpressionRangeInfo.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F0B83B114BCF71800885B4F /* CallLinkInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0B83AF14BCF71400885B4F /* CallLinkInfo.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               0F0B83B914BCF95F00885B4F /* CallReturnOffsetToBytecodeOffset.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0B83B814BCF95B00885B4F /* CallReturnOffsetToBytecodeOffset.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F0CAEFC1EC4DA6B00970D12 /* JSHeapFinalizerPrivate.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0CAEFA1EC4DA6200970D12 /* JSHeapFinalizerPrivate.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F0CAEFF1EC4DA8800970D12 /* HeapFinalizerCallback.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0CAEFE1EC4DA8500970D12 /* HeapFinalizerCallback.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F0CD4C215F1A6070032F1C0 /* PutDirectIndexMode.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F0CD4C015F1A6040032F1C0 /* PutDirectIndexMode.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F0B83AA14BCF5B900885B4F /* ExpressionRangeInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ExpressionRangeInfo.h; sourceTree = "<group>"; };
                0F0B83AE14BCF71400885B4F /* CallLinkInfo.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CallLinkInfo.cpp; sourceTree = "<group>"; };
                0F0B83AF14BCF71400885B4F /* CallLinkInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallLinkInfo.h; sourceTree = "<group>"; };
-               0F0B83B814BCF95B00885B4F /* CallReturnOffsetToBytecodeOffset.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallReturnOffsetToBytecodeOffset.h; sourceTree = "<group>"; };
                0F0CAEF91EC4DA6200970D12 /* JSHeapFinalizerPrivate.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSHeapFinalizerPrivate.cpp; sourceTree = "<group>"; };
                0F0CAEFA1EC4DA6200970D12 /* JSHeapFinalizerPrivate.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSHeapFinalizerPrivate.h; sourceTree = "<group>"; };
                0F0CAEFD1EC4DA8500970D12 /* HeapFinalizerCallback.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = HeapFinalizerCallback.cpp; sourceTree = "<group>"; };
                                0F93329414CA7DC10085F3C6 /* CallLinkStatus.h */,
                                627673211B680C1E00FD9F2E /* CallMode.cpp */,
                                627673221B680C1E00FD9F2E /* CallMode.h */,
-                               0F0B83B814BCF95B00885B4F /* CallReturnOffsetToBytecodeOffset.h */,
                                0F3B7E2419A11B8000D9BC56 /* CallVariant.cpp */,
                                0F3B7E2519A11B8000D9BC56 /* CallVariant.h */,
                                969A07900ED1D3AE00F1F681 /* CodeBlock.cpp */,
                                0F0B83B114BCF71800885B4F /* CallLinkInfo.h in Headers */,
                                0F93329E14CA7DC50085F3C6 /* CallLinkStatus.h in Headers */,
                                627673241B680C1E00FD9F2E /* CallMode.h in Headers */,
-                               0F0B83B914BCF95F00885B4F /* CallReturnOffsetToBytecodeOffset.h in Headers */,
                                0F3B7E2B19A11B8000D9BC56 /* CallVariant.h in Headers */,
                                FE80C1971D775CDD008510C0 /* CatchScope.h in Headers */,
                                0F24E54217EA9F5900ABB217 /* CCallHelpers.h in Headers */,
index c6f53b3..4321305 100644 (file)
@@ -35,7 +35,7 @@ namespace JSC {
 // that can be used for outgoing args when calling a slow path C function
 // from JS code.
 
-#if !ENABLE(JIT)
+#if !ENABLE(ASSEMBLER)
 static const size_t maxFrameExtentForSlowPathCall = 0;
 
 #elif CPU(X86_64) && OS(WINDOWS)
@@ -69,7 +69,7 @@ static const size_t maxFrameExtentForSlowPathCall = 40;
 
 COMPILE_ASSERT(!(maxFrameExtentForSlowPathCall % sizeof(Register)), extent_must_be_in_multiples_of_registers);
 
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
 // Make sure that cfr - maxFrameExtentForSlowPathCall bytes will make the stack pointer aligned
 COMPILE_ASSERT((maxFrameExtentForSlowPathCall % 16) == 16 - sizeof(CallerFrameAndPC), extent_must_align_stack_from_callframe_pointer);
 #endif
diff --git a/Source/JavaScriptCore/bytecode/CallReturnOffsetToBytecodeOffset.h b/Source/JavaScriptCore/bytecode/CallReturnOffsetToBytecodeOffset.h
deleted file mode 100644 (file)
index 2d1b00c..0000000
+++ /dev/null
@@ -1,54 +0,0 @@
-/*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#pragma once
-
-namespace JSC {
-
-#if ENABLE(JIT)
-// This structure is used to map from a call return location
-// (given as an offset in bytes into the JIT code) back to
-// the bytecode index of the corresponding bytecode operation.
-// This is then used to look up the corresponding handler.
-// FIXME: This should be made inlining aware! Currently it isn't
-// because we never inline code that has exception handlers.
-struct CallReturnOffsetToBytecodeOffset {
-    CallReturnOffsetToBytecodeOffset(unsigned callReturnOffset, unsigned bytecodeOffset)
-        : callReturnOffset(callReturnOffset)
-        , bytecodeOffset(bytecodeOffset)
-    {
-    }
-
-    unsigned callReturnOffset;
-    unsigned bytecodeOffset;
-};
-
-inline unsigned getCallReturnOffset(CallReturnOffsetToBytecodeOffset* pc)
-{
-    return pc->callReturnOffset;
-}
-#endif
-
-} // namespace JSC
index 27af0fd..654c147 100644 (file)
@@ -89,7 +89,7 @@
 #include <wtf/StringPrintStream.h>
 #include <wtf/text/UniquedStringImpl.h>
 
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
 #include "RegisterAtOffsetList.h"
 #endif
 
@@ -509,7 +509,7 @@ bool CodeBlock::finishCreation(VM& vm, ScriptExecutable* ownerExecutable, Unlink
     if (size_t size = unlinkedCodeBlock->numberOfObjectAllocationProfiles())
         m_objectAllocationProfiles = RefCountedArray<ObjectAllocationProfile>(size);
 
-#if ENABLE(JIT)
+#if !ENABLE(C_LOOP)
     setCalleeSaveRegisters(RegisterSet::llintBaselineCalleeSaveRegisters());
 #endif
 
@@ -2145,7 +2145,7 @@ unsigned CodeBlock::reoptimizationRetryCounter() const
 #endif // ENABLE(JIT)
 }
 
-#if ENABLE(JIT)
+#if !ENABLE(C_LOOP)
 void CodeBlock::setCalleeSaveRegisters(RegisterSet calleeSaveRegisters)
 {
     m_calleeSaveRegisters = std::make_unique<RegisterAtOffsetList>(calleeSaveRegisters);
@@ -2172,6 +2172,9 @@ size_t CodeBlock::calleeSaveSpaceAsVirtualRegisters()
 {
     return roundCalleeSaveSpaceAsVirtualRegisters(m_calleeSaveRegisters->size());
 }
+#endif
+
+#if ENABLE(JIT)
 
 void CodeBlock::countReoptimization()
 {
index a3a3d26..0147d7a 100644 (file)
@@ -645,11 +645,23 @@ public:
     // to avoid thrashing.
     JS_EXPORT_PRIVATE unsigned reoptimizationRetryCounter() const;
     void countReoptimization();
-#if ENABLE(JIT)
+
+#if !ENABLE(C_LOOP)
+    void setCalleeSaveRegisters(RegisterSet);
+    void setCalleeSaveRegisters(std::unique_ptr<RegisterAtOffsetList>);
+
+    RegisterAtOffsetList* calleeSaveRegisters() const { return m_calleeSaveRegisters.get(); }
+
     static unsigned numberOfLLIntBaselineCalleeSaveRegisters() { return RegisterSet::llintBaselineCalleeSaveRegisters().numberOfSetRegisters(); }
     static size_t llintBaselineCalleeSaveSpaceAsVirtualRegisters();
     size_t calleeSaveSpaceAsVirtualRegisters();
+#else
+    static unsigned numberOfLLIntBaselineCalleeSaveRegisters() { return 0; }
+    static size_t llintBaselineCalleeSaveSpaceAsVirtualRegisters() { return 0; };
+    size_t calleeSaveSpaceAsVirtualRegisters() { return 0; }
+#endif
 
+#if ENABLE(JIT)
     unsigned numberOfDFGCompiles();
 
     int32_t codeTypeThresholdMultiplier() const;
@@ -739,14 +751,7 @@ public:
     bool shouldReoptimizeNow();
     bool shouldReoptimizeFromLoopNow();
 
-    void setCalleeSaveRegisters(RegisterSet);
-    void setCalleeSaveRegisters(std::unique_ptr<RegisterAtOffsetList>);
-    
-    RegisterAtOffsetList* calleeSaveRegisters() const { return m_calleeSaveRegisters.get(); }
 #else // No JIT
-    static unsigned numberOfLLIntBaselineCalleeSaveRegisters() { return 0; }
-    static size_t llintBaselineCalleeSaveSpaceAsVirtualRegisters() { return 0; };
-    size_t calleeSaveSpaceAsVirtualRegisters() { return 0; }
     void optimizeAfterWarmUp() { }
     unsigned numberOfDFGCompiles() { return 0; }
 #endif
@@ -965,8 +970,10 @@ private:
     SentinelLinkedList<LLIntCallLinkInfo, BasicRawSentinelNode<LLIntCallLinkInfo>> m_incomingLLIntCalls;
     StructureWatchpointMap m_llintGetByIdWatchpointMap;
     PoisonedRefPtr<CodeBlockPoison, JITCode> m_jitCode;
-#if ENABLE(JIT)
+#if !ENABLE(C_LOOP)
     std::unique_ptr<RegisterAtOffsetList> m_calleeSaveRegisters;
+#endif
+#if ENABLE(JIT)
     PoisonedBag<CodeBlockPoison, StructureStubInfo> m_stubInfos;
     PoisonedBag<CodeBlockPoison, JITAddIC> m_addICs;
     PoisonedBag<CodeBlockPoison, JITMulIC> m_mulICs;
index 4ef274f..be388e1 100644 (file)
@@ -58,7 +58,7 @@ namespace JSC {
 #undef OPCODE_ID_ENUM
 
 const int maxOpcodeLength = 9;
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 const int numOpcodeIDs = NUMBER_OF_BYTECODE_IDS + NUMBER_OF_CLOOP_BYTECODE_HELPER_IDS + NUMBER_OF_BYTECODE_HELPER_IDS;
 #else
 const int numOpcodeIDs = NUMBER_OF_BYTECODE_IDS + NUMBER_OF_BYTECODE_HELPER_IDS;
index 7a46a52..cc4a336 100644 (file)
@@ -672,7 +672,7 @@ void Heap::gatherStackRoots(ConservativeRoots& roots)
 
 void Heap::gatherJSStackRoots(ConservativeRoots& roots)
 {
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
     m_vm->interpreter->cloopStack().gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks);
 #else
     UNUSED_PARAM(roots);
@@ -1605,9 +1605,8 @@ void Heap::stopThePeriphery(GCConductor conn)
             && conn == GCConductor::Collector)
             setGCDidJIT();
     }
-#else
-    UNUSED_PARAM(conn);
 #endif // ENABLE(JIT)
+    UNUSED_PARAM(conn);
     
     vm()->shadowChicken().update(*vm(), vm()->topCallFrame);
     
index abc10a8..9a20072 100644 (file)
@@ -29,7 +29,7 @@
 #include "config.h"
 #include "CLoopStack.h"
 
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 
 #include "CLoopStackInlines.h"
 #include "ConservativeRoots.h"
@@ -163,4 +163,4 @@ size_t CLoopStack::committedByteCount()
 
 } // namespace JSC
 
-#endif // !ENABLE(JIT)
+#endif // ENABLE(C_LOOP)
index 9e5dda1..1f4d99d 100644 (file)
@@ -28,7 +28,7 @@
 
 #pragma once
 
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 
 #include "Register.h"
 #include <wtf/Noncopyable.h>
@@ -107,4 +107,4 @@ namespace JSC {
 
 } // namespace JSC
 
-#endif // !ENABLE(JIT)
+#endif // ENABLE(C_LOOP)
index 997faf6..f6ba17c 100644 (file)
@@ -25,7 +25,7 @@
 
 #pragma once
 
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 
 #include "CLoopStack.h"
 #include "CallFrame.h"
@@ -59,4 +59,4 @@ inline void CLoopStack::setCLoopStackLimit(Register* newTopOfStack)
 
 } // namespace JSC
 
-#endif // !ENABLE(JIT)
+#endif // ENABLE(C_LOOP)
index be84259..950958d 100644 (file)
@@ -31,7 +31,7 @@
 namespace JSC {
 
 struct EntryFrame {
-#if ENABLE(JIT) && NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+#if !ENABLE(C_LOOP) && NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
     static ptrdiff_t vmEntryRecordOffset()
     {
         EntryFrame* fakeEntryFrame = reinterpret_cast<EntryFrame*>(0x1000);
index 7126d02..bbc69db 100644 (file)
@@ -67,6 +67,7 @@
 #include "ProtoCallFrame.h"
 #include "RegExpObject.h"
 #include "Register.h"
+#include "RegisterAtOffsetList.h"
 #include "ScopedArguments.h"
 #include "StackAlignment.h"
 #include "StackFrame.h"
@@ -332,7 +333,7 @@ void setupForwardArgumentsFrameAndSetThis(CallFrame* execCaller, CallFrame* exec
 
 Interpreter::Interpreter(VM& vm)
     : m_vm(vm)
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
     , m_cloopStack(vm)
 #endif
 {
@@ -563,7 +564,7 @@ public:
 private:
     void copyCalleeSavesToEntryFrameCalleeSavesBuffer(StackVisitor& visitor) const
     {
-#if ENABLE(JIT) && NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+#if !ENABLE(C_LOOP) && NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
         RegisterAtOffsetList* currentCalleeSaves = visitor->calleeSaveRegisters();
 
         if (!currentCalleeSaves)
index 08d65ba..8117506 100644 (file)
@@ -36,7 +36,7 @@
 #include "StackAlignment.h"
 #include <wtf/HashMap.h>
 
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 #include "CLoopStack.h"
 #endif
 
@@ -93,7 +93,7 @@ namespace JSC {
         Interpreter(VM &);
         ~Interpreter();
         
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
         CLoopStack& cloopStack() { return m_cloopStack; }
 #endif
         
@@ -147,7 +147,7 @@ namespace JSC {
         JSValue execute(CallFrameClosure&);
 
         VM& m_vm;
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
         CLoopStack m_cloopStack;
 #endif
         
index 60911ab..2e8c9a6 100644 (file)
@@ -257,7 +257,7 @@ RegisterAtOffsetList* StackVisitor::Frame::calleeSaveRegisters()
     if (isInlinedFrame())
         return nullptr;
 
-#if ENABLE(JIT) && NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+#if !ENABLE(C_LOOP) && NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
 
 #if ENABLE(WEBASSEMBLY)
     if (isWasmFrame()) {
@@ -273,7 +273,7 @@ RegisterAtOffsetList* StackVisitor::Frame::calleeSaveRegisters()
     if (CodeBlock* codeBlock = this->codeBlock())
         return codeBlock->calleeSaveRegisters();
 
-#endif // ENABLE(JIT) && NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+#endif // !ENABLE(C_LOOP) && NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
 
     return nullptr;
 }
index edb7e7e..21ae35b 100644 (file)
@@ -46,7 +46,7 @@ struct VMEntryRecord {
 
     JSObject* callee() const { return m_callee; }
 
-#if ENABLE(JIT) && NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
+#if !ENABLE(C_LOOP) && NUMBER_OF_CALLEE_SAVES_REGISTERS > 0
     intptr_t calleeSaveRegistersBuffer[NUMBER_OF_CALLEE_SAVES_REGISTERS];
 #endif
 
index 18bde41..077ed58 100644 (file)
@@ -145,7 +145,7 @@ private:
 
 #else
 inline bool isJITPC(void*) { return false; }
-#endif // ENABLE(JIT) && ENABLE(ASSEMBLER)
+#endif // ENABLE(ASSEMBLER)
 
 
 } // namespace JSC
index a24d1cb..48f4022 100644 (file)
@@ -33,7 +33,7 @@ namespace JSC {
 typedef MacroAssembler::FPRegisterID FPRReg;
 static constexpr FPRReg InvalidFPRReg { FPRReg::InvalidFPRReg };
 
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
 
 #if CPU(X86) || CPU(X86_64)
 
@@ -332,7 +332,7 @@ public:
 // We use this hack to get the FPRInfo from the FPRReg type in templates because our code is bad and we should feel bad..
 constexpr FPRInfo toInfoFromReg(FPRReg) { return FPRInfo(); }
 
-#endif // ENABLE(JIT)
+#endif // ENABLE(ASSEMBLER)
 
 } // namespace JSC
 
@@ -340,7 +340,7 @@ namespace WTF {
 
 inline void printInternal(PrintStream& out, JSC::FPRReg reg)
 {
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
     out.print("%", JSC::FPRInfo::debugName(reg));
 #else
     out.printf("%%fr%d", reg);
index 5a8005f..af8e358 100644 (file)
@@ -26,7 +26,7 @@
 #include "config.h"
 #include "GPRInfo.h"
 
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
 
 namespace JSC {
 
@@ -48,4 +48,4 @@ const GPRReg GPRInfo::patchpointScratchRegister = ARM64Registers::ip0;
 
 } // namespace JSC
 
-#endif // ENABLE(JIT)
+#endif // ENABLE(ASSEMBLER)
index 2d374e3..805449f 100644 (file)
@@ -41,7 +41,7 @@ enum NoResultTag { NoResult };
 typedef MacroAssembler::RegisterID GPRReg;
 static constexpr GPRReg InvalidGPRReg { GPRReg::InvalidGPRReg };
 
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
 
 #if USE(JSVALUE64)
 class JSValueRegs {
@@ -816,7 +816,7 @@ inline NoResultTag extractResult(NoResultTag) { return NoResult; }
 // We use this hack to get the GPRInfo from the GPRReg type in templates because our code is bad and we should feel bad..
 constexpr GPRInfo toInfoFromReg(GPRReg) { return GPRInfo(); }
 
-#endif // ENABLE(JIT)
+#endif // ENABLE(ASSEMBLER)
 
 } // namespace JSC
 
@@ -824,7 +824,7 @@ namespace WTF {
 
 inline void printInternal(PrintStream& out, JSC::GPRReg reg)
 {
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
     out.print("%", JSC::GPRInfo::debugName(reg));
 #else
     out.printf("%%r%d", reg);
index e8d0191..1375bb9 100644 (file)
@@ -26,6 +26,8 @@
 #include "config.h"
 #include "HostCallReturnValue.h"
 
+#if !ENABLE(C_LOOP)
+
 #include "CallFrame.h"
 #include "JSCJSValueInlines.h"
 #include "JSObject.h"
 
 namespace JSC {
 
-// Nothing to see here.
+// Note: getHostCallReturnValueWithExecState() needs to be placed before the
+// definition of getHostCallReturnValue() below because the Windows build
+// requires it.
+extern "C" EncodedJSValue HOST_CALL_RETURN_VALUE_OPTION getHostCallReturnValueWithExecState(ExecState* exec)
+{
+    if (!exec)
+        return JSValue::encode(JSValue());
+    return JSValue::encode(exec->vm().hostCallReturnValue);
+}
+
+#if COMPILER(GCC_OR_CLANG) && CPU(X86_64)
+asm (
+".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
+HIDE_SYMBOL(getHostCallReturnValue) "\n"
+SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
+    "lea -8(%rsp), %rdi\n"
+    "jmp " LOCAL_REFERENCE(getHostCallReturnValueWithExecState) "\n"
+);
+
+#elif COMPILER(GCC_OR_CLANG) && CPU(X86)
+asm (
+".text" "\n" \
+".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
+HIDE_SYMBOL(getHostCallReturnValue) "\n"
+SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
+    "push %ebp\n"
+    "mov %esp, %eax\n"
+    "leal -4(%esp), %esp\n"
+    "push %eax\n"
+    "call " LOCAL_REFERENCE(getHostCallReturnValueWithExecState) "\n"
+    "leal 8(%esp), %esp\n"
+    "pop %ebp\n"
+    "ret\n"
+);
+
+#elif COMPILER(GCC_OR_CLANG) && CPU(ARM_THUMB2)
+asm (
+".text" "\n"
+".align 2" "\n"
+".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
+HIDE_SYMBOL(getHostCallReturnValue) "\n"
+".thumb" "\n"
+".thumb_func " THUMB_FUNC_PARAM(getHostCallReturnValue) "\n"
+SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
+    "sub r0, sp, #8" "\n"
+    "b " LOCAL_REFERENCE(getHostCallReturnValueWithExecState) "\n"
+);
+
+#elif COMPILER(GCC_OR_CLANG) && CPU(ARM_TRADITIONAL)
+asm (
+".text" "\n"
+".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
+HIDE_SYMBOL(getHostCallReturnValue) "\n"
+INLINE_ARM_FUNCTION(getHostCallReturnValue)
+SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
+    "sub r0, sp, #8" "\n"
+    "b " LOCAL_REFERENCE(getHostCallReturnValueWithExecState) "\n"
+);
+
+#elif CPU(ARM64)
+asm (
+".text" "\n"
+".align 2" "\n"
+".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
+HIDE_SYMBOL(getHostCallReturnValue) "\n"
+SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
+     "sub x0, sp, #16" "\n"
+     "b " LOCAL_REFERENCE(getHostCallReturnValueWithExecState) "\n"
+);
+
+#elif COMPILER(GCC_OR_CLANG) && CPU(MIPS)
+
+#if WTF_MIPS_PIC
+#define LOAD_FUNCTION_TO_T9(function) \
+        ".set noreorder" "\n" \
+        ".cpload $25" "\n" \
+        ".set reorder" "\n" \
+        "la $t9, " LOCAL_REFERENCE(function) "\n"
+#else
+#define LOAD_FUNCTION_TO_T9(function) "" "\n"
+#endif
+
+asm (
+".text" "\n"
+".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
+HIDE_SYMBOL(getHostCallReturnValue) "\n"
+SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
+    LOAD_FUNCTION_TO_T9(getHostCallReturnValueWithExecState)
+    "addi $a0, $sp, -8" "\n"
+    "b " LOCAL_REFERENCE(getHostCallReturnValueWithExecState) "\n"
+);
+
+#elif COMPILER(MSVC) && CPU(X86)
+extern "C" {
+    __declspec(naked) EncodedJSValue HOST_CALL_RETURN_VALUE_OPTION getHostCallReturnValue()
+    {
+        __asm lea eax, [esp - 4]
+        __asm mov [esp + 4], eax;
+        __asm jmp getHostCallReturnValueWithExecState
+    }
+}
+#endif
 
 } // namespace JSC
 
+#endif // !ENABLE(C_LOOP)
index 62eb2a5..d211d14 100644 (file)
@@ -27,7 +27,7 @@
 
 #include "JSCJSValue.h"
 
-#if ENABLE(JIT)
+#if !ENABLE(C_LOOP)
 
 #if CALLING_CONVENTION_IS_STDCALL
 #define HOST_CALL_RETURN_VALUE_OPTION CDECL
@@ -57,4 +57,4 @@ inline void initializeHostCallReturnValue() { }
 
 } // namespace JSC
 
-#endif // ENABLE(JIT)
+#endif // !ENABLE(C_LOOP)
index b2eb418..7aacf6b 100644 (file)
@@ -2908,109 +2908,6 @@ int32_t JIT_OPERATION operationCheckIfExceptionIsUncatchableAndNotifyProfiler(Ex
 
 } // extern "C"
 
-// Note: getHostCallReturnValueWithExecState() needs to be placed before the
-// definition of getHostCallReturnValue() below because the Windows build
-// requires it.
-extern "C" EncodedJSValue HOST_CALL_RETURN_VALUE_OPTION getHostCallReturnValueWithExecState(ExecState* exec)
-{
-    if (!exec)
-        return JSValue::encode(JSValue());
-    return JSValue::encode(exec->vm().hostCallReturnValue);
-}
-
-#if COMPILER(GCC_OR_CLANG) && CPU(X86_64)
-asm (
-".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
-HIDE_SYMBOL(getHostCallReturnValue) "\n"
-SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
-    "lea -8(%rsp), %rdi\n"
-    "jmp " LOCAL_REFERENCE(getHostCallReturnValueWithExecState) "\n"
-);
-
-#elif COMPILER(GCC_OR_CLANG) && CPU(X86)
-asm (
-".text" "\n" \
-".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
-HIDE_SYMBOL(getHostCallReturnValue) "\n"
-SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
-    "push %ebp\n"
-    "mov %esp, %eax\n"
-    "leal -4(%esp), %esp\n"
-    "push %eax\n"
-    "call " LOCAL_REFERENCE(getHostCallReturnValueWithExecState) "\n"
-    "leal 8(%esp), %esp\n"
-    "pop %ebp\n"
-    "ret\n"
-);
-
-#elif COMPILER(GCC_OR_CLANG) && CPU(ARM_THUMB2)
-asm (
-".text" "\n"
-".align 2" "\n"
-".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
-HIDE_SYMBOL(getHostCallReturnValue) "\n"
-".thumb" "\n"
-".thumb_func " THUMB_FUNC_PARAM(getHostCallReturnValue) "\n"
-SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
-    "sub r0, sp, #8" "\n"
-    "b " LOCAL_REFERENCE(getHostCallReturnValueWithExecState) "\n"
-);
-
-#elif COMPILER(GCC_OR_CLANG) && CPU(ARM_TRADITIONAL)
-asm (
-".text" "\n"
-".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
-HIDE_SYMBOL(getHostCallReturnValue) "\n"
-INLINE_ARM_FUNCTION(getHostCallReturnValue)
-SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
-    "sub r0, sp, #8" "\n"
-    "b " LOCAL_REFERENCE(getHostCallReturnValueWithExecState) "\n"
-);
-
-#elif CPU(ARM64)
-asm (
-".text" "\n"
-".align 2" "\n"
-".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
-HIDE_SYMBOL(getHostCallReturnValue) "\n"
-SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
-     "sub x0, sp, #16" "\n"
-     "b " LOCAL_REFERENCE(getHostCallReturnValueWithExecState) "\n"
-);
-
-#elif COMPILER(GCC_OR_CLANG) && CPU(MIPS)
-
-#if WTF_MIPS_PIC
-#define LOAD_FUNCTION_TO_T9(function) \
-        ".set noreorder" "\n" \
-        ".cpload $25" "\n" \
-        ".set reorder" "\n" \
-        "la $t9, " LOCAL_REFERENCE(function) "\n"
-#else
-#define LOAD_FUNCTION_TO_T9(function) "" "\n"
-#endif
-
-asm (
-".text" "\n"
-".globl " SYMBOL_STRING(getHostCallReturnValue) "\n"
-HIDE_SYMBOL(getHostCallReturnValue) "\n"
-SYMBOL_STRING(getHostCallReturnValue) ":" "\n"
-    LOAD_FUNCTION_TO_T9(getHostCallReturnValueWithExecState)
-    "addi $a0, $sp, -8" "\n"
-    "b " LOCAL_REFERENCE(getHostCallReturnValueWithExecState) "\n"
-);
-
-#elif COMPILER(MSVC) && CPU(X86)
-extern "C" {
-    __declspec(naked) EncodedJSValue HOST_CALL_RETURN_VALUE_OPTION getHostCallReturnValue()
-    {
-        __asm lea eax, [esp - 4]
-        __asm mov [esp + 4], eax;
-        __asm jmp getHostCallReturnValueWithExecState
-    }
-}
-#endif
-
 } // namespace JSC
 
 #endif // ENABLE(JIT)
index 544bca3..0a37bf7 100644 (file)
@@ -25,7 +25,7 @@
 
 #include "config.h"
 
-#if !ENABLE(JIT) && COMPILER(MSVC) && CPU(X86_64)
+#if ENABLE(C_LOOP) && COMPILER(MSVC) && CPU(X86_64)
 
 #include "CallFrame.h"
 #include "JSCJSValue.h"
@@ -43,4 +43,4 @@ extern "C" EncodedJSValue getHostCallReturnValueWithExecState(ExecState*)
 
 } // namespace JSC
 
-#endif // !ENABLE(JIT) && COMPILER(MSVC) && CPU(X86_64)
+#endif // ENABLE(C_LOOP) && COMPILER(MSVC) && CPU(X86_64)
index 4aa9653..3cc49c0 100644 (file)
@@ -26,7 +26,7 @@
 #include "config.h"
 #include "Reg.h"
 
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
 
 #include "FPRInfo.h"
 #include "GPRInfo.h"
@@ -54,5 +54,5 @@ void Reg::dump(PrintStream& out) const
 
 } // namespace JSC
 
-#endif // ENABLE(JIT)
+#endif // ENABLE(ASSEMBLER)
 
index 84ae359..9d246ad 100644 (file)
@@ -25,7 +25,7 @@
 
 #pragma once
 
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
 
 #include "MacroAssembler.h"
 
@@ -245,4 +245,4 @@ template<> struct HashTraits<JSC::Reg> : SimpleClassHashTraits<JSC::Reg> {
 
 } // namespace WTF
 
-#endif // ENABLE(JIT)
+#endif // ENABLE(ASSEMBLER)
index 16a639c..0e44e2d 100644 (file)
@@ -26,7 +26,7 @@
 #include "config.h"
 #include "RegisterAtOffset.h"
 
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
 
 namespace JSC {
 
@@ -41,5 +41,5 @@ void RegisterAtOffset::dump(PrintStream& out) const
 
 } // namespace JSC
 
-#endif // ENABLE(JIT)
+#endif // ENABLE(ASSEMBLER)
 
index 0db8da4..cd415b6 100644 (file)
@@ -25,7 +25,7 @@
 
 #pragma once
 
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
 
 #include "Reg.h"
 #include <wtf/PrintStream.h>
@@ -74,4 +74,4 @@ private:
 
 } // namespace JSC
 
-#endif // ENABLE(JIT)
+#endif // ENABLE(ASSEMBLER)
index dd5b5b3..6fe06e7 100644 (file)
@@ -26,7 +26,7 @@
 #include "config.h"
 #include "RegisterAtOffsetList.h"
 
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
 
 #include <wtf/ListDump.h>
 
@@ -68,5 +68,5 @@ unsigned RegisterAtOffsetList::indexOf(Reg reg) const
 
 } // namespace JSC
 
-#endif // ENABLE(JIT)
+#endif // ENABLE(ASSEMBLER)
 
index e0b2541..5e3a3cb 100644 (file)
@@ -25,7 +25,7 @@
 
 #pragma once
 
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
 
 #include "RegisterAtOffset.h"
 #include "RegisterSet.h"
@@ -69,4 +69,4 @@ private:
 
 } // namespace JSC
 
-#endif // ENABLE(JIT)
+#endif // ENABLE(ASSEMBLER)
index 0f3f957..5a77977 100644 (file)
@@ -25,7 +25,7 @@
 
 #pragma once
 
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
 
 #include "FPRInfo.h"
 #include "GPRInfo.h"
@@ -107,4 +107,4 @@ private:
 
 } // namespace JSC
 
-#endif // ENABLE(JIT)
+#endif // ENABLE(ASSEMBLER)
index 85256f7..a602141 100644 (file)
@@ -26,7 +26,7 @@
 #include "config.h"
 #include "RegisterSet.h"
 
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
 
 #include "GPRInfo.h"
 #include "JSCInlines.h"
@@ -395,5 +395,5 @@ void RegisterSet::dump(PrintStream& out) const
 
 } // namespace JSC
 
-#endif // ENABLE(JIT)
+#endif // ENABLE(ASSEMBLER)
 
index c87eaf8..a618852 100644 (file)
@@ -25,7 +25,7 @@
 
 #pragma once
 
-#if ENABLE(JIT)
+#if !ENABLE(C_LOOP)
 
 #include "GPRInfo.h"
 #include "MacroAssembler.h"
@@ -246,4 +246,4 @@ template<> struct HashTraits<JSC::RegisterSet> : public CustomHashTraits<JSC::Re
 
 } // namespace WTF
 
-#endif // ENABLE(JIT)
+#endif // !ENABLE(C_LOOP)
index 9c2e73d..3f3521e 100644 (file)
@@ -26,9 +26,8 @@
 #include "config.h"
 #include "TempRegisterSet.h"
 
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
 
-#include "JSCInlines.h"
 #include "RegisterSet.h"
 
 namespace JSC {
@@ -51,4 +50,4 @@ TempRegisterSet::TempRegisterSet(const RegisterSet& other)
 
 } // namespace JSC
 
-#endif // ENABLE(JIT)
+#endif // ENABLE(ASSEMBLER)
index 9983229..66383f5 100644 (file)
@@ -25,7 +25,7 @@
 
 #pragma once
 
-#if ENABLE(JIT)
+#if ENABLE(ASSEMBLER)
 
 #include "FPRInfo.h"
 #include "GPRInfo.h"
@@ -205,7 +205,7 @@ private:
 
 } // namespace JSC
 
-#else // ENABLE(JIT) -> so if JIT is disabled
+#else // ENABLE(ASSEMBLER) -> so if JIT is disabled
 
 namespace JSC {
 
@@ -216,4 +216,4 @@ struct TempRegisterSet { };
 
 } // namespace JSC
 
-#endif // ENABLE(JIT)
+#endif // ENABLE(ASSEMBLER)
index e3c6c6c..c84b189 100644 (file)
@@ -26,7 +26,7 @@
 #include "config.h"
 #include "LLIntCLoop.h"
 
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 
 #include "LLIntData.h"
 
@@ -41,4 +41,4 @@ void CLoop::initialize()
 } // namespace LLInt
 } // namespace JSC
 
-#endif // !ENABLE(JIT)
+#endif // ENABLE(C_LOOP)
index a315914..02b2f24 100644 (file)
@@ -25,7 +25,7 @@
 
 #pragma once
 
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 
 #include "JSCJSValue.h"
 #include "Opcode.h"
@@ -44,4 +44,4 @@ public:
 
 using JSC::LLInt::CLoop;
 
-#endif // !ENABLE(JIT)
+#endif // ENABLE(C_LOOP)
index 41f64b4..0dd0454 100644 (file)
@@ -46,16 +46,16 @@ namespace JSC { namespace LLInt {
 Instruction Data::s_exceptionInstructions[maxOpcodeLength + 1] = { };
 Opcode Data::s_opcodeMap[numOpcodeIDs] = { };
 
-#if ENABLE(JIT)
+#if !ENABLE(C_LOOP)
 extern "C" void llint_entry(void*);
 #endif
 
 void initialize()
 {
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
     CLoop::initialize();
 
-#else // ENABLE(JIT)
+#else // !ENABLE(C_LOOP)
     llint_entry(&Data::s_opcodeMap);
 
     for (int i = 0; i < numOpcodeIDs; ++i)
@@ -64,7 +64,7 @@ void initialize()
     void* handler = Data::s_opcodeMap[llint_throw_from_slow_path_trampoline];
     for (int i = 0; i < maxOpcodeLength + 1; ++i)
         Data::s_exceptionInstructions[i].u.pointer = handler;
-#endif // ENABLE(JIT)
+#endif // ENABLE(C_LOOP)
 }
 
 IGNORE_CLANG_WARNINGS_BEGIN("missing-noreturn")
@@ -125,7 +125,7 @@ void Data::performAssertions(VM& vm)
     STATIC_ASSERT(ValueUndefined == (TagBitTypeOther | TagBitUndefined));
     STATIC_ASSERT(ValueNull == TagBitTypeOther);
 #endif
-#if (CPU(X86_64) && !OS(WINDOWS)) || CPU(ARM64) || !ENABLE(JIT)
+#if (CPU(X86_64) && !OS(WINDOWS)) || CPU(ARM64) || !ENABLE(ASSEMBLER)
     STATIC_ASSERT(!maxFrameExtentForSlowPathCall);
 #elif CPU(ARM)
     STATIC_ASSERT(maxFrameExtentForSlowPathCall == 24);
@@ -135,7 +135,7 @@ void Data::performAssertions(VM& vm)
     STATIC_ASSERT(maxFrameExtentForSlowPathCall == 64);
 #endif
 
-#if !ENABLE(JIT) || USE(JSVALUE32_64)
+#if ENABLE(C_LOOP) || USE(JSVALUE32_64)
     ASSERT(!CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters());
 #elif (CPU(X86_64) && !OS(WINDOWS))  || CPU(ARM64)
     ASSERT(CodeBlock::llintBaselineCalleeSaveSpaceAsVirtualRegisters() == 3);
index be58c00..9ca5715 100644 (file)
@@ -34,7 +34,7 @@ namespace JSC {
 class VM;
 struct Instruction;
 
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 typedef OpcodeID LLIntCode;
 #else
 typedef void (*LLIntCode)();
index a6437d4..ba6d516 100644 (file)
@@ -30,7 +30,7 @@
 #include <wtf/Gigacage.h>
 #include <wtf/Poisoned.h>
 
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 #define OFFLINE_ASM_C_LOOP 1
 #define OFFLINE_ASM_X86 0
 #define OFFLINE_ASM_X86_WIN 0
@@ -45,7 +45,7 @@
 #define OFFLINE_ASM_ARMv7s 0
 #define OFFLINE_ASM_MIPS 0
 
-#else // ENABLE(JIT)
+#else // ENABLE(C_LOOP)
 
 #define OFFLINE_ASM_C_LOOP 0
 
 #endif
 #endif
 
-#endif // ENABLE(JIT)
+#endif // ENABLE(C_LOOP)
 
 #if USE(JSVALUE64)
 #define OFFLINE_ASM_JSVALUE64 1
index 85905e3..84a9fc2 100644 (file)
 
 #pragma once
 
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 
 #define FOR_EACH_LLINT_NOJIT_NATIVE_HELPER(macro) \
     FOR_EACH_CLOOP_BYTECODE_HELPER_ID(macro)
 
-#else // ENABLE(JIT)
+#else // !ENABLE(C_LOOP)
 
 #define FOR_EACH_LLINT_NOJIT_NATIVE_HELPER(macro) \
-    // Nothing to do here. Use the JIT impl instead.
+    // Nothing to do here. Use the LLInt ASM / JIT impl instead.
 
-#endif // !ENABLE(JIT)
+#endif // ENABLE(C_LOOP)
 
 
 #define FOR_EACH_LLINT_NATIVE_HELPER(macro) \
index 942185e..82bdc55 100644 (file)
@@ -46,7 +46,7 @@ ALWAYS_INLINE bool isLLIntPC(void* pc)
     return llintStart <= pcAsInt && pcAsInt <= llintEnd;
 }
 
-#if ENABLE(JIT)
+#if !ENABLE(C_LOOP)
 static const GPRReg LLIntPC = GPRInfo::regT4;
 #endif
 
index 3c5f9a2..c1c2c67 100644 (file)
@@ -536,7 +536,7 @@ LLINT_SLOW_PATH_DECL(stack_check)
         slowPathLogF("Num vars = %u.\n", codeBlock->numVars());
     }
     slowPathLogF("Current OS stack end is at %p.\n", vm.softStackLimit());
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
     slowPathLogF("Current C Loop stack end is at %p.\n", vm.cloopStackLimit());
 #endif
 
@@ -547,7 +547,7 @@ LLINT_SLOW_PATH_DECL(stack_check)
     // For JIT enabled builds which uses the C stack, the stack is not growable.
     // Hence, if we get here, then we know a stack overflow is imminent. So, just
     // throw the StackOverflowError unconditionally.
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
     Register* topOfFrame = exec->topOfFrame();
     if (LIKELY(topOfFrame < reinterpret_cast<Register*>(exec))) {
         ASSERT(!vm.interpreter->cloopStack().containsAddress(topOfFrame));
@@ -1861,7 +1861,7 @@ extern "C" SlowPathReturnType llint_throw_stack_overflow_error(VM* vm, ProtoCall
     return encodeResult(0, 0);
 }
 
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 extern "C" SlowPathReturnType llint_stack_check_at_vm_entry(VM* vm, Register* newTopOfStack)
 {
     bool success = vm->ensureStackCapacityFor(newTopOfStack);
index 7cfeca7..9fe1ac5 100644 (file)
@@ -132,7 +132,7 @@ LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_log_shadow_chicken_tail);
 LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_super_sampler_begin);
 LLINT_SLOW_PATH_HIDDEN_DECL(slow_path_super_sampler_end);
 extern "C" SlowPathReturnType llint_throw_stack_overflow_error(VM*, ProtoCallFrame*) WTF_INTERNAL;
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 extern "C" SlowPathReturnType llint_stack_check_at_vm_entry(VM*, Register*) WTF_INTERNAL;
 #endif
 extern "C" NO_RETURN_DUE_TO_CRASH void llint_crash() WTF_INTERNAL;
index 2446bdb..5c194cf 100644 (file)
@@ -98,8 +98,9 @@ MacroAssemblerCodeRef<JITThunkPtrTag> moduleProgramEntryThunkGenerator(VM* vm)
 
 } // namespace LLInt
 
-#else // ENABLE(JIT)
+#endif
 
+#if ENABLE(C_LOOP)
 // Non-JIT (i.e. C Loop LLINT) case:
 
 EncodedJSValue vmEntryToJavaScript(void* executableAddress, VM* vm, ProtoCallFrame* protoCallFrame)
@@ -122,7 +123,6 @@ extern "C" VMEntryRecord* vmEntryRecord(EntryFrame* entryFrame)
     return reinterpret_cast<VMEntryRecord*>(reinterpret_cast<char*>(entryFrame) - VMEntryTotalFrameSize);
 }
 
-
-#endif // ENABLE(JIT)
+#endif // ENABLE(C_LOOP)
 
 } // namespace JSC
index 78bff08..47bea2d 100644 (file)
@@ -29,7 +29,7 @@
 #include "LLIntOfflineAsmConfig.h"
 #include <wtf/InlineASM.h>
 
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 #include "Bytecodes.h"
 #include "CLoopStackInlines.h"
 #include "CodeBlock.h"
@@ -559,4 +559,4 @@ JSValue CLoop::execute(OpcodeID entryOpcodeID, void* executableAddress, VM* vm,
 // for the interpreter, as compiled from LowLevelInterpreter.asm.
 #include "LLIntAssembly.h"
 
-#endif // ENABLE(JIT)
+#endif // ENABLE(C_LOOP)
index 83008e1..e4c9074 100644 (file)
@@ -27,7 +27,7 @@
 
 #include "Opcode.h"
 
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 
 namespace JSC {
 
@@ -44,4 +44,4 @@ FOR_EACH_CORE_OPCODE_ID(LLINT_OPCODE_ALIAS)
 
 } // namespace JSC
 
-#endif // !ENABLE(JIT)
+#endif // ENABLE(C_LOOP)
index 5442277..f7e6575 100644 (file)
@@ -59,7 +59,7 @@ class OSRExitCompiler;
 class SpeculativeJIT;
 }
 #endif
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 namespace LLInt {
 class CLoop;
 }
@@ -147,7 +147,7 @@ class JSValue {
     friend class DFG::OSRExitCompiler;
     friend class DFG::SpeculativeJIT;
 #endif
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
     friend class LLInt::CLoop;
 #endif
 
index 836d755..46f1554 100644 (file)
@@ -50,10 +50,10 @@ inline void setInstructionPointer(PlatformRegisters&, MacroAssemblerCodePtr<CFun
 
 template<size_t N> void*& argumentPointer(PlatformRegisters&);
 template<size_t N> void* argumentPointer(const PlatformRegisters&);
-#if ENABLE(JIT)
+#if !ENABLE(C_LOOP)
 void*& llintInstructionPointer(PlatformRegisters&);
 void* llintInstructionPointer(const PlatformRegisters&);
-#endif // ENABLE(JIT)
+#endif // !ENABLE(C_LOOP)
 
 #if HAVE(MACHINE_CONTEXT)
 
@@ -72,10 +72,10 @@ inline void setInstructionPointer(mcontext_t&, MacroAssemblerCodePtr<CFunctionPt
 
 template<size_t N> void*& argumentPointer(mcontext_t&);
 template<size_t N> void* argumentPointer(const mcontext_t&);
-#if ENABLE(JIT)
+#if !ENABLE(C_LOOP)
 void*& llintInstructionPointer(mcontext_t&);
 void* llintInstructionPointer(const mcontext_t&);
-#endif // ENABLE(JIT)
+#endif // !ENABLE(C_LOOP)
 #endif // HAVE(MACHINE_CONTEXT)
 #endif // OS(WINDOWS) || HAVE(MACHINE_CONTEXT)
 
@@ -668,7 +668,7 @@ inline void* argumentPointer(const mcontext_t& machineContext)
 }
 #endif // HAVE(MACHINE_CONTEXT)
 
-#if ENABLE(JIT)
+#if !ENABLE(C_LOOP)
 #if OS(WINDOWS) || HAVE(MACHINE_CONTEXT)
 inline void*& llintInstructionPointer(PlatformRegisters& regs)
 {
@@ -783,7 +783,7 @@ inline void* llintInstructionPointer(const mcontext_t& machineContext)
     return llintInstructionPointer(const_cast<mcontext_t&>(machineContext));
 }
 #endif // HAVE(MACHINE_CONTEXT)
-#endif // ENABLE(JIT)
+#endif // !ENABLE(C_LOOP)
 
 }
 }
index b6935e4..a99554d 100644 (file)
@@ -601,10 +601,15 @@ void SamplingProfiler::processUnverifiedStackTraces()
                     storeCalleeIntoLastFrame(unprocessedStackTrace.frames[0].unverifiedCallee);
                     startIndex = 1;
                 }
-            } else if (std::optional<CodeOrigin> codeOrigin = topCodeBlock->findPC(unprocessedStackTrace.topPC)) {
-                appendCodeOrigin(topCodeBlock, *codeOrigin);
-                storeCalleeIntoLastFrame(unprocessedStackTrace.frames[0].unverifiedCallee);
-                startIndex = 1;
+            } else {
+#if ENABLE(JIT)
+                if (std::optional<CodeOrigin> codeOrigin = topCodeBlock->findPC(unprocessedStackTrace.topPC)) {
+                    appendCodeOrigin(topCodeBlock, *codeOrigin);
+                    storeCalleeIntoLastFrame(unprocessedStackTrace.frames[0].unverifiedCallee);
+                    startIndex = 1;
+                }
+#endif
+                UNUSED_PARAM(appendCodeOrigin);
             }
         }
 
index 01ec86a..34eeca7 100644 (file)
@@ -102,10 +102,8 @@ JSValue optimizeNextInvocation(JSValue theFunctionValue)
 #if ENABLE(JIT)
     if (CodeBlock* baselineCodeBlock = getSomeBaselineCodeBlockForFunction(theFunctionValue))
         baselineCodeBlock->optimizeNextInvocation();
-#else
-    UNUSED_PARAM(theFunctionValue);
 #endif
-
+    UNUSED_PARAM(theFunctionValue);
     return jsUndefined();
 }
 
index 8e092bc..47e61b8 100644 (file)
 #include <wtf/text/AtomicStringTable.h>
 #include <wtf/text/SymbolRegistry.h>
 
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 #include "CLoopStack.h"
 #include "CLoopStackInlines.h"
 #endif
@@ -461,7 +461,7 @@ VM::VM(VMType vmType, HeapType heapType)
     ftlThunks = std::make_unique<FTL::Thunks>();
 #endif // ENABLE(FTL_JIT)
     
-#if ENABLE(JIT)
+#if !ENABLE(C_LOOP)
     initializeHostCallReturnValue(); // This is needed to convince the linker not to drop host call return support.
 #endif
     
@@ -741,9 +741,8 @@ NativeExecutable* VM::getHostFunction(NativeFunction function, Intrinsic intrins
             intrinsic != NoIntrinsic ? thunkGeneratorForIntrinsic(intrinsic) : 0,
             intrinsic, signature, name);
     }
-#else // ENABLE(JIT)
-    UNUSED_PARAM(intrinsic);
 #endif // ENABLE(JIT)
+    UNUSED_PARAM(intrinsic);
     return NativeExecutable::create(*this,
         adoptRef(*new NativeJITCode(LLInt::getCodeRef<JSEntryPtrTag>(llint_native_call_trampoline), JITCode::HostCallThunk)), function,
         adoptRef(*new NativeJITCode(LLInt::getCodeRef<JSEntryPtrTag>(llint_native_construct_trampoline), JITCode::HostCallThunk)), constructor,
@@ -876,7 +875,7 @@ size_t VM::updateSoftReservedZoneSize(size_t softReservedZoneSize)
 {
     size_t oldSoftReservedZoneSize = m_currentSoftReservedZoneSize;
     m_currentSoftReservedZoneSize = softReservedZoneSize;
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
     interpreter->cloopStack().setSoftReservedZoneSize(softReservedZoneSize);
 #endif
 
@@ -1145,7 +1144,7 @@ void sanitizeStackForVM(VM* vm)
         ASSERT(vm->currentThreadIsHoldingAPILock());
         ASSERT_UNUSED(stackBounds, stackBounds.contains(vm->lastStackTop()));
     }
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
     vm->interpreter->cloopStack().sanitizeStack();
 #else
     sanitizeStackForVMImpl(vm);
@@ -1154,7 +1153,7 @@ void sanitizeStackForVM(VM* vm)
 
 size_t VM::committedStackByteCount()
 {
-#if ENABLE(JIT)
+#if !ENABLE(C_LOOP)
     // When using the C stack, we don't know how many stack pages are actually
     // committed. So, we use the current stack usage as an estimate.
     ASSERT(Thread::current().stack().isGrowingDownward());
@@ -1166,7 +1165,7 @@ size_t VM::committedStackByteCount()
 #endif
 }
 
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
 bool VM::ensureStackCapacityForCLoop(Register* newTopOfStack)
 {
     return interpreter->cloopStack().ensureCapacityFor(newTopOfStack);
@@ -1176,7 +1175,7 @@ bool VM::isSafeToRecurseSoftCLoop() const
 {
     return interpreter->cloopStack().isSafeToRecurse();
 }
-#endif // !ENABLE(JIT)
+#endif // ENABLE(C_LOOP)
 
 #if ENABLE(EXCEPTION_SCOPE_VERIFICATION)
 void VM::verifyExceptionCheckNeedIsSatisfied(unsigned recursionDepth, ExceptionEventLocation& location)
index 40da5d3..f97af35 100644 (file)
@@ -716,7 +716,7 @@ public:
     void* stackLimit() { return m_stackLimit; }
     void* softStackLimit() { return m_softStackLimit; }
     void** addressOfSoftStackLimit() { return &m_softStackLimit; }
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
     void* cloopStackLimit() { return m_cloopStackLimit; }
     void setCLoopStackLimit(void* limit) { m_cloopStackLimit = limit; }
 #endif
@@ -929,10 +929,10 @@ private:
         m_exception = nullptr;
     }
 
-#if !ENABLE(JIT)    
+#if ENABLE(C_LOOP)
     bool ensureStackCapacityForCLoop(Register* newTopOfStack);
     bool isSafeToRecurseSoftCLoop() const;
-#endif // !ENABLE(JIT)
+#endif // ENABLE(C_LOOP)
 
     JS_EXPORT_PRIVATE void throwException(ExecState*, Exception*);
     JS_EXPORT_PRIVATE JSValue throwException(ExecState*, JSValue);
@@ -953,7 +953,7 @@ private:
     size_t m_currentSoftReservedZoneSize;
     void* m_stackLimit { nullptr };
     void* m_softStackLimit { nullptr };
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
     void* m_cloopStackLimit { nullptr };
 #endif
     void* m_lastStackTop { nullptr };
@@ -1035,7 +1035,7 @@ inline Heap* WeakSet::heap() const
     return &m_vm->heap;
 }
 
-#if ENABLE(JIT)
+#if !ENABLE(C_LOOP)
 extern "C" void sanitizeStackForVMImpl(VM*);
 #endif
 
index 15b50d6..15e5df0 100644 (file)
@@ -35,7 +35,7 @@ namespace JSC {
     
 bool VM::ensureStackCapacityFor(Register* newTopOfStack)
 {
-#if ENABLE(JIT)
+#if !ENABLE(C_LOOP)
     ASSERT(Thread::current().stack().isGrowingDownward());
     return newTopOfStack >= m_softStackLimit;
 #else
@@ -47,7 +47,7 @@ bool VM::ensureStackCapacityFor(Register* newTopOfStack)
 bool VM::isSafeToRecurseSoft() const
 {
     bool safe = isSafeToRecurse(m_softStackLimit);
-#if !ENABLE(JIT)
+#if ENABLE(C_LOOP)
     safe = safe && isSafeToRecurseSoftCLoop();
 #endif
     return safe;
index 13d7aa1..9f498d4 100644 (file)
@@ -1,3 +1,20 @@
+2018-09-21  Yusuke Suzuki  <yusukesuzuki@slowstart.org>
+
+        [JSC] Enable LLInt ASM interpreter on X64 and ARM64 in non JIT configuration
+        https://bugs.webkit.org/show_bug.cgi?id=189778
+
+        Reviewed by Keith Miller.
+
+        This patch adds ENABLE(C_LOOP), which indicates that we use the CLoop as the interpreter.
+        Previously, !ENABLE(JIT) implied this configuration. But now we have a build configuration
+        that uses the LLInt ASM interpreter (not the CLoop) while !ENABLE(JIT) holds.
+
+        We enable the LLInt ASM interpreter for non-JIT builds on the X86_64 and ARM64 architectures.
+        We also enable ENABLE(ASSEMBLER) for non-JIT builds since it provides the machine register
+        information used by LLInt and the SamplingProfiler.
+
+        * wtf/Platform.h:
+
 2018-09-21  Keith Miller  <keith_miller@apple.com>
 
         Add Promise SPI
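
As the ChangeLog entry above notes, the interpreter choice and the JIT are now independent knobs. A hedged sketch of how the three flags combine in the new non-JIT X86_64/ARM64 flavor, using illustrative DEMO_* macros rather than the real Platform.h definitions:

    #include <cstdio>

    // Illustrative stand-ins; the real values come from wtf/Platform.h.
    #define DEMO_ENABLE_JIT 0        // JIT tiers disabled
    #define DEMO_ENABLE_C_LOOP 0     // X86_64 / LP64 ARM64: LLInt ASM interpreter
    #define DEMO_ENABLE_ASSEMBLER 1  // kept on for machine register metadata

    int main()
    {
    #if DEMO_ENABLE_C_LOOP
        std::puts("interpreter: CLoop");
    #else
        std::puts("interpreter: LLInt ASM");
    #endif
        std::puts(DEMO_ENABLE_JIT ? "JIT tiers: on" : "JIT tiers: off");
        std::puts(DEMO_ENABLE_ASSEMBLER ? "MacroAssembler register info: available"
                                        : "MacroAssembler register info: unavailable");
        return 0;
    }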
index e569cdc..8835b89 100644 (file)
 #define ENABLE_JIT 0
 #endif
 
+#if !defined(ENABLE_C_LOOP)
+#if ENABLE(JIT) \
+    || CPU(X86_64) || (CPU(ARM64) && !defined(__ILP32__))
+#define ENABLE_C_LOOP 0
+#else
+#define ENABLE_C_LOOP 1
+#endif
+#endif
+
 /* The FTL *does not* work on 32-bit platforms. Disable it even if someone asked us to enable it. */
 #if USE(JSVALUE32_64)
 #undef ENABLE_FTL_JIT
  * In configurations other than Windows and Darwin, because layout of mcontext_t depends on standard libraries (like glibc),
  * sampling profiler is enabled if WebKit uses pthreads and glibc. */
 #if !defined(ENABLE_SAMPLING_PROFILER)
-#if ENABLE(JIT) && (OS(WINDOWS) || HAVE(MACHINE_CONTEXT))
+#if !ENABLE(C_LOOP) && (OS(WINDOWS) || HAVE(MACHINE_CONTEXT))
 #define ENABLE_SAMPLING_PROFILER 1
 #else
 #define ENABLE_SAMPLING_PROFILER 0
 #endif
 
 /* Determine if we need to enable Computed Goto Opcodes or not: */
-#if HAVE(COMPUTED_GOTO) || ENABLE(JIT)
+#if HAVE(COMPUTED_GOTO) || !ENABLE(C_LOOP)
 #define ENABLE_COMPUTED_GOTO_OPCODES 1
 #endif
 
-#if ENABLE(JIT) && !COMPILER(MSVC) && \
+#if !ENABLE(C_LOOP) && !COMPILER(MSVC) && \
     (CPU(X86) || CPU(X86_64) || CPU(ARM64) || (CPU(ARM_THUMB2) && OS(DARWIN)))
 /* This feature works by embedding the OpcodeID in the 32 bit just before the generated LLint code
    that executes each opcode. It cannot be supported by the CLoop since there's no way to embed the
 
 /* If either the JIT or the RegExp JIT is enabled, then the Assembler must be
    enabled as well: */
-#if ENABLE(JIT) || ENABLE(YARR_JIT)
+#if ENABLE(JIT) || ENABLE(YARR_JIT) || !ENABLE(C_LOOP)
 #if defined(ENABLE_ASSEMBLER) && !ENABLE_ASSEMBLER
 #error "Cannot enable the JIT or RegExp JIT without enabling the Assembler"
 #else
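
The Platform.h hunk above boils down to: default ENABLE_C_LOOP to 0 whenever the JIT is on or the CPU is X86_64 or LP64 ARM64 (where the LLInt ASM interpreter is available), and to 1 everywhere else. A small standalone sketch of that decision, with an illustrative helper rather than the real macro logic:

    #include <cassert>

    // Illustrative helper mirroring the ENABLE_C_LOOP default above.
    constexpr bool cLoopEnabled(bool jit, bool x86_64, bool arm64LP64)
    {
        return !(jit || x86_64 || arm64LP64);
    }

    int main()
    {
        assert(!cLoopEnabled(true,  false, false)); // JIT build: LLInt ASM + JIT
        assert(!cLoopEnabled(false, true,  false)); // non-JIT X86_64: LLInt ASM only
        assert(!cLoopEnabled(false, false, true));  // non-JIT LP64 ARM64: LLInt ASM only
        assert(cLoopEnabled(false, false, false));  // other CPUs: CLoop fallback
        return 0;
    }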
index 116e0d9..aa0da6a 100644 (file)
@@ -71,9 +71,13 @@ macro(WEBKIT_OPTION_BEGIN)
     if (WTF_CPU_ARM OR WTF_CPU_ARM64 OR WTF_CPU_MIPS OR WTF_CPU_X86_64 OR WTF_CPU_X86)
         set(ENABLE_JIT_DEFAULT ON)
         set(USE_SYSTEM_MALLOC_DEFAULT OFF)
+        set(ENABLE_C_LOOP_DEFAULT OFF)
+        set(ENABLE_SAMPLING_PROFILER_DEFAULT ON)
     else ()
         set(ENABLE_JIT_DEFAULT OFF)
         set(USE_SYSTEM_MALLOC_DEFAULT ON)
+        set(ENABLE_C_LOOP_DEFAULT ON)
+        set(ENABLE_SAMPLING_PROFILER_DEFAULT OFF)
     endif ()
 
     WEBKIT_OPTION_DEFINE(ENABLE_3D_TRANSFORMS "Toggle 3D transforms support" PRIVATE ON)
@@ -100,6 +104,7 @@ macro(WEBKIT_OPTION_BEGIN)
     WEBKIT_OPTION_DEFINE(ENABLE_CSS_SELECTORS_LEVEL4 "Toggle CSS Selectors Level 4 support" PRIVATE ON)
     WEBKIT_OPTION_DEFINE(ENABLE_CURSOR_VISIBILITY "Toggle cursor visibility support" PRIVATE OFF)
     WEBKIT_OPTION_DEFINE(ENABLE_CUSTOM_SCHEME_HANDLER "Toggle Custom Scheme Handler support" PRIVATE OFF)
+    WEBKIT_OPTION_DEFINE(ENABLE_C_LOOP "Enable CLoop interpreter" PRIVATE ${ENABLE_C_LOOP_DEFAULT})
     WEBKIT_OPTION_DEFINE(ENABLE_DASHBOARD_SUPPORT "Toggle dashboard support" PRIVATE OFF)
     WEBKIT_OPTION_DEFINE(ENABLE_DATACUE_VALUE "Toggle datacue value support" PRIVATE OFF)
     WEBKIT_OPTION_DEFINE(ENABLE_DATALIST_ELEMENT "Toggle HTML5 datalist support" PRIVATE OFF)
@@ -158,7 +163,7 @@ macro(WEBKIT_OPTION_BEGIN)
     WEBKIT_OPTION_DEFINE(ENABLE_RESOLUTION_MEDIA_QUERY "Toggle resolution media query support" PRIVATE OFF)
     WEBKIT_OPTION_DEFINE(ENABLE_RESOURCE_USAGE "Toggle resource usage support" PRIVATE OFF)
     WEBKIT_OPTION_DEFINE(ENABLE_RUBBER_BANDING "Toggle rubber banding support" PRIVATE OFF)
-    WEBKIT_OPTION_DEFINE(ENABLE_SAMPLING_PROFILER "Toggle sampling profiler support" PRIVATE ON)
+    WEBKIT_OPTION_DEFINE(ENABLE_SAMPLING_PROFILER "Toggle sampling profiler support" PRIVATE ${ENABLE_SAMPLING_PROFILER_DEFAULT})
     WEBKIT_OPTION_DEFINE(ENABLE_SERVICE_CONTROLS "Toggle service controls support" PRIVATE OFF)
     WEBKIT_OPTION_DEFINE(ENABLE_SERVICE_WORKER "Toggle ServiceWorker support" PRIVATE OFF)
     WEBKIT_OPTION_DEFINE(ENABLE_SMOOTH_SCROLLING "Toggle smooth scrolling" PRIVATE ON)
@@ -190,12 +195,14 @@ macro(WEBKIT_OPTION_BEGIN)
     WEBKIT_OPTION_DEFINE(ENABLE_XSLT "Toggle XSLT support" PRIVATE ON)
     WEBKIT_OPTION_DEFINE(USE_SYSTEM_MALLOC "Toggle system allocator instead of WebKit's custom allocator" PRIVATE ${USE_SYSTEM_MALLOC_DEFAULT})
 
+    WEBKIT_OPTION_CONFLICT(ENABLE_JIT ENABLE_C_LOOP)
+    WEBKIT_OPTION_CONFLICT(ENABLE_SAMPLING_PROFILER ENABLE_C_LOOP)
+
     WEBKIT_OPTION_DEPEND(ENABLE_WEB_RTC ENABLE_MEDIA_STREAM)
     WEBKIT_OPTION_DEPEND(ENABLE_LEGACY_ENCRYPTED_MEDIA ENABLE_VIDEO)
     WEBKIT_OPTION_DEPEND(ENABLE_DFG_JIT ENABLE_JIT)
     WEBKIT_OPTION_DEPEND(ENABLE_FTL_JIT ENABLE_DFG_JIT)
     WEBKIT_OPTION_DEPEND(ENABLE_WEBASSEMBLY ENABLE_FTL_JIT)
-    WEBKIT_OPTION_DEPEND(ENABLE_SAMPLING_PROFILER ENABLE_JIT)
     WEBKIT_OPTION_DEPEND(ENABLE_INDEXED_DATABASE_IN_WORKERS ENABLE_INDEXED_DATABASE)
     WEBKIT_OPTION_DEPEND(ENABLE_MEDIA_CONTROLS_SCRIPT ENABLE_VIDEO)
     WEBKIT_OPTION_DEPEND(ENABLE_MEDIA_SOURCE ENABLE_VIDEO)