Ensure that computed new stack pointer values do not underflow.
author     jfbastien@apple.com <jfbastien@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Wed, 28 Jun 2017 18:12:35 +0000 (18:12 +0000)
committer  jfbastien@apple.com <jfbastien@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Wed, 28 Jun 2017 18:12:35 +0000 (18:12 +0000)
Re-apply this patch; it originally broke the ARM build because the LLInt code
generated `subs xzr, x3, sp`, which is not valid ARM64: the third operand cannot
be SP (register 31 in that position encodes ZR instead, so it would subtract zero).
Flip the comparison and its operands to emit valid code (the second operand can be SP).
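
For readers unfamiliar with the A64 encoding detail described above, the following sketch
(illustrative only, not part of the patch; the helper is hypothetical) shows why register 31
in the second source slot of SUBS/CMP selects ZR rather than SP, which is why the operands
have to be flipped:

    #include <cstdint>

    // Illustrative sketch only (hypothetical helper, not part of the patch).
    // A64 "SUBS (shifted register)", which CMP aliases, encodes Rm in bits
    // [20:16]; register number 31 in that field always selects XZR, never SP.
    // So `subs xzr, x3, sp` is unencodable -- it would actually compare x3
    // against zero.  With SP as the *first* source (Rn), the assembler can use
    // the extended-register form, where Rn = 31 does mean SP, so
    // `subs xzr, sp, x3` (i.e. `cmp sp, x3`) is valid.  Hence the flipped
    // comparison in the LLInt.
    static uint32_t subs64ShiftedRegister(unsigned rd, unsigned rn, unsigned rm)
    {
        // sf=1 op=1 S=1 01011 shift=00 0 Rm imm6=0 Rn Rd
        return 0xEB000000u | (rm << 16) | (rn << 5) | rd;
    }
    // subs64ShiftedRegister(31, 3, 31) encodes `cmp x3, xzr`, not `cmp x3, sp`.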

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@218883 268f45cc-cd09-0410-ab3c-d52691b4dbfc

17 files changed:
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/bytecompiler/BytecodeGenerator.cpp
Source/JavaScriptCore/dfg/DFGGraph.cpp
Source/JavaScriptCore/dfg/DFGJITCompiler.cpp
Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
Source/JavaScriptCore/jit/JIT.cpp
Source/JavaScriptCore/jit/SetupVarargsFrame.cpp
Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
Source/JavaScriptCore/llint/LowLevelInterpreter.asm
Source/JavaScriptCore/llint/LowLevelInterpreter32_64.asm
Source/JavaScriptCore/llint/LowLevelInterpreter64.asm
Source/JavaScriptCore/runtime/MinimumReservedZoneSize.h [new file with mode: 0644]
Source/JavaScriptCore/runtime/Options.cpp
Source/JavaScriptCore/runtime/VM.cpp
Source/JavaScriptCore/wasm/WasmB3IRGenerator.cpp
Source/JavaScriptCore/wasm/js/WebAssemblyFunction.cpp

index 5ebd9ae..9a2f49e 100644 (file)
@@ -1,3 +1,82 @@
+2017-06-28  JF Bastien  <jfbastien@apple.com>
+
+        Ensure that computed new stack pointer values do not underflow.
+        https://bugs.webkit.org/show_bug.cgi?id=173700
+        <rdar://problem/32926032>
+
+        Reviewed by Filip Pizlo and Saam Barati; update reviewed by Mark Lam.
+
+        Patch by Mark Lam, with the following fix:
+
+        Re-apply this patch; it originally broke the ARM build because the LLInt code
+        generated `subs xzr, x3, sp`, which is not valid ARM64: the third operand cannot
+        be SP (register 31 in that position encodes ZR instead, so it would subtract zero).
+        Flip the comparison and its operands to emit valid code (the second operand can be SP).
+
+        1. Added a RELEASE_ASSERT to BytecodeGenerator::generate() to ensure that
+           m_numCalleeLocals is sane.
+
+        2. Added underflow checks in LLInt code and VarargsFrame code.
+
+        3. Introduce minimumReservedZoneSize, which is hardcoded to 16K.
+           Ensure that Options::reservedZoneSize() is at least minimumReservedZoneSize.
+           Ensure that Options::softReservedZoneSize() exceeds Options::reservedZoneSize()
+           by at least minimumReservedZoneSize.
+
+        4. Ensure that stack checks emitted by JIT tiers include an underflow check if
+           and only if the max size of the frame is greater than Options::reservedZoneSize().
+
+           By design, we are guaranteed to have at least Options::reservedZoneSize() bytes
+           of memory at the bottom (end) of the stack.  This means that, at any time, the
+           frame pointer must be at least Options::reservedZoneSize() bytes away from the
+           end of the stack.  Hence, if the max frame size is less than
+           Options::reservedZoneSize(), there's no way that frame pointer - max
+           frame size can underflow, and we can elide the underflow check.
+
+           Note that we use Options::reservedZoneSize() instead of
+           Options::softReservedZoneSize() for determining whether we need an underflow
+           check.  This is because the softStackLimit that is used for stack checks can
+           be set based on Options::reservedZoneSize() during error handling (e.g. when
+           creating strings for instantiating the Error object).  Hence, the guaranteed
+           minimum distance between the frame pointer and the end of the stack is
+           Options::reservedZoneSize(), not Options::softReservedZoneSize().
+
+           Note also that we ensure that Options::reservedZoneSize() is at least
+           minimumReservedZoneSize (i.e. 16K).  In typical deployments,
+           Options::reservedZoneSize() may be larger.  Using Options::reservedZoneSize()
+           instead of minimumReservedZoneSize gives us more chances to elide underflow
+           checks (see the illustrative sketch after this list).
+
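
To make the elision rule in (4) concrete, here is a minimal sketch (illustrative only, not
part of the patch; the helper name and the pseudo-code are hypothetical) of the decision and
of the shape of the check the JIT tiers emit:

    #include <cstddef>

    // Hypothetical helper, for illustration only.  The VM guarantees at least
    // reservedZoneSize bytes of stack below the frame pointer at all times, so
    // "fp - maxFrameSize" cannot wrap below the stack unless the frame can be
    // larger than that guaranteed zone.
    static bool needsUnderflowCheck(size_t maxFrameSize, size_t reservedZoneSize)
    {
        return maxFrameSize > reservedZoneSize;
    }

    // Shape of the emitted check (cf. emitStackOverflowCheck below):
    //   frameTop = fp - maxFrameSize;
    //   if (needsUnderflowCheck(maxFrameSize, Options::reservedZoneSize())
    //           && frameTop > fp)               // wrapped around: underflow
    //       goto stackOverflow;
    //   if (frameTop < softStackLimit)          // ordinary stack-height check
    //       goto stackOverflow;

For example, with the 16K minimum reserved zone, an 8K frame never needs the extra branch,
while a 64K frame does.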
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * bytecompiler/BytecodeGenerator.cpp:
+        (JSC::BytecodeGenerator::generate):
+        * dfg/DFGGraph.cpp:
+        (JSC::DFG::Graph::requiredRegisterCountForExecutionAndExit):
+        * dfg/DFGJITCompiler.cpp:
+        (JSC::DFG::emitStackOverflowCheck):
+        (JSC::DFG::JITCompiler::compile):
+        (JSC::DFG::JITCompiler::compileFunction):
+        * ftl/FTLLowerDFGToB3.cpp:
+        (JSC::FTL::DFG::LowerDFGToB3::lower):
+        * jit/JIT.cpp:
+        (JSC::JIT::compileWithoutLinking):
+        * jit/SetupVarargsFrame.cpp:
+        (JSC::emitSetupVarargsFrameFastCase):
+        * llint/LLIntSlowPaths.cpp:
+        (JSC::LLInt::LLINT_SLOW_PATH_DECL):
+        * llint/LowLevelInterpreter.asm:
+        * llint/LowLevelInterpreter32_64.asm:
+        * llint/LowLevelInterpreter64.asm:
+        * runtime/MinimumReservedZoneSize.h: Added.
+        * runtime/Options.cpp:
+        (JSC::recomputeDependentOptions):
+        * runtime/VM.cpp:
+        (JSC::VM::updateStackLimits):
+        * wasm/WasmB3IRGenerator.cpp:
+        (JSC::Wasm::B3IRGenerator::B3IRGenerator):
+        * wasm/js/WebAssemblyFunction.cpp:
+        (JSC::callWebAssemblyFunction):
+
 2017-06-28  Chris Dumez  <cdumez@apple.com>
 
         Unreviewed, rolling out r218869.
index d353a0a..07c14f5 100644 (file)
                FE1C0FFF1B194FD100B53FCA /* Exception.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE1C0FFE1B194FD100B53FCA /* Exception.cpp */; };
                FE20CE9D15F04A9500DF3430 /* LLIntCLoop.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE20CE9B15F04A9500DF3430 /* LLIntCLoop.cpp */; };
                FE20CE9E15F04A9500DF3430 /* LLIntCLoop.h in Headers */ = {isa = PBXBuildFile; fileRef = FE20CE9C15F04A9500DF3430 /* LLIntCLoop.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               FE2A87601F02381600EB31B2 /* MinimumReservedZoneSize.h in Headers */ = {isa = PBXBuildFile; fileRef = FE2A875F1F02381600EB31B2 /* MinimumReservedZoneSize.h */; };
                FE2E6A7B1D6EA62C0060F896 /* ThrowScope.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE2E6A7A1D6EA5FE0060F896 /* ThrowScope.cpp */; };
                FE3022D21E3D73A500BAC493 /* SigillCrashAnalyzer.cpp in Sources */ = {isa = PBXBuildFile; fileRef = FE3022D01E3D739600BAC493 /* SigillCrashAnalyzer.cpp */; };
                FE3022D31E3D73A500BAC493 /* SigillCrashAnalyzer.h in Headers */ = {isa = PBXBuildFile; fileRef = FE3022D11E3D739600BAC493 /* SigillCrashAnalyzer.h */; settings = {ATTRIBUTES = (Private, ); }; };
                FE1C0FFE1B194FD100B53FCA /* Exception.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = Exception.cpp; sourceTree = "<group>"; };
                FE20CE9B15F04A9500DF3430 /* LLIntCLoop.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = LLIntCLoop.cpp; path = llint/LLIntCLoop.cpp; sourceTree = "<group>"; };
                FE20CE9C15F04A9500DF3430 /* LLIntCLoop.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = LLIntCLoop.h; path = llint/LLIntCLoop.h; sourceTree = "<group>"; };
+               FE2A875F1F02381600EB31B2 /* MinimumReservedZoneSize.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; path = MinimumReservedZoneSize.h; sourceTree = "<group>"; };
                FE2E6A7A1D6EA5FE0060F896 /* ThrowScope.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ThrowScope.cpp; sourceTree = "<group>"; };
                FE3022D01E3D739600BAC493 /* SigillCrashAnalyzer.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = SigillCrashAnalyzer.cpp; sourceTree = "<group>"; };
                FE3022D11E3D739600BAC493 /* SigillCrashAnalyzer.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SigillCrashAnalyzer.h; sourceTree = "<group>"; };
                                90213E3B123A40C200D422F3 /* MemoryStatistics.cpp */,
                                90213E3C123A40C200D422F3 /* MemoryStatistics.h */,
                                7C008CE5187631B600955C24 /* Microtask.h */,
+                               FE2A875F1F02381600EB31B2 /* MinimumReservedZoneSize.h */,
                                E355F3501B7DC85300C50DC5 /* ModuleLoaderPrototype.cpp */,
                                E355F3511B7DC85300C50DC5 /* ModuleLoaderPrototype.h */,
                                147341DD1DC2CE9600AA29BA /* ModuleProgramExecutable.cpp */,
                                0FDB2CCA173DA523007B3C1B /* FTLValueFromBlock.h in Headers */,
                                0F5A6284188C98D40072C9DF /* FTLValueRange.h in Headers */,
                                0F0332C618B53FA9005F979A /* FTLWeight.h in Headers */,
+                               FE2A87601F02381600EB31B2 /* MinimumReservedZoneSize.h in Headers */,
                                53C6FEEF1E8ADFA900B18425 /* WasmOpcodeOrigin.h in Headers */,
                                0F0332C818B546EC005F979A /* FTLWeightedTarget.h in Headers */,
                                0F666EC1183566F900D017F1 /* FullBytecodeLiveness.h in Headers */,
index 940d587..1cc7294 100644 (file)
@@ -193,6 +193,7 @@ ParserError BytecodeGenerator::generate()
     if (isGeneratorOrAsyncFunctionBodyParseMode(m_codeBlock->parseMode()))
         performGeneratorification(m_codeBlock.get(), m_instructions, m_generatorFrameSymbolTable.get(), m_generatorFrameSymbolTableIndex);
 
+    RELEASE_ASSERT(static_cast<unsigned>(m_codeBlock->numCalleeLocals()) < static_cast<unsigned>(FirstConstantRegisterIndex));
     m_codeBlock->setInstructions(std::make_unique<UnlinkedInstructionStream>(m_instructions));
 
     m_codeBlock->shrinkToFit();
index 803d2d4..6a95720 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2013-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -1150,6 +1150,8 @@ unsigned Graph::requiredRegisterCountForExit()
 
 unsigned Graph::requiredRegisterCountForExecutionAndExit()
 {
+    // FIXME: We should make sure that frameRegisterCount() and requiredRegisterCountForExit()
+    // never overflow. https://bugs.webkit.org/show_bug.cgi?id=173852
     return std::max(frameRegisterCount(), requiredRegisterCountForExit());
 }
 
index 9584423..ee9e791 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2013-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -354,6 +354,17 @@ void JITCompiler::link(LinkBuffer& linkBuffer)
         m_codeBlock->setPCToCodeOriginMap(std::make_unique<PCToCodeOriginMap>(WTFMove(m_pcToCodeOriginMapBuilder), linkBuffer));
 }
 
+static void emitStackOverflowCheck(JITCompiler& jit, MacroAssembler::JumpList& stackOverflow)
+{
+    int frameTopOffset = virtualRegisterForLocal(jit.graph().requiredRegisterCountForExecutionAndExit() - 1).offset() * sizeof(Register);
+    unsigned maxFrameSize = -frameTopOffset;
+
+    jit.addPtr(MacroAssembler::TrustedImm32(frameTopOffset), GPRInfo::callFrameRegister, GPRInfo::regT1);
+    if (UNLIKELY(maxFrameSize > Options::reservedZoneSize()))
+        stackOverflow.append(jit.branchPtr(MacroAssembler::Above, GPRInfo::regT1, GPRInfo::callFrameRegister));
+    stackOverflow.append(jit.branchPtr(MacroAssembler::Above, MacroAssembler::AbsoluteAddress(jit.vm()->addressOfSoftStackLimit()), GPRInfo::regT1));
+}
+
 void JITCompiler::compile()
 {
     setStartOfCode();
@@ -361,8 +372,8 @@ void JITCompiler::compile()
     m_speculative = std::make_unique<SpeculativeJIT>(*this);
 
     // Plant a check that sufficient space is available in the JSStack.
-    addPtr(TrustedImm32(virtualRegisterForLocal(m_graph.requiredRegisterCountForExecutionAndExit() - 1).offset() * sizeof(Register)), GPRInfo::callFrameRegister, GPRInfo::regT1);
-    Jump stackOverflow = branchPtr(Above, AbsoluteAddress(vm()->addressOfSoftStackLimit()), GPRInfo::regT1);
+    JumpList stackOverflow;
+    emitStackOverflowCheck(*this, stackOverflow);
 
     addPtr(TrustedImm32(m_graph.stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, stackPointerRegister);
     checkStackPointerAlignment();
@@ -424,8 +435,8 @@ void JITCompiler::compileFunction()
     // so enter after this.
     Label fromArityCheck(this);
     // Plant a check that sufficient space is available in the JSStack.
-    addPtr(TrustedImm32(virtualRegisterForLocal(m_graph.requiredRegisterCountForExecutionAndExit() - 1).offset() * sizeof(Register)), GPRInfo::callFrameRegister, GPRInfo::regT1);
-    Jump stackOverflow = branchPtr(Above, AbsoluteAddress(vm()->addressOfSoftStackLimit()), GPRInfo::regT1);
+    JumpList stackOverflow;
+    emitStackOverflowCheck(*this, stackOverflow);
 
     // Move the stack pointer down to accommodate locals
     addPtr(TrustedImm32(m_graph.stackPointerOffset() * sizeof(Register)), GPRInfo::callFrameRegister, stackPointerRegister);
index ba9d7b5..a34e862 100644 (file)
@@ -214,9 +214,13 @@ public:
                 GPRReg scratch = params.gpScratch(0);
 
                 unsigned ftlFrameSize = params.proc().frameSize();
+                unsigned maxFrameSize = std::max(exitFrameSize, ftlFrameSize);
 
-                jit.addPtr(MacroAssembler::TrustedImm32(-std::max(exitFrameSize, ftlFrameSize)), fp, scratch);
-                MacroAssembler::Jump stackOverflow = jit.branchPtr(MacroAssembler::Above, addressOfStackLimit, scratch);
+                jit.addPtr(MacroAssembler::TrustedImm32(-maxFrameSize), fp, scratch);
+                MacroAssembler::JumpList stackOverflow;
+                if (UNLIKELY(maxFrameSize > Options::reservedZoneSize()))
+                    stackOverflow.append(jit.branchPtr(MacroAssembler::Above, scratch, fp));
+                stackOverflow.append(jit.branchPtr(MacroAssembler::Above, addressOfStackLimit, scratch));
 
                 params.addLatePath([=] (CCallHelpers& jit) {
                     AllowMacroScratchRegisterUsage allowScratch(jit);
index 5dce968..cc08290 100644 (file)
@@ -661,8 +661,13 @@ void JIT::compileWithoutLinking(JITCompilationEffort effort)
         }
     }
 
-    addPtr(TrustedImm32(stackPointerOffsetFor(m_codeBlock) * sizeof(Register)), callFrameRegister, regT1);
-    Jump stackOverflow = branchPtr(Above, AbsoluteAddress(m_vm->addressOfSoftStackLimit()), regT1);
+    int frameTopOffset = stackPointerOffsetFor(m_codeBlock) * sizeof(Register);
+    unsigned maxFrameSize = -frameTopOffset;
+    addPtr(TrustedImm32(frameTopOffset), callFrameRegister, regT1);
+    JumpList stackOverflow;
+    if (UNLIKELY(maxFrameSize > Options::reservedZoneSize()))
+        stackOverflow.append(branchPtr(Above, regT1, callFrameRegister));
+    stackOverflow.append(branchPtr(Above, AbsoluteAddress(m_vm->addressOfSoftStackLimit()), regT1));
 
     move(regT1, stackPointerRegister);
     checkStackPointerAlignment();
index 9fc658f..0f961a6 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -82,6 +82,7 @@ static void emitSetupVarargsFrameFastCase(VM& vm, CCallHelpers& jit, GPRReg numU
     
     emitSetVarargsFrame(jit, scratchGPR1, true, numUsedSlotsGPR, scratchGPR2);
 
+    slowCase.append(jit.branchPtr(CCallHelpers::Above, scratchGPR2, GPRInfo::callFrameRegister));
     slowCase.append(jit.branchPtr(CCallHelpers::Above, CCallHelpers::AbsoluteAddress(vm.addressOfSoftStackLimit()), scratchGPR2));
 
     // Before touching stack values, we should update the stack pointer to protect them from signal stack.
index ee4929c..34987bb 100644 (file)
@@ -510,9 +510,12 @@ LLINT_SLOW_PATH_DECL(stack_check)
     // Hence, if we get here, then we know a stack overflow is imminent. So, just
     // throw the StackOverflowError unconditionally.
 #if !ENABLE(JIT)
-    ASSERT(!vm.interpreter->cloopStack().containsAddress(exec->topOfFrame()));
-    if (LIKELY(vm.ensureStackCapacityFor(exec->topOfFrame())))
-        LLINT_RETURN_TWO(pc, 0);
+    Register* topOfFrame = exec->topOfFrame();
+    if (LIKELY(topOfFrame < reinterpret_cast<Register*>(exec))) {
+        ASSERT(!vm.interpreter->cloopStack().containsAddress(topOfFrame));
+        if (LIKELY(vm.ensureStackCapacityFor(topOfFrame)))
+            LLINT_RETURN_TWO(pc, 0);
+    }
 #endif
 
     ErrorHandlingScope errorScope(vm);
index 4fde1b0..cbd81c0 100644 (file)
@@ -994,6 +994,7 @@ macro prologue(codeBlockGetter, codeBlockSetter, osrSlowPath, traceSlowPath)
     # Get new sp in t0 and check stack height.
     getFrameRegisterSizeForCodeBlock(t1, t0)
     subp cfr, t0, t0
+    bpa t0, cfr, .needStackCheck
     loadp CodeBlock::m_vm[t1], t2
     if C_LOOP
         bpbeq VM::m_cloopStackLimit[t2], t0, .stackHeightOK
@@ -1001,6 +1002,7 @@ macro prologue(codeBlockGetter, codeBlockSetter, osrSlowPath, traceSlowPath)
         bpbeq VM::m_softStackLimit[t2], t0, .stackHeightOK
     end
 
+.needStackCheck:
     # Stack height check failed - need to call a slow_path.
     # Set up temporary stack pointer for call including callee saves
     subp maxFrameExtentForSlowPathCall, sp
index f31b926..2450d40 100644 (file)
@@ -148,6 +148,7 @@ macro doVMEntry(makeCall)
     addp CallFrameHeaderSlots, t4, t4
     lshiftp 3, t4
     subp sp, t4, t3
+    bpa t3, sp, .throwStackOverflow
 
     # Ensure that we have enough additional stack capacity for the incoming args,
     # and the frame for the JS code we're executing. We need to do this check
@@ -172,6 +173,7 @@ macro doVMEntry(makeCall)
         move t5, vm
     end
 
+.throwStackOverflow:
     subp 8, sp # Align stack for cCall2() to make a call.
     move vm, a0
     move protoCallFrame, a1
index 298ce77..c849109 100644 (file)
@@ -136,6 +136,7 @@ macro doVMEntry(makeCall)
     addp CallFrameHeaderSlots, t4, t4
     lshiftp 3, t4
     subp sp, t4, t3
+    bqbeq sp, t3, .throwStackOverflow
 
     # Ensure that we have enough additional stack capacity for the incoming args,
     # and the frame for the JS code we're executing. We need to do this check
@@ -160,6 +161,7 @@ macro doVMEntry(makeCall)
         move t5, vm
     end
 
+.throwStackOverflow:
     move vm, a0
     move protoCallFrame, a1
     cCall2(_llint_throw_stack_overflow_error)
diff --git a/Source/JavaScriptCore/runtime/MinimumReservedZoneSize.h b/Source/JavaScriptCore/runtime/MinimumReservedZoneSize.h
new file mode 100644 (file)
index 0000000..59e2ea0
--- /dev/null
@@ -0,0 +1,35 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include <wtf/StdLibExtras.h>
+
+namespace JSC {
+    
+static const unsigned minimumReservedZoneSize = 16 * KB;
+    
+} // namespace JSC
+
index 3cb7e79..b92082d 100644 (file)
@@ -29,6 +29,7 @@
 #include "AssemblerCommon.h"
 #include "LLIntCommon.h"
 #include "LLIntData.h"
+#include "MinimumReservedZoneSize.h"
 #include "SigillCrashAnalyzer.h"
 #include <algorithm>
 #include <limits>
@@ -496,6 +497,11 @@ static void recomputeDependentOptions()
 
     if (Options::useSigillCrashAnalyzer())
         enableSigillCrashAnalyzer();
+
+    if (Options::reservedZoneSize() < minimumReservedZoneSize)
+        Options::reservedZoneSize() = minimumReservedZoneSize;
+    if (Options::softReservedZoneSize() < Options::reservedZoneSize() + minimumReservedZoneSize)
+        Options::softReservedZoneSize() = Options::reservedZoneSize() + minimumReservedZoneSize;
 }
 
 void Options::initialize()
index b6d394b..48727a6 100644 (file)
@@ -78,6 +78,7 @@
 #include "LLIntData.h"
 #include "Lexer.h"
 #include "Lookup.h"
+#include "MinimumReservedZoneSize.h"
 #include "ModuleProgramCodeBlock.h"
 #include "NativeStdFunctionCell.h"
 #include "Nodes.h"
@@ -670,15 +671,22 @@ inline void VM::updateStackLimits()
     void* lastSoftStackLimit = m_softStackLimit;
 #endif
 
+    const StackBounds& stack = wtfThreadData().stack();
     size_t reservedZoneSize = Options::reservedZoneSize();
+    // We should have already ensured that Options::reservedZoneSize() >= minimumReservedZoneSize at
+    // options initialization time, and the option value should not have been changed thereafter.
+    // We don't have the ability to assert here that it hasn't changed, but we can at least assert
+    // that the value is sane.
+    RELEASE_ASSERT(reservedZoneSize >= minimumReservedZoneSize);
+
     if (m_stackPointerAtVMEntry) {
-        ASSERT(wtfThreadData().stack().isGrowingDownward());
+        ASSERT(stack.isGrowingDownward());
         char* startOfStack = reinterpret_cast<char*>(m_stackPointerAtVMEntry);
-        m_softStackLimit = wtfThreadData().stack().recursionLimit(startOfStack, Options::maxPerThreadStackUsage(), m_currentSoftReservedZoneSize);
-        m_stackLimit = wtfThreadData().stack().recursionLimit(startOfStack, Options::maxPerThreadStackUsage(), reservedZoneSize);
+        m_softStackLimit = stack.recursionLimit(startOfStack, Options::maxPerThreadStackUsage(), m_currentSoftReservedZoneSize);
+        m_stackLimit = stack.recursionLimit(startOfStack, Options::maxPerThreadStackUsage(), reservedZoneSize);
     } else {
-        m_softStackLimit = wtfThreadData().stack().recursionLimit(m_currentSoftReservedZoneSize);
-        m_stackLimit = wtfThreadData().stack().recursionLimit(reservedZoneSize);
+        m_softStackLimit = stack.recursionLimit(m_currentSoftReservedZoneSize);
+        m_stackLimit = stack.recursionLimit(reservedZoneSize);
     }
 
 #if OS(WINDOWS)
index 711fae1..76dc4a4 100644 (file)
@@ -416,12 +416,16 @@ B3IRGenerator::B3IRGenerator(const ModuleInformation& info, Procedure& procedure
                 (Checked<uint32_t>(m_maxNumJSCallArguments) * sizeof(Register) + jscCallingConvention().headerSizeInBytes()).unsafeGet()
             ));
             const int32_t checkSize = m_makesCalls ? (wasmFrameSize + extraFrameSize).unsafeGet() : wasmFrameSize.unsafeGet();
+            bool needUnderflowCheck = static_cast<unsigned>(checkSize) > Options::reservedZoneSize();
             // This allows leaf functions to not do stack checks if their frame size is within
             // certain limits since their caller would have already done the check.
-            if (m_makesCalls || wasmFrameSize >= minimumParentCheckSize) {
+            if (m_makesCalls || wasmFrameSize >= minimumParentCheckSize || needUnderflowCheck) {
                 jit.loadPtr(CCallHelpers::Address(context, Context::offsetOfCachedStackLimit()), scratch2);
                 jit.addPtr(CCallHelpers::TrustedImm32(-checkSize), fp, scratch1);
-                auto overflow = jit.branchPtr(CCallHelpers::Below, scratch1, scratch2);
+                MacroAssembler::JumpList overflow;
+                if (UNLIKELY(needUnderflowCheck))
+                    overflow.append(jit.branchPtr(CCallHelpers::Above, scratch1, fp));
+                overflow.append(jit.branchPtr(CCallHelpers::Below, scratch1, scratch2));
                 jit.addLinkTask([overflow] (LinkBuffer& linkBuffer) {
                     linkBuffer.link(overflow, CodeLocationLabel(Thunks::singleton().stub(throwStackOverflowFromWasmThunkGenerator).code()));
                 });
index 3b1073b..8605b0a 100644 (file)
@@ -134,7 +134,7 @@ static EncodedJSValue JSC_HOST_CALL callWebAssemblyFunction(ExecState* exec)
         const intptr_t sp = bitwise_cast<intptr_t>(&sp); // A proxy for the current stack pointer.
         const intptr_t frameSize = (boxedArgs.size() + CallFrame::headerSizeInRegisters) * sizeof(Register);
         const intptr_t stackSpaceUsed = 2 * frameSize; // We're making two calls. One to the wrapper, and one to the actual wasm code.
-        if (UNLIKELY((sp - stackSpaceUsed) < bitwise_cast<intptr_t>(vm.softStackLimit())))
+        if (UNLIKELY((sp < stackSpaceUsed) || ((sp - stackSpaceUsed) < bitwise_cast<intptr_t>(vm.softStackLimit()))))
             return JSValue::encode(throwException(exec, scope, createStackOverflowError(exec)));
     }
     Wasm::storeContext(vm, wasmContext);