arguments[-1] should have well-defined behavior
https://bugs.webkit.org/show_bug.cgi?id=141183

Reviewed by Mark Lam.
According to JSC's internal argument numbering, 0 is "this" and 1 is the first argument.
In the "arguments[i]" expression, "this" is not accessible and i = 0 refers to the first
argument. Previously we handled the bounds check in "arguments[i]" - where "arguments" is
statically known to be the current function's arguments object - as follows:

    add 1, i
    branchAboveOrEqual i, callFrame.ArgumentCount, slowPath

The problem with this is that if i = -1, this passes the test, and we end up accessing
what would be the "this" argument slot. That's wrong, since we should really be bottoming
out in arguments["-1"], which is usually undefined but could be anything. It's even worse
if the function is inlined or if we're in a constructor - in that case the "this" slot
could be garbage.
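
For illustration only (this is not one of the added tests), the user-visible symptom looks like this:

    function peek(i) {
        return arguments[i];
    }
    peek(-1);  // must behave like any other missing-property lookup (usually undefined);
               // the buggy fast path could instead return the "this" slot, or garbage
               // when the function is inlined or called as a constructor
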
It turns out that we had this bug in all of our engines.

This fixes the issue by changing the algorithm to:

    load32 callFrame.ArgumentCount, tmp
    sub 1, tmp
    branchAboveOrEqual i, tmp, slowPath

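To see why the unsigned comparison now rejects negative indices, here is a rough sketch in plain JavaScript standing in for the 32-bit unsigned compare performed by branchAboveOrEqual (not part of the patch):

    function oldCheckHitsSlowPath(i, argumentCount) {
        return ((i + 1) >>> 0) >= (argumentCount >>> 0);   // add 1, then unsigned compare
    }
    function newCheckHitsSlowPath(i, argumentCount) {
        return (i >>> 0) >= ((argumentCount - 1) >>> 0);   // subtract 1 from the count instead
    }
    // With one actual argument, ArgumentCount (which counts "this") is 2:
    oldCheckHitsSlowPath(-1, 2);  // false: (-1 + 1) >>> 0 is 0, so the fast path reads the "this" slot
    newCheckHitsSlowPath(-1, 2);  // true: (-1) >>> 0 is 0xFFFFFFFF, so we take the slow path
    oldCheckHitsSlowPath(1, 2);   // true: arguments[arguments.length] stays out of bounds
    newCheckHitsSlowPath(1, 2);   // true: same behavior in the new scheme
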
In some engines, we would have used the modified "i" (the one that had 1 added to it) for
the subsequent argument load; since we don't do this anymore I also had to change some of
the offsets on the BaseIndex arguments load.
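
The offset change is just compensating arithmetic; assuming 8-byte slots (the TimesEight scale in the BaseIndex loads, SlotSize in the LLInt), the two addressing schemes agree:

    // Sketch only: relative offsets, with FirstArgumentOffset = ThisArgumentOffset + SlotSize
    // as defined in LowLevelInterpreter.asm.
    var SlotSize = 8;
    var ThisArgumentOffset = 0;
    var FirstArgumentOffset = ThisArgumentOffset + SlotSize;
    function oldSlotOffset(i) { return ThisArgumentOffset + (i + 1) * SlotSize; }
    function newSlotOffset(i) { return FirstArgumentOffset + i * SlotSize; }
    oldSlotOffset(3) === newSlotOffset(3);  // true for any i
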
This also includes tests that are written in such a way as to get coverage on LLInt and
Baseline JIT (get-my-argument-by-val-wrap-around-no-warm-up), DFG and FTL
(get-my-argument-by-val-wrap-around), and DFG when we're being paranoid about the user
overwriting the "arguments" variable (get-my-argument-by-val-safe-wrap-around). This also
includes off-by-1 out-of-bounds tests for each of these cases, since in the process of
writing the patch I broke the arguments[arguments.length] case in the DFG and didn't see
any test failures.
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* ftl/FTLLowerDFGToLLVM.cpp:
(JSC::FTL::LowerDFGToLLVM::compileGetMyArgumentByVal):
* jit/AssemblyHelpers.h:
(JSC::AssemblyHelpers::offsetOfArguments):
(JSC::AssemblyHelpers::offsetOfArgumentsIncludingThis): Deleted.
* jit/JITOpcodes.cpp:
(JSC::JIT::emit_op_get_argument_by_val):
* jit/JITOpcodes32_64.cpp:
(JSC::JIT::emit_op_get_argument_by_val):
* llint/LowLevelInterpreter.asm:
* llint/LowLevelInterpreter32_64.asm:
* llint/LowLevelInterpreter64.asm:
* tests/stress/get-my-argument-by-val-out-of-bounds-no-warm-up.js: Added.
(foo):
* tests/stress/get-my-argument-by-val-out-of-bounds.js: Added.
(foo):
* tests/stress/get-my-argument-by-val-safe-out-of-bounds.js: Added.
(foo):
* tests/stress/get-my-argument-by-val-safe-wrap-around.js: Added.
(foo):
* tests/stress/get-my-argument-by-val-wrap-around-no-warm-up.js: Added.
(foo):
* tests/stress/get-my-argument-by-val-wrap-around.js: Added.
(foo):
git-svn-id: https://svn.webkit.org/repository/webkit/trunk@179538 268f45cc-cd09-0410-ab3c-d52691b4dbfc
+2015-02-02 Filip Pizlo <fpizlo@apple.com>
+ arguments[-1] should have well-defined behavior
+ https://bugs.webkit.org/show_bug.cgi?id=141183
+
+ Reviewed by Mark Lam.
+
+ According to JSC's internal argument numbering, 0 is "this" and 1 is the first argument.
+ In the "arguments[i]" expression, "this" is not accessible and i = 0 refers to the first
+ argument. Previously we handled the bounds check in "arguments[i]" - where "arguments" is
+ statically known to be the current function's arguments object - as follows:
+
+ add 1, i
+ branchAboveOrEqual i, callFrame.ArgumentCount, slowPath
+
+ The problem with this is that if i = -1, this passes the test, and we end up accessing
+ what would be the "this" argument slot. That's wrong, since we should really be bottoming
+ out in arguments["-1"], which is usually undefined but could be anything. It's even worse
+ if the function is inlined or if we're in a constructor - in that case the "this" slot
+ could be garbage.
+
+ It turns out that we had this bug in all of our engines.
+
+ This fixes the issue by changing the algorithm to:
+
+ load32 callFrame.ArgumentCount, tmp
+ sub 1, tmp
+ branchAboveOrEqual i, tmp, slowPath
+
+ In some engines, we would have used the modified "i" (the one that had 1 added to it) for
+ the subsequent argument load; since we don't do this anymore I also had to change some of
+ the offsets on the BaseIndex arguments load.
+
+ This also includes tests that are written in such a way as to get coverage on LLInt and
+ Baseline JIT (get-my-argument-by-val-wrap-around-no-warm-up), DFG and FTL
+ (get-my-argument-by-val-wrap-around), and DFG when we're being paranoid about the user
+ overwriting the "arguments" variable (get-my-argument-by-val-safe-wrap-around). This also
+ includes off-by-1 out-of-bounds tests for each of these cases, since in the process of
+ writing the patch I broke the arguments[arguments.length] case in the DFG and didn't see
+ any test failures.
+
+ * dfg/DFGSpeculativeJIT32_64.cpp:
+ (JSC::DFG::SpeculativeJIT::compile):
+ * dfg/DFGSpeculativeJIT64.cpp:
+ (JSC::DFG::SpeculativeJIT::compile):
+ * ftl/FTLLowerDFGToLLVM.cpp:
+ (JSC::FTL::LowerDFGToLLVM::compileGetMyArgumentByVal):
+ * jit/AssemblyHelpers.h:
+ (JSC::AssemblyHelpers::offsetOfArguments):
+ (JSC::AssemblyHelpers::offsetOfArgumentsIncludingThis): Deleted.
+ * jit/JITOpcodes.cpp:
+ (JSC::JIT::emit_op_get_argument_by_val):
+ * jit/JITOpcodes32_64.cpp:
+ (JSC::JIT::emit_op_get_argument_by_val):
+ * llint/LowLevelInterpreter.asm:
+ * llint/LowLevelInterpreter32_64.asm:
+ * llint/LowLevelInterpreter64.asm:
+ * tests/stress/get-my-argument-by-val-out-of-bounds-no-warm-up.js: Added.
+ (foo):
+ * tests/stress/get-my-argument-by-val-out-of-bounds.js: Added.
+ (foo):
+ * tests/stress/get-my-argument-by-val-safe-out-of-bounds.js: Added.
+ (foo):
+ * tests/stress/get-my-argument-by-val-safe-wrap-around.js: Added.
+ (foo):
+ * tests/stress/get-my-argument-by-val-wrap-around-no-warm-up.js: Added.
+ (foo):
+ * tests/stress/get-my-argument-by-val-wrap-around.js: Added.
+ (foo):
+
2015-02-02 Filip Pizlo <fpizlo@apple.com>

MultiGetByOffset should be marked NodeMustGenerate
https://bugs.webkit.org/show_bug.cgi?id=140137
TrustedImm32(JSValue::EmptyValueTag)));
}
- m_jit.add32(TrustedImm32(1), indexGPR, resultPayloadGPR);
-
if (node->origin.semantic.inlineCallFrame) {
speculationCheck(
Uncountable, JSValueRegs(), 0,
m_jit.branch32(
JITCompiler::AboveOrEqual,
- resultPayloadGPR,
- Imm32(node->origin.semantic.inlineCallFrame->arguments.size())));
+ indexGPR,
+ Imm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1)));
} else {
+ m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultPayloadGPR);
+ m_jit.sub32(TrustedImm32(1), resultPayloadGPR);
speculationCheck(
Uncountable, JSValueRegs(), 0,
- m_jit.branch32(
- JITCompiler::AboveOrEqual,
- resultPayloadGPR,
- JITCompiler::payloadFor(JSStack::ArgumentCount)));
+ m_jit.branch32(JITCompiler::AboveOrEqual, indexGPR, resultPayloadGPR));
}
JITCompiler::JumpList slowArgument;
m_jit.load32(
JITCompiler::BaseIndex(
- GPRInfo::callFrameRegister, resultPayloadGPR, JITCompiler::TimesEight,
- m_jit.offsetOfArgumentsIncludingThis(node->origin.semantic) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag)),
+ GPRInfo::callFrameRegister, indexGPR, JITCompiler::TimesEight,
+ m_jit.offsetOfArguments(node->origin.semantic) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag)),
resultTagGPR);
m_jit.load32(
JITCompiler::BaseIndex(
- GPRInfo::callFrameRegister, resultPayloadGPR, JITCompiler::TimesEight,
- m_jit.offsetOfArgumentsIncludingThis(node->origin.semantic) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload)),
+ GPRInfo::callFrameRegister, indexGPR, JITCompiler::TimesEight,
+ m_jit.offsetOfArguments(node->origin.semantic) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload)),
resultPayloadGPR);
slowArgument.link(&m_jit);
JITCompiler::tagFor(m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic)),
TrustedImm32(JSValue::EmptyValueTag)));
- m_jit.add32(TrustedImm32(1), indexGPR, resultPayloadGPR);
if (node->origin.semantic.inlineCallFrame) {
slowPath.append(
m_jit.branch32(
JITCompiler::AboveOrEqual,
- resultPayloadGPR,
- Imm32(node->origin.semantic.inlineCallFrame->arguments.size())));
+ indexGPR,
+ Imm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1)));
} else {
+ m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultPayloadGPR);
+ m_jit.sub32(TrustedImm32(1), resultPayloadGPR);
slowPath.append(
- m_jit.branch32(
- JITCompiler::AboveOrEqual,
- resultPayloadGPR,
- JITCompiler::payloadFor(JSStack::ArgumentCount)));
+ m_jit.branch32(JITCompiler::AboveOrEqual, indexGPR, resultPayloadGPR));
}
JITCompiler::JumpList slowArgument;
m_jit.load32(
JITCompiler::BaseIndex(
- GPRInfo::callFrameRegister, resultPayloadGPR, JITCompiler::TimesEight,
- m_jit.offsetOfArgumentsIncludingThis(node->origin.semantic) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag)),
+ GPRInfo::callFrameRegister, indexGPR, JITCompiler::TimesEight,
+ m_jit.offsetOfArguments(node->origin.semantic) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.tag)),
resultTagGPR);
m_jit.load32(
JITCompiler::BaseIndex(
- GPRInfo::callFrameRegister, resultPayloadGPR, JITCompiler::TimesEight,
- m_jit.offsetOfArgumentsIncludingThis(node->origin.semantic) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload)),
+ GPRInfo::callFrameRegister, indexGPR, JITCompiler::TimesEight,
+ m_jit.offsetOfArguments(node->origin.semantic) + OBJECT_OFFSETOF(EncodedValueDescriptor, asBits.payload)),
resultPayloadGPR);
if (node->origin.semantic.inlineCallFrame) {
m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic))));
}
- m_jit.add32(TrustedImm32(1), indexGPR, resultGPR);
if (node->origin.semantic.inlineCallFrame) {
speculationCheck(
Uncountable, JSValueRegs(), 0,
m_jit.branch32(
JITCompiler::AboveOrEqual,
- resultGPR,
- Imm32(node->origin.semantic.inlineCallFrame->arguments.size())));
+ indexGPR,
+ Imm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1)));
} else {
+ m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultGPR);
+ m_jit.sub32(TrustedImm32(1), resultGPR);
speculationCheck(
Uncountable, JSValueRegs(), 0,
- m_jit.branch32(
- JITCompiler::AboveOrEqual,
- resultGPR,
- JITCompiler::payloadFor(JSStack::ArgumentCount)));
+ m_jit.branch32(JITCompiler::AboveOrEqual, indexGPR, resultGPR));
}
JITCompiler::JumpList slowArgument;
}
slowArgumentOutOfBounds.link(&m_jit);
- m_jit.signExtend32ToPtr(resultGPR, resultGPR);
-
m_jit.load64(
JITCompiler::BaseIndex(
- GPRInfo::callFrameRegister, resultGPR, JITCompiler::TimesEight, m_jit.offsetOfArgumentsIncludingThis(node->origin.semantic)),
+ GPRInfo::callFrameRegister, indexGPR, JITCompiler::TimesEight, m_jit.offsetOfArguments(node->origin.semantic)),
resultGPR);
slowArgument.link(&m_jit);
JITCompiler::addressFor(
m_jit.graph().machineArgumentsRegisterFor(node->origin.semantic))));
- m_jit.add32(TrustedImm32(1), indexGPR, resultGPR);
if (node->origin.semantic.inlineCallFrame) {
slowPath.append(
m_jit.branch32(
JITCompiler::AboveOrEqual,
- resultGPR,
- Imm32(node->origin.semantic.inlineCallFrame->arguments.size())));
+ indexGPR,
+ Imm32(node->origin.semantic.inlineCallFrame->arguments.size() - 1)));
} else {
+ m_jit.load32(JITCompiler::payloadFor(JSStack::ArgumentCount), resultGPR);
+ m_jit.sub32(TrustedImm32(1), resultGPR);
slowPath.append(
- m_jit.branch32(
- JITCompiler::AboveOrEqual,
- resultGPR,
- JITCompiler::payloadFor(JSStack::ArgumentCount)));
+ m_jit.branch32(JITCompiler::AboveOrEqual, indexGPR, resultGPR));
}
JITCompiler::JumpList slowArgument;
}
slowArgumentOutOfBounds.link(&m_jit);
- m_jit.signExtend32ToPtr(resultGPR, resultGPR);
-
m_jit.load64(
JITCompiler::BaseIndex(
- GPRInfo::callFrameRegister, resultGPR, JITCompiler::TimesEight, m_jit.offsetOfArgumentsIncludingThis(node->origin.semantic)),
+ GPRInfo::callFrameRegister, indexGPR, JITCompiler::TimesEight, m_jit.offsetOfArguments(node->origin.semantic)),
resultGPR);
if (node->origin.semantic.inlineCallFrame) {
/*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
CodeOrigin codeOrigin = m_node->origin.semantic;
- LValue zeroBasedIndex = lowInt32(m_node->child1());
- LValue oneBasedIndex = m_out.add(zeroBasedIndex, m_out.int32One);
+ LValue index = lowInt32(m_node->child1());
LValue limit;
if (codeOrigin.inlineCallFrame)
- limit = m_out.constInt32(codeOrigin.inlineCallFrame->arguments.size());
+ limit = m_out.constInt32(codeOrigin.inlineCallFrame->arguments.size() - 1);
else
- limit = m_out.load32(payloadFor(JSStack::ArgumentCount));
+ limit = m_out.sub(m_out.load32(payloadFor(JSStack::ArgumentCount)), m_out.int32One);
- speculate(Uncountable, noValue(), 0, m_out.aboveOrEqual(oneBasedIndex, limit));
+ speculate(Uncountable, noValue(), 0, m_out.aboveOrEqual(index, limit));
SymbolTable* symbolTable = m_graph.baselineCodeBlockFor(codeOrigin)->symbolTable();
if (symbolTable->slowArguments()) {
base = addressFor(virtualRegisterForArgument(1));
LValue pointer = m_out.baseIndex(
- base.value(), m_out.zeroExt(zeroBasedIndex, m_out.intPtr), ScaleEight);
+ base.value(), m_out.zeroExt(index, m_out.intPtr), ScaleEight);
setJSValue(m_out.load64(TypedPointer(m_heaps.variables.atAnyIndex(), pointer)));
}
return codeOrigin.inlineCallFrame->stackOffset * sizeof(Register);
}
- int offsetOfArgumentsIncludingThis(InlineCallFrame* inlineCallFrame)
+ int offsetOfArguments(InlineCallFrame* inlineCallFrame)
{
if (!inlineCallFrame)
- return CallFrame::argumentOffsetIncludingThis(0) * sizeof(Register);
+ return CallFrame::argumentOffset(0) * sizeof(Register);
if (inlineCallFrame->arguments.size() <= 1)
return 0;
ValueRecovery recovery = inlineCallFrame->arguments[1];
RELEASE_ASSERT(recovery.technique() == DisplacedInJSStack);
- return (recovery.virtualRegister().offset() - 1) * sizeof(Register);
+ return recovery.virtualRegister().offset() * sizeof(Register);
}
- int offsetOfArgumentsIncludingThis(const CodeOrigin& codeOrigin)
+ int offsetOfArguments(const CodeOrigin& codeOrigin)
{
- return offsetOfArgumentsIncludingThis(codeOrigin.inlineCallFrame);
+ return offsetOfArguments(codeOrigin.inlineCallFrame);
}
-
+
void emitLoadStructure(RegisterID source, RegisterID dest, RegisterID scratch)
{
#if USE(JSVALUE64)
addSlowCase(branchTest64(NonZero, addressFor(argumentsRegister)));
emitGetVirtualRegister(property, regT1);
addSlowCase(emitJumpIfNotImmediateInteger(regT1));
- add32(TrustedImm32(1), regT1);
- // regT1 now contains the integer index of the argument we want, including this
emitGetFromCallFrameHeader32(JSStack::ArgumentCount, regT2);
+ sub32(TrustedImm32(1), regT2);
addSlowCase(branch32(AboveOrEqual, regT1, regT2));
signExtend32ToPtr(regT1, regT1);
- load64(BaseIndex(callFrameRegister, regT1, TimesEight, CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register))), regT0);
+ load64(BaseIndex(callFrameRegister, regT1, TimesEight, CallFrame::argumentOffset(0) * static_cast<int>(sizeof(Register))), regT0);
emitValueProfilingSite();
emitPutVirtualRegister(dst, regT0);
}
addSlowCase(branch32(NotEqual, tagFor(argumentsRegister), TrustedImm32(JSValue::EmptyValueTag)));
emitLoad(property, regT1, regT2);
addSlowCase(branch32(NotEqual, regT1, TrustedImm32(JSValue::Int32Tag)));
- add32(TrustedImm32(1), regT2);
// regT2 now contains the integer index of the argument we want, including this
load32(payloadFor(JSStack::ArgumentCount), regT3);
+ sub32(TrustedImm32(1), regT3);
addSlowCase(branch32(AboveOrEqual, regT2, regT3));
- loadPtr(BaseIndex(callFrameRegister, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload) + CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register))), regT0);
- loadPtr(BaseIndex(callFrameRegister, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag) + CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register))), regT1);
+ loadPtr(BaseIndex(callFrameRegister, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.payload) + CallFrame::argumentOffset(0) * static_cast<int>(sizeof(Register))), regT0);
+ loadPtr(BaseIndex(callFrameRegister, regT2, TimesEight, OBJECT_OFFSETOF(JSValue, u.asBits.tag) + CallFrame::argumentOffset(0) * static_cast<int>(sizeof(Register))), regT1);
emitValueProfilingSite();
emitStore(dst, regT1, regT0);
}
-# Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+# Copyright (C) 2011-2015 Apple Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
const Callee = CodeBlock + SlotSize
const ArgumentCount = Callee + SlotSize
const ThisArgumentOffset = ArgumentCount + SlotSize
+const FirstArgumentOffset = ThisArgumentOffset + SlotSize
const CallFrameHeaderSize = ThisArgumentOffset
# Some value representation constants.
-# Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+# Copyright (C) 2011-2015 Apple Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
loadi 12[PC], t1
bineq TagOffset[cfr, t0, 8], EmptyValueTag, .opGetArgumentByValSlow
loadConstantOrVariablePayload(t1, Int32Tag, t2, .opGetArgumentByValSlow)
- addi 1, t2
loadi ArgumentCount + PayloadOffset[cfr], t1
+ subi 1, t1
biaeq t2, t1, .opGetArgumentByValSlow
loadi 4[PC], t3
- loadi ThisArgumentOffset + TagOffset[cfr, t2, 8], t0
- loadi ThisArgumentOffset + PayloadOffset[cfr, t2, 8], t1
+ loadi FirstArgumentOffset + TagOffset[cfr, t2, 8], t0
+ loadi FirstArgumentOffset + PayloadOffset[cfr, t2, 8], t1
storei t0, TagOffset[cfr, t3, 8]
storei t1, PayloadOffset[cfr, t3, 8]
valueProfile(t0, t1, 24, t2)
-# Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+# Copyright (C) 2011-2015 Apple Inc. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
loadisFromInstruction(3, t1)
btqnz [cfr, t0, 8], .opGetArgumentByValSlow
loadConstantOrVariableInt32(t1, t2, .opGetArgumentByValSlow)
- addi 1, t2
loadi ArgumentCount + PayloadOffset[cfr], t1
+ sxi2q t2, t2
+ subi 1, t1
biaeq t2, t1, .opGetArgumentByValSlow
loadisFromInstruction(1, t3)
loadpFromInstruction(6, t1)
- loadq ThisArgumentOffset[cfr, t2, 8], t0
+ loadq FirstArgumentOffset[cfr, t2, 8], t0
storeq t0, [cfr, t3, 8]
valueProfile(t0, 6, t1)
dispatch(7)
--- /dev/null
+function foo(index) {
+ return arguments[index];
+}
+
+noInline(foo);
+
+var result = foo(1);
+if (result !== void 0)
+ throw "Error: bad result at end: " + result;
--- /dev/null
+function foo(index) {
+ return arguments[index];
+}
+
+noInline(foo);
+
+for (var i = 0; i < 100000; ++i) {
+ var result = foo(1, 42);
+ if (result != 42)
+ throw "Error: bad result in loop: " + result;
+}
+
+var result = foo(1);
+if (result !== void 0)
+ throw "Error: bad result at end: " + result;
--- /dev/null
+function foo(index) {
+ if (index > 1000)
+ arguments = [1, 2, 3];
+ return arguments[index];
+}
+
+noInline(foo);
+
+for (var i = 0; i < 100000; ++i) {
+ var result = foo(1, 42);
+ if (result != 42)
+ throw "Error: bad result in loop: " + result;
+}
+
+var result = foo(1);
+if (result !== void 0)
+ throw "Error: bad result at end: " + result;
--- /dev/null
+function foo(index) {
+ if (index > 1000)
+ arguments = [1, 2, 3];
+ return arguments[index];
+}
+
+noInline(foo);
+
+for (var i = 0; i < 100000; ++i) {
+ var result = foo(1, 42);
+ if (result != 42)
+ throw "Error: bad result in loop: " + result;
+}
+
+var result = foo(-1);
+if (result !== void 0)
+ throw "Error: bad result at end: " + result;
--- /dev/null
+function foo(index) {
+ return arguments[index];
+}
+
+noInline(foo);
+
+var result = foo(-1);
+if (result !== void 0)
+ throw "Error: bad result at end: " + result;
--- /dev/null
+function foo(index) {
+ return arguments[index];
+}
+
+noInline(foo);
+
+for (var i = 0; i < 100000; ++i) {
+ var result = foo(1, 42);
+ if (result != 42)
+ throw "Error: bad result in loop: " + result;
+}
+
+var result = foo(-1);
+if (result !== void 0)
+ throw "Error: bad result at end: " + result;