We should be able to optimize the pattern where we spread a function's rest parameter...
author    sbarati@apple.com <sbarati@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 30 Nov 2016 06:24:44 +0000 (06:24 +0000)
committer sbarati@apple.com <sbarati@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 30 Nov 2016 06:24:44 +0000 (06:24 +0000)
https://bugs.webkit.org/show_bug.cgi?id=163865

Reviewed by Filip Pizlo.

JSTests:

* microbenchmarks/default-derived-constructor.js: Added.
(createClassHierarchy.let.currentClass):
(createClassHierarchy):
* stress/call-varargs-spread.js: Added.
(assert):
(bar):
(foo):
* stress/load-varargs-on-new-array-with-spread-convert-to-static-loads.js: Added.
(assert):
(baz):
(bar):
(foo):
* stress/new-array-with-spread-with-normal-spread-and-phantom-spread.js: Added.
(assert):
(foo):
(escape):
(bar):
* stress/phantom-new-array-with-spread-osr-exit.js: Added.
(assert):
(baz):
(bar):
(effects):
(foo):
* stress/phantom-spread-forward-varargs.js: Added.
(assert):
(test1.bar):
(test1.foo):
(test1):
(test2.bar):
(test2.foo):
(test3.baz):
(test3.bar):
(test3.foo):
(test4.baz):
(test4.bar):
(test4.foo):
(test5.baz):
(test5.bar):
(test5.foo):
* stress/phantom-spread-osr-exit.js: Added.
(assert):
(baz):
(bar):
(effects):
(foo):
* stress/spread-call-convert-to-static-call.js: Added.
(assert):
(baz):
(bar):
(foo):
* stress/spread-forward-call-varargs-stack-overflow.js: Added.
(assert):
(identity):
(bar):
(foo):
* stress/spread-forward-varargs-rest-parameter-change-iterator-protocol-2.js: Added.
(assert):
(baz.Array.prototype.Symbol.iterator):
(baz):
(bar):
(foo):
(test):
* stress/spread-forward-varargs-rest-parameter-change-iterator-protocol.js: Added.
(assert):
(baz.Array.prototype.Symbol.iterator):
(baz):
(bar):
(foo):
* stress/spread-forward-varargs-stack-overflow.js: Added.
(assert):
(bar):
(foo):

Source/JavaScriptCore:

This patch optimizes the following patterns to prevent both the allocation
of the rest parameter, and the execution of the iterator protocol:

```
function foo(...args) {
    let arr = [...args];
}
```

and

```
function foo(...args) {
    bar(...args);
}
```

To do this, I've extended the arguments elimination phase to reason
about Spread and NewArrayWithSpread. I've added two new nodes, PhantomSpread
and PhantomNewArrayWithSpread. PhantomSpread is only allowed over rest
parameters that don't escape. If the rest parameter *does* escape, we can't
convert the spread into a phantom: it would not be sound w.r.t. JS
semantics, since we would be reading from the call frame even though
the rest array may have changed.
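As a hypothetical illustration (these functions are not from the patch), a rest array that is mutated after creation no longer mirrors the call frame, so the spread must read the array itself:

```javascript
function bar(...xs) {
    return xs;
}

function foo(...args) {
    args[0] = "changed";   // the rest array now disagrees with the call frame
    return bar(...args);   // the spread must observe the mutation
}

// foo(1, 2) must produce ["changed", 2]; naively reading the original
// arguments from the call frame would incorrectly produce [1, 2].
```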

Note that NewArrayWithSpread also understands what to do when one of its
arguments is PhantomSpread(@PhantomCreateRest) even if it itself is escaped.

PhantomNewArrayWithSpread is only allowed over a series of
PhantomSpread(@PhantomCreateRest) nodes. Like with PhantomSpread, PhantomNewArrayWithSpread
is only allowed if none of its arguments that are being spread are escaped
and if it itself is not escaped.

Because there is a dependency between a node being a candidate and
the escaped state of the node's children, I've extended the notion
of escaping a node inside the arguments elimination phase. Now, when
any node is escaped, we must consider all other candidates that may
now no longer be valid.

For example:

```
function foo(...args) {
    escape(args);
    bar(...args);
}
```

In the above program, we don't know whether the call to escape()
modifies args; therefore, the spread cannot become phantom, because
executing the spread may not be as simple as reading the
arguments from the call frame.
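A concrete, hypothetical way escape() could invalidate the optimization is by installing a custom iterator on the rest array; the later spread must then run that iterator rather than copy the frame arguments:

```javascript
function escape(a) {
    // Give this particular array its own Symbol.iterator; the subsequent
    // spread must invoke it instead of reading the call frame.
    a[Symbol.iterator] = function* () { yield "hijacked"; };
}

function bar(...xs) {
    return xs;
}

function foo(...args) {
    escape(args);
    return bar(...args);
}

// foo(1, 2, 3) yields ["hijacked"], not [1, 2, 3].
```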

Unfortunately, the arguments elimination phase does not consider control
flow when doing its escape analysis. It would be good to integrate this
phase with the object allocation sinking phase. To see why, consider
an example where we don't eliminate the spread and allocation of the rest
parameter even though we could:

```
function foo(rareCondition, ...args) {
    bar(...args);
    if (rareCondition)
        baz(args);
}
```

There are only a few users of the PhantomSpread and PhantomNewArrayWithSpread
nodes. PhantomSpread is only used by PhantomNewArrayWithSpread and NewArrayWithSpread.
PhantomNewArrayWithSpread is only used by ForwardVarargs and the various
*Call*ForwardVarargs nodes. The users of these phantoms know how to produce
what the phantom node would have produced. For example, NewArrayWithSpread
knows how to produce the values that would have been produced by PhantomSpread(@PhantomCreateRest)
by directly reading from the call frame.
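The call-frame read can be sketched as a small JS model (hypothetical; the real logic lives in the DFG/FTL lowering, not in JS): for `function f(a, ...args)`, the values PhantomSpread(@PhantomCreateRest) stands for are the frame arguments past the named parameters.

```javascript
// Hypothetical model of recovering [...args] straight from the frame,
// skipping the named parameters, without materializing the rest array.
function spreadFromFrame(frameArguments, numberOfNamedParameters) {
    const out = [];
    for (let i = numberOfNamedParameters; i < frameArguments.length; i++)
        out.push(frameArguments[i]);
    return out;
}

// For f(10, 20, 30) with one named parameter, args corresponds to [20, 30].
```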

This patch is a 6% speedup on my MBP on ES6SampleBench.

* b3/B3LowerToAir.cpp:
(JSC::B3::Air::LowerToAir::tryAppendLea):
* b3/B3ValueRep.h:
* builtins/BuiltinExecutables.cpp:
(JSC::BuiltinExecutables::createDefaultConstructor):
* dfg/DFGAbstractInterpreterInlines.h:
(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
* dfg/DFGArgumentsEliminationPhase.cpp:
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGDoesGC.cpp:
(JSC::DFG::doesGC):
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::fixupNode):
* dfg/DFGForAllKills.h:
(JSC::DFG::forAllKillsInBlock):
* dfg/DFGNode.h:
(JSC::DFG::Node::hasConstant):
(JSC::DFG::Node::constant):
(JSC::DFG::Node::bitVector):
(JSC::DFG::Node::isPhantomAllocation):
* dfg/DFGNodeType.h:
* dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
(JSC::DFG::OSRAvailabilityAnalysisPhase::run):
(JSC::DFG::LocalOSRAvailabilityCalculator::LocalOSRAvailabilityCalculator):
(JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
* dfg/DFGOSRAvailabilityAnalysisPhase.h:
* dfg/DFGObjectAllocationSinkingPhase.cpp:
* dfg/DFGPreciseLocalClobberize.h:
(JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
* dfg/DFGPredictionPropagationPhase.cpp:
* dfg/DFGPromotedHeapLocation.cpp:
(WTF::printInternal):
* dfg/DFGPromotedHeapLocation.h:
* dfg/DFGSafeToExecute.h:
(JSC::DFG::safeToExecute):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGValidate.cpp:
* ftl/FTLCapabilities.cpp:
(JSC::FTL::canCompile):
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::LowerDFGToB3):
(JSC::FTL::DFG::LowerDFGToB3::compileNode):
(JSC::FTL::DFG::LowerDFGToB3::compileNewArrayWithSpread):
(JSC::FTL::DFG::LowerDFGToB3::compileSpread):
(JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargsSpread):
(JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargs):
(JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargs):
(JSC::FTL::DFG::LowerDFGToB3::getSpreadLengthFromInlineCallFrame):
(JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargsWithSpread):
* ftl/FTLOperations.cpp:
(JSC::FTL::operationPopulateObjectInOSR):
(JSC::FTL::operationMaterializeObjectInOSR):
* jit/SetupVarargsFrame.cpp:
(JSC::emitSetupVarargsFrameFastCase):
* jsc.cpp:
(GlobalObject::finishCreation):
(functionMaxArguments):
* runtime/JSFixedArray.h:
(JSC::JSFixedArray::createFromArray):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@209121 268f45cc-cd09-0410-ab3c-d52691b4dbfc

42 files changed:
JSTests/ChangeLog
JSTests/microbenchmarks/default-derived-constructor.js [new file with mode: 0644]
JSTests/stress/call-varargs-spread.js [new file with mode: 0644]
JSTests/stress/load-varargs-on-new-array-with-spread-convert-to-static-loads.js [new file with mode: 0644]
JSTests/stress/new-array-with-spread-with-normal-spread-and-phantom-spread.js [new file with mode: 0644]
JSTests/stress/phantom-new-array-with-spread-osr-exit.js [new file with mode: 0644]
JSTests/stress/phantom-spread-forward-varargs.js [new file with mode: 0644]
JSTests/stress/phantom-spread-osr-exit.js [new file with mode: 0644]
JSTests/stress/spread-call-convert-to-static-call.js [new file with mode: 0644]
JSTests/stress/spread-forward-call-varargs-stack-overflow.js [new file with mode: 0644]
JSTests/stress/spread-forward-varargs-rest-parameter-change-iterator-protocol-2.js [new file with mode: 0644]
JSTests/stress/spread-forward-varargs-rest-parameter-change-iterator-protocol.js [new file with mode: 0644]
JSTests/stress/spread-forward-varargs-stack-overflow.js [new file with mode: 0644]
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/b3/B3LowerToAir.cpp
Source/JavaScriptCore/b3/B3ValueRep.h
Source/JavaScriptCore/builtins/BuiltinExecutables.cpp
Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp
Source/JavaScriptCore/dfg/DFGClobberize.h
Source/JavaScriptCore/dfg/DFGDoesGC.cpp
Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
Source/JavaScriptCore/dfg/DFGForAllKills.h
Source/JavaScriptCore/dfg/DFGNode.h
Source/JavaScriptCore/dfg/DFGNodeType.h
Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.cpp
Source/JavaScriptCore/dfg/DFGOSRAvailabilityAnalysisPhase.h
Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp
Source/JavaScriptCore/dfg/DFGPreciseLocalClobberize.h
Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
Source/JavaScriptCore/dfg/DFGPromotedHeapLocation.cpp
Source/JavaScriptCore/dfg/DFGPromotedHeapLocation.h
Source/JavaScriptCore/dfg/DFGSafeToExecute.h
Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
Source/JavaScriptCore/dfg/DFGValidate.cpp
Source/JavaScriptCore/ftl/FTLCapabilities.cpp
Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
Source/JavaScriptCore/ftl/FTLOperations.cpp
Source/JavaScriptCore/jit/SetupVarargsFrame.cpp
Source/JavaScriptCore/jsc.cpp
Source/JavaScriptCore/runtime/JSFixedArray.h

index a757ba9..3c102a7 100644 (file)
@@ -1,3 +1,83 @@
+2016-11-29  Saam Barati  <sbarati@apple.com>
+
+        We should be able to optimize the pattern where we spread a function's rest parameter to another call
+        https://bugs.webkit.org/show_bug.cgi?id=163865
+
+        Reviewed by Filip Pizlo.
+
+        * microbenchmarks/default-derived-constructor.js: Added.
+        (createClassHierarchy.let.currentClass):
+        (createClassHierarchy):
+        * stress/call-varargs-spread.js: Added.
+        (assert):
+        (bar):
+        (foo):
+        * stress/load-varargs-on-new-array-with-spread-convert-to-static-loads.js: Added.
+        (assert):
+        (baz):
+        (bar):
+        (foo):
+        * stress/new-array-with-spread-with-normal-spread-and-phantom-spread.js: Added.
+        (assert):
+        (foo):
+        (escape):
+        (bar):
+        * stress/phantom-new-array-with-spread-osr-exit.js: Added.
+        (assert):
+        (baz):
+        (bar):
+        (effects):
+        (foo):
+        * stress/phantom-spread-forward-varargs.js: Added.
+        (assert):
+        (test1.bar):
+        (test1.foo):
+        (test1):
+        (test2.bar):
+        (test2.foo):
+        (test3.baz):
+        (test3.bar):
+        (test3.foo):
+        (test4.baz):
+        (test4.bar):
+        (test4.foo):
+        (test5.baz):
+        (test5.bar):
+        (test5.foo):
+        * stress/phantom-spread-osr-exit.js: Added.
+        (assert):
+        (baz):
+        (bar):
+        (effects):
+        (foo):
+        * stress/spread-call-convert-to-static-call.js: Added.
+        (assert):
+        (baz):
+        (bar):
+        (foo):
+        * stress/spread-forward-call-varargs-stack-overflow.js: Added.
+        (assert):
+        (identity):
+        (bar):
+        (foo):
+        * stress/spread-forward-varargs-rest-parameter-change-iterator-protocol-2.js: Added.
+        (assert):
+        (baz.Array.prototype.Symbol.iterator):
+        (baz):
+        (bar):
+        (foo):
+        (test):
+        * stress/spread-forward-varargs-rest-parameter-change-iterator-protocol.js: Added.
+        (assert):
+        (baz.Array.prototype.Symbol.iterator):
+        (baz):
+        (bar):
+        (foo):
+        * stress/spread-forward-varargs-stack-overflow.js: Added.
+        (assert):
+        (bar):
+        (foo):
+
 2016-11-29  Caitlin Potter  <caitp@igalia.com>
 
         [JSC] always wrap AwaitExpression operand in a new Promise
diff --git a/JSTests/microbenchmarks/default-derived-constructor.js b/JSTests/microbenchmarks/default-derived-constructor.js
new file mode 100644 (file)
index 0000000..d081511
--- /dev/null
@@ -0,0 +1,17 @@
+function createClassHierarchy(depth) {
+    let currentClass = class Base { };
+    for (let i = 0; i < depth; i++) {
+        currentClass = class extends currentClass {};
+    }
+    return currentClass;
+}
+
+let ctor = createClassHierarchy(10);
+let start = Date.now();
+for (let i = 0; i < 500000; i++) {
+    let x = new ctor({}, {}, 20, 30, 40, 50, 60, {}, 80, true, false);
+}
+
+const verbose = false;
+if (verbose)
+    print(Date.now() - start);
diff --git a/JSTests/stress/call-varargs-spread.js b/JSTests/stress/call-varargs-spread.js
new file mode 100644 (file)
index 0000000..0e19c88
--- /dev/null
@@ -0,0 +1,28 @@
+function assert(b, m = "") {
+    if (!b)
+        throw new Error("Bad assert: " + m);
+}
+noInline(assert);
+
+function bar(...args) {
+    return args;
+}
+noInline(bar);
+
+function foo(a, ...args) {
+    let x = bar(...args, 42, ...args); 
+    return x;
+}
+noInline(foo);
+
+for (let i = 0; i < 10000; i++) {
+    let r = foo(i, i+1, i+2, i+3);
+    assert(r.length === 7);
+    assert(r[0] === i+1, JSON.stringify(r));
+    assert(r[1] === i+2, JSON.stringify(r));
+    assert(r[2] === i+3, JSON.stringify(r));
+    assert(r[3] === 42, JSON.stringify(r));
+    assert(r[4] === i+1, JSON.stringify(r));
+    assert(r[5] === i+2, JSON.stringify(r));
+    assert(r[6] === i+3, JSON.stringify(r));
+}
diff --git a/JSTests/stress/load-varargs-on-new-array-with-spread-convert-to-static-loads.js b/JSTests/stress/load-varargs-on-new-array-with-spread-convert-to-static-loads.js
new file mode 100644 (file)
index 0000000..48deb0d
--- /dev/null
@@ -0,0 +1,28 @@
+function assert(b) {
+    if (!b)
+        throw new Error("Bad!");
+}
+noInline(assert);
+
+function baz(...args) {
+    return args;
+}
+function bar(a, ...args) {
+    return baz(...args, 42, ...args);
+}
+function foo(a, b, c, d) {
+    return bar(a, b, c, d);
+}
+noInline(foo);
+
+for (let i = 0; i < 10000; i++) {
+    let r = foo(i, i+1, i+2, i+3);
+    assert(r.length === 7);
+    assert(r[0] === i+1);
+    assert(r[1] === i+2);
+    assert(r[2] === i+3);
+    assert(r[3] === 42);
+    assert(r[4] === i+1);
+    assert(r[5] === i+2);
+    assert(r[6] === i+3);
+}
diff --git a/JSTests/stress/new-array-with-spread-with-normal-spread-and-phantom-spread.js b/JSTests/stress/new-array-with-spread-with-normal-spread-and-phantom-spread.js
new file mode 100644 (file)
index 0000000..001ec72
--- /dev/null
@@ -0,0 +1,33 @@
+function assert(b) {
+    if (!b)
+        throw new Error("Bad assertion")
+}
+noInline(assert);
+
+function foo(a, ...args) {
+    let r = [...a, ...args];
+    return r;
+}
+noInline(foo);
+
+function escape(a) { return a; }
+noInline(escape);
+function bar(a, ...args) {
+    escape(args);
+    let r = [...a, ...args];
+    return r;
+}
+noInline(bar);
+
+for (let i = 0; i < 50000; i++) {
+    for (let f of [foo, bar]) {
+        let o = {};
+        let a = [o, 20];
+        let r = f(a, 30, 40);
+        assert(r.length === 4);
+        assert(r[0] === o);
+        assert(r[1] === 20);
+        assert(r[2] === 30);
+        assert(r[3] === 40);
+    }
+}
diff --git a/JSTests/stress/phantom-new-array-with-spread-osr-exit.js b/JSTests/stress/phantom-new-array-with-spread-osr-exit.js
new file mode 100644 (file)
index 0000000..5109ab0
--- /dev/null
@@ -0,0 +1,45 @@
+function assert(b) {
+    if (!b)
+        throw new Error("Bad assertion!");
+}
+noInline(assert);
+
+let value = false;
+
+function baz(x) {
+    if (typeof x !== "number") {
+        value = true;
+    }
+    return x;
+}
+noInline(baz);
+
+function bar(...args) {
+    return args;
+}
+
+let didEffects = false; 
+function effects() { didEffects = true; }
+noInline(effects);
+
+function foo(a, ...args) {
+    let theArgs = [...args, a, ...args];
+    baz(a);
+    if (value) {
+        effects();
+    }
+    let r = bar.apply(null, theArgs);
+    return r;
+}
+noInline(foo);
+
+for (let i = 0; i < 100000; i++) {
+    foo(i, i+1);
+    assert(!didEffects);
+}
+let o = {};
+let [a, b, c] = foo(o, 20);
+assert(a === 20);
+assert(b === o);
+assert(c === 20);
+assert(didEffects);
diff --git a/JSTests/stress/phantom-spread-forward-varargs.js b/JSTests/stress/phantom-spread-forward-varargs.js
new file mode 100644 (file)
index 0000000..f9a22e2
--- /dev/null
@@ -0,0 +1,117 @@
+"use strict";
+
+function assert(b, m="") {
+    if (!b)
+        throw new Error("Bad assertion: " + m);
+}
+noInline(assert);
+
+function test1() {
+    function bar(a, b, c, d) {
+        return [a, b, c, d];
+    }
+    function foo(...args) {
+        return bar(...args);
+    }
+    noInline(foo);
+
+    for (let i = 0; i < 10000; i++) {
+        let [a, b, c, d] = foo(i, i+1, i+2, i+3);
+        assert(a === i);
+        assert(b === i+1);
+        assert(c === i+2);
+        assert(d === i+3);
+    }
+}
+
+function test2() {
+    function bar(...args) {
+        return args;
+    }
+    function foo(a, ...args) {
+        return bar(...args, a, ...args);
+    }
+    noInline(foo);
+
+    for (let i = 0; i < 10000; i++) {
+        let r = foo(i, i+1, i+2, i+3);
+        assert(r.length === 7);
+        let [a, b, c, d, e, f, g] = r;
+        assert(a === i+1);
+        assert(b === i+2);
+        assert(c === i+3);
+        assert(d === i);
+        assert(e === i+1);
+        assert(f === i+2);
+        assert(g === i+3);
+    }
+}
+
+function test3() {
+    function baz(...args) {
+        return args;
+    }
+    function bar(...args) {
+        return baz(...args);
+    }
+    function foo(a, b, c, ...args) {
+        return bar(...args, a, ...args);
+    }
+    noInline(foo);
+
+    for (let i = 0; i < 100000; i++) {
+        let r = foo(i, i+1, i+2, i+3);
+        assert(r.length === 3);
+        let [a, b, c] = r;
+        assert(a === i+3);
+        assert(b === i);
+        assert(c === i+3);
+    }
+}
+
+function test4() {
+    function baz(...args) {
+        return args;
+    }
+    function bar(...args) {
+        return baz(...args);
+    }
+    function foo(a, b, c, d, ...args) {
+        return bar(...args, a, ...args);
+    }
+    noInline(foo);
+
+    for (let i = 0; i < 100000; i++) {
+        let r = foo(i, i+1, i+2, i+3);
+        assert(r.length === 1);
+        assert(r[0] === i);
+    }
+}
+
+function test5() {
+    function baz(a, b, c) {
+        return [a, b, c];
+    }
+    function bar(...args) {
+        return baz(...args);
+    }
+    function foo(a, b, c, d, ...args) {
+        return bar(...args, a, ...args);
+    }
+    noInline(foo);
+
+    for (let i = 0; i < 100000; i++) {
+        let r = foo(i, i+1, i+2, i+3);
+        assert(r.length === 3);
+        let [a, b, c] = r;
+        assert(a === i);
+        assert(b === undefined);
+        assert(c === undefined);
+    }
+}
+
+test1();
+test2();
+test3();
+test4();
+test5();
diff --git a/JSTests/stress/phantom-spread-osr-exit.js b/JSTests/stress/phantom-spread-osr-exit.js
new file mode 100644 (file)
index 0000000..c9189f9
--- /dev/null
@@ -0,0 +1,43 @@
+function assert(b) {
+    if (!b)
+        throw new Error("Bad assertion!");
+}
+noInline(assert);
+
+let value = false;
+
+function baz(x) {
+    if (typeof x !== "number") {
+        value = true;
+    }
+    return x;
+}
+noInline(baz);
+
+function bar(...args) {
+    return args;
+}
+
+let didEffects = false; 
+function effects() { didEffects = true; }
+noInline(effects);
+
+function foo(a, ...args) {
+    let r = bar(...args, baz(a), ...args);
+    if (value) {
+        effects();
+    }
+    return r;
+}
+noInline(foo);
+
+for (let i = 0; i < 100000; i++) {
+    foo(i, i+1);
+    assert(!didEffects);
+}
+let o = {};
+let [a, b, c] = foo(o, 20);
+assert(a === 20);
+assert(b === o);
+assert(c === 20);
+assert(didEffects);
diff --git a/JSTests/stress/spread-call-convert-to-static-call.js b/JSTests/stress/spread-call-convert-to-static-call.js
new file mode 100644 (file)
index 0000000..f572664
--- /dev/null
@@ -0,0 +1,29 @@
+function assert(b) {
+    if (!b)
+        throw new Error("Bad!");
+}
+noInline(assert);
+
+function baz(...args) {
+    return args;
+}
+noInline(baz);
+function bar(a, ...args) {
+    return baz(...args, 42, ...args);
+}
+function foo(a, b, c, d) {
+    return bar(a, b, c, d);
+}
+noInline(foo);
+
+for (let i = 0; i < 10000; i++) {
+    let r = foo(i, i+1, i+2, i+3);
+    assert(r.length === 7);
+    assert(r[0] === i+1);
+    assert(r[1] === i+2);
+    assert(r[2] === i+3);
+    assert(r[3] === 42);
+    assert(r[4] === i+1);
+    assert(r[5] === i+2);
+    assert(r[6] === i+3);
+}
diff --git a/JSTests/stress/spread-forward-call-varargs-stack-overflow.js b/JSTests/stress/spread-forward-call-varargs-stack-overflow.js
new file mode 100644 (file)
index 0000000..ed82286
--- /dev/null
@@ -0,0 +1,57 @@
+function assert(b) {
+    if (!b)
+        throw new Error("Bad assertion");
+}
+noInline(assert);
+
+function identity(a) { return a; }
+noInline(identity);
+
+function bar(...args) {
+    return args;
+}
+noInline(bar);
+
+function foo(a, ...args) {
+    let arg = identity(a);
+    try {
+        let r = bar(...args, ...args);
+        return r;
+    } catch(e) {
+        return arg;
+    }
+}
+noInline(foo);
+
+for (let i = 0; i < 40000; i++) {
+    let args = [];
+    for (let i = 0; i < 400; i++) {
+        args.push(i);
+    }
+
+    let o = {};
+    let r = foo(o, ...args);
+    let i = 0;
+    for (let arg of args) {
+        assert(r[i] === arg);
+        i++;
+    }
+    for (let arg of args) {
+        assert(r[i] === arg);
+        i++;
+    }
+}
+
+for (let i = 0; i < 20; i++) {
+    let threw = false;
+    let o = {};
+    let args = [];
+    let argCount = maxArguments() * (2/3);
+    argCount = argCount | 0;
+    for (let i = 0; i < argCount; i++) {
+        args.push(i);
+    }
+
+    let r = foo(o, ...args);
+    assert(r === o);
+}
diff --git a/JSTests/stress/spread-forward-varargs-rest-parameter-change-iterator-protocol-2.js b/JSTests/stress/spread-forward-varargs-rest-parameter-change-iterator-protocol-2.js
new file mode 100644 (file)
index 0000000..8196a5d
--- /dev/null
@@ -0,0 +1,46 @@
+function assert(b, m="") {
+    if (!b)
+        throw new Error("Bad assertion: " + m);
+}
+noInline(assert);
+
+let called = false;
+function baz(c) {
+    if (c) {
+        Array.prototype[Symbol.iterator] = function() {
+            assert(false, "should not be called!");
+            let i = 0;
+            return {
+                next() {
+                    assert(false, "should not be called!");
+                }
+            };
+        }
+    }
+}
+noInline(baz);
+
+function bar(...args) {
+    return args;
+}
+noInline(bar);
+
+function foo(c, ...args) {
+    args = [...args];
+    baz(c);
+    return bar.apply(null, args);
+}
+noInline(foo);
+
+function test(c) {
+    const args = [{}, 20, [], 45];
+    let r = foo(c, ...args);
+    assert(r.length === args.length);
+    for (let i = 0; i < r.length; i++)
+        assert(r[i] === args[i]);
+}
+noInline(test);
+for (let i = 0; i < 40000; i++) {
+    test(false);
+}
+test(true);
diff --git a/JSTests/stress/spread-forward-varargs-rest-parameter-change-iterator-protocol.js b/JSTests/stress/spread-forward-varargs-rest-parameter-change-iterator-protocol.js
new file mode 100644 (file)
index 0000000..72bd04f
--- /dev/null
@@ -0,0 +1,49 @@
+function assert(b) {
+    if (!b)
+        throw new Error("Bad assertion");
+}
+noInline(assert);
+
+let called = false;
+function baz(c) {
+    if (c) {
+        Array.prototype[Symbol.iterator] = function() {
+            let i = 0;
+            return {
+                next() {
+                    i++;
+                    if (i === 2)
+                        return {done: true};
+                    return {value: 40, done: false};
+                }
+            };
+        }
+    }
+}
+noInline(baz);
+
+function bar(...args) {
+    return args;
+}
+noInline(bar);
+
+function foo(c, ...args) {
+    baz(c);
+    return bar(...args);
+}
+noInline(foo);
+
+for (let i = 0; i < 10000; i++) {
+    const c = false;
+    const args = [{}, 20, [], 45];
+    let r = foo(c, ...args);
+    assert(r.length === args.length);
+    for (let i = 0; i < r.length; i++)
+        assert(r[i] === args[i]);
+}
+
+const c = true;
+const args = [{}, 20, [], 45];
+let r = foo(c, ...args);
+assert(r.length === 1);
+assert(r[0] === 40);
diff --git a/JSTests/stress/spread-forward-varargs-stack-overflow.js b/JSTests/stress/spread-forward-varargs-stack-overflow.js
new file mode 100644 (file)
index 0000000..b11d45e
--- /dev/null
@@ -0,0 +1,52 @@
+function assert(b) {
+    if (!b)
+        throw new Error("Bad assertion");
+}
+noInline(assert);
+
+function bar(...args) {
+    return args;
+}
+
+function foo(a, ...args) {
+    try {
+        let r = bar(...args, ...args);
+        return r;
+    } catch(e) {
+        return a;
+    }
+}
+noInline(foo);
+
+for (let i = 0; i < 10000; i++) {
+    let args = [];
+    for (let i = 0; i < 400; i++) {
+        args.push(i);
+    }
+
+    let o = {};
+    let r = foo(o, ...args);
+    let i = 0;
+    for (let arg of args) {
+        assert(r[i] === arg);
+        i++;
+    }
+    for (let arg of args) {
+        assert(r[i] === arg);
+        i++;
+    }
+}
+
+for (let i = 0; i < 20; i++) {
+    let threw = false;
+    let o = {};
+    let args = [];
+    let argCount = maxArguments() * (2/3);
+    argCount = argCount | 0;
+    for (let i = 0; i < argCount; i++) {
+        args.push(i);
+    }
+
+    let r = foo(o, ...args);
+    assert(r === o);
+}
index cb8ab73..2517975 100644 (file)
@@ -1,3 +1,149 @@
+2016-11-29  Saam Barati  <sbarati@apple.com>
+
+        We should be able to optimize the pattern where we spread a function's rest parameter to another call
+        https://bugs.webkit.org/show_bug.cgi?id=163865
+
+        Reviewed by Filip Pizlo.
+
+        This patch optimizes the following patterns to prevent both the allocation
+        of the rest parameter, and the execution of the iterator protocol:
+        
+        ```
+        function foo(...args) {
+            let arr = [...args];
+        }
+        
+        and
+        
+        function foo(...args) {
+            bar(...args);
+        }
+        ```
+        
+        To do this, I've extended the arguments elimination phase to reason
+        about Spread and NewArrayWithSpread. I've added two new nodes, PhantomSpread
+        and PhantomNewArrayWithSpread. PhantomSpread is only allowed over rest
+        parameters that don't escape. If the rest parameter *does* escape, we can't
+        convert the spread into a phantom: it would not be sound w.r.t. JS
+        semantics, since we would be reading from the call frame even though
+        the rest array may have changed.
+        
+        Note that NewArrayWithSpread also understands what to do when one of its
+        arguments is PhantomSpread(@PhantomCreateRest) even if it itself is escaped.
+        
+        PhantomNewArrayWithSpread is only allowed over a series of
+        PhantomSpread(@PhantomCreateRest) nodes. Like with PhantomSpread, PhantomNewArrayWithSpread
+        is only allowed if none of its arguments that are being spread are escaped
+        and if it itself is not escaped.
+        
+        Because there is a dependency between a node being a candidate and
+        the escaped state of the node's children, I've extended the notion
+        of escaping a node inside the arguments elimination phase. Now, when
+        any node is escaped, we must consider all other candidates that may
+        now no longer be valid.
+        
+        For example:
+        
+        ```
+        function foo(...args) {
+            escape(args);
+            bar(...args);
+        }
+        ```
+        
+        In the above program, we don't know whether the call to escape()
+        modifies args; therefore, the spread cannot become phantom, because
+        executing the spread may not be as simple as reading the
+        arguments from the call frame.
+        
+        Unfortunately, the arguments elimination phase does not consider control
+        flow when doing its escape analysis. It would be good to integrate this
+        phase with the object allocation sinking phase. To see why, consider
+        an example where we don't eliminate the spread and allocation of the rest
+        parameter even though we could:
+        
+        ```
+        function foo(rareCondition, ...args) {
+            bar(...args);
+            if (rareCondition)
+                baz(args);
+        }
+        ```
+        
+        There are only a few users of the PhantomSpread and PhantomNewArrayWithSpread
+        nodes. PhantomSpread is only used by PhantomNewArrayWithSpread and NewArrayWithSpread.
+        PhantomNewArrayWithSpread is only used by ForwardVarargs and the various
+        *Call*ForwardVarargs nodes. The users of these phantoms know how to produce
+        what the phantom node would have produced. For example, NewArrayWithSpread
+        knows how to produce the values that would have been produced by PhantomSpread(@PhantomCreateRest)
+        by directly reading from the call frame.
+        
+        This patch is a 6% speedup on my MBP on ES6SampleBench.
+
+        * b3/B3LowerToAir.cpp:
+        (JSC::B3::Air::LowerToAir::tryAppendLea):
+        * b3/B3ValueRep.h:
+        * builtins/BuiltinExecutables.cpp:
+        (JSC::BuiltinExecutables::createDefaultConstructor):
+        * dfg/DFGAbstractInterpreterInlines.h:
+        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
+        * dfg/DFGArgumentsEliminationPhase.cpp:
+        * dfg/DFGClobberize.h:
+        (JSC::DFG::clobberize):
+        * dfg/DFGDoesGC.cpp:
+        (JSC::DFG::doesGC):
+        * dfg/DFGFixupPhase.cpp:
+        (JSC::DFG::FixupPhase::fixupNode):
+        * dfg/DFGForAllKills.h:
+        (JSC::DFG::forAllKillsInBlock):
+        * dfg/DFGNode.h:
+        (JSC::DFG::Node::hasConstant):
+        (JSC::DFG::Node::constant):
+        (JSC::DFG::Node::bitVector):
+        (JSC::DFG::Node::isPhantomAllocation):
+        * dfg/DFGNodeType.h:
+        * dfg/DFGOSRAvailabilityAnalysisPhase.cpp:
+        (JSC::DFG::OSRAvailabilityAnalysisPhase::run):
+        (JSC::DFG::LocalOSRAvailabilityCalculator::LocalOSRAvailabilityCalculator):
+        (JSC::DFG::LocalOSRAvailabilityCalculator::executeNode):
+        * dfg/DFGOSRAvailabilityAnalysisPhase.h:
+        * dfg/DFGObjectAllocationSinkingPhase.cpp:
+        * dfg/DFGPreciseLocalClobberize.h:
+        (JSC::DFG::PreciseLocalClobberizeAdaptor::readTop):
+        * dfg/DFGPredictionPropagationPhase.cpp:
+        * dfg/DFGPromotedHeapLocation.cpp:
+        (WTF::printInternal):
+        * dfg/DFGPromotedHeapLocation.h:
+        * dfg/DFGSafeToExecute.h:
+        (JSC::DFG::safeToExecute):
+        * dfg/DFGSpeculativeJIT32_64.cpp:
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGSpeculativeJIT64.cpp:
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGValidate.cpp:
+        * ftl/FTLCapabilities.cpp:
+        (JSC::FTL::canCompile):
+        * ftl/FTLLowerDFGToB3.cpp:
+        (JSC::FTL::DFG::LowerDFGToB3::LowerDFGToB3):
+        (JSC::FTL::DFG::LowerDFGToB3::compileNode):
+        (JSC::FTL::DFG::LowerDFGToB3::compileNewArrayWithSpread):
+        (JSC::FTL::DFG::LowerDFGToB3::compileSpread):
+        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargsSpread):
+        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargs):
+        (JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargs):
+        (JSC::FTL::DFG::LowerDFGToB3::getSpreadLengthFromInlineCallFrame):
+        (JSC::FTL::DFG::LowerDFGToB3::compileForwardVarargsWithSpread):
+        * ftl/FTLOperations.cpp:
+        (JSC::FTL::operationPopulateObjectInOSR):
+        (JSC::FTL::operationMaterializeObjectInOSR):
+        * jit/SetupVarargsFrame.cpp:
+        (JSC::emitSetupVarargsFrameFastCase):
+        * jsc.cpp:
+        (GlobalObject::finishCreation):
+        (functionMaxArguments):
+        * runtime/JSFixedArray.h:
+        (JSC::JSFixedArray::createFromArray):
+
 2016-11-29  Commit Queue  <commit-queue@webkit.org>
 
         Unreviewed, rolling out r209058 and r209074.
index 490bae4..0fbea66 100644 (file)
@@ -1929,7 +1929,7 @@ private:
             && canBeInternal(value->child(0))
             && value->child(0)->opcode() == Add) {
             innerAdd = value->child(0);
-            offset = value->child(1)->asInt32();
+            offset = static_cast<int32_t>(value->child(1)->asInt());
             value = value->child(0);
         }
         
index 595f5ae..5f9635e 100644 (file)
@@ -78,7 +78,7 @@ public:
         // As an input representation, this forces a particular register and states that
         // the register is used late. This means that the register is used after the result
         // is defined (i.e, the result will interfere with this as an input).
-        // It's not valid for this to be used as a result kind.
+        // It's not a valid output representation.
         LateRegister,
 
         // As an output representation, this tells us what stack slot B3 picked. It's not a valid
index bd88c08..d02e450 100644 (file)
@@ -45,7 +45,7 @@ BuiltinExecutables::BuiltinExecutables(VM& vm)
 UnlinkedFunctionExecutable* BuiltinExecutables::createDefaultConstructor(ConstructorKind constructorKind, const Identifier& name)
 {
     static NeverDestroyed<const String> baseConstructorCode(ASCIILiteral("(function () { })"));
-    static NeverDestroyed<const String> derivedConstructorCode(ASCIILiteral("(function () { super(...arguments); })"));
+    static NeverDestroyed<const String> derivedConstructorCode(ASCIILiteral("(function (...args) { super(...args); })"));
 
     switch (constructorKind) {
     case ConstructorKind::None:
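The BuiltinExecutables change above rewrites the default derived constructor from `super(...arguments)` to a rest-parameter form, which is exactly the shape this patch optimizes. A minimal sketch of the user-visible pattern (class names here are illustrative, not from the patch):

```javascript
// With the new default body `constructor(...args) { super(...args); }`,
// a plain `class Derived extends Base {}` hits the spread-of-rest fast
// path: no rest array allocation, no iterator protocol.
class Base {
    constructor(a, b) {
        this.sum = a + b;
    }
}
class Derived extends Base {} // implicit: constructor(...args) { super(...args); }

const d = new Derived(1, 2);
// Arguments are forwarded to Base without materializing a rest array.
```

This is also what the added microbenchmarks/default-derived-constructor.js stresses: deep default-constructor chains forwarding arguments.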
index d4be12a..414cfc1 100644 (file)
@@ -1975,6 +1975,8 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
     case PhantomDirectArguments:
     case PhantomClonedArguments:
     case PhantomCreateRest:
+    case PhantomSpread:
+    case PhantomNewArrayWithSpread:
     case BottomValue:
         m_state.setDidClobber(true); // Prevent constant folding.
         // This claims to return bottom.
index 7a0c6a5..884fd84 100644 (file)
@@ -91,7 +91,7 @@ private:
     // Just finds nodes that we know how to work with.
     void identifyCandidates()
     {
-        for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
+        for (BasicBlock* block : m_graph.blocksInPreOrder()) {
             for (Node* node : *block) {
                 switch (node->op()) {
                 case CreateDirectArguments:
@@ -109,6 +109,38 @@ private:
                         m_candidates.add(node);
                     }
                     break;
+
+                case Spread:
+                    if (m_graph.isWatchingHavingABadTimeWatchpoint(node)) {
+                        // We check ArrayUse here because ArrayUse indicates that the iterator
+                        // protocol for Arrays is non-observable by user code (e.g., it hasn't
+                        // been changed).
+                        if (node->child1().useKind() == ArrayUse && node->child1()->op() == CreateRest && m_candidates.contains(node->child1().node()))
+                            m_candidates.add(node);
+                    }
+                    break;
+
+                case NewArrayWithSpread: {
+                    if (m_graph.isWatchingHavingABadTimeWatchpoint(node)) {
+                        BitVector* bitVector = node->bitVector();
+                        // We only allow for Spreads to be of rest nodes for now.
+                        bool isOK = true;
+                        for (unsigned i = 0; i < node->numChildren(); i++) {
+                            if (bitVector->get(i)) {
+                                Node* child = m_graph.varArgChild(node, i).node();
+                                isOK = child->op() == Spread && child->child1()->op() == CreateRest && m_candidates.contains(child);
+                                if (!isOK)
+                                    break;
+                            }
+                        }
+
+                        if (!isOK)
+                            break;
+
+                        m_candidates.add(node);
+                    }
+                    break;
+                }
                     
                 case CreateScopedArguments:
                     // FIXME: We could handle this if it wasn't for the fact that scoped arguments are
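The ArrayUse check and the having-a-bad-time watchpoint in identifyCandidates guard exactly the case where skipping the iterator protocol would be observable: if user code replaces `Array.prototype[Symbol.iterator]`, the fast path must be abandoned. The added spread-forward-varargs-rest-parameter-change-iterator-protocol tests cover this; a minimal sketch of the invariant:

```javascript
// The fast path may elide the iterator protocol only while spreading an
// array is unobservable. Replacing Array.prototype[Symbol.iterator] must
// change the result of spreading a rest parameter.
function sum(a, b, c) { return (a | 0) + (b | 0) + (c | 0); }
function spreadRest(...args) { return sum(...args); } // Spread(CreateRest)

const before = spreadRest(1, 2, 3); // 6, via the default protocol

Array.prototype[Symbol.iterator] = function* () { yield 1; yield 2; };
const after = spreadRest(1, 2, 3); // 3: the custom iterator yields 1 and 2
```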
@@ -126,6 +158,62 @@ private:
         if (verbose)
             dataLog("Candidates: ", listDump(m_candidates), "\n");
     }
+
+    bool isStillValidCandidate(Node* candidate)
+    {
+        switch (candidate->op()) {
+        case Spread:
+            return m_candidates.contains(candidate->child1().node());
+
+        case NewArrayWithSpread: {
+            BitVector* bitVector = candidate->bitVector();
+            for (unsigned i = 0; i < candidate->numChildren(); i++) {
+                if (bitVector->get(i)) {
+                    if (!m_candidates.contains(m_graph.varArgChild(candidate, i).node()))
+                        return false;
+                }
+            }
+            return true;
+        }
+
+        default:
+            return true;
+        }
+
+        RELEASE_ASSERT_NOT_REACHED();
+        return false;
+    }
+
+    void removeInvalidCandidates()
+    {
+        bool changed;
+        do {
+            changed = false;
+            Vector<Node*, 1> toRemove;
+
+            for (Node* candidate : m_candidates) {
+                if (!isStillValidCandidate(candidate))
+                    toRemove.append(candidate);
+            }
+
+            if (toRemove.size()) {
+                changed = true;
+                for (Node* node : toRemove)
+                    m_candidates.remove(node);
+            }
+
+        } while (changed);
+    }
+
+    void transitivelyRemoveCandidate(Node* node, Node* source = nullptr)
+    {
+        bool removed = m_candidates.remove(node);
+        if (removed && verbose && source)
+            dataLog("eliminating candidate: ", node, " because it escapes from: ", source, "\n");
+
+        if (removed)
+            removeInvalidCandidates();
+    }
     
     // Look for escaping sites, and remove from the candidates set if we see an escape.
     void eliminateCandidatesThatEscape()
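The new transitivelyRemoveCandidate/removeInvalidCandidates pair exists because candidates now depend on each other: a NewArrayWithSpread is only sinkable if its Spread children are, and a Spread only if its CreateRest is. Removing one candidate can invalidate others, so removal iterates to a fixed point. A standalone model of that fixpoint (not JSC code; names are hypothetical):

```javascript
// Model of removeInvalidCandidates: keep deleting candidates whose
// dependencies are no longer in the set, until nothing changes.
function pruneCandidates(candidates, dependenciesOf) {
    let changed;
    do {
        changed = false;
        for (const node of [...candidates]) {
            const deps = dependenciesOf.get(node) || [];
            if (!deps.every(d => candidates.has(d))) {
                candidates.delete(node);
                changed = true;
            }
        }
    } while (changed);
    return candidates;
}

// NewArrayWithSpread depends on Spread, which depends on CreateRest.
const deps = new Map([
    ["NewArrayWithSpread", ["Spread"]],
    ["Spread", ["CreateRest"]],
]);
// "CreateRest" escaped and was never a candidate, so both dependents fall out.
const live = pruneCandidates(new Set(["NewArrayWithSpread", "Spread"]), deps);
```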
@@ -133,9 +221,7 @@ private:
         auto escape = [&] (Edge edge, Node* source) {
             if (!edge)
                 return;
-            bool removed = m_candidates.remove(edge.node());
-            if (removed && verbose)
-                dataLog("eliminating candidate: ", edge.node(), " because it escapes from: ", source, "\n");
+            transitivelyRemoveCandidate(edge.node(), source);
         };
         
         auto escapeBasedOnArrayMode = [&] (ArrayMode mode, Edge edge, Node* source) {
@@ -187,6 +273,8 @@ private:
                 break;
             }
         };
+
+        removeInvalidCandidates();
         
         for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
             for (Node* node : *block) {
@@ -201,11 +289,42 @@ private:
                     break;
 
                 case GetArrayLength:
-                    escapeBasedOnArrayMode(node->arrayMode(), node->child1(), node);
+                    // FIXME: It would not be hard to support NewArrayWithSpread here if it is only over Spread(CreateRest) nodes.
                     escape(node->child2(), node);
                     break;
+                
+                case NewArrayWithSpread: {
+                    BitVector* bitVector = node->bitVector();
+                    bool isWatchingHavingABadTimeWatchpoint = m_graph.isWatchingHavingABadTimeWatchpoint(node);
+                    for (unsigned i = 0; i < node->numChildren(); i++) {
+                        Edge child = m_graph.varArgChild(node, i);
+                        bool dontEscape;
+                        if (bitVector->get(i)) {
+                            dontEscape = child->op() == Spread
+                                && child->child1().useKind() == ArrayUse
+                                && child->child1()->op() == CreateRest
+                                && isWatchingHavingABadTimeWatchpoint;
+                        } else
+                            dontEscape = false;
+
+                        if (!dontEscape)
+                            escape(child, node);
+                    }
+
+                    break;
+                }
+
+                case Spread: {
+                    bool isOK = node->child1().useKind() == ArrayUse && node->child1()->op() == CreateRest;
+                    if (!isOK)
+                        escape(node->child1(), node);
+                    break;
+                }
+
                     
                 case LoadVarargs:
+                    if (node->loadVarargsData()->offset && node->child1()->op() == NewArrayWithSpread)
+                        escape(node->child1(), node);
                     break;
                     
                 case CallVarargs:
@@ -214,6 +333,8 @@ private:
                 case TailCallVarargsInlinedCaller:
                     escape(node->child1(), node);
                     escape(node->child2(), node);
+                    if (node->callVarargsData()->firstVarArgOffset && node->child3()->op() == NewArrayWithSpread)
+                        escape(node->child3(), node);
                     break;
 
                 case Check:
@@ -263,6 +384,10 @@ private:
                         ASSERT(m_graph.isWatchingHavingABadTimeWatchpoint(node));
                         structure = globalObject->restParameterStructure();
                         break;
+                    case NewArrayWithSpread:
+                        ASSERT(m_graph.isWatchingHavingABadTimeWatchpoint(node));
+                        structure = globalObject->originalArrayStructureForIndexingType(ArrayWithContiguous);
+                        break;
                     default:
                         RELEASE_ASSERT_NOT_REACHED();
                     }
@@ -390,7 +515,7 @@ private:
                     if (nodeIndex == block->size() && candidate->owner != block) {
                         if (verbose)
                             dataLog("eliminating candidate: ", candidate, " because it is clobbered by: ", block->at(nodeIndex), "\n");
-                        m_candidates.remove(candidate);
+                        transitivelyRemoveCandidate(candidate);
                         return;
                     }
                     
@@ -417,7 +542,7 @@ private:
                         if (found) {
                             if (verbose)
                                 dataLog("eliminating candidate: ", candidate, " because it is clobbered by ", block->at(nodeIndex), "\n");
-                            m_candidates.remove(candidate);
+                            transitivelyRemoveCandidate(candidate);
                             return;
                         }
                     }
@@ -460,10 +585,11 @@ private:
                     // We traverse in such a way that we are guaranteed to see a def before a use.
                     // Therefore, we should have already transformed the allocation before the use
                     // of an allocation.
-                    ASSERT(candidate->op() == PhantomCreateRest || candidate->op() == PhantomDirectArguments || candidate->op() == PhantomClonedArguments);
+                    ASSERT(candidate->op() == PhantomCreateRest || candidate->op() == PhantomDirectArguments || candidate->op() == PhantomClonedArguments
+                        || candidate->op() == PhantomSpread || candidate->op() == PhantomNewArrayWithSpread);
                     return true;
                 };
-        
+
                 switch (node->op()) {
                 case CreateDirectArguments:
                     if (!m_candidates.contains(node))
@@ -488,6 +614,20 @@ private:
                     
                     node->setOpAndDefaultFlags(PhantomClonedArguments);
                     break;
+
+                case Spread:
+                    if (!m_candidates.contains(node))
+                        break;
+                    
+                    node->setOpAndDefaultFlags(PhantomSpread);
+                    break;
+
+                case NewArrayWithSpread:
+                    if (!m_candidates.contains(node))
+                        break;
+                    
+                    node->setOpAndDefaultFlags(PhantomNewArrayWithSpread);
+                    break;
                     
                 case GetFromArguments: {
                     Node* candidate = node->child1().node();
@@ -599,94 +739,185 @@ private:
                     if (!isEliminatedAllocation(candidate))
                         break;
                     
+                    // LoadVarargs can exit, so it better be exitOK.
+                    DFG_ASSERT(m_graph, node, node->origin.exitOK);
+                    bool canExit = true;
                     LoadVarargsData* varargsData = node->loadVarargsData();
-                    unsigned numberOfArgumentsToSkip = 0;
-                    if (candidate->op() == PhantomCreateRest)
-                        numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
-                    varargsData->offset += numberOfArgumentsToSkip;
 
-                    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+                    auto storeArgumentCountIncludingThis = [&] (unsigned argumentCountIncludingThis) {
+                        Node* argumentCountIncludingThisNode = insertionSet.insertConstant(
+                            nodeIndex, node->origin.withExitOK(canExit),
+                            jsNumber(argumentCountIncludingThis));
+                        insertionSet.insertNode(
+                            nodeIndex, SpecNone, MovHint, node->origin.takeValidExit(canExit),
+                            OpInfo(varargsData->count.offset()), Edge(argumentCountIncludingThisNode));
+                        insertionSet.insertNode(
+                            nodeIndex, SpecNone, PutStack, node->origin.withExitOK(canExit),
+                            OpInfo(m_graph.m_stackAccessData.add(varargsData->count, FlushedInt32)),
+                            Edge(argumentCountIncludingThisNode, KnownInt32Use));
+                    };
 
-                    if (inlineCallFrame
-                        && !inlineCallFrame->isVarargs()) {
+                    auto storeValue = [&] (Node* value, unsigned storeIndex) {
+                        VirtualRegister reg = varargsData->start + storeIndex;
+                        StackAccessData* data =
+                            m_graph.m_stackAccessData.add(reg, FlushedJSValue);
+                        
+                        insertionSet.insertNode(
+                            nodeIndex, SpecNone, MovHint, node->origin.takeValidExit(canExit),
+                            OpInfo(reg.offset()), Edge(value));
+                        insertionSet.insertNode(
+                            nodeIndex, SpecNone, PutStack, node->origin.withExitOK(canExit),
+                            OpInfo(data), Edge(value));
+                    };
 
-                        unsigned argumentCountIncludingThis = inlineCallFrame->arguments.size();
-                        if (argumentCountIncludingThis > varargsData->offset)
-                            argumentCountIncludingThis -= varargsData->offset;
-                        else
-                            argumentCountIncludingThis = 1;
-                        RELEASE_ASSERT(argumentCountIncludingThis >= 1);
+                    if (candidate->op() == PhantomNewArrayWithSpread) {
+                        bool canConvertToStaticLoadStores = true;
+                        BitVector* bitVector = candidate->bitVector();
 
-                        if (argumentCountIncludingThis <= varargsData->limit) {
-                            // LoadVarargs can exit, so it better be exitOK.
-                            DFG_ASSERT(m_graph, node, node->origin.exitOK);
-                            bool canExit = true;
-                            
-                            Node* argumentCountIncludingThisNode = insertionSet.insertConstant(
-                                nodeIndex, node->origin.withExitOK(canExit),
-                                jsNumber(argumentCountIncludingThis));
-                            insertionSet.insertNode(
-                                nodeIndex, SpecNone, MovHint, node->origin.takeValidExit(canExit),
-                                OpInfo(varargsData->count.offset()), Edge(argumentCountIncludingThisNode));
-                            insertionSet.insertNode(
-                                nodeIndex, SpecNone, PutStack, node->origin.withExitOK(canExit),
-                                OpInfo(m_graph.m_stackAccessData.add(varargsData->count, FlushedInt32)),
-                                Edge(argumentCountIncludingThisNode, KnownInt32Use));
-                            
-                            DFG_ASSERT(m_graph, node, varargsData->limit - 1 >= varargsData->mandatoryMinimum);
-                            // Define our limit to exclude "this", since that's a bit easier to reason about.
-                            unsigned limit = varargsData->limit - 1;
-                            Node* undefined = nullptr;
-                            for (unsigned storeIndex = 0; storeIndex < limit; ++storeIndex) {
-                                // First determine if we have an element we can load, and load it if
-                                // possible.
-                                
-                                unsigned loadIndex = storeIndex + varargsData->offset;
-                                
-                                Node* value;
-                                if (loadIndex + 1 < inlineCallFrame->arguments.size()) {
-                                    VirtualRegister reg = virtualRegisterForArgument(loadIndex + 1) + inlineCallFrame->stackOffset;
-                                    StackAccessData* data = m_graph.m_stackAccessData.add(
-                                        reg, FlushedJSValue);
-                                    
-                                    value = insertionSet.insertNode(
-                                        nodeIndex, SpecNone, GetStack, node->origin.withExitOK(canExit),
-                                        OpInfo(data));
-                                } else {
-                                    // FIXME: We shouldn't have to store anything if
-                                    // storeIndex >= varargsData->mandatoryMinimum, but we will still
-                                    // have GetStacks in that range. So if we don't do the stores, we'll
-                                    // have degenerate IR: we'll have GetStacks of something that didn't
-                                    // have PutStacks.
-                                    // https://bugs.webkit.org/show_bug.cgi?id=147434
-                                    
+                        for (unsigned i = 0; i < candidate->numChildren(); i++) {
+                            if (bitVector->get(i)) {
+                                Node* child = m_graph.varArgChild(candidate, i).node();
+                                ASSERT(child->op() == PhantomSpread && child->child1()->op() == PhantomCreateRest);
+                                InlineCallFrame* inlineCallFrame = child->child1()->origin.semantic.inlineCallFrame;
+                                if (!inlineCallFrame || inlineCallFrame->isVarargs()) {
+                                    canConvertToStaticLoadStores = false;
+                                    break;
+                                }
+                            }
+                        }
+
+                        if (canConvertToStaticLoadStores) {
+                            unsigned argumentCountIncludingThis = 1; // |this|
+                            for (unsigned i = 0; i < candidate->numChildren(); i++) {
+                                if (bitVector->get(i)) {
+                                    Node* child = m_graph.varArgChild(candidate, i).node();
+                                    ASSERT(child->op() == PhantomSpread && child->child1()->op() == PhantomCreateRest);
+                                    unsigned numberOfArgumentsToSkip = child->child1()->numberOfArgumentsToSkip();
+                                    InlineCallFrame* inlineCallFrame = child->child1()->origin.semantic.inlineCallFrame;
+                                    unsigned numberOfSpreadArguments;
+                                    unsigned frameArgumentCount = inlineCallFrame->arguments.size() - 1;
+                                    if (frameArgumentCount >= numberOfArgumentsToSkip)
+                                        numberOfSpreadArguments = frameArgumentCount - numberOfArgumentsToSkip;
+                                    else
+                                        numberOfSpreadArguments = 0;
+
+                                    argumentCountIncludingThis += numberOfSpreadArguments;
+                                } else
+                                    ++argumentCountIncludingThis;
+                            }
+
+                            if (argumentCountIncludingThis <= varargsData->limit) {
+                                storeArgumentCountIncludingThis(argumentCountIncludingThis);
+
+                                DFG_ASSERT(m_graph, node, varargsData->limit - 1 >= varargsData->mandatoryMinimum);
+                                // Define our limit to exclude "this", since that's a bit easier to reason about.
+                                unsigned limit = varargsData->limit - 1;
+                                unsigned storeIndex = 0;
+                                for (unsigned i = 0; i < candidate->numChildren(); i++) {
+                                    if (bitVector->get(i)) {
+                                        Node* child = m_graph.varArgChild(candidate, i).node();
+                                        ASSERT(child->op() == PhantomSpread && child->child1()->op() == PhantomCreateRest);
+                                        unsigned numberOfArgumentsToSkip = child->child1()->numberOfArgumentsToSkip();
+                                        InlineCallFrame* inlineCallFrame = child->child1()->origin.semantic.inlineCallFrame;
+                                        unsigned frameArgumentCount = inlineCallFrame->arguments.size() - 1;
+                                        for (unsigned loadIndex = numberOfArgumentsToSkip; loadIndex < frameArgumentCount; ++loadIndex) {
+                                            VirtualRegister reg = virtualRegisterForArgument(loadIndex + 1) + inlineCallFrame->stackOffset;
+                                            StackAccessData* data = m_graph.m_stackAccessData.add(reg, FlushedJSValue);
+                                            Node* value = insertionSet.insertNode(
+                                                nodeIndex, SpecNone, GetStack, node->origin.withExitOK(canExit),
+                                                OpInfo(data));
+                                            storeValue(value, storeIndex);
+                                            ++storeIndex;
+                                        }
+                                    } else {
+                                        Node* value = m_graph.varArgChild(candidate, i).node();
+                                        storeValue(value, storeIndex);
+                                        ++storeIndex;
+                                    }
+                                }
+
+                                RELEASE_ASSERT(storeIndex <= limit);
+                                Node* undefined = nullptr;
+                                for (; storeIndex < limit; ++storeIndex) {
                                     if (!undefined) {
                                         undefined = insertionSet.insertConstant(
                                             nodeIndex, node->origin.withExitOK(canExit), jsUndefined());
                                     }
-                                    value = undefined;
+                                    storeValue(undefined, storeIndex);
                                 }
-                                
-                                // Now that we have a value, store it.
-                                
-                                VirtualRegister reg = varargsData->start + storeIndex;
-                                StackAccessData* data =
-                                    m_graph.m_stackAccessData.add(reg, FlushedJSValue);
-                                
-                                insertionSet.insertNode(
-                                    nodeIndex, SpecNone, MovHint, node->origin.takeValidExit(canExit),
-                                    OpInfo(reg.offset()), Edge(value));
-                                insertionSet.insertNode(
-                                    nodeIndex, SpecNone, PutStack, node->origin.withExitOK(canExit),
-                                    OpInfo(data), Edge(value));
                             }
-                            
+
                             node->remove();
                             node->origin.exitOK = canExit;
                             break;
                         }
+                    } else {
+                        unsigned numberOfArgumentsToSkip = 0;
+                        if (candidate->op() == PhantomCreateRest)
+                            numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
+                        varargsData->offset += numberOfArgumentsToSkip;
+
+                        InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+
+                        if (inlineCallFrame
+                            && !inlineCallFrame->isVarargs()) {
+
+                            unsigned argumentCountIncludingThis = inlineCallFrame->arguments.size();
+                            if (argumentCountIncludingThis > varargsData->offset)
+                                argumentCountIncludingThis -= varargsData->offset;
+                            else
+                                argumentCountIncludingThis = 1;
+                            RELEASE_ASSERT(argumentCountIncludingThis >= 1);
+
+                            if (argumentCountIncludingThis <= varargsData->limit) {
+                                
+                                storeArgumentCountIncludingThis(argumentCountIncludingThis);
+
+                                DFG_ASSERT(m_graph, node, varargsData->limit - 1 >= varargsData->mandatoryMinimum);
+                                // Define our limit to exclude "this", since that's a bit easier to reason about.
+                                unsigned limit = varargsData->limit - 1;
+                                Node* undefined = nullptr;
+                                for (unsigned storeIndex = 0; storeIndex < limit; ++storeIndex) {
+                                    // First determine if we have an element we can load, and load it if
+                                    // possible.
+                                    
+                                    Node* value = nullptr;
+                                    unsigned loadIndex = storeIndex + varargsData->offset;
+
+                                    if (loadIndex + 1 < inlineCallFrame->arguments.size()) {
+                                        VirtualRegister reg = virtualRegisterForArgument(loadIndex + 1) + inlineCallFrame->stackOffset;
+                                        StackAccessData* data = m_graph.m_stackAccessData.add(
+                                            reg, FlushedJSValue);
+                                        
+                                        value = insertionSet.insertNode(
+                                            nodeIndex, SpecNone, GetStack, node->origin.withExitOK(canExit),
+                                            OpInfo(data));
+                                    } else {
+                                        // FIXME: We shouldn't have to store anything if
+                                        // storeIndex >= varargsData->mandatoryMinimum, but we will still
+                                        // have GetStacks in that range. So if we don't do the stores, we'll
+                                        // have degenerate IR: we'll have GetStacks of something that didn't
+                                        // have PutStacks.
+                                        // https://bugs.webkit.org/show_bug.cgi?id=147434
+                                        
+                                        if (!undefined) {
+                                            undefined = insertionSet.insertConstant(
+                                                nodeIndex, node->origin.withExitOK(canExit), jsUndefined());
+                                        }
+                                        value = undefined;
+                                    }
+                                    
+                                    // Now that we have a value, store it.
+                                    storeValue(value, storeIndex);
+                                }
+                                
+                                node->remove();
+                                node->origin.exitOK = canExit;
+                                break;
+                            }
+                        }
                     }
-                    
+
                     node->setOpAndDefaultFlags(ForwardVarargs);
                     break;
                 }
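The PhantomNewArrayWithSpread branch above is the payoff: when every spread child is a PhantomSpread of a PhantomCreateRest from a non-varargs inline call frame, the LoadVarargs collapses into a constant argument count plus direct GetStack/PutStack of the caller's arguments. The observable JS shape, in the spirit of the added load-varargs-on-new-array-with-spread-convert-to-static-loads.js test (code is illustrative):

```javascript
function baz(a, b, c, d) { return [a, b, c, d]; }
function bar(...args) {
    // `baz(0, ...args)` is a NewArrayWithSpread feeding a varargs call;
    // once bar is inlined into a caller with a statically known argument
    // count, the array is never allocated and the loads become static.
    return baz(0, ...args);
}
function foo(x, y) { return bar(x, y); }

const r = foo(1, 2); // [0, 1, 2, undefined]
```

Missing arguments beyond the forwarded count are filled with undefined, matching the PutStack-of-undefined loop in the transformation.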
@@ -698,27 +929,8 @@ private:
                     Node* candidate = node->child3().node();
                     if (!isEliminatedAllocation(candidate))
                         break;
-                    
-                    unsigned numberOfArgumentsToSkip = 0;
-                    if (candidate->op() == PhantomCreateRest)
-                        numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
-                    CallVarargsData* varargsData = node->callVarargsData();
-                    varargsData->firstVarArgOffset += numberOfArgumentsToSkip;
-
-                    InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
-                    if (inlineCallFrame && !inlineCallFrame->isVarargs()) {
-                        Vector<Node*> arguments;
-                        for (unsigned i = 1 + varargsData->firstVarArgOffset; i < inlineCallFrame->arguments.size(); ++i) {
-                            StackAccessData* data = m_graph.m_stackAccessData.add(
-                                virtualRegisterForArgument(i) + inlineCallFrame->stackOffset,
-                                FlushedJSValue);
-                            
-                            Node* value = insertionSet.insertNode(
-                                nodeIndex, SpecNone, GetStack, node->origin, OpInfo(data));
-                            
-                            arguments.append(value);
-                        }
-                        
+
+                    auto convertToStaticArgumentCountCall = [&] (const Vector<Node*>& arguments) {
                         unsigned firstChild = m_graph.m_varArgChildren.size();
                         m_graph.m_varArgChildren.append(node->child1());
                         m_graph.m_varArgChildren.append(node->child2());
@@ -743,25 +955,95 @@ private:
                         node->children = AdjacencyList(
                             AdjacencyList::Variable,
                             firstChild, m_graph.m_varArgChildren.size() - firstChild);
-                        break;
-                    }
+                    };
+
+                    auto convertToForwardsCall = [&] () {
+                        switch (node->op()) {
+                        case CallVarargs:
+                            node->setOpAndDefaultFlags(CallForwardVarargs);
+                            break;
+                        case ConstructVarargs:
+                            node->setOpAndDefaultFlags(ConstructForwardVarargs);
+                            break;
+                        case TailCallVarargs:
+                            node->setOpAndDefaultFlags(TailCallForwardVarargs);
+                            break;
+                        case TailCallVarargsInlinedCaller:
+                            node->setOpAndDefaultFlags(TailCallForwardVarargsInlinedCaller);
+                            break;
+                        default:
+                            RELEASE_ASSERT_NOT_REACHED();
+                        }
+                    };
                     
-                    switch (node->op()) {
-                    case CallVarargs:
-                        node->setOpAndDefaultFlags(CallForwardVarargs);
-                        break;
-                    case ConstructVarargs:
-                        node->setOpAndDefaultFlags(ConstructForwardVarargs);
-                        break;
-                    case TailCallVarargs:
-                        node->setOpAndDefaultFlags(TailCallForwardVarargs);
-                        break;
-                    case TailCallVarargsInlinedCaller:
-                        node->setOpAndDefaultFlags(TailCallForwardVarargsInlinedCaller);
-                        break;
-                    default:
-                        RELEASE_ASSERT_NOT_REACHED();
+                    if (candidate->op() == PhantomNewArrayWithSpread) {
+                        bool canTransformToStaticArgumentCountCall = true;
+                        BitVector* bitVector = candidate->bitVector();
+                        for (unsigned i = 0; i < candidate->numChildren(); i++) {
+                            if (bitVector->get(i)) {
+                                Node* node = m_graph.varArgChild(candidate, i).node();
+                                ASSERT(node->op() == PhantomSpread);
+                                ASSERT(node->child1()->op() == PhantomCreateRest);
+                                InlineCallFrame* inlineCallFrame = node->child1()->origin.semantic.inlineCallFrame;
+                                if (!inlineCallFrame || inlineCallFrame->isVarargs()) {
+                                    canTransformToStaticArgumentCountCall = false;
+                                    break;
+                                }
+                            }
+                        }
+
+                        if (canTransformToStaticArgumentCountCall) {
+                            Vector<Node*> arguments;
+                            for (unsigned i = 0; i < candidate->numChildren(); i++) {
+                                Node* child = m_graph.varArgChild(candidate, i).node();
+                                if (bitVector->get(i)) {
+                                    ASSERT(child->op() == PhantomSpread);
+                                    ASSERT(child->child1()->op() == PhantomCreateRest);
+                                    InlineCallFrame* inlineCallFrame = child->child1()->origin.semantic.inlineCallFrame;
+                                    unsigned numberOfArgumentsToSkip = child->child1()->numberOfArgumentsToSkip();
+                                    for (unsigned i = 1 + numberOfArgumentsToSkip; i < inlineCallFrame->arguments.size(); ++i) {
+                                        StackAccessData* data = m_graph.m_stackAccessData.add(
+                                            virtualRegisterForArgument(i) + inlineCallFrame->stackOffset,
+                                            FlushedJSValue);
+                                        
+                                        Node* value = insertionSet.insertNode(
+                                            nodeIndex, SpecNone, GetStack, node->origin, OpInfo(data));
+                                        
+                                        arguments.append(value);
+                                    }
+                                } else
+                                    arguments.append(child);
+                            }
+
+                            convertToStaticArgumentCountCall(arguments);
+                        } else
+                            convertToForwardsCall();
+                    } else {
+                        unsigned numberOfArgumentsToSkip = 0;
+                        if (candidate->op() == PhantomCreateRest)
+                            numberOfArgumentsToSkip = candidate->numberOfArgumentsToSkip();
+                        CallVarargsData* varargsData = node->callVarargsData();
+                        varargsData->firstVarArgOffset += numberOfArgumentsToSkip;
+
+                        InlineCallFrame* inlineCallFrame = candidate->origin.semantic.inlineCallFrame;
+                        if (inlineCallFrame && !inlineCallFrame->isVarargs()) {
+                            Vector<Node*> arguments;
+                            for (unsigned i = 1 + varargsData->firstVarArgOffset; i < inlineCallFrame->arguments.size(); ++i) {
+                                StackAccessData* data = m_graph.m_stackAccessData.add(
+                                    virtualRegisterForArgument(i) + inlineCallFrame->stackOffset,
+                                    FlushedJSValue);
+                                
+                                Node* value = insertionSet.insertNode(
+                                    nodeIndex, SpecNone, GetStack, node->origin, OpInfo(data));
+                                
+                                arguments.append(value);
+                            }
+                            
+                            convertToStaticArgumentCountCall(arguments);
+                        } else
+                            convertToForwardsCall();
                     }
+
                     break;
                 }
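The arguments-elimination rewrite above targets calls whose arguments node is a phantom spread of a rest parameter. As a hedged illustration (function names here are invented; this mirrors the shape of the added stress test spread-call-convert-to-static-call.js), the source pattern looks like:

```javascript
"use strict";

// `args` is a rest parameter that is only ever spread into a call.
// When `bar` is inlined from a call site with a statically known
// argument count, the phase above rewrites the varargs call into a
// static-argument-count call: the rest array is never allocated and
// the iterator protocol never runs.
function target(a, b, c) {
    return a + b + c;
}

function bar(...args) {
    return target(...args);
}

function foo() {
    return bar(1, 2, 3);
}

let result = 0;
for (let i = 0; i < 1e4; i++)
    result = foo();
```

When the inline call frame is varargs (argument count not statically known), the same pattern instead becomes a forwarding call via convertToForwardsCall().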
                     
index e52265a..a57cfce 100644
@@ -478,6 +478,8 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
         write(HeapObjectCount);
         return;
 
+    case PhantomSpread:
+    case PhantomNewArrayWithSpread:
     case PhantomCreateRest:
         // Even though it's phantom, it still has the property that one can't be replaced with another.
         read(HeapObjectCount);
@@ -1130,6 +1132,13 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
     case NewArrayWithSpread: {
         // This also reads from JSFixedArray's data store, but we don't have any way of describing that yet.
         read(HeapObjectCount);
+        for (unsigned i = 0; i < node->numChildren(); i++) {
+            Node* child = graph.varArgChild(node, i).node();
+            if (child->op() == PhantomSpread) {
+                read(Stack);
+                break;
+            }
+        }
         write(HeapObjectCount);
         return;
     }
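The clobberize change above records that a NewArrayWithSpread with a PhantomSpread child reads the stack. A hedged sketch of why (names invented): the array's elements come straight out of the caller's argument slots rather than out of a materialized rest array.

```javascript
// Illustrative only: because `args` is never materialized, the array
// literal below is assembled by reading the caller's argument slots
// directly off the stack frame, which is why the node must declare
// read(Stack).
function makeArray(...args) {
    return [0, ...args, 4];
}

const arr = makeArray(1, 2, 3);
```

The semantics are unchanged either way; the read(Stack) annotation only prevents reordering the node past writes to those slots.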
index 411881c..435450c 100644
@@ -248,6 +248,8 @@ bool doesGC(Graph& graph, Node* node)
     case PhantomCreateActivation:
     case PhantomDirectArguments:
     case PhantomCreateRest:
+    case PhantomNewArrayWithSpread:
+    case PhantomSpread:
     case PhantomClonedArguments:
     case GetMyArgumentByVal:
     case GetMyArgumentByValOutOfBounds:
index 3fb4a26..92833f0 100644
@@ -1439,6 +1439,8 @@ private:
         case PhantomCreateActivation:
         case PhantomDirectArguments:
         case PhantomCreateRest:
+        case PhantomSpread:
+        case PhantomNewArrayWithSpread:
         case PhantomClonedArguments:
         case GetMyArgumentByVal:
         case GetMyArgumentByValOutOfBounds:
index 2430d9c..ac13301 100644
@@ -153,7 +153,7 @@ void forAllKillsInBlock(
     for (Node* node : combinedLiveness.liveAtTail[block])
         functor(block->size(), node);
     
-    LocalOSRAvailabilityCalculator localAvailability;
+    LocalOSRAvailabilityCalculator localAvailability(graph);
     localAvailability.beginBlock(block);
     // Start at the second node, because the functor is expected to only inspect nodes from the start of
     // the block up to nodeIndex (exclusive), so if nodeIndex is zero then the functor has nothing to do.
index 36115de..b28157f 100644
@@ -459,7 +459,6 @@ public:
             
         case PhantomDirectArguments:
         case PhantomClonedArguments:
-        case PhantomCreateRest:
             // These pretend to be the empty value constant for the benefit of the DFG backend, which
             // otherwise wouldn't take kindly to a node that doesn't compute a value.
             return true;
@@ -473,7 +472,7 @@ public:
     {
         ASSERT(hasConstant());
         
-        if (op() == PhantomDirectArguments || op() == PhantomClonedArguments || op() == PhantomCreateRest) {
+        if (op() == PhantomDirectArguments || op() == PhantomClonedArguments) {
             // These pretend to be the empty value constant for the benefit of the DFG backend, which
             // otherwise wouldn't take kindly to a node that doesn't compute a value.
             return FrozenValue::emptySingleton();
@@ -1073,7 +1072,7 @@ public:
 
     BitVector* bitVector()
     {
-        ASSERT(op() == NewArrayWithSpread);
+        ASSERT(op() == NewArrayWithSpread || op() == PhantomNewArrayWithSpread);
         return m_opInfo.as<BitVector*>();
     }
 
@@ -1769,6 +1768,8 @@ public:
         case PhantomNewObject:
         case PhantomDirectArguments:
         case PhantomCreateRest:
+        case PhantomSpread:
+        case PhantomNewArrayWithSpread:
         case PhantomClonedArguments:
         case PhantomNewFunction:
         case PhantomNewGeneratorFunction:
index bff7fe3..6cb3692 100644
@@ -345,6 +345,8 @@ namespace JSC { namespace DFG {
     macro(CreateDirectArguments, NodeResultJS) \
     macro(PhantomDirectArguments, NodeResultJS | NodeMustGenerate) \
     macro(PhantomCreateRest, NodeResultJS | NodeMustGenerate) \
+    macro(PhantomSpread, NodeResultJS | NodeMustGenerate) \
+    macro(PhantomNewArrayWithSpread, NodeResultJS | NodeMustGenerate | NodeHasVarArgs) \
     macro(CreateScopedArguments, NodeResultJS) \
     macro(CreateClonedArguments, NodeResultJS) \
     macro(PhantomClonedArguments, NodeResultJS | NodeMustGenerate) \
index c188775..0359846 100644
@@ -66,7 +66,7 @@ public:
 
         // This could be made more efficient by processing blocks in reverse postorder.
         
-        LocalOSRAvailabilityCalculator calculator;
+        LocalOSRAvailabilityCalculator calculator(m_graph);
         bool changed;
         do {
             changed = false;
@@ -105,7 +105,8 @@ bool performOSRAvailabilityAnalysis(Graph& graph)
     return runPhase<OSRAvailabilityAnalysisPhase>(graph);
 }
 
-LocalOSRAvailabilityCalculator::LocalOSRAvailabilityCalculator()
+LocalOSRAvailabilityCalculator::LocalOSRAvailabilityCalculator(Graph& graph)
+    : m_graph(graph)
 {
 }
 
@@ -164,7 +165,7 @@ void LocalOSRAvailabilityCalculator::executeNode(Node* node)
         }
         break;
     }
-        
+    
     case PhantomCreateRest:
     case PhantomDirectArguments:
     case PhantomClonedArguments: {
@@ -208,6 +209,17 @@ void LocalOSRAvailabilityCalculator::executeNode(Node* node)
             Availability(node->child2().node()));
         break;
     }
+
+    case PhantomSpread:
+        m_availability.m_heap.set(PromotedHeapLocation(SpreadPLoc, node), Availability(node->child1().node()));
+        break;
+
+    case PhantomNewArrayWithSpread:
+        for (unsigned i = 0; i < node->numChildren(); i++) {
+            Node* child = m_graph.varArgChild(node, i).node();
+            m_availability.m_heap.set(PromotedHeapLocation(NewArrayWithSpreadArgumentPLoc, node, i), Availability(child));
+        }
+        break;
         
     default:
         break;
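The SpreadPLoc and NewArrayWithSpreadArgumentPLoc availabilities recorded above exist so that OSR exit can rematerialize a sunken spread array. A hedged sketch of the hazard (mirroring the added stress test phantom-new-array-with-spread-osr-exit.js; names invented):

```javascript
// If optimized code exits on the rare path while the spread array is
// still phantom, its elements must be recoverable from the recorded
// availability so `arr` is materialized with the right contents.
function bar(...args) {
    return [...args, 4];
}

function foo(rare) {
    const arr = bar(1, 2, 3);
    if (rare) {
        // Taking this path for the first time in optimized code can
        // force an OSR exit with `arr`'s allocation sunk.
        return arr.join("-");
    }
    return arr.length;
}

let result = 0;
for (let i = 0; i < 1e4; i++)
    result += foo(false);
const exitResult = foo(true);
```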
index 59b7e70..2433c04 100644
@@ -46,7 +46,7 @@ bool performOSRAvailabilityAnalysis(Graph&);
 // having run the availability analysis.
 class LocalOSRAvailabilityCalculator {
 public:
-    LocalOSRAvailabilityCalculator();
+    LocalOSRAvailabilityCalculator(Graph&);
     ~LocalOSRAvailabilityCalculator();
     
     void beginBlock(BasicBlock*);
@@ -54,6 +54,7 @@ public:
     void executeNode(Node*);
     
     AvailabilityMap m_availability;
+    Graph& m_graph;
 };
 
 } } // namespace JSC::DFG
index a752f00..b7bde89 100644
@@ -1644,7 +1644,7 @@ private:
 
         // Place Phis in the right places, replace all uses of any load with the appropriate
         // value, and create the materialization nodes.
-        LocalOSRAvailabilityCalculator availabilityCalculator;
+        LocalOSRAvailabilityCalculator availabilityCalculator(m_graph);
         m_graph.clearReplacements();
         for (BasicBlock* block : m_graph.blocksInPreOrder()) {
             m_heap = m_heapAtHead[block];
index 56bc6d4..baf0ad9 100644
@@ -106,41 +106,75 @@ private:
     
     void readTop()
     {
-        switch (m_node->op()) {
-        case GetMyArgumentByVal:
-        case GetMyArgumentByValOutOfBounds:
-        case ForwardVarargs:
-        case CallForwardVarargs:
-        case ConstructForwardVarargs:
-        case TailCallForwardVarargs:
-        case TailCallForwardVarargsInlinedCaller: {
-
-            InlineCallFrame* inlineCallFrame;
-            if (m_node->hasArgumentsChild() && m_node->argumentsChild())
-                inlineCallFrame = m_node->argumentsChild()->origin.semantic.inlineCallFrame;
-            else
-                inlineCallFrame = m_node->origin.semantic.inlineCallFrame;
-
-            unsigned numberOfArgumentsToSkip = 0;
-            if (m_node->op() == GetMyArgumentByVal || m_node->op() == GetMyArgumentByValOutOfBounds) {
-                // The value of numberOfArgumentsToSkip guarantees that GetMyArgumentByVal* will never
-                // read any arguments below the number of arguments to skip. For example, if numberOfArgumentsToSkip is 2,
-                // we will never read argument 0 or argument 1.
-                numberOfArgumentsToSkip = m_node->numberOfArgumentsToSkip();
-            }
-
+        auto readFrame = [&] (InlineCallFrame* inlineCallFrame, unsigned numberOfArgumentsToSkip) {
             if (!inlineCallFrame) {
                 // Read the outermost arguments and argument count.
                 for (unsigned i = 1 + numberOfArgumentsToSkip; i < static_cast<unsigned>(m_graph.m_codeBlock->numParameters()); i++)
                     m_read(virtualRegisterForArgument(i));
                 m_read(VirtualRegister(CallFrameSlot::argumentCount));
-                break;
+                return;
             }
             
             for (unsigned i = 1 + numberOfArgumentsToSkip; i < inlineCallFrame->arguments.size(); i++)
                 m_read(VirtualRegister(inlineCallFrame->stackOffset + virtualRegisterForArgument(i).offset()));
             if (inlineCallFrame->isVarargs())
                 m_read(VirtualRegister(inlineCallFrame->stackOffset + CallFrameSlot::argumentCount));
+        };
+
+        auto readNewArrayWithSpreadNode = [&] (Node* arrayWithSpread) {
+            ASSERT(arrayWithSpread->op() == NewArrayWithSpread || arrayWithSpread->op() == PhantomNewArrayWithSpread);
+            BitVector* bitVector = arrayWithSpread->bitVector();
+            for (unsigned i = 0; i < arrayWithSpread->numChildren(); i++) {
+                if (bitVector->get(i)) {
+                    Node* child = m_graph.varArgChild(arrayWithSpread, i).node();
+                    if (child->op() == PhantomSpread) {
+                        ASSERT(child->child1()->op() == PhantomCreateRest);
+                        InlineCallFrame* inlineCallFrame = child->child1()->origin.semantic.inlineCallFrame;
+                        unsigned numberOfArgumentsToSkip = child->child1()->numberOfArgumentsToSkip();
+                        readFrame(inlineCallFrame, numberOfArgumentsToSkip);
+                    }
+                }
+            }
+        };
+
+        bool isForwardingNode = false;
+        switch (m_node->op()) {
+        case ForwardVarargs:
+        case CallForwardVarargs:
+        case ConstructForwardVarargs:
+        case TailCallForwardVarargs:
+        case TailCallForwardVarargsInlinedCaller:
+            isForwardingNode = true;
+            FALLTHROUGH;
+        case GetMyArgumentByVal:
+        case GetMyArgumentByValOutOfBounds: {
+
+            if (isForwardingNode && m_node->hasArgumentsChild() && m_node->argumentsChild() && m_node->argumentsChild()->op() == PhantomNewArrayWithSpread) {
+                Node* arrayWithSpread = m_node->argumentsChild().node();
+                readNewArrayWithSpreadNode(arrayWithSpread);
+            } else {
+                InlineCallFrame* inlineCallFrame;
+                if (m_node->hasArgumentsChild() && m_node->argumentsChild())
+                    inlineCallFrame = m_node->argumentsChild()->origin.semantic.inlineCallFrame;
+                else
+                    inlineCallFrame = m_node->origin.semantic.inlineCallFrame;
+
+                unsigned numberOfArgumentsToSkip = 0;
+                if (m_node->op() == GetMyArgumentByVal || m_node->op() == GetMyArgumentByValOutOfBounds) {
+                    // The value of numberOfArgumentsToSkip guarantees that GetMyArgumentByVal* will never
+                    // read any arguments below the number of arguments to skip. For example, if numberOfArgumentsToSkip is 2,
+                    // we will never read argument 0 or argument 1.
+                    numberOfArgumentsToSkip = m_node->numberOfArgumentsToSkip();
+                }
+
+                readFrame(inlineCallFrame, numberOfArgumentsToSkip);
+            }
+
+            break;
+        }
+        
+        case NewArrayWithSpread: {
+            readNewArrayWithSpreadNode(m_node);
             break;
         }
 
index db9f31d..d980a4a 100644
@@ -1012,6 +1012,8 @@ private:
         case PhantomCreateActivation:
         case PhantomDirectArguments:
         case PhantomCreateRest:
+        case PhantomSpread:
+        case PhantomNewArrayWithSpread:
         case PhantomClonedArguments:
         case GetMyArgumentByVal:
         case GetMyArgumentByValOutOfBounds:
index 3b0f752..62dbb10 100644
@@ -114,6 +114,14 @@ void printInternal(PrintStream& out, PromotedLocationKind kind)
     case VectorLengthPLoc:
         out.print("VectorLengthPLoc");
         return;
+
+    case SpreadPLoc:
+        out.print("SpreadPLoc");
+        return;
+
+    case NewArrayWithSpreadArgumentPLoc:
+        out.print("NewArrayWithSpreadArgumentPLoc");
+        return;
     }
     
     RELEASE_ASSERT_NOT_REACHED();
index f3ae4b9..c4707f4 100644
@@ -61,7 +61,9 @@ enum PromotedLocationKind {
     NamedPropertyPLoc,
     PublicLengthPLoc,
     StructurePLoc,
-    VectorLengthPLoc
+    VectorLengthPLoc,
+    SpreadPLoc,
+    NewArrayWithSpreadArgumentPLoc,
 };
 
 class PromotedLocationDescriptor {
index ae632d3..c250d87 100644
@@ -360,6 +360,8 @@ bool safeToExecute(AbstractStateType& state, Graph& graph, Node* node)
     case MaterializeCreateActivation:
     case PhantomDirectArguments:
     case PhantomCreateRest:
+    case PhantomSpread:
+    case PhantomNewArrayWithSpread:
     case PhantomClonedArguments:
     case GetMyArgumentByVal:
     case GetMyArgumentByValOutOfBounds:
index 8d1bc0b..5d19713 100644
@@ -5616,6 +5616,8 @@ void SpeculativeJIT::compile(Node* node)
     case GetMyArgumentByVal:
     case GetMyArgumentByValOutOfBounds:
     case PhantomCreateRest:
+    case PhantomSpread:
+    case PhantomNewArrayWithSpread:
         DFG_CRASH(m_jit.graph(), node, "unexpected node in DFG backend");
         break;
     }
index e538fbb..1e6ddb4 100644
@@ -5840,6 +5840,8 @@ void SpeculativeJIT::compile(Node* node)
     case KillStack:
     case GetStack:
     case PhantomCreateRest:
+    case PhantomSpread:
+    case PhantomNewArrayWithSpread:
         DFG_CRASH(m_jit.graph(), node, "Unexpected node");
         break;
     }
index 733932f..f9c22bb 100644
@@ -693,6 +693,39 @@ private:
                     VALIDATE((node), node->child1()->isPhantomAllocation());
                     break;
 
+                case PhantomSpread:
+                    VALIDATE((node), m_graph.m_form == SSA);
+                    // We currently only support PhantomSpread over PhantomCreateRest.
+                    VALIDATE((node), node->child1()->op() == PhantomCreateRest);
+                    break;
+
+                case PhantomNewArrayWithSpread: {
+                    VALIDATE((node), m_graph.m_form == SSA);
+                    BitVector* bitVector = node->bitVector();
+                    for (unsigned i = 0; i < node->numChildren(); i++) {
+                        Node* child = m_graph.varArgChild(node, i).node();
+                        if (bitVector->get(i)) {
+                            // We currently only support PhantomSpread over PhantomCreateRest.
+                            VALIDATE((node), child->op() == PhantomSpread);
+                        } else
+                            VALIDATE((node), !child->isPhantomAllocation());
+                    }
+                    break;
+                }
+
+                case NewArrayWithSpread: {
+                    BitVector* bitVector = node->bitVector();
+                    for (unsigned i = 0; i < node->numChildren(); i++) {
+                        Node* child = m_graph.varArgChild(node, i).node();
+                        if (child->isPhantomAllocation()) {
+                            VALIDATE((node), bitVector->get(i));
+                            VALIDATE((node), m_graph.m_form == SSA);
+                            VALIDATE((node), child->op() == PhantomSpread);
+                        }
+                    }
+                    break;
+                }
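The validation rules added above encode the invariant that only spreads of a rest parameter may become phantom. A hedged contrast in source terms (names invented):

```javascript
// Only the first shape can become PhantomSpread(PhantomCreateRest):
// `rest` is a rest parameter whose allocation can be elided. Spreading
// an arbitrary iterable keeps a real Spread node, because its iterator
// protocol is observable and must actually run.
function phantomCandidate(...rest) {
    return [...rest];
}

function notACandidate(iterable) {
    return [...iterable];
}

const a = phantomCandidate(1, 2, 3);
const b = notACandidate([1, 2, 3]);
```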
+
                 default:
                     m_graph.doToChildren(
                         node,
index 24479b6..c00d8a2 100644
@@ -235,6 +235,8 @@ inline CapabilityLevel canCompile(Node* node)
     case MaterializeCreateActivation:
     case PhantomDirectArguments:
     case PhantomCreateRest:
+    case PhantomSpread:
+    case PhantomNewArrayWithSpread:
     case PhantomClonedArguments:
     case GetMyArgumentByVal:
     case GetMyArgumentByValOutOfBounds:
index 5edd9ea..c66e691 100644
@@ -134,6 +134,7 @@ public:
         , m_ftlState(state)
         , m_out(state)
         , m_proc(*state.proc)
+        , m_availabilityCalculator(m_graph)
         , m_state(state.graph)
         , m_interpreter(state.graph, m_state)
     {
@@ -1089,6 +1090,8 @@ private:
         case PhantomCreateActivation:
         case PhantomDirectArguments:
         case PhantomCreateRest:
+        case PhantomSpread:
+        case PhantomNewArrayWithSpread:
         case PhantomClonedArguments:
         case PutHint:
         case BottomValue:
@@ -4331,6 +4334,8 @@ private:
         if (m_graph.isWatchingHavingABadTimeWatchpoint(m_node)) {
             unsigned startLength = 0;
             BitVector* bitVector = m_node->bitVector();
+            HashMap<InlineCallFrame*, LValue, WTF::DefaultHash<InlineCallFrame*>::Hash, WTF::NullableHashTraits<InlineCallFrame*>> cachedSpreadLengths;
+
             for (unsigned i = 0; i < m_node->numChildren(); ++i) {
                 if (!bitVector->get(i))
                     ++startLength;
@@ -4341,8 +4346,18 @@ private:
             for (unsigned i = 0; i < m_node->numChildren(); ++i) {
                 if (bitVector->get(i)) {
                     Edge use = m_graph.varArgChild(m_node, i);
-                    LValue fixedArray = lowCell(use);
-                    length = m_out.add(length, m_out.load32(fixedArray, m_heaps.JSFixedArray_size));
+                    if (use->op() == PhantomSpread) {
+                        RELEASE_ASSERT(use->child1()->op() == PhantomCreateRest);
+                        InlineCallFrame* inlineCallFrame = use->child1()->origin.semantic.inlineCallFrame;
+                        unsigned numberOfArgumentsToSkip = use->child1()->numberOfArgumentsToSkip();
+                        LValue spreadLength = cachedSpreadLengths.ensure(inlineCallFrame, [&] () {
+                            return getSpreadLengthFromInlineCallFrame(inlineCallFrame, numberOfArgumentsToSkip);
+                        }).iterator->value;
+                        length = m_out.add(length, spreadLength);
+                    } else {
+                        LValue fixedArray = lowCell(use);
+                        length = m_out.add(length, m_out.load32(fixedArray, m_heaps.JSFixedArray_size));
+                    }
                 }
             }
 
@@ -4355,42 +4370,84 @@ private:
             for (unsigned i = 0; i < m_node->numChildren(); ++i) {
                 Edge use = m_graph.varArgChild(m_node, i);
                 if (bitVector->get(i)) {
-                    LBasicBlock loopStart = m_out.newBlock();
-                    LBasicBlock continuation = m_out.newBlock();
+                    if (use->op() == PhantomSpread) {
+                        RELEASE_ASSERT(use->child1()->op() == PhantomCreateRest);
+                        InlineCallFrame* inlineCallFrame = use->child1()->origin.semantic.inlineCallFrame;
+                        unsigned numberOfArgumentsToSkip = use->child1()->numberOfArgumentsToSkip();
 
-                    LValue fixedArray = lowCell(use);
+                        LValue length = m_out.zeroExtPtr(cachedSpreadLengths.get(inlineCallFrame));
+                        LValue sourceStart = getArgumentsStart(inlineCallFrame, numberOfArgumentsToSkip);
 
-                    ValueFromBlock fixedIndexStart = m_out.anchor(m_out.constIntPtr(0));
-                    ValueFromBlock arrayIndexStart = m_out.anchor(index);
-                    ValueFromBlock arrayIndexStartForFinish = m_out.anchor(index);
+                        LBasicBlock loopStart = m_out.newBlock();
+                        LBasicBlock continuation = m_out.newBlock();
 
-                    LValue fixedArraySize = m_out.zeroExtPtr(m_out.load32(fixedArray, m_heaps.JSFixedArray_size));
+                        ValueFromBlock loadIndexStart = m_out.anchor(m_out.constIntPtr(0));
+                        ValueFromBlock arrayIndexStart = m_out.anchor(index);
+                        ValueFromBlock arrayIndexStartForFinish = m_out.anchor(index);
 
-                    m_out.branch(
-                        m_out.isZero64(fixedArraySize),
-                        unsure(continuation), unsure(loopStart));
+                        m_out.branch(
+                            m_out.isZero64(length),
+                            unsure(continuation), unsure(loopStart));
 
-                    LBasicBlock lastNext = m_out.appendTo(loopStart, continuation);
+                        LBasicBlock lastNext = m_out.appendTo(loopStart, continuation);
 
-                    LValue arrayIndex = m_out.phi(pointerType(), arrayIndexStart);
-                    LValue fixedArrayIndex = m_out.phi(pointerType(), fixedIndexStart);
+                        LValue arrayIndex = m_out.phi(pointerType(), arrayIndexStart);
+                        LValue loadIndex = m_out.phi(pointerType(), loadIndexStart);
 
-                    LValue item = m_out.load64(m_out.baseIndex(m_heaps.JSFixedArray_buffer, fixedArray, fixedArrayIndex));
-                    m_out.store64(item, m_out.baseIndex(m_heaps.indexedContiguousProperties, storage, arrayIndex));
+                        LValue item = m_out.load64(m_out.baseIndex(m_heaps.variables, sourceStart, loadIndex));
+                        m_out.store64(item, m_out.baseIndex(m_heaps.indexedContiguousProperties, storage, arrayIndex));
 
-                    LValue nextArrayIndex = m_out.add(arrayIndex, m_out.constIntPtr(1));
-                    LValue nextFixedArrayIndex = m_out.add(fixedArrayIndex, m_out.constIntPtr(1));
-                    ValueFromBlock arrayIndexLoopForFinish = m_out.anchor(nextArrayIndex);
+                        LValue nextArrayIndex = m_out.add(arrayIndex, m_out.constIntPtr(1));
+                        LValue nextLoadIndex = m_out.add(loadIndex, m_out.constIntPtr(1));
+                        ValueFromBlock arrayIndexLoopForFinish = m_out.anchor(nextArrayIndex);
 
-                    m_out.addIncomingToPhi(fixedArrayIndex, m_out.anchor(nextFixedArrayIndex));
-                    m_out.addIncomingToPhi(arrayIndex, m_out.anchor(nextArrayIndex));
+                        m_out.addIncomingToPhi(loadIndex, m_out.anchor(nextLoadIndex));
+                        m_out.addIncomingToPhi(arrayIndex, m_out.anchor(nextArrayIndex));
 
-                    m_out.branch(
-                        m_out.below(nextFixedArrayIndex, fixedArraySize),
-                        unsure(loopStart), unsure(continuation));
+                        m_out.branch(
+                            m_out.below(nextLoadIndex, length),
+                            unsure(loopStart), unsure(continuation));
 
-                    m_out.appendTo(continuation, lastNext);
-                    index = m_out.phi(pointerType(), arrayIndexStartForFinish, arrayIndexLoopForFinish);
+                        m_out.appendTo(continuation, lastNext);
+                        index = m_out.phi(pointerType(), arrayIndexStartForFinish, arrayIndexLoopForFinish);
+                    } else {
+                        LBasicBlock loopStart = m_out.newBlock();
+                        LBasicBlock continuation = m_out.newBlock();
+
+                        LValue fixedArray = lowCell(use);
+
+                        ValueFromBlock fixedIndexStart = m_out.anchor(m_out.constIntPtr(0));
+                        ValueFromBlock arrayIndexStart = m_out.anchor(index);
+                        ValueFromBlock arrayIndexStartForFinish = m_out.anchor(index);
+
+                        LValue fixedArraySize = m_out.zeroExtPtr(m_out.load32(fixedArray, m_heaps.JSFixedArray_size));
+
+                        m_out.branch(
+                            m_out.isZero64(fixedArraySize),
+                            unsure(continuation), unsure(loopStart));
+
+                        LBasicBlock lastNext = m_out.appendTo(loopStart, continuation);
+
+                        LValue arrayIndex = m_out.phi(pointerType(), arrayIndexStart);
+                        LValue fixedArrayIndex = m_out.phi(pointerType(), fixedIndexStart);
+
+                        LValue item = m_out.load64(m_out.baseIndex(m_heaps.JSFixedArray_buffer, fixedArray, fixedArrayIndex));
+                        m_out.store64(item, m_out.baseIndex(m_heaps.indexedContiguousProperties, storage, arrayIndex));
+
+                        LValue nextArrayIndex = m_out.add(arrayIndex, m_out.constIntPtr(1));
+                        LValue nextFixedArrayIndex = m_out.add(fixedArrayIndex, m_out.constIntPtr(1));
+                        ValueFromBlock arrayIndexLoopForFinish = m_out.anchor(nextArrayIndex);
+
+                        m_out.addIncomingToPhi(fixedArrayIndex, m_out.anchor(nextFixedArrayIndex));
+                        m_out.addIncomingToPhi(arrayIndex, m_out.anchor(nextArrayIndex));
+
+                        m_out.branch(
+                            m_out.below(nextFixedArrayIndex, fixedArraySize),
+                            unsure(loopStart), unsure(continuation));
+
+                        m_out.appendTo(continuation, lastNext);
+                        index = m_out.phi(pointerType(), arrayIndexStartForFinish, arrayIndexLoopForFinish);
+                    }
                 } else {
                     IndexedAbstractHeap& heap = m_heaps.indexedContiguousProperties;
                     LValue item = lowJSValue(use);
@@ -4428,6 +4485,12 @@ private:
 
     void compileSpread()
     {
+        // It would be trivial to support this, but for now, we never create
+        // IR that would necessitate this. The reason is that Spread is only
+        // consumed by NewArrayWithSpread and never anything else. Also, any
+        // Spread(PhantomCreateRest) will turn into PhantomSpread(PhantomCreateRest).
+        RELEASE_ASSERT(m_node->child1()->op() != PhantomCreateRest);
+
         LValue argument = lowCell(m_node->child1());
 
         LValue result;
@@ -6131,6 +6194,272 @@ private:
             });
     }
     
+    void compileCallOrConstructVarargsSpread()
+    {
+        Node* node = m_node;
+        LValue jsCallee = lowJSValue(m_node->child1());
+        LValue thisArg = lowJSValue(m_node->child2());
+
+        RELEASE_ASSERT(node->child3()->op() == PhantomNewArrayWithSpread);
+        Node* arrayWithSpread = node->child3().node();
+        BitVector* bitVector = arrayWithSpread->bitVector();
+        unsigned numNonSpreadParameters = 0;
+        Vector<LValue, 2> spreadLengths;
+        Vector<LValue, 8> patchpointArguments;
+        HashMap<InlineCallFrame*, LValue, WTF::DefaultHash<InlineCallFrame*>::Hash, WTF::NullableHashTraits<InlineCallFrame*>> cachedSpreadLengths;
+
+        for (unsigned i = 0; i < arrayWithSpread->numChildren(); i++) {
+            if (bitVector->get(i)) {
+                Node* spread = m_graph.varArgChild(arrayWithSpread, i).node();
+                RELEASE_ASSERT(spread->op() == PhantomSpread);
+                RELEASE_ASSERT(spread->child1()->op() == PhantomCreateRest);
+                InlineCallFrame* inlineCallFrame = spread->child1()->origin.semantic.inlineCallFrame;
+                unsigned numberOfArgumentsToSkip = spread->child1()->numberOfArgumentsToSkip();
+                LValue length = cachedSpreadLengths.ensure(inlineCallFrame, [&] () {
+                    return m_out.zeroExtPtr(getSpreadLengthFromInlineCallFrame(inlineCallFrame, numberOfArgumentsToSkip));
+                }).iterator->value;
+                patchpointArguments.append(length);
+                spreadLengths.append(length);
+            } else {
+                ++numNonSpreadParameters;
+                LValue argument = lowJSValue(m_graph.varArgChild(arrayWithSpread, i));
+                patchpointArguments.append(argument);
+            }
+        }
+
+        LValue argumentCountIncludingThis = m_out.constIntPtr(numNonSpreadParameters + 1);
+        for (LValue length : spreadLengths)
+            argumentCountIncludingThis = m_out.add(length, argumentCountIncludingThis);
+        
+        PatchpointValue* patchpoint = m_out.patchpoint(Int64);
+
+        patchpoint->append(jsCallee, ValueRep::reg(GPRInfo::regT0));
+        patchpoint->append(thisArg, ValueRep::WarmAny);
+        patchpoint->append(argumentCountIncludingThis, ValueRep::WarmAny);
+        patchpoint->appendVectorWithRep(patchpointArguments, ValueRep::WarmAny);
+        patchpoint->append(m_tagMask, ValueRep::reg(GPRInfo::tagMaskRegister));
+        patchpoint->append(m_tagTypeNumber, ValueRep::reg(GPRInfo::tagTypeNumberRegister));
+
+        RefPtr<PatchpointExceptionHandle> exceptionHandle = preparePatchpointForExceptions(patchpoint);
+
+        patchpoint->clobber(RegisterSet::macroScratchRegisters());
+        patchpoint->clobber(RegisterSet::volatileRegistersForJSCall()); // No inputs will be in a volatile register.
+        patchpoint->resultConstraint = ValueRep::reg(GPRInfo::returnValueGPR);
+
+        patchpoint->numGPScratchRegisters = 0;
+
+        // This is the minimum amount of call arg area stack space that all JS->JS calls always have.
+        unsigned minimumJSCallAreaSize =
+            sizeof(CallerFrameAndPC) +
+            WTF::roundUpToMultipleOf(stackAlignmentBytes(), 5 * sizeof(EncodedJSValue));
+
+        m_proc.requestCallArgAreaSizeInBytes(minimumJSCallAreaSize);
+        
+        CodeOrigin codeOrigin = codeOriginDescriptionOfCallSite();
+        State* state = &m_ftlState;
+        patchpoint->setGenerator(
+            [=] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+                AllowMacroScratchRegisterUsage allowScratch(jit);
+                CallSiteIndex callSiteIndex =
+                    state->jitCode->common.addUniqueCallSiteIndex(codeOrigin);
+
+                Box<CCallHelpers::JumpList> exceptions =
+                    exceptionHandle->scheduleExitCreation(params)->jumps(jit);
+
+                exceptionHandle->scheduleExitCreationForUnwind(params, callSiteIndex);
+
+                jit.store32(
+                    CCallHelpers::TrustedImm32(callSiteIndex.bits()),
+                    CCallHelpers::tagFor(VirtualRegister(CallFrameSlot::argumentCount)));
+
+                CallLinkInfo* callLinkInfo = jit.codeBlock()->addCallLinkInfo();
+
+                RegisterSet usedRegisters = RegisterSet::allRegisters();
+                usedRegisters.exclude(RegisterSet::volatileRegistersForJSCall());
+                GPRReg calleeGPR = params[1].gpr();
+                usedRegisters.set(calleeGPR);
+
+                ScratchRegisterAllocator allocator(usedRegisters);
+                GPRReg scratchGPR1 = allocator.allocateScratchGPR();
+                GPRReg scratchGPR2 = allocator.allocateScratchGPR();
+                GPRReg scratchGPR3 = allocator.allocateScratchGPR();
+                GPRReg scratchGPR4 = allocator.allocateScratchGPR();
+                RELEASE_ASSERT(!allocator.numberOfReusedRegisters());
+
+                auto getValueFromRep = [&] (B3::ValueRep rep, GPRReg result) {
+                    ASSERT(!usedRegisters.get(result));
+
+                    if (rep.isConstant()) {
+                        jit.move(CCallHelpers::Imm64(rep.value()), result);
+                        return;
+                    }
+
+                    // Note: in this function, we only request 64-bit values.
+                    if (rep.isStack()) {
+                        jit.load64(
+                            CCallHelpers::Address(GPRInfo::callFrameRegister, rep.offsetFromFP()),
+                            result);
+                        return;
+                    }
+
+                    RELEASE_ASSERT(rep.isGPR());
+                    ASSERT(usedRegisters.get(rep.gpr()));
+                    jit.move(rep.gpr(), result);
+                };
+
+                auto callWithExceptionCheck = [&] (void* callee) {
+                    jit.move(CCallHelpers::TrustedImmPtr(callee), GPRInfo::nonPreservedNonArgumentGPR);
+                    jit.call(GPRInfo::nonPreservedNonArgumentGPR);
+                    exceptions->append(jit.emitExceptionCheck(AssemblyHelpers::NormalExceptionCheck, AssemblyHelpers::FarJumpWidth));
+                };
+
+                auto adjustStack = [&] (GPRReg amount) {
+                    jit.addPtr(CCallHelpers::TrustedImm32(sizeof(CallerFrameAndPC)), amount, CCallHelpers::stackPointerRegister);
+                };
+
+                CCallHelpers::JumpList slowCase;
+                unsigned originalStackHeight = params.proc().frameSize();
+
+                {
+                    unsigned numUsedSlots = WTF::roundUpToMultipleOf(stackAlignmentRegisters(), originalStackHeight / sizeof(EncodedJSValue));
+                    B3::ValueRep argumentCountIncludingThisRep = params[3];
+                    getValueFromRep(argumentCountIncludingThisRep, scratchGPR2);
+                    slowCase.append(jit.branch32(CCallHelpers::Above, scratchGPR2, CCallHelpers::TrustedImm32(JSC::maxArguments + 1)));
+                    
+                    jit.move(scratchGPR2, scratchGPR1);
+                    jit.addPtr(CCallHelpers::TrustedImmPtr(static_cast<size_t>(numUsedSlots + CallFrame::headerSizeInRegisters)), scratchGPR1);
+                    // scratchGPR1 now has the required frame size in Register units
+                    // Round scratchGPR1 to next multiple of stackAlignmentRegisters()
+                    jit.addPtr(CCallHelpers::TrustedImm32(stackAlignmentRegisters() - 1), scratchGPR1);
+                    jit.andPtr(CCallHelpers::TrustedImm32(~(stackAlignmentRegisters() - 1)), scratchGPR1);
+                    jit.negPtr(scratchGPR1);
+                    jit.lshiftPtr(CCallHelpers::Imm32(3), scratchGPR1);
+                    jit.addPtr(GPRInfo::callFrameRegister, scratchGPR1);
+
+                    jit.store32(scratchGPR2, CCallHelpers::Address(scratchGPR1, CallFrameSlot::argumentCount * static_cast<int>(sizeof(Register)) + PayloadOffset));
+
+                    int storeOffset = CallFrame::thisArgumentOffset() * static_cast<int>(sizeof(Register));
+
+                    for (unsigned i = arrayWithSpread->numChildren(); i--; ) {
+                        unsigned paramsOffset = 4;
+
+                        if (bitVector->get(i)) {
+                            Node* spread = state->graph.varArgChild(arrayWithSpread, i).node();
+                            RELEASE_ASSERT(spread->op() == PhantomSpread);
+                            RELEASE_ASSERT(spread->child1()->op() == PhantomCreateRest);
+                            InlineCallFrame* inlineCallFrame = spread->child1()->origin.semantic.inlineCallFrame;
+
+                            unsigned numberOfArgumentsToSkip = spread->child1()->numberOfArgumentsToSkip();
+
+                            B3::ValueRep numArgumentsToCopy = params[paramsOffset + i];
+                            getValueFromRep(numArgumentsToCopy, scratchGPR3);
+                            int loadOffset = (AssemblyHelpers::argumentsStart(inlineCallFrame).offset() + numberOfArgumentsToSkip) * static_cast<int>(sizeof(Register));
+
+                            auto done = jit.branchTestPtr(MacroAssembler::Zero, scratchGPR3);
+                            auto loopStart = jit.label();
+                            jit.subPtr(CCallHelpers::TrustedImmPtr(static_cast<size_t>(1)), scratchGPR3);
+                            jit.subPtr(CCallHelpers::TrustedImmPtr(static_cast<size_t>(1)), scratchGPR2);
+                            jit.load64(CCallHelpers::BaseIndex(GPRInfo::callFrameRegister, scratchGPR3, CCallHelpers::TimesEight, loadOffset), scratchGPR4);
+                            jit.store64(scratchGPR4,
+                                CCallHelpers::BaseIndex(scratchGPR1, scratchGPR2, CCallHelpers::TimesEight, storeOffset));
+                            jit.branchTestPtr(CCallHelpers::NonZero, scratchGPR3).linkTo(loopStart, &jit);
+                            done.link(&jit);
+                        } else {
+                            jit.subPtr(CCallHelpers::TrustedImmPtr(static_cast<size_t>(1)), scratchGPR2);
+                            getValueFromRep(params[paramsOffset + i], scratchGPR3);
+                            jit.store64(scratchGPR3,
+                                CCallHelpers::BaseIndex(scratchGPR1, scratchGPR2, CCallHelpers::TimesEight, storeOffset));
+                        }
+                    }
+                }
+
+                {
+                    CCallHelpers::Jump dontThrow = jit.jump();
+                    slowCase.link(&jit);
+                    jit.setupArgumentsExecState();
+                    callWithExceptionCheck(bitwise_cast<void*>(operationThrowStackOverflowForVarargs));
+                    jit.abortWithReason(DFGVarargsThrowingPathDidNotThrow);
+                    
+                    dontThrow.link(&jit);
+                }
+
+                adjustStack(scratchGPR1);
+                
+                ASSERT(calleeGPR == GPRInfo::regT0);
+                jit.store64(calleeGPR, CCallHelpers::calleeFrameSlot(CallFrameSlot::callee));
+                getValueFromRep(params[2], scratchGPR3);
+                jit.store64(scratchGPR3, CCallHelpers::calleeArgumentSlot(0));
+                
+                CallLinkInfo::CallType callType;
+                if (node->op() == ConstructVarargs || node->op() == ConstructForwardVarargs)
+                    callType = CallLinkInfo::ConstructVarargs;
+                else if (node->op() == TailCallVarargs || node->op() == TailCallForwardVarargs)
+                    callType = CallLinkInfo::TailCallVarargs;
+                else
+                    callType = CallLinkInfo::CallVarargs;
+                
+                bool isTailCall = CallLinkInfo::callModeFor(callType) == CallMode::Tail;
+                
+                CCallHelpers::DataLabelPtr targetToCheck;
+                CCallHelpers::Jump slowPath = jit.branchPtrWithPatch(
+                    CCallHelpers::NotEqual, GPRInfo::regT0, targetToCheck,
+                    CCallHelpers::TrustedImmPtr(nullptr));
+                
+                CCallHelpers::Call fastCall;
+                CCallHelpers::Jump done;
+                
+                if (isTailCall) {
+                    jit.emitRestoreCalleeSaves();
+                    jit.prepareForTailCallSlow();
+                    fastCall = jit.nearTailCall();
+                } else {
+                    fastCall = jit.nearCall();
+                    done = jit.jump();
+                }
+                
+                slowPath.link(&jit);
+
+                if (isTailCall)
+                    jit.emitRestoreCalleeSaves();
+                ASSERT(!usedRegisters.get(GPRInfo::regT2));
+                jit.move(CCallHelpers::TrustedImmPtr(callLinkInfo), GPRInfo::regT2);
+                CCallHelpers::Call slowCall = jit.nearCall();
+                
+                if (isTailCall)
+                    jit.abortWithReason(JITDidReturnFromTailCall);
+                else
+                    done.link(&jit);
+                
+                callLinkInfo->setUpCall(callType, node->origin.semantic, GPRInfo::regT0);
+
+                jit.addPtr(
+                    CCallHelpers::TrustedImm32(-originalStackHeight),
+                    GPRInfo::callFrameRegister, CCallHelpers::stackPointerRegister);
+                
+                jit.addLinkTask(
+                    [=] (LinkBuffer& linkBuffer) {
+                        MacroAssemblerCodePtr linkCall =
+                            linkBuffer.vm().getCTIStub(linkCallThunkGenerator).code();
+                        linkBuffer.link(slowCall, FunctionPtr(linkCall.executableAddress()));
+                        
+                        callLinkInfo->setCallLocations(
+                            CodeLocationLabel(linkBuffer.locationOfNearCall(slowCall)),
+                            CodeLocationLabel(linkBuffer.locationOf(targetToCheck)),
+                            linkBuffer.locationOfNearCall(fastCall));
+                    });
+            });
+
+        switch (node->op()) {
+        case TailCallForwardVarargs:
+            m_out.unreachable();
+            break;
+
+        default:
+            setJSValue(patchpoint);
+            break;
+        }
+    }
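For context, here is a minimal JavaScript sketch (my own illustration, modeled on the added stress tests such as stress/call-varargs-spread.js; the function names are hypothetical) of the call pattern that compileCallOrConstructVarargsSpread handles: a call whose argument list is a spread of the caller's rest parameter, which lets the FTL forward the arguments from the caller's frame instead of allocating the rest array.

```javascript
"use strict";

function bar(a, b, c) {
    // Receives the forwarded arguments directly; with this patch the FTL
    // copies them out of the caller's frame rather than materializing `args`.
    return a + b + c;
}

function foo(...args) {
    // Spreading a rest parameter becomes PhantomSpread(PhantomCreateRest)
    // inside a forwarded varargs call, so no array is allocated and the
    // iterator protocol never runs.
    return bar(...args);
}

// Warm up so the function can tier up to the FTL in a real jsc run.
let result = 0;
for (let i = 0; i < 10000; ++i)
    result = foo(1, 2, 3);
```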
+
     void compileCallOrConstructVarargs()
     {
         Node* node = m_node;
@@ -6157,6 +6486,12 @@ private:
             DFG_CRASH(m_graph, node, "bad node type");
             break;
         }
+
+        if (forwarding && m_node->child3() && m_node->child3()->op() == PhantomNewArrayWithSpread) {
+            compileCallOrConstructVarargsSpread();
+            return;
+        }
+
         
         PatchpointValue* patchpoint = m_out.patchpoint(Int64);
 
@@ -6530,6 +6865,11 @@ private:
     
     void compileForwardVarargs()
     {
+        if (m_node->child1() && m_node->child1()->op() == PhantomNewArrayWithSpread) {
+            compileForwardVarargsWithSpread();
+            return;
+        }
+
         LoadVarargsData* data = m_node->loadVarargsData();
         InlineCallFrame* inlineCallFrame;
         if (m_node->child1())
@@ -6618,6 +6958,135 @@ private:
         m_out.appendTo(continuation, lastNext);
     }
 
+    LValue getSpreadLengthFromInlineCallFrame(InlineCallFrame* inlineCallFrame, unsigned numberOfArgumentsToSkip)
+    {
+        ArgumentsLength argumentsLength = getArgumentsLength(inlineCallFrame);
+        if (argumentsLength.isKnown) {
+            unsigned knownLength = argumentsLength.known;
+            if (knownLength >= numberOfArgumentsToSkip)
+                knownLength = knownLength - numberOfArgumentsToSkip;
+            else
+                knownLength = 0;
+            return m_out.constInt32(knownLength);
+        }
+
+
+        // We need to perform the same logical operation as the code above, but through dynamic operations.
+        if (!numberOfArgumentsToSkip)
+            return argumentsLength.value;
+
+        LBasicBlock isLarger = m_out.newBlock();
+        LBasicBlock continuation = m_out.newBlock();
+
+        ValueFromBlock smallerOrEqualLengthResult = m_out.anchor(m_out.constInt32(0));
+        m_out.branch(
+            m_out.above(argumentsLength.value, m_out.constInt32(numberOfArgumentsToSkip)), unsure(isLarger), unsure(continuation));
+        LBasicBlock lastNext = m_out.appendTo(isLarger, continuation);
+        ValueFromBlock largerLengthResult = m_out.anchor(m_out.sub(argumentsLength.value, m_out.constInt32(numberOfArgumentsToSkip)));
+        m_out.jump(continuation);
+
+        m_out.appendTo(continuation, lastNext);
+        return m_out.phi(Int32, smallerOrEqualLengthResult, largerLengthResult);
+    }
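The clamping in getSpreadLengthFromInlineCallFrame mirrors the observable rule for a rest parameter that follows named parameters: the rest length is max(0, argumentCount - numberOfArgumentsToSkip). A plain-JS sketch (my illustration, not from the patch) of that rule:

```javascript
"use strict";

// Here numberOfArgumentsToSkip is 2 (the named parameters `a` and `b`).
// Passing fewer than 2 arguments must clamp the rest length to 0, which is
// exactly what both the constant-folded and dynamic paths above compute.
function restLength(a, b, ...rest) {
    return rest.length;
}
```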
+
+    void compileForwardVarargsWithSpread()
+    {
+        HashMap<InlineCallFrame*, LValue, WTF::DefaultHash<InlineCallFrame*>::Hash, WTF::NullableHashTraits<InlineCallFrame*>> cachedSpreadLengths;
+
+        Node* arrayWithSpread = m_node->child1().node();
+        RELEASE_ASSERT(arrayWithSpread->op() == PhantomNewArrayWithSpread);
+        BitVector* bitVector = arrayWithSpread->bitVector();
+
+        unsigned numberOfStaticArguments = 0;
+        Vector<LValue, 2> spreadLengths;
+        for (unsigned i = 0; i < arrayWithSpread->numChildren(); i++) {
+            if (bitVector->get(i)) {
+                Node* child = m_graph.varArgChild(arrayWithSpread, i).node();
+                ASSERT(child->op() == PhantomSpread);
+                ASSERT(child->child1()->op() == PhantomCreateRest);
+                InlineCallFrame* inlineCallFrame = child->child1()->origin.semantic.inlineCallFrame;
+                LValue length = cachedSpreadLengths.ensure(inlineCallFrame, [&] () {
+                    return getSpreadLengthFromInlineCallFrame(inlineCallFrame, child->child1()->numberOfArgumentsToSkip());
+                }).iterator->value;
+                spreadLengths.append(length);
+            } else
+                ++numberOfStaticArguments;
+        }
+
+        LValue lengthIncludingThis = m_out.constInt32(1 + numberOfStaticArguments);
+        for (LValue length : spreadLengths)
+            lengthIncludingThis = m_out.add(lengthIncludingThis, length);
+
+        LoadVarargsData* data = m_node->loadVarargsData();
+        speculate(
+            VarargsOverflow, noValue(), nullptr,
+            m_out.above(lengthIncludingThis, m_out.constInt32(data->limit)));
+        
+        m_out.store32(lengthIncludingThis, payloadFor(data->machineCount));
+
+        LValue targetStart = addressFor(data->machineStart).value();
+        LValue storeIndex = m_out.constIntPtr(0);
+        for (unsigned i = 0; i < arrayWithSpread->numChildren(); i++) {
+            if (bitVector->get(i)) {
+                Node* child = m_graph.varArgChild(arrayWithSpread, i).node();
+                RELEASE_ASSERT(child->op() == PhantomSpread);
+                RELEASE_ASSERT(child->child1()->op() == PhantomCreateRest);
+                InlineCallFrame* inlineCallFrame = child->child1()->origin.semantic.inlineCallFrame;
+
+                LValue sourceStart = getArgumentsStart(inlineCallFrame, child->child1()->numberOfArgumentsToSkip());
+                LValue spreadLength = m_out.zeroExtPtr(cachedSpreadLengths.get(inlineCallFrame));
+
+                LBasicBlock loop = m_out.newBlock();
+                LBasicBlock continuation = m_out.newBlock();
+                ValueFromBlock startLoadIndex = m_out.anchor(m_out.constIntPtr(0));
+                ValueFromBlock startStoreIndex = m_out.anchor(storeIndex);
+                ValueFromBlock startStoreIndexForEnd = m_out.anchor(storeIndex);
+
+                m_out.branch(m_out.isZero64(spreadLength), unsure(continuation), unsure(loop));
+
+                LBasicBlock lastNext = m_out.appendTo(loop, continuation);
+                LValue loopStoreIndex = m_out.phi(Int64, startStoreIndex);
+                LValue loadIndex = m_out.phi(Int64, startLoadIndex);
+                LValue value = m_out.load64(
+                    m_out.baseIndex(m_heaps.variables, sourceStart, loadIndex));
+                m_out.store64(value, m_out.baseIndex(m_heaps.variables, targetStart, loopStoreIndex));
+                LValue nextLoadIndex = m_out.add(m_out.constIntPtr(1), loadIndex);
+                m_out.addIncomingToPhi(loadIndex, m_out.anchor(nextLoadIndex));
+                LValue nextStoreIndex = m_out.add(m_out.constIntPtr(1), loopStoreIndex);
+                m_out.addIncomingToPhi(loopStoreIndex, m_out.anchor(nextStoreIndex));
+                ValueFromBlock loopStoreIndexForEnd = m_out.anchor(nextStoreIndex);
+                m_out.branch(m_out.below(nextLoadIndex, spreadLength), unsure(loop), unsure(continuation));
+
+                m_out.appendTo(continuation, lastNext);
+                storeIndex = m_out.phi(Int64, startStoreIndexForEnd, loopStoreIndexForEnd);
+            } else {
+                LValue value = lowJSValue(m_graph.varArgChild(arrayWithSpread, i));
+                m_out.store64(value, m_out.baseIndex(m_heaps.variables, targetStart, storeIndex));
+                storeIndex = m_out.add(m_out.constIntPtr(1), storeIndex);
+            }
+        }
+
+        LBasicBlock undefinedLoop = m_out.newBlock();
+        LBasicBlock continuation = m_out.newBlock();
+
+        ValueFromBlock startStoreIndex = m_out.anchor(storeIndex);
+        LValue loopBoundValue = m_out.constIntPtr(data->mandatoryMinimum);
+        m_out.branch(m_out.below(storeIndex, loopBoundValue),
+            unsure(undefinedLoop), unsure(continuation));
+
+        LBasicBlock lastNext = m_out.appendTo(undefinedLoop, continuation);
+        LValue loopStoreIndex = m_out.phi(Int64, startStoreIndex);
+        m_out.store64(
+            m_out.constInt64(JSValue::encode(jsUndefined())),
+            m_out.baseIndex(m_heaps.variables, targetStart, loopStoreIndex));
+        LValue nextIndex = m_out.add(loopStoreIndex, m_out.constIntPtr(1));
+        m_out.addIncomingToPhi(loopStoreIndex, m_out.anchor(nextIndex));
+        m_out.branch(
+            m_out.below(nextIndex, loopBoundValue), unsure(undefinedLoop), unsure(continuation));
+
+        m_out.appendTo(continuation, lastNext);
+    }
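A sketch (my own example, in the spirit of the added stress/phantom-spread-forward-varargs.js test; names are hypothetical) of the mixed case compileForwardVarargsWithSpread supports: static arguments are stored directly into the callee frame, while each PhantomSpread child is copied with an inline loop like the one emitted above, and the same rest parameter may be spread more than once.

```javascript
"use strict";

function target(...all) {
    return all;
}

function forward(...args) {
    // Mixed static and spread entries in one argument list; `args` appears
    // twice, so its cached length is reused for both copy loops.
    return target(0, ...args, 42, ...args);
}

const out = forward(1, 2);
```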
+
     void compileJump()
     {
         m_out.jump(lowBlock(m_node->targetBlock()));
index 467f517..47c8328 100644 (file)
@@ -35,6 +35,7 @@
 #include "InlineCallFrame.h"
 #include "JSAsyncFunction.h"
 #include "JSCInlines.h"
+#include "JSFixedArray.h"
 #include "JSGeneratorFunction.h"
 #include "JSLexicalEnvironment.h"
 
@@ -86,6 +87,8 @@ extern "C" void JIT_OPERATION operationPopulateObjectInOSR(
     case PhantomDirectArguments:
     case PhantomClonedArguments:
     case PhantomCreateRest:
+    case PhantomSpread:
+    case PhantomNewArrayWithSpread:
         // Those are completely handled by operationMaterializeObjectInOSR
         break;
 
@@ -393,14 +396,102 @@ extern "C" JSCell* JIT_OPERATION operationMaterializeObjectInOSR(
                 ASSERT(found);
             }
 #endif
-
             return array;
         }
+
         default:
             RELEASE_ASSERT_NOT_REACHED();
             return nullptr;
         }
     }
+
+    case PhantomSpread: {
+        JSArray* array = nullptr;
+        for (unsigned i = materialization->properties().size(); i--;) {
+            const ExitPropertyValue& property = materialization->properties()[i];
+            if (property.location().kind() == SpreadPLoc) {
+                array = jsCast<JSArray*>(JSValue::decode(values[i]));
+                break;
+            }
+        }
+        RELEASE_ASSERT(array);
+
+        // Note: it is sound for JSFixedArray::createFromArray to call getDirectIndex here
+        // because we're guaranteed we won't be calling any getters. The reason for this is
+        // that we only support PhantomSpread over CreateRest, which is an array we create.
+        // Any attempts to put a getter on any indices on the rest array will escape the array.
+        JSFixedArray* fixedArray = JSFixedArray::createFromArray(exec, vm, array);
+        return fixedArray;
+    }
+
+    case PhantomNewArrayWithSpread: {
+        CodeBlock* codeBlock = baselineCodeBlockForOriginAndBaselineCodeBlock(
+            materialization->origin(), exec->codeBlock());
+        JSGlobalObject* globalObject = codeBlock->globalObject();
+        Structure* structure = globalObject->arrayStructureForIndexingTypeDuringAllocation(ArrayWithContiguous);
+
+        unsigned arraySize = 0;
+        unsigned numProperties = 0;
+        for (unsigned i = materialization->properties().size(); i--;) {
+            const ExitPropertyValue& property = materialization->properties()[i];
+            if (property.location().kind() == NewArrayWithSpreadArgumentPLoc) {
+                ++numProperties;
+                JSValue value = JSValue::decode(values[i]);
+                if (JSFixedArray* fixedArray = jsDynamicCast<JSFixedArray*>(value))
+                    arraySize += fixedArray->size();
+                else
+                    arraySize += 1;
+            }
+        }
+
+        JSArray* result = JSArray::tryCreateUninitialized(vm, structure, arraySize);
+        RELEASE_ASSERT(result);
+
+#if !ASSERT_DISABLED
+        // Ensure we see indices for everything in the range: [0, numProperties)
+        for (unsigned i = 0; i < numProperties; ++i) {
+            bool found = false;
+            for (unsigned j = 0; j < materialization->properties().size(); ++j) {
+                const ExitPropertyValue& property = materialization->properties()[j];
+                if (property.location().kind() == NewArrayWithSpreadArgumentPLoc && property.location().info() == i) {
+                    found = true;
+                    break;
+                }
+            }
+            ASSERT(found);
+        }
+#endif
+
+        Vector<JSValue, 8> arguments;
+        arguments.grow(numProperties);
+
+        for (unsigned i = materialization->properties().size(); i--;) {
+            const ExitPropertyValue& property = materialization->properties()[i];
+            if (property.location().kind() == NewArrayWithSpreadArgumentPLoc) {
+                JSValue value = JSValue::decode(values[i]);
+                RELEASE_ASSERT(property.location().info() < numProperties);
+                arguments[property.location().info()] = value;
+            }
+        }
+
+        unsigned arrayIndex = 0;
+        for (JSValue value : arguments) {
+            if (JSFixedArray* fixedArray = jsDynamicCast<JSFixedArray*>(value)) {
+                for (unsigned i = 0; i < fixedArray->size(); i++) {
+                    ASSERT(fixedArray->get(i));
+                    result->initializeIndex(vm, arrayIndex, fixedArray->get(i));
+                    ++arrayIndex;
+                }
+            } else {
+                // We are not spreading.
+                result->initializeIndex(vm, arrayIndex, value);
+                ++arrayIndex;
+            }
+        }
+
+        return result;
+    }
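The OSR materialization above exists so that sinking the allocation is unobservable: if execution exits back to baseline, the rematerialized array must equal the one the unoptimized program would have built. A plain-JS sketch (my illustration) of the invariant:

```javascript
"use strict";

// A NewArrayWithSpread over a rest parameter plus static entries. If this
// becomes a PhantomNewArrayWithSpread and we later OSR exit, the runtime
// must reconstruct exactly this array from the recorded property values.
function build(...rest) {
    return ["head", ...rest, "tail"];
}

const a = build(1, 2);
```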
+
         
     default:
         RELEASE_ASSERT_NOT_REACHED();
index d2d4515..a177b0c 100644 (file)
@@ -78,7 +78,7 @@ void emitSetupVarargsFrameFastCase(CCallHelpers& jit, GPRReg numUsedSlotsGPR, GP
         jit.sub32(CCallHelpers::TrustedImm32(firstVarArgOffset), scratchGPR1);
         endVarArgs.link(&jit);
     }
-    slowCase.append(jit.branch32(CCallHelpers::Above, scratchGPR1, CCallHelpers::TrustedImm32(maxArguments + 1)));
+    slowCase.append(jit.branch32(CCallHelpers::Above, scratchGPR1, CCallHelpers::TrustedImm32(JSC::maxArguments + 1)));
     
     emitSetVarargsFrame(jit, scratchGPR1, true, numUsedSlotsGPR, scratchGPR2);
 
index 2d95d64..434e4ea 100644 (file)
@@ -990,6 +990,8 @@ static EncodedJSValue JSC_HOST_CALL functionStartSamplingProfiler(ExecState*);
 static EncodedJSValue JSC_HOST_CALL functionSamplingProfilerStackTraces(ExecState*);
 #endif
 
+static EncodedJSValue JSC_HOST_CALL functionMaxArguments(ExecState*);
+
 #if ENABLE(WEBASSEMBLY)
 static EncodedJSValue JSC_HOST_CALL functionTestWasmModuleFunctions(ExecState*);
 #endif
@@ -1242,6 +1244,8 @@ protected:
         addFunction(vm, "samplingProfilerStackTraces", functionSamplingProfilerStackTraces, 0);
 #endif
 
+        addFunction(vm, "maxArguments", functionMaxArguments, 0);
+
 #if ENABLE(WEBASSEMBLY)
         addFunction(vm, "testWasmModuleFunctions", functionTestWasmModuleFunctions, 0);
 #endif
@@ -2484,6 +2488,11 @@ EncodedJSValue JSC_HOST_CALL functionSamplingProfilerStackTraces(ExecState* exec
 }
 #endif // ENABLE(SAMPLING_PROFILER)
 
+EncodedJSValue JSC_HOST_CALL functionMaxArguments(ExecState*)
+{
+    return JSValue::encode(jsNumber(JSC::maxArguments));
+}
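The new `maxArguments()` shell function lets stress tests size a spread just under or over the VM's argument limit to exercise the VarargsOverflow slow path. A hedged sketch (hypothetical usage; `maxArguments()` exists only in the patched jsc shell, so this sketch stays below any real limit and does not call it):

```javascript
"use strict";

function sum(...args) {
    let t = 0;
    for (const a of args) t += a;
    return t;
}

function trySpread(n) {
    // In a jsc stress test, n would be derived from maxArguments() to probe
    // the boundary; spreading too many arguments throws (typically a
    // RangeError), which the slow path above handles via
    // operationThrowStackOverflowForVarargs.
    const arr = new Array(n).fill(1);
    try {
        return sum(...arr);
    } catch (e) {
        return "overflow";
    }
}

const small = trySpread(10);
```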
+
 #if ENABLE(WEBASSEMBLY)
 
 static JSValue box(ExecState* exec, VM& vm, JSValue wasmValue)
index cdb1ff1..c754e6b 100644 (file)
@@ -79,7 +79,11 @@ public:
                 // We may still call into this function when !globalObject->isArrayIteratorProtocolFastAndNonObservable(),
                 // however, if we do that, we ensure we're calling in with an array with all self properties between
                 // [0, length).
-                ASSERT(array->globalObject()->isArrayIteratorProtocolFastAndNonObservable());
+                //
+                // We may also call into this during OSR exit to materialize a phantom fixed array.
+                // We may be creating a fixed array during OSR exit even after the iterator protocol changed.
+                // But, when the phantom would have logically been created, the protocol hadn't been
+                // changed. Therefore, it is sound to assume empty indices are jsUndefined().
                 value = jsUndefined();
             }
             RETURN_IF_EXCEPTION(throwScope, nullptr);
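The comment's soundness claim can be checked from plain JS: with the default (fast, non-observable) iterator protocol, spreading an array with holes yields undefined for the missing indices, which is exactly the value JSFixedArray::createFromArray substitutes above. A minimal illustration (mine, not from the patch):

```javascript
"use strict";

const holey = [, 1]; // length 2, index 0 is a hole

// Under the unmodified Array iterator protocol, reading the hole during
// spread produces undefined, matching the jsUndefined() assumption above.
const spread = [...holey];
```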