[DFG][FTL] Implement ES6 Generators in DFG / FTL
https://bugs.webkit.org/show_bug.cgi?id=152723
Reviewed by Filip Pizlo.
JSTests:
* stress/generator-fib-ftl-and-array.js: Added.
(fib):
* stress/generator-fib-ftl-and-object.js: Added.
(fib):
* stress/generator-fib-ftl-and-string.js: Added.
(fib):
* stress/generator-fib-ftl.js: Added.
(fib):
* stress/generator-frame-empty.js: Added.
(shouldThrow):
(shouldThrow.fib):
* stress/generator-reduced-save-point-put-to-scope.js: Added.
(shouldBe):
(gen):
* stress/generator-transfer-register-beyond-mutiple-yields.js: Added.
(shouldBe):
(gen):
Source/JavaScriptCore:
This patch introduces DFG and FTL support for ES6 generators.
An ES6 generator is compiled by the BytecodeGenerator, but at the last phase the BytecodeGenerator performs "generatorification" on the unlinked code.
In the BytecodeGenerator phase, we just emit op_yield for each yield point; we do not emit any generator-related switch, save, or resume sequences
here. Those are emitted by the generatorification phase.
So the graph is super simple! Before the generatorification, the graph looks like this.
op_enter -> ...... -> op_yield -> ..... -> op_yield -> ...
Roughly speaking, the generatorification phase figures out which variables need to be saved and resumed at each op_yield.
This is done by liveness analysis. After that, we convert each op_yield into a sequence of op_put_to_scope, op_ret, and op_get_from_scope.
The op_put_to_scope and op_get_from_scope sequences correspond to the save and resume sequences. We set up a scope for the generator frame and
perform op_put_to_scope and op_get_from_scope on it. The live registers are saved and resumed across the generator's next() calls through this
special generator frame scope. We also set up the global switch for the generator.
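
To make this concrete, here is a rough JavaScript-level sketch of what the transformation achieves for a tiny generator. This is only an illustration: the real work happens on unlinked bytecode using op_switch_imm / op_put_to_scope / op_get_from_scope / op_ret, and the `state` and `frame` properties below are stand-ins for the generator state and the generator frame scope, not actual JSC internals.

    // Original generator.
    function *fib() {
        let a = 1;
        let b = 1;
        while (true) {
            yield a;
            [a, b] = [b, a + b];
        }
    }

    // Hand-written approximation of the generatorified form.
    function fibDesugared() {
        let state = 0;     // selects the resume point, like the operand of the inserted op_switch_imm
        const frame = {};  // stand-in for the generator frame scope
        return {
            next() {
                let a, b;
                switch (state) {
                case 0:                    // initial entry: run the code before the first yield
                    a = 1;
                    b = 1;
                    break;
                case 1:                    // resume sequence: reload the live locals (op_get_from_scope)
                    a = frame.a;
                    b = frame.b;
                    [a, b] = [b, a + b];   // the code that follows the yield
                    break;
                }
                frame.a = a;               // save sequence: spill the live locals (op_put_to_scope)
                frame.b = b;
                state = 1;
                return { value: a, done: false };  // op_ret hands the value back to next()
            }
        };
    }

Each next() call enters through the switch, restores the locals that were live at the yield being resumed, runs until the next yield, spills the locals that are live across that yield, and returns.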
In the generatorification phase,
1. We construct the BytecodeGraph from the unlinked instructions. This builds the basic blocks, which are used in the subsequent analysis.
2. We perform liveness analysis on the unlinked code and extract the variables that are live at each op_yield (a small example follows this list).
3. We insert the get_from_scope and put_to_scope sequences at each op_yield; the registers to save and resume are the ones computed in (2).
   Then we remove the op_yield instructions themselves. We also insert the op_switch_imm; its jump targets are the point just after the op_switch_imm itself and the point after each former op_yield.
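
For example, the liveness analysis in step (2) determines which locals actually have to be stored in the generator frame at each yield. The comments below describe the intended result for a hypothetical generator; they are not output from the engine.

    function *g() {
        let x = 1;
        let t = 2;       // t is not used after the first yield,
        yield x + t;     // so only x is live across this op_yield and only x is saved/resumed
        let y = x + 1;   // x is not used after this statement,
        yield y;         // so only y is live across this op_yield
        return y * 2;
    }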
One interesting point is the try-range. We split the try-range at the op_yield point in the BytecodeGenerator phase.
This removes the hack that was introduced in [1].
If the try-range covered the resume sequences, the exception handler's register uses would be incorrectly transferred to the entry block.
For example,
handler uses r2
try-range
label:(entry block can jump here) ^
r1 = get_from_scope # resume sequence starts | use r2 is transferred to the entry block!
r2 = get_from_scope |
starts usual sequences |
... |
The handler's use of r2 should be considered at the `r1 = get_from_scope` point.
Previously, we handled this edge case by treating op_resume specially in the liveness analysis [1].
To drop this workaround, we split the try-range so that it does not cover the resume sequence.
handler uses r2
try-range
label:(entry block can jump here)
r1 = get_from_scope # resume sequence starts
r2 = get_from_scope
starts usual sequences ^ try-range should start from here.
... |
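
At the source level, the situation corresponds to a yield inside a try block whose handler uses a local that is also restored by the resume sequence. A hedged illustration (here `captured` plays the role of r2 in the diagram above):

    function *g() {
        let captured = 0;
        try {
            captured = 1;
            // Generatorification inserts a resume sequence right after this yield that
            // reloads `captured`; the new try-range must begin after that sequence.
            yield captured;
            captured += 1;
        } catch (e) {
            // The handler uses `captured`. If the try-range covered the resume sequence,
            // this use would be treated as live at the generator's entry block.
            return captured;
        }
        return captured;
    }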
OK. Let's walk through a detailed example.
1. First, there is the normal bytecode sequence. Here, | represents the offsets, and [] represents the bytecodes.
bytecodes | [ ] | [ ] | [ ] | [ ] | [ ] | [ ] |
try-range <----------------------------------->
2. When we emit the op_yield in the bytecode generator, we carefully split the try-range.
bytecodes | [ ] | [ ] | [op_yield] | [ ] | [ ] | [ ] |
try-range <-----------> <----------------->
3. In the generatorification phase, we insert the switch's jump targets and the save & resume sequences, and we drop the op_yield.
Insert save seq Insert resume seq
before op_yield. after op_yield's point.
v v
bytecodes | [ ] | [ ] | [op_yield] | [ ] | [ ] | [ ] |
try-range <-----------> ^ <----------------->
^ |
Jump to here. Drop this op_yield.
4. The final layout is the following.
bytecodes | [ ] | [ ][save seq][op_ret] | [resume seq] | [ ] | [ ] | [ ] |
try-range <-----------------------------> <---------------->
^
Jump to here.
The rewriting done by the BytecodeRewriter is executed in a batch manner. Since these modifications change the basic blocks and the size of the unlinked instructions,
BytecodeRewriter also performs the offset adjustments for the UnlinkedCodeBlock. So this rewriting is performed on the BytecodeGraph rather than on a BytecodeBasicBlock.
The reason we take this design is simple: we don't want to create new basic blocks and opcodes for this early phase, as the DFG does. Instead, we perform the
modifications and adjustments on the unlinked instructions and the UnlinkedCodeBlock in place.
Bytecode rewriting functionality is offered by BytecodeRewriter. BytecodeRewriter allows us to insert bytecodes at arbitrary places, in place.
It treats the original bytecode offsets as labels, and you can insert bytecodes before and after these labels.
You can also insert jumps to arbitrary places; when you insert a jump, you specify its target with these labels.
These labels (original bytecode offsets) are automatically converted to the appropriate offsets by BytecodeRewriter.
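
As a rough illustration of the idea, here is a toy sketch in JavaScript of a label-based batch rewriter; the names are made up and this is not BytecodeRewriter's actual C++ interface (which also supports inserting after a label and removing bytecodes). The point is that insertions are keyed by original offsets and jump targets are remapped when the batch is applied.

    // Toy model: each instruction has a size and an optional jump target, and jump
    // targets are expressed as *original* offsets ("labels"). Assumes every label is
    // the offset of some original instruction.
    function applyInsertions(instructions, insertions) {
        const newOffsetForLabel = new Map();  // original offset -> offset after rewriting
        const output = [];
        let originalOffset = 0;
        let newOffset = 0;
        for (const insn of instructions) {
            // Emit any fragments registered to go before this original offset.
            for (const { beforeOffset, fragment } of insertions) {
                if (beforeOffset !== originalOffset)
                    continue;
                for (const inserted of fragment) {
                    output.push({ ...inserted });
                    newOffset += inserted.size;
                }
            }
            newOffsetForLabel.set(originalOffset, newOffset);
            output.push({ ...insn });
            originalOffset += insn.size;
            newOffset += insn.size;
        }
        // Batch step: rewrite every label to its post-insertion offset.
        for (const insn of output) {
            if (insn.jumpTarget !== undefined)
                insn.jumpTarget = newOffsetForLabel.get(insn.jumpTarget);
        }
        return output;
    }

Inserting a fragment before some offset shifts every label at or beyond that offset by the fragment's size, which mirrors the kind of adjustment the real rewriter performs on jump targets and on the UnlinkedCodeBlock's offsets.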
After that phase, the data flow of the registers that the generator saves and resumes is explicitly represented by the get_from_scope and put_to_scope sequences,
the switch is inserted to represent the generator's actual control flow, and op_yield is removed. Since we use existing bytecodes (op_switch_imm, op_put_to_scope,
op_ret, and op_get_from_scope), no DFG or FTL changes are necessary. This patch also drops the data structures and implementations for the old generator:
the op_resume and op_save implementations and GeneratorFrame.
Note that this patch does not leverage the recent multiple-entrypoints support in B3. After this patch lands, we will submit a new patch that leverages multiple
entrypoints for the generator's resume and measures the performance gain.
Microbenchmarks related to generators show up to a 2.9x improvement.
Baseline Patched
generator-fib 102.0116+-3.2880 ^ 34.9670+-0.2221 ^ definitely 2.9174x faster
generator-sunspider-access-nsieve 5.8596+-0.0371 ^ 4.9051+-0.0720 ^ definitely 1.1946x faster
generator-with-several-types 332.1478+-4.2425 ^ 124.6642+-2.4826 ^ definitely 2.6643x faster
<geometric> 58.2998+-0.7758 ^ 27.7425+-0.2577 ^ definitely 2.1015x faster
In ES6SampleBench's Basic, we can observe a 41% improvement (MacBook Pro).
Baseline:
Geometric Mean Result: 133.55 ms +- 4.49 ms
Benchmark First Iteration Worst 2% Steady State
Air 54.03 ms +- 7.51 ms 29.06 ms +- 3.13 ms 2276.59 ms +- 61.17 ms
Basic 30.18 ms +- 1.86 ms 18.85 ms +- 0.45 ms 2851.16 ms +- 41.87 ms
Patched:
Geometric Mean Result: 121.78 ms +- 3.96 ms
Benchmark First Iteration Worst 2% Steady State
Air 52.09 ms +- 6.89 ms 29.59 ms +- 3.16 ms 2239.90 ms +- 54.60 ms
Basic 29.28 ms +- 1.46 ms 16.26 ms +- 0.66 ms 2025.15 ms +- 38.56 ms
[1]: https://bugs.webkit.org/show_bug.cgi?id=159281
* CMakeLists.txt:
* JavaScriptCore.xcodeproj/project.pbxproj:
* builtins/GeneratorPrototype.js:
(globalPrivate.generatorResume):
* bytecode/BytecodeBasicBlock.cpp:
(JSC::BytecodeBasicBlock::shrinkToFit):
(JSC::BytecodeBasicBlock::computeImpl):
(JSC::BytecodeBasicBlock::compute):
(JSC::isBranch): Deleted.
(JSC::isUnconditionalBranch): Deleted.
(JSC::isTerminal): Deleted.
(JSC::isThrow): Deleted.
(JSC::linkBlocks): Deleted.
(JSC::computeBytecodeBasicBlocks): Deleted.
* bytecode/BytecodeBasicBlock.h:
(JSC::BytecodeBasicBlock::isEntryBlock):
(JSC::BytecodeBasicBlock::isExitBlock):
(JSC::BytecodeBasicBlock::leaderOffset):
(JSC::BytecodeBasicBlock::totalLength):
(JSC::BytecodeBasicBlock::offsets):
(JSC::BytecodeBasicBlock::successors):
(JSC::BytecodeBasicBlock::index):
(JSC::BytecodeBasicBlock::addSuccessor):
(JSC::BytecodeBasicBlock::BytecodeBasicBlock):
(JSC::BytecodeBasicBlock::addLength):
(JSC::BytecodeBasicBlock::leaderBytecodeOffset): Deleted.
(JSC::BytecodeBasicBlock::totalBytecodeLength): Deleted.
(JSC::BytecodeBasicBlock::bytecodeOffsets): Deleted.
(JSC::BytecodeBasicBlock::addBytecodeLength): Deleted.
* bytecode/BytecodeGeneratorification.cpp: Added.
(JSC::BytecodeGeneratorification::BytecodeGeneratorification):
(JSC::BytecodeGeneratorification::graph):
(JSC::BytecodeGeneratorification::yields):
(JSC::BytecodeGeneratorification::enterPoint):
(JSC::BytecodeGeneratorification::storageForGeneratorLocal):
(JSC::GeneratorLivenessAnalysis::GeneratorLivenessAnalysis):
(JSC::GeneratorLivenessAnalysis::computeDefsForBytecodeOffset):
(JSC::GeneratorLivenessAnalysis::computeUsesForBytecodeOffset):
(JSC::GeneratorLivenessAnalysis::run):
(JSC::BytecodeGeneratorification::run):
(JSC::performGeneratorification):
* bytecode/BytecodeGeneratorification.h: Copied from Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysisInlines.h.
* bytecode/BytecodeGraph.h: Added.
(JSC::BytecodeGraph::codeBlock):
(JSC::BytecodeGraph::instructions):
(JSC::BytecodeGraph::basicBlocksInReverseOrder):
(JSC::BytecodeGraph::blockContainsBytecodeOffset):
(JSC::BytecodeGraph::findBasicBlockForBytecodeOffset):
(JSC::BytecodeGraph::findBasicBlockWithLeaderOffset):
(JSC::BytecodeGraph::size):
(JSC::BytecodeGraph::at):
(JSC::BytecodeGraph::operator[]):
(JSC::BytecodeGraph::begin):
(JSC::BytecodeGraph::end):
(JSC::BytecodeGraph::first):
(JSC::BytecodeGraph::last):
(JSC::BytecodeGraph<Block>::BytecodeGraph):
* bytecode/BytecodeList.json:
* bytecode/BytecodeLivenessAnalysis.cpp:
(JSC::BytecodeLivenessAnalysis::BytecodeLivenessAnalysis):
(JSC::BytecodeLivenessAnalysis::computeDefsForBytecodeOffset):
(JSC::BytecodeLivenessAnalysis::computeUsesForBytecodeOffset):
(JSC::BytecodeLivenessAnalysis::getLivenessInfoAtBytecodeOffset):
(JSC::BytecodeLivenessAnalysis::computeFullLiveness):
(JSC::BytecodeLivenessAnalysis::computeKills):
(JSC::BytecodeLivenessAnalysis::dumpResults):
(JSC::BytecodeLivenessAnalysis::compute):
(JSC::isValidRegisterForLiveness): Deleted.
(JSC::getLeaderOffsetForBasicBlock): Deleted.
(JSC::findBasicBlockWithLeaderOffset): Deleted.
(JSC::blockContainsBytecodeOffset): Deleted.
(JSC::findBasicBlockForBytecodeOffset): Deleted.
(JSC::stepOverInstruction): Deleted.
(JSC::computeLocalLivenessForBytecodeOffset): Deleted.
(JSC::computeLocalLivenessForBlock): Deleted.
(JSC::BytecodeLivenessAnalysis::runLivenessFixpoint): Deleted.
* bytecode/BytecodeLivenessAnalysis.h:
* bytecode/BytecodeLivenessAnalysisInlines.h:
(JSC::isValidRegisterForLiveness):
(JSC::BytecodeLivenessPropagation<DerivedAnalysis>::stepOverInstruction):
(JSC::BytecodeLivenessPropagation<DerivedAnalysis>::computeLocalLivenessForBytecodeOffset):
(JSC::BytecodeLivenessPropagation<DerivedAnalysis>::computeLocalLivenessForBlock):
(JSC::BytecodeLivenessPropagation<DerivedAnalysis>::getLivenessInfoAtBytecodeOffset):
(JSC::BytecodeLivenessPropagation<DerivedAnalysis>::runLivenessFixpoint):
* bytecode/BytecodeRewriter.cpp: Added.
(JSC::BytecodeRewriter::applyModification):
(JSC::BytecodeRewriter::execute):
(JSC::BytecodeRewriter::adjustJumpTargetsInFragment):
(JSC::BytecodeRewriter::insertImpl):
(JSC::BytecodeRewriter::adjustJumpTarget):
* bytecode/BytecodeRewriter.h: Added.
(JSC::BytecodeRewriter::InsertionPoint::InsertionPoint):
(JSC::BytecodeRewriter::InsertionPoint::operator<):
(JSC::BytecodeRewriter::InsertionPoint::operator==):
(JSC::BytecodeRewriter::Insertion::length):
(JSC::BytecodeRewriter::Fragment::Fragment):
(JSC::BytecodeRewriter::Fragment::appendInstruction):
(JSC::BytecodeRewriter::BytecodeRewriter):
(JSC::BytecodeRewriter::insertFragmentBefore):
(JSC::BytecodeRewriter::insertFragmentAfter):
(JSC::BytecodeRewriter::removeBytecode):
(JSC::BytecodeRewriter::graph):
(JSC::BytecodeRewriter::adjustAbsoluteOffset):
(JSC::BytecodeRewriter::adjustJumpTarget):
(JSC::BytecodeRewriter::calculateDifference):
* bytecode/BytecodeUseDef.h:
(JSC::computeUsesForBytecodeOffset):
(JSC::computeDefsForBytecodeOffset):
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::dumpBytecode):
(JSC::CodeBlock::finishCreation):
(JSC::CodeBlock::handlerForIndex):
(JSC::CodeBlock::shrinkToFit):
(JSC::CodeBlock::valueProfileForBytecodeOffset):
(JSC::CodeBlock::livenessAnalysisSlow):
* bytecode/CodeBlock.h:
(JSC::CodeBlock::isConstantRegisterIndex):
(JSC::CodeBlock::livenessAnalysis):
(JSC::CodeBlock::liveCalleeLocalsAtYield): Deleted.
* bytecode/HandlerInfo.h:
(JSC::HandlerInfoBase::handlerForIndex):
* bytecode/Opcode.h:
(JSC::isBranch):
(JSC::isUnconditionalBranch):
(JSC::isTerminal):
(JSC::isThrow):
* bytecode/PreciseJumpTargets.cpp:
(JSC::getJumpTargetsForBytecodeOffset):
(JSC::computePreciseJumpTargetsInternal):
(JSC::computePreciseJumpTargets):
(JSC::recomputePreciseJumpTargets):
(JSC::findJumpTargetsForBytecodeOffset):
* bytecode/PreciseJumpTargets.h:
* bytecode/PreciseJumpTargetsInlines.h: Added.
(JSC::extractStoredJumpTargetsForBytecodeOffset):
* bytecode/UnlinkedCodeBlock.cpp:
(JSC::UnlinkedCodeBlock::handlerForBytecodeOffset):
(JSC::UnlinkedCodeBlock::handlerForIndex):
(JSC::UnlinkedCodeBlock::applyModification):
* bytecode/UnlinkedCodeBlock.h:
(JSC::UnlinkedStringJumpTable::offsetForValue):
(JSC::UnlinkedCodeBlock::numCalleeLocals):
* bytecode/VirtualRegister.h:
* bytecompiler/BytecodeGenerator.cpp:
(JSC::BytecodeGenerator::generate):
(JSC::BytecodeGenerator::BytecodeGenerator):
(JSC::BytecodeGenerator::emitComplexPopScopes):
(JSC::prepareJumpTableForStringSwitch):
(JSC::BytecodeGenerator::emitYieldPoint):
(JSC::BytecodeGenerator::emitSave): Deleted.
(JSC::BytecodeGenerator::emitResume): Deleted.
(JSC::BytecodeGenerator::emitGeneratorStateLabel): Deleted.
(JSC::BytecodeGenerator::beginGenerator): Deleted.
(JSC::BytecodeGenerator::endGenerator): Deleted.
* bytecompiler/BytecodeGenerator.h:
(JSC::BytecodeGenerator::generatorStateRegister):
(JSC::BytecodeGenerator::generatorValueRegister):
(JSC::BytecodeGenerator::generatorResumeModeRegister):
(JSC::BytecodeGenerator::generatorFrameRegister):
* bytecompiler/NodesCodegen.cpp:
(JSC::FunctionNode::emitBytecode):
* dfg/DFGOperations.cpp:
* interpreter/Interpreter.cpp:
(JSC::findExceptionHandler):
(JSC::GetCatchHandlerFunctor::operator()):
(JSC::UnwindFunctor::operator()):
* interpreter/Interpreter.h:
* interpreter/InterpreterInlines.h: Copied from Source/JavaScriptCore/bytecode/PreciseJumpTargets.h.
(JSC::Interpreter::getOpcodeID):
* jit/JIT.cpp:
(JSC::JIT::privateCompileMainPass):
* jit/JIT.h:
* jit/JITOpcodes.cpp:
(JSC::JIT::emit_op_save): Deleted.
(JSC::JIT::emit_op_resume): Deleted.
* llint/LowLevelInterpreter.asm:
* parser/Parser.cpp:
(JSC::Parser<LexerType>::parseInner):
(JSC::Parser<LexerType>::parseGeneratorFunctionSourceElements):
(JSC::Parser<LexerType>::createGeneratorParameters):
* parser/Parser.h:
* runtime/CommonSlowPaths.cpp:
(JSC::SLOW_PATH_DECL): Deleted.
* runtime/CommonSlowPaths.h:
* runtime/GeneratorFrame.cpp: Removed.
(JSC::GeneratorFrame::GeneratorFrame): Deleted.
(JSC::GeneratorFrame::finishCreation): Deleted.
(JSC::GeneratorFrame::createStructure): Deleted.
(JSC::GeneratorFrame::create): Deleted.
(JSC::GeneratorFrame::save): Deleted.
(JSC::GeneratorFrame::resume): Deleted.
(JSC::GeneratorFrame::visitChildren): Deleted.
* runtime/GeneratorFrame.h: Removed.
(JSC::GeneratorFrame::locals): Deleted.
(JSC::GeneratorFrame::localAt): Deleted.
(JSC::GeneratorFrame::offsetOfLocals): Deleted.
(JSC::GeneratorFrame::allocationSizeForLocals): Deleted.
* runtime/JSGeneratorFunction.h:
* runtime/VM.cpp:
(JSC::VM::VM):
* runtime/VM.h:
Source/WTF:
* wtf/FastBitVector.h:
(WTF::FastBitVector::FastBitVector):
git-svn-id: https://svn.webkit.org/repository/webkit/trunk@204994 268f45cc-cd09-0410-ab3c-d52691b4dbfc
+2016-08-25 Yusuke Suzuki <utatane.tea@gmail.com>
+
+ [DFG][FTL] Implement ES6 Generators in DFG / FTL
+ https://bugs.webkit.org/show_bug.cgi?id=152723
+
+ Reviewed by Filip Pizlo.
+
+ * stress/generator-fib-ftl-and-array.js: Added.
+ (fib):
+ * stress/generator-fib-ftl-and-object.js: Added.
+ (fib):
+ * stress/generator-fib-ftl-and-string.js: Added.
+ (fib):
+ * stress/generator-fib-ftl.js: Added.
+ (fib):
+ * stress/generator-frame-empty.js: Added.
+ (shouldThrow):
+ (shouldThrow.fib):
+ * stress/generator-reduced-save-point-put-to-scope.js: Added.
+ (shouldBe):
+ (gen):
+ * stress/generator-transfer-register-beyond-mutiple-yields.js: Added.
+ (shouldBe):
+ (gen):
+
2016-08-25 JF Bastien <jfbastien@apple.com>
TryGetById should have a ValueProfile so that it can predict its output type
--- /dev/null
+(function () {
+ function *fib()
+ {
+ let a = 1;
+ let b = 1;
+ let c = [ 0 ];
+ while (true) {
+ c[0] = a;
+ yield c;
+ [a, b] = [b, a + b];
+ }
+ }
+
+ let value = 0;
+ for (let i = 0; i < 1e4; ++i) {
+ let f = fib();
+ for (let i = 0; i < 100; ++i) {
+ value = f.next().value;
+ }
+ if (value[0] !== 354224848179262000000)
+ throw new Error(`bad value:${value[0]}`);
+ }
+}());
--- /dev/null
+(function () {
+ function *fib()
+ {
+ let a = 1;
+ let b = 1;
+ let c = { fib: 0 };
+ while (true) {
+ c.fib = a;
+ yield c;
+ [a, b] = [b, a + b];
+ }
+ }
+
+ let value = 0;
+ for (let i = 0; i < 1e4; ++i) {
+ let f = fib();
+ for (let i = 0; i < 100; ++i) {
+ value = f.next().value;
+ }
+ if (value.fib !== 354224848179262000000)
+ throw new Error(`bad value:${value.fib}`);
+ }
+}());
--- /dev/null
+(function () {
+ function *fib()
+ {
+ let a = 1;
+ let b = 1;
+ let c = "Result! ";
+ while (true) {
+ yield c + a;
+ [a, b] = [b, a + b];
+ }
+ }
+
+ let value = 0;
+ for (let i = 0; i < 1e4; ++i) {
+ let f = fib();
+ for (let i = 0; i < 100; ++i) {
+ value = f.next().value;
+ }
+ if (value !== `Result! 354224848179262000000`)
+ throw new Error(`bad value:${value}`);
+ }
+}());
--- /dev/null
+(function () {
+ function *fib()
+ {
+ let a = 1;
+ let b = 1;
+ while (true) {
+ yield a;
+ [a, b] = [b, a + b];
+ }
+ }
+
+ let value = 0;
+ for (let i = 0; i < 1e4; ++i) {
+ let f = fib();
+ for (let i = 0; i < 100; ++i) {
+ value = f.next().value;
+ }
+ if (value !== 354224848179262000000)
+ throw new Error(`bad value:${value}`);
+ }
+}());
--- /dev/null
+function shouldThrow(func, errorMessage) {
+ var errorThrown = false;
+ var error = null;
+ try {
+ func();
+ } catch (e) {
+ errorThrown = true;
+ error = e;
+ }
+ if (!errorThrown)
+ throw new Error('not thrown');
+ if (String(error) !== errorMessage)
+ throw new Error(`bad error: ${String(error)}`);
+}
+
+shouldThrow(function () {
+ function *fib(flag)
+ {
+ let a = 1;
+ let b = 1;
+ yield 42;
+ if (flag)
+ return c;
+ let c = 500;
+ }
+
+ let value = 0;
+ for (let i = 0; i < 1e4; ++i) {
+ for (let v of fib(false)) {
+ }
+ }
+ for (let v of fib(true)) {
+ }
+}, `ReferenceError: Cannot access uninitialized variable.`);
--- /dev/null
+function shouldBe(actual, expected) {
+ if (actual !== expected)
+ throw new Error(`bad value: ${String(actual)}`);
+}
+
+function error()
+{
+ throw "ok";
+}
+
+function* gen()
+{
+ var value = 42;
+ try {
+ yield 300;
+ value = 500;
+ error();
+ } catch (e) {
+ yield 42;
+ return value;
+ }
+ return 200;
+}
+
+var g = gen();
+shouldBe(g.next().value, 300);
+shouldBe(g.next().value, 42);
+shouldBe(g.next().value, 500);
--- /dev/null
+function shouldBe(actual, expected) {
+ if (actual !== expected)
+ throw new Error('bad value: ' + actual);
+}
+
+
+function *gen()
+{
+ var test = 42;
+ yield 32;
+ yield 33;
+ yield test;
+}
+
+var g = gen();
+shouldBe(g.next().value, 32);
+shouldBe(g.next().value, 33);
+shouldBe(g.next().value, 42);
bytecode/ArrayAllocationProfile.cpp
bytecode/ArrayProfile.cpp
bytecode/BytecodeBasicBlock.cpp
+ bytecode/BytecodeGeneratorification.cpp
+ bytecode/BytecodeRewriter.cpp
bytecode/BytecodeIntrinsicRegistry.cpp
bytecode/BytecodeLivenessAnalysis.cpp
bytecode/CallEdge.cpp
runtime/FunctionHasExecutedCache.cpp
runtime/FunctionPrototype.cpp
runtime/FunctionRareData.cpp
- runtime/GeneratorFrame.cpp
runtime/GeneratorFunctionConstructor.cpp
runtime/GeneratorFunctionPrototype.cpp
runtime/GeneratorPrototype.cpp
COMMAND ${PYTHON_EXECUTABLE} ${JAVASCRIPTCORE_DIR}/generate-bytecode-files --bytecodes_h ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/Bytecodes.h --init_bytecodes_asm ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/InitBytecodes.asm ${JAVASCRIPTCORE_DIR}/bytecode/BytecodeList.json
VERBATIM)
+list(APPEND JavaScriptCore_HEADERS
+ ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/Bytecodes.h
+)
+
add_custom_command(
OUTPUT ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/LLIntDesiredOffsets.h
MAIN_DEPENDENCY ${JAVASCRIPTCORE_DIR}/offlineasm/generate_offset_extractor.rb
+2016-08-25 Yusuke Suzuki <utatane.tea@gmail.com>
+
+ [DFG][FTL] Implement ES6 Generators in DFG / FTL
+ https://bugs.webkit.org/show_bug.cgi?id=152723
+
+ Reviewed by Filip Pizlo.
+
+ This patch introduces DFG and FTL support for ES6 generators.
+ An ES6 generator is compiled by the BytecodeGenerator, but at the last phase the BytecodeGenerator performs "generatorification" on the unlinked code.
+ In the BytecodeGenerator phase, we just emit op_yield for each yield point; we do not emit any generator-related switch, save, or resume sequences
+ here. Those are emitted by the generatorification phase.
+
+ So the graph is super simple! Before the generatorification, the graph looks like this.
+
+ op_enter -> ...... -> op_yield -> ..... -> op_yield -> ...
+
+ Roughly speaking, the generatorification phase figures out which variables need to be saved and resumed at each op_yield.
+ This is done by liveness analysis. After that, we convert each op_yield into a sequence of op_put_to_scope, op_ret, and op_get_from_scope.
+ The op_put_to_scope and op_get_from_scope sequences correspond to the save and resume sequences. We set up a scope for the generator frame and
+ perform op_put_to_scope and op_get_from_scope on it. The live registers are saved and resumed across the generator's next() calls through this
+ special generator frame scope. We also set up the global switch for the generator.
+
+ In the generatorification phase,
+
+ 1. We construct the BytecodeGraph from the unlinked instructions. This builds the basic blocks, which are used in the subsequent analysis.
+ 2. We perform liveness analysis on the unlinked code and extract the variables that are live at each op_yield.
+ 3. We insert the get_from_scope and put_to_scope sequences at each op_yield; the registers to save and resume are the ones computed in (2).
+ Then we remove the op_yield instructions themselves. We also insert the op_switch_imm; its jump targets are the point just after the op_switch_imm itself and the point after each former op_yield.
+
+ One interesting point is the try-range. We split the try-range at the op_yield point in the BytecodeGenerator phase.
+ This removes the hack that was introduced in [1].
+ If the try-range covered the resume sequences, the exception handler's register uses would be incorrectly transferred to the entry block.
+ For example,
+
+ handler uses r2
+ try-range
+ label:(entry block can jump here) ^
+ r1 = get_from_scope # resume sequence starts | use r2 is transferred to the entry block!
+ r2 = get_from_scope |
+ starts usual sequences |
+ ... |
+
+ The handler's use of r2 should be considered at the `r1 = get_from_scope` point.
+ Previously, we handled this edge case by treating op_resume specially in the liveness analysis [1].
+ To drop this workaround, we split the try-range so that it does not cover the resume sequence.
+
+ handler uses r2
+ try-range
+ label:(entry block can jump here)
+ r1 = get_from_scope # resume sequence starts
+ r2 = get_from_scope
+ starts usual sequences ^ try-range should start from here.
+ ... |
+
+ OK. Let's walk through a detailed example.
+
+ 1. First, there is the normal bytecode sequence. Here, | represents the offsets, and [] represents the bytecodes.
+
+ bytecodes | [ ] | [ ] | [ ] | [ ] | [ ] | [ ] |
+ try-range <----------------------------------->
+
+ 2. When we emit the op_yield in the bytecode generator, we carefully split the try-range.
+
+ bytecodes | [ ] | [ ] | [op_yield] | [ ] | [ ] | [ ] |
+ try-range <-----------> <----------------->
+
+ 3. In the generatorification phase, we insert the switch's jump targets and the save & resume sequences, and we drop the op_yield.
+
+ Insert save seq Insert resume seq
+ before op_yield. after op_yield's point.
+ v v
+ bytecodes | [ ] | [ ] | [op_yield] | [ ] | [ ] | [ ] |
+ try-range <-----------> ^ <----------------->
+ ^ |
+ Jump to here. Drop this op_yield.
+
+ 4. The final layout is the following.
+
+ bytecodes | [ ] | [ ][save seq][op_ret] | [resume seq] | [ ] | [ ] | [ ] |
+ try-range <-----------------------------> <---------------->
+ ^
+ Jump to here.
+
+ The rewriting done by the BytecodeRewriter is executed in a batch manner. Since these modifications change the basic blocks and the size of the unlinked instructions,
+ BytecodeRewriter also performs the offset adjustments for the UnlinkedCodeBlock. So this rewriting is performed on the BytecodeGraph rather than on a BytecodeBasicBlock.
+ The reason we take this design is simple: we don't want to create new basic blocks and opcodes for this early phase, as the DFG does. Instead, we perform the
+ modifications and adjustments on the unlinked instructions and the UnlinkedCodeBlock in place.
+
+ Bytecode rewriting functionality is offered by BytecodeRewriter. BytecodeRewriter allows us to insert bytecodes at arbitrary places, in place.
+ It treats the original bytecode offsets as labels, and you can insert bytecodes before and after these labels.
+ You can also insert jumps to arbitrary places; when you insert a jump, you specify its target with these labels.
+ These labels (original bytecode offsets) are automatically converted to the appropriate offsets by BytecodeRewriter.
+
+ After that phase, the data flow of the registers that the generator saves and resumes is explicitly represented by the get_from_scope and put_to_scope sequences,
+ the switch is inserted to represent the generator's actual control flow, and op_yield is removed. Since we use existing bytecodes (op_switch_imm, op_put_to_scope,
+ op_ret, and op_get_from_scope), no DFG or FTL changes are necessary. This patch also drops the data structures and implementations for the old generator:
+ the op_resume and op_save implementations and GeneratorFrame.
+
+ Note that this patch does not leverage the recent multiple-entrypoints support in B3. After this patch lands, we will submit a new patch that leverages multiple
+ entrypoints for the generator's resume and measures the performance gain.
+
+ Microbenchmarks related to generators show up to a 2.9x improvement.
+
+ Baseline Patched
+
+ generator-fib 102.0116+-3.2880 ^ 34.9670+-0.2221 ^ definitely 2.9174x faster
+ generator-sunspider-access-nsieve 5.8596+-0.0371 ^ 4.9051+-0.0720 ^ definitely 1.1946x faster
+ generator-with-several-types 332.1478+-4.2425 ^ 124.6642+-2.4826 ^ definitely 2.6643x faster
+
+ <geometric> 58.2998+-0.7758 ^ 27.7425+-0.2577 ^ definitely 2.1015x faster
+
+ In ES6SampleBench's Basic, we can observe a 41% improvement (MacBook Pro).
+
+ Baseline:
+ Geometric Mean Result: 133.55 ms +- 4.49 ms
+
+ Benchmark First Iteration Worst 2% Steady State
+ Air 54.03 ms +- 7.51 ms 29.06 ms +- 3.13 ms 2276.59 ms +- 61.17 ms
+ Basic 30.18 ms +- 1.86 ms 18.85 ms +- 0.45 ms 2851.16 ms +- 41.87 ms
+
+ Patched:
+ Geometric Mean Result: 121.78 ms +- 3.96 ms
+
+ Benchmark First Iteration Worst 2% Steady State
+ Air 52.09 ms +- 6.89 ms 29.59 ms +- 3.16 ms 2239.90 ms +- 54.60 ms
+ Basic 29.28 ms +- 1.46 ms 16.26 ms +- 0.66 ms 2025.15 ms +- 38.56 ms
+
+ [1]: https://bugs.webkit.org/show_bug.cgi?id=159281
+
+ * CMakeLists.txt:
+ * JavaScriptCore.xcodeproj/project.pbxproj:
+ * builtins/GeneratorPrototype.js:
+ (globalPrivate.generatorResume):
+ * bytecode/BytecodeBasicBlock.cpp:
+ (JSC::BytecodeBasicBlock::shrinkToFit):
+ (JSC::BytecodeBasicBlock::computeImpl):
+ (JSC::BytecodeBasicBlock::compute):
+ (JSC::isBranch): Deleted.
+ (JSC::isUnconditionalBranch): Deleted.
+ (JSC::isTerminal): Deleted.
+ (JSC::isThrow): Deleted.
+ (JSC::linkBlocks): Deleted.
+ (JSC::computeBytecodeBasicBlocks): Deleted.
+ * bytecode/BytecodeBasicBlock.h:
+ (JSC::BytecodeBasicBlock::isEntryBlock):
+ (JSC::BytecodeBasicBlock::isExitBlock):
+ (JSC::BytecodeBasicBlock::leaderOffset):
+ (JSC::BytecodeBasicBlock::totalLength):
+ (JSC::BytecodeBasicBlock::offsets):
+ (JSC::BytecodeBasicBlock::successors):
+ (JSC::BytecodeBasicBlock::index):
+ (JSC::BytecodeBasicBlock::addSuccessor):
+ (JSC::BytecodeBasicBlock::BytecodeBasicBlock):
+ (JSC::BytecodeBasicBlock::addLength):
+ (JSC::BytecodeBasicBlock::leaderBytecodeOffset): Deleted.
+ (JSC::BytecodeBasicBlock::totalBytecodeLength): Deleted.
+ (JSC::BytecodeBasicBlock::bytecodeOffsets): Deleted.
+ (JSC::BytecodeBasicBlock::addBytecodeLength): Deleted.
+ * bytecode/BytecodeGeneratorification.cpp: Added.
+ (JSC::BytecodeGeneratorification::BytecodeGeneratorification):
+ (JSC::BytecodeGeneratorification::graph):
+ (JSC::BytecodeGeneratorification::yields):
+ (JSC::BytecodeGeneratorification::enterPoint):
+ (JSC::BytecodeGeneratorification::storageForGeneratorLocal):
+ (JSC::GeneratorLivenessAnalysis::GeneratorLivenessAnalysis):
+ (JSC::GeneratorLivenessAnalysis::computeDefsForBytecodeOffset):
+ (JSC::GeneratorLivenessAnalysis::computeUsesForBytecodeOffset):
+ (JSC::GeneratorLivenessAnalysis::run):
+ (JSC::BytecodeGeneratorification::run):
+ (JSC::performGeneratorification):
+ * bytecode/BytecodeGeneratorification.h: Copied from Source/JavaScriptCore/bytecode/BytecodeLivenessAnalysisInlines.h.
+ * bytecode/BytecodeGraph.h: Added.
+ (JSC::BytecodeGraph::codeBlock):
+ (JSC::BytecodeGraph::instructions):
+ (JSC::BytecodeGraph::basicBlocksInReverseOrder):
+ (JSC::BytecodeGraph::blockContainsBytecodeOffset):
+ (JSC::BytecodeGraph::findBasicBlockForBytecodeOffset):
+ (JSC::BytecodeGraph::findBasicBlockWithLeaderOffset):
+ (JSC::BytecodeGraph::size):
+ (JSC::BytecodeGraph::at):
+ (JSC::BytecodeGraph::operator[]):
+ (JSC::BytecodeGraph::begin):
+ (JSC::BytecodeGraph::end):
+ (JSC::BytecodeGraph::first):
+ (JSC::BytecodeGraph::last):
+ (JSC::BytecodeGraph<Block>::BytecodeGraph):
+ * bytecode/BytecodeList.json:
+ * bytecode/BytecodeLivenessAnalysis.cpp:
+ (JSC::BytecodeLivenessAnalysis::BytecodeLivenessAnalysis):
+ (JSC::BytecodeLivenessAnalysis::computeDefsForBytecodeOffset):
+ (JSC::BytecodeLivenessAnalysis::computeUsesForBytecodeOffset):
+ (JSC::BytecodeLivenessAnalysis::getLivenessInfoAtBytecodeOffset):
+ (JSC::BytecodeLivenessAnalysis::computeFullLiveness):
+ (JSC::BytecodeLivenessAnalysis::computeKills):
+ (JSC::BytecodeLivenessAnalysis::dumpResults):
+ (JSC::BytecodeLivenessAnalysis::compute):
+ (JSC::isValidRegisterForLiveness): Deleted.
+ (JSC::getLeaderOffsetForBasicBlock): Deleted.
+ (JSC::findBasicBlockWithLeaderOffset): Deleted.
+ (JSC::blockContainsBytecodeOffset): Deleted.
+ (JSC::findBasicBlockForBytecodeOffset): Deleted.
+ (JSC::stepOverInstruction): Deleted.
+ (JSC::computeLocalLivenessForBytecodeOffset): Deleted.
+ (JSC::computeLocalLivenessForBlock): Deleted.
+ (JSC::BytecodeLivenessAnalysis::runLivenessFixpoint): Deleted.
+ * bytecode/BytecodeLivenessAnalysis.h:
+ * bytecode/BytecodeLivenessAnalysisInlines.h:
+ (JSC::isValidRegisterForLiveness):
+ (JSC::BytecodeLivenessPropagation<DerivedAnalysis>::stepOverInstruction):
+ (JSC::BytecodeLivenessPropagation<DerivedAnalysis>::computeLocalLivenessForBytecodeOffset):
+ (JSC::BytecodeLivenessPropagation<DerivedAnalysis>::computeLocalLivenessForBlock):
+ (JSC::BytecodeLivenessPropagation<DerivedAnalysis>::getLivenessInfoAtBytecodeOffset):
+ (JSC::BytecodeLivenessPropagation<DerivedAnalysis>::runLivenessFixpoint):
+ * bytecode/BytecodeRewriter.cpp: Added.
+ (JSC::BytecodeRewriter::applyModification):
+ (JSC::BytecodeRewriter::execute):
+ (JSC::BytecodeRewriter::adjustJumpTargetsInFragment):
+ (JSC::BytecodeRewriter::insertImpl):
+ (JSC::BytecodeRewriter::adjustJumpTarget):
+ * bytecode/BytecodeRewriter.h: Added.
+ (JSC::BytecodeRewriter::InsertionPoint::InsertionPoint):
+ (JSC::BytecodeRewriter::InsertionPoint::operator<):
+ (JSC::BytecodeRewriter::InsertionPoint::operator==):
+ (JSC::BytecodeRewriter::Insertion::length):
+ (JSC::BytecodeRewriter::Fragment::Fragment):
+ (JSC::BytecodeRewriter::Fragment::appendInstruction):
+ (JSC::BytecodeRewriter::BytecodeRewriter):
+ (JSC::BytecodeRewriter::insertFragmentBefore):
+ (JSC::BytecodeRewriter::insertFragmentAfter):
+ (JSC::BytecodeRewriter::removeBytecode):
+ (JSC::BytecodeRewriter::graph):
+ (JSC::BytecodeRewriter::adjustAbsoluteOffset):
+ (JSC::BytecodeRewriter::adjustJumpTarget):
+ (JSC::BytecodeRewriter::calculateDifference):
+ * bytecode/BytecodeUseDef.h:
+ (JSC::computeUsesForBytecodeOffset):
+ (JSC::computeDefsForBytecodeOffset):
+ * bytecode/CodeBlock.cpp:
+ (JSC::CodeBlock::dumpBytecode):
+ (JSC::CodeBlock::finishCreation):
+ (JSC::CodeBlock::handlerForIndex):
+ (JSC::CodeBlock::shrinkToFit):
+ (JSC::CodeBlock::valueProfileForBytecodeOffset):
+ (JSC::CodeBlock::livenessAnalysisSlow):
+ * bytecode/CodeBlock.h:
+ (JSC::CodeBlock::isConstantRegisterIndex):
+ (JSC::CodeBlock::livenessAnalysis):
+ (JSC::CodeBlock::liveCalleeLocalsAtYield): Deleted.
+ * bytecode/HandlerInfo.h:
+ (JSC::HandlerInfoBase::handlerForIndex):
+ * bytecode/Opcode.h:
+ (JSC::isBranch):
+ (JSC::isUnconditionalBranch):
+ (JSC::isTerminal):
+ (JSC::isThrow):
+ * bytecode/PreciseJumpTargets.cpp:
+ (JSC::getJumpTargetsForBytecodeOffset):
+ (JSC::computePreciseJumpTargetsInternal):
+ (JSC::computePreciseJumpTargets):
+ (JSC::recomputePreciseJumpTargets):
+ (JSC::findJumpTargetsForBytecodeOffset):
+ * bytecode/PreciseJumpTargets.h:
+ * bytecode/PreciseJumpTargetsInlines.h: Added.
+ (JSC::extractStoredJumpTargetsForBytecodeOffset):
+ * bytecode/UnlinkedCodeBlock.cpp:
+ (JSC::UnlinkedCodeBlock::handlerForBytecodeOffset):
+ (JSC::UnlinkedCodeBlock::handlerForIndex):
+ (JSC::UnlinkedCodeBlock::applyModification):
+ * bytecode/UnlinkedCodeBlock.h:
+ (JSC::UnlinkedStringJumpTable::offsetForValue):
+ (JSC::UnlinkedCodeBlock::numCalleeLocals):
+ * bytecode/VirtualRegister.h:
+ * bytecompiler/BytecodeGenerator.cpp:
+ (JSC::BytecodeGenerator::generate):
+ (JSC::BytecodeGenerator::BytecodeGenerator):
+ (JSC::BytecodeGenerator::emitComplexPopScopes):
+ (JSC::prepareJumpTableForStringSwitch):
+ (JSC::BytecodeGenerator::emitYieldPoint):
+ (JSC::BytecodeGenerator::emitSave): Deleted.
+ (JSC::BytecodeGenerator::emitResume): Deleted.
+ (JSC::BytecodeGenerator::emitGeneratorStateLabel): Deleted.
+ (JSC::BytecodeGenerator::beginGenerator): Deleted.
+ (JSC::BytecodeGenerator::endGenerator): Deleted.
+ * bytecompiler/BytecodeGenerator.h:
+ (JSC::BytecodeGenerator::generatorStateRegister):
+ (JSC::BytecodeGenerator::generatorValueRegister):
+ (JSC::BytecodeGenerator::generatorResumeModeRegister):
+ (JSC::BytecodeGenerator::generatorFrameRegister):
+ * bytecompiler/NodesCodegen.cpp:
+ (JSC::FunctionNode::emitBytecode):
+ * dfg/DFGOperations.cpp:
+ * interpreter/Interpreter.cpp:
+ (JSC::findExceptionHandler):
+ (JSC::GetCatchHandlerFunctor::operator()):
+ (JSC::UnwindFunctor::operator()):
+ * interpreter/Interpreter.h:
+ * interpreter/InterpreterInlines.h: Copied from Source/JavaScriptCore/bytecode/PreciseJumpTargets.h.
+ (JSC::Interpreter::getOpcodeID):
+ * jit/JIT.cpp:
+ (JSC::JIT::privateCompileMainPass):
+ * jit/JIT.h:
+ * jit/JITOpcodes.cpp:
+ (JSC::JIT::emit_op_save): Deleted.
+ (JSC::JIT::emit_op_resume): Deleted.
+ * llint/LowLevelInterpreter.asm:
+ * parser/Parser.cpp:
+ (JSC::Parser<LexerType>::parseInner):
+ (JSC::Parser<LexerType>::parseGeneratorFunctionSourceElements):
+ (JSC::Parser<LexerType>::createGeneratorParameters):
+ * parser/Parser.h:
+ * runtime/CommonSlowPaths.cpp:
+ (JSC::SLOW_PATH_DECL): Deleted.
+ * runtime/CommonSlowPaths.h:
+ * runtime/GeneratorFrame.cpp: Removed.
+ (JSC::GeneratorFrame::GeneratorFrame): Deleted.
+ (JSC::GeneratorFrame::finishCreation): Deleted.
+ (JSC::GeneratorFrame::createStructure): Deleted.
+ (JSC::GeneratorFrame::create): Deleted.
+ (JSC::GeneratorFrame::save): Deleted.
+ (JSC::GeneratorFrame::resume): Deleted.
+ (JSC::GeneratorFrame::visitChildren): Deleted.
+ * runtime/GeneratorFrame.h: Removed.
+ (JSC::GeneratorFrame::locals): Deleted.
+ (JSC::GeneratorFrame::localAt): Deleted.
+ (JSC::GeneratorFrame::offsetOfLocals): Deleted.
+ (JSC::GeneratorFrame::allocationSizeForLocals): Deleted.
+ * runtime/JSGeneratorFunction.h:
+ * runtime/VM.cpp:
+ (JSC::VM::VM):
+ * runtime/VM.h:
+
2016-08-25 JF Bastien <jfbastien@apple.com>
TryGetById should have a ValueProfile so that it can predict its output type
53F40E8D1D5901F20099A1B6 /* WASMParser.h in Headers */ = {isa = PBXBuildFile; fileRef = 53F40E8C1D5901F20099A1B6 /* WASMParser.h */; };
53F40E8F1D5902820099A1B6 /* WASMB3IRGenerator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 53F40E8E1D5902820099A1B6 /* WASMB3IRGenerator.cpp */; };
53F40E911D5903020099A1B6 /* WASMOps.h in Headers */ = {isa = PBXBuildFile; fileRef = 53F40E901D5903020099A1B6 /* WASMOps.h */; };
- 53F40E931D5A4AB30099A1B6 /* WASMB3IRGenerator.h in Headers */ = {isa = PBXBuildFile; fileRef = 53F40E921D5A4AB30099A1B6 /* WASMB3IRGenerator.h */; };
+ 53F40E931D5A4AB30099A1B6 /* WASMB3IRGenerator.h in Headers */ = {isa = PBXBuildFile; fileRef = 53F40E921D5A4AB30099A1B6 /* WASMB3IRGenerator.h */; };
53F40E951D5A7AEF0099A1B6 /* WASMModuleParser.h in Headers */ = {isa = PBXBuildFile; fileRef = 53F40E941D5A7AEF0099A1B6 /* WASMModuleParser.h */; };
53F40E971D5A7BEC0099A1B6 /* WASMModuleParser.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 53F40E961D5A7BEC0099A1B6 /* WASMModuleParser.cpp */; };
53F6BF6D1C3F060A00F41E5D /* InternalFunctionAllocationProfile.h in Headers */ = {isa = PBXBuildFile; fileRef = 53F6BF6C1C3F060A00F41E5D /* InternalFunctionAllocationProfile.h */; settings = {ATTRIBUTES = (Private, ); }; };
709FB86C1AE335C60039D069 /* WeakSetPrototype.h in Headers */ = {isa = PBXBuildFile; fileRef = 709FB8661AE335C60039D069 /* WeakSetPrototype.h */; settings = {ATTRIBUTES = (Private, ); }; };
70B0A9D11A9B66460001306A /* RuntimeFlags.h in Headers */ = {isa = PBXBuildFile; fileRef = 70B0A9D01A9B66200001306A /* RuntimeFlags.h */; settings = {ATTRIBUTES = (Private, ); }; };
70B791911C024A13002481E2 /* SourceCodeKey.h in Headers */ = {isa = PBXBuildFile; fileRef = 70B7918E1C0244C9002481E2 /* SourceCodeKey.h */; settings = {ATTRIBUTES = (Private, ); }; };
- 70B791921C024A23002481E2 /* GeneratorFrame.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 70B791831C024432002481E2 /* GeneratorFrame.cpp */; };
- 70B791931C024A28002481E2 /* GeneratorFrame.h in Headers */ = {isa = PBXBuildFile; fileRef = 70B791841C024432002481E2 /* GeneratorFrame.h */; settings = {ATTRIBUTES = (Private, ); }; };
70B791941C024A28002481E2 /* GeneratorFunctionConstructor.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 70B791851C024432002481E2 /* GeneratorFunctionConstructor.cpp */; };
70B791951C024A28002481E2 /* GeneratorFunctionConstructor.h in Headers */ = {isa = PBXBuildFile; fileRef = 70B791861C024432002481E2 /* GeneratorFunctionConstructor.h */; settings = {ATTRIBUTES = (Private, ); }; };
70B791961C024A28002481E2 /* GeneratorFunctionPrototype.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 70B791871C024432002481E2 /* GeneratorFunctionPrototype.cpp */; };
E18E3A590DF9278C00D90B34 /* VM.cpp in Sources */ = {isa = PBXBuildFile; fileRef = E18E3A570DF9278C00D90B34 /* VM.cpp */; };
E318CBC01B8AEF5100A2929D /* JSModuleNamespaceObject.cpp in Sources */ = {isa = PBXBuildFile; fileRef = E318CBBE1B8AEF5100A2929D /* JSModuleNamespaceObject.cpp */; };
E318CBC11B8AEF5100A2929D /* JSModuleNamespaceObject.h in Headers */ = {isa = PBXBuildFile; fileRef = E318CBBF1B8AEF5100A2929D /* JSModuleNamespaceObject.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ E328DAE71D38D004001A2529 /* BytecodeGeneratorification.cpp in Sources */ = {isa = PBXBuildFile; fileRef = E3D264261D38C042000BE174 /* BytecodeGeneratorification.cpp */; };
+ E328DAE81D38D005001A2529 /* BytecodeGeneratorification.h in Headers */ = {isa = PBXBuildFile; fileRef = E3D264271D38C042000BE174 /* BytecodeGeneratorification.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ E328DAE91D38D005001A2529 /* BytecodeGraph.h in Headers */ = {isa = PBXBuildFile; fileRef = E3D264281D38C042000BE174 /* BytecodeGraph.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ E328DAEA1D38D005001A2529 /* BytecodeRewriter.cpp in Sources */ = {isa = PBXBuildFile; fileRef = E3D264291D38C042000BE174 /* BytecodeRewriter.cpp */; };
+ E328DAEB1D38D005001A2529 /* BytecodeRewriter.h in Headers */ = {isa = PBXBuildFile; fileRef = E3D2642A1D38C042000BE174 /* BytecodeRewriter.h */; settings = {ATTRIBUTES = (Private, ); }; };
E33637A51B63220200EE0840 /* ReflectObject.cpp in Sources */ = {isa = PBXBuildFile; fileRef = E33637A31B63220200EE0840 /* ReflectObject.cpp */; };
E33637A61B63220200EE0840 /* ReflectObject.h in Headers */ = {isa = PBXBuildFile; fileRef = E33637A41B63220200EE0840 /* ReflectObject.h */; settings = {ATTRIBUTES = (Private, ); }; };
E33B3E261B7ABD750048DB2E /* InspectorInstrumentationObject.lut.h in Headers */ = {isa = PBXBuildFile; fileRef = E33B3E251B7ABD750048DB2E /* InspectorInstrumentationObject.lut.h */; };
E3794E751B77EB97005543AE /* ModuleAnalyzer.cpp in Sources */ = {isa = PBXBuildFile; fileRef = E3794E731B77EB97005543AE /* ModuleAnalyzer.cpp */; };
E3794E761B77EB97005543AE /* ModuleAnalyzer.h in Headers */ = {isa = PBXBuildFile; fileRef = E3794E741B77EB97005543AE /* ModuleAnalyzer.h */; settings = {ATTRIBUTES = (Private, ); }; };
E3963CEE1B73F75000EB4CE5 /* NodesAnalyzeModule.cpp in Sources */ = {isa = PBXBuildFile; fileRef = E3963CEC1B73F75000EB4CE5 /* NodesAnalyzeModule.cpp */; };
+ E39D45F51D39005600B3B377 /* InterpreterInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = E39D9D841D39000600667282 /* InterpreterInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
E39DA4A61B7E8B7C0084F33A /* JSModuleRecord.cpp in Sources */ = {isa = PBXBuildFile; fileRef = E39DA4A41B7E8B7C0084F33A /* JSModuleRecord.cpp */; };
E39DA4A71B7E8B7C0084F33A /* JSModuleRecord.h in Headers */ = {isa = PBXBuildFile; fileRef = E39DA4A51B7E8B7C0084F33A /* JSModuleRecord.h */; settings = {ATTRIBUTES = (Private, ); }; };
+ E3A421431D6F58930007C617 /* PreciseJumpTargetsInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = E3A421421D6F588F0007C617 /* PreciseJumpTargetsInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
E3D239C81B829C1C00BBEF67 /* JSModuleEnvironment.cpp in Sources */ = {isa = PBXBuildFile; fileRef = E3D239C61B829C1C00BBEF67 /* JSModuleEnvironment.cpp */; };
E3D239C91B829C1C00BBEF67 /* JSModuleEnvironment.h in Headers */ = {isa = PBXBuildFile; fileRef = E3D239C71B829C1C00BBEF67 /* JSModuleEnvironment.h */; settings = {ATTRIBUTES = (Private, ); }; };
E3EF88741B66DF23003F26CB /* JSPropertyNameIterator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = E3EF88721B66DF23003F26CB /* JSPropertyNameIterator.cpp */; };
53F40E8C1D5901F20099A1B6 /* WASMParser.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WASMParser.h; sourceTree = "<group>"; };
53F40E8E1D5902820099A1B6 /* WASMB3IRGenerator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = WASMB3IRGenerator.cpp; sourceTree = "<group>"; };
53F40E901D5903020099A1B6 /* WASMOps.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WASMOps.h; sourceTree = "<group>"; };
- 53F40E921D5A4AB30099A1B6 /* WASMB3IRGenerator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WASMB3IRGenerator.h; sourceTree = "<group>"; };
+ 53F40E921D5A4AB30099A1B6 /* WASMB3IRGenerator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WASMB3IRGenerator.h; sourceTree = "<group>"; };
53F40E941D5A7AEF0099A1B6 /* WASMModuleParser.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WASMModuleParser.h; sourceTree = "<group>"; };
53F40E961D5A7BEC0099A1B6 /* WASMModuleParser.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = WASMModuleParser.cpp; sourceTree = "<group>"; };
53F6BF6C1C3F060A00F41E5D /* InternalFunctionAllocationProfile.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = InternalFunctionAllocationProfile.h; sourceTree = "<group>"; };
709FB8651AE335C60039D069 /* WeakSetPrototype.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = WeakSetPrototype.cpp; sourceTree = "<group>"; };
709FB8661AE335C60039D069 /* WeakSetPrototype.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WeakSetPrototype.h; sourceTree = "<group>"; };
70B0A9D01A9B66200001306A /* RuntimeFlags.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = RuntimeFlags.h; sourceTree = "<group>"; };
- 70B791831C024432002481E2 /* GeneratorFrame.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = GeneratorFrame.cpp; sourceTree = "<group>"; };
- 70B791841C024432002481E2 /* GeneratorFrame.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GeneratorFrame.h; sourceTree = "<group>"; };
70B791851C024432002481E2 /* GeneratorFunctionConstructor.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = GeneratorFunctionConstructor.cpp; sourceTree = "<group>"; };
70B791861C024432002481E2 /* GeneratorFunctionConstructor.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = GeneratorFunctionConstructor.h; sourceTree = "<group>"; };
70B791871C024432002481E2 /* GeneratorFunctionPrototype.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = GeneratorFunctionPrototype.cpp; sourceTree = "<group>"; };
E3794E731B77EB97005543AE /* ModuleAnalyzer.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ModuleAnalyzer.cpp; sourceTree = "<group>"; };
E3794E741B77EB97005543AE /* ModuleAnalyzer.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ModuleAnalyzer.h; sourceTree = "<group>"; };
E3963CEC1B73F75000EB4CE5 /* NodesAnalyzeModule.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = NodesAnalyzeModule.cpp; sourceTree = "<group>"; };
+ E39D9D841D39000600667282 /* InterpreterInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = InterpreterInlines.h; sourceTree = "<group>"; };
E39DA4A41B7E8B7C0084F33A /* JSModuleRecord.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSModuleRecord.cpp; sourceTree = "<group>"; };
E39DA4A51B7E8B7C0084F33A /* JSModuleRecord.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSModuleRecord.h; sourceTree = "<group>"; };
+ E3A421421D6F588F0007C617 /* PreciseJumpTargetsInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = PreciseJumpTargetsInlines.h; sourceTree = "<group>"; };
E3D239C61B829C1C00BBEF67 /* JSModuleEnvironment.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSModuleEnvironment.cpp; sourceTree = "<group>"; };
E3D239C71B829C1C00BBEF67 /* JSModuleEnvironment.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSModuleEnvironment.h; sourceTree = "<group>"; };
+ E3D264261D38C042000BE174 /* BytecodeGeneratorification.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = BytecodeGeneratorification.cpp; sourceTree = "<group>"; };
+ E3D264271D38C042000BE174 /* BytecodeGeneratorification.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BytecodeGeneratorification.h; sourceTree = "<group>"; };
+ E3D264281D38C042000BE174 /* BytecodeGraph.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BytecodeGraph.h; sourceTree = "<group>"; };
+ E3D264291D38C042000BE174 /* BytecodeRewriter.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = BytecodeRewriter.cpp; sourceTree = "<group>"; };
+ E3D2642A1D38C042000BE174 /* BytecodeRewriter.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BytecodeRewriter.h; sourceTree = "<group>"; };
E3EF88721B66DF23003F26CB /* JSPropertyNameIterator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSPropertyNameIterator.cpp; sourceTree = "<group>"; };
E3EF88731B66DF23003F26CB /* JSPropertyNameIterator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSPropertyNameIterator.h; sourceTree = "<group>"; };
E49DC14912EF261A00184A1F /* SourceProviderCacheItem.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = SourceProviderCacheItem.h; sourceTree = "<group>"; };
1429D77A0ED20D7300B89619 /* interpreter */ = {
isa = PBXGroup;
children = (
+ E39D9D841D39000600667282 /* InterpreterInlines.h */,
0F55F0F114D1063600AC7649 /* AbstractPC.cpp */,
0F55F0F214D1063600AC7649 /* AbstractPC.h */,
1429D85B0ED218E900B89619 /* CLoopStack.cpp */,
53F40E901D5903020099A1B6 /* WASMOps.h */,
7BC547D21B69599B00959B58 /* WASMFormat.h */,
53F40E8E1D5902820099A1B6 /* WASMB3IRGenerator.cpp */,
- 53F40E921D5A4AB30099A1B6 /* WASMB3IRGenerator.h */,
+ 53F40E921D5A4AB30099A1B6 /* WASMB3IRGenerator.h */,
53F40E8A1D5901BB0099A1B6 /* WASMFunctionParser.h */,
53F40E961D5A7BEC0099A1B6 /* WASMModuleParser.cpp */,
53F40E941D5A7AEF0099A1B6 /* WASMModuleParser.h */,
F692A85D0255597D01FF60F7 /* FunctionPrototype.h */,
62D2D38D1ADF103F000206C1 /* FunctionRareData.cpp */,
62D2D38E1ADF103F000206C1 /* FunctionRareData.h */,
- 70B791831C024432002481E2 /* GeneratorFrame.cpp */,
- 70B791841C024432002481E2 /* GeneratorFrame.h */,
70B791851C024432002481E2 /* GeneratorFunctionConstructor.cpp */,
70B791861C024432002481E2 /* GeneratorFunctionConstructor.h */,
70B791871C024432002481E2 /* GeneratorFunctionPrototype.cpp */,
969A078F0ED1D3AE00F1F681 /* bytecode */ = {
isa = PBXGroup;
children = (
+ E3A421421D6F588F0007C617 /* PreciseJumpTargetsInlines.h */,
+ E3D264261D38C042000BE174 /* BytecodeGeneratorification.cpp */,
+ E3D264271D38C042000BE174 /* BytecodeGeneratorification.h */,
+ E3D264281D38C042000BE174 /* BytecodeGraph.h */,
+ E3D264291D38C042000BE174 /* BytecodeRewriter.cpp */,
+ E3D2642A1D38C042000BE174 /* BytecodeRewriter.h */,
5370B4F31BF25EA2005C40FC /* AdaptiveInferredPropertyValueWatchpointBase.cpp */,
5370B4F41BF25EA2005C40FC /* AdaptiveInferredPropertyValueWatchpointBase.h */,
0F8335B41639C1E3001443B5 /* ArrayAllocationProfile.cpp */,
0FEC852D1BDACDAC0080FF74 /* B3ProcedureInlines.h in Headers */,
0FEC85BD1BE1462F0080FF74 /* B3ReduceStrength.h in Headers */,
0FEC85311BDACDAC0080FF74 /* B3StackmapSpecial.h in Headers */,
+ E328DAE81D38D005001A2529 /* BytecodeGeneratorification.h in Headers */,
0FEC85351BDACDAC0080FF74 /* B3SlotBaseValue.h in Headers */,
0FEC85361BDACDAC0080FF74 /* B3SuccessorCollection.h in Headers */,
0FEC85381BDACDAC0080FF74 /* B3SwitchCase.h in Headers */,
99DA00A91BD5993100F4575C /* builtins_generate_separate_header.py in Headers */,
0F338E111BF0276C0013C88F /* B3OpaqueByproduct.h in Headers */,
FEA0C4031CDD7D1D00481991 /* FunctionWhitelist.h in Headers */,
+ E3A421431D6F58930007C617 /* PreciseJumpTargetsInlines.h in Headers */,
99DA00AA1BD5993100F4575C /* builtins_generate_separate_implementation.py in Headers */,
99DA00A31BD5993100F4575C /* builtins_generator.py in Headers */,
412952781D2CF6BC00E78B89 /* builtins_generate_internals_wrapper_implementation.py in Headers */,
A737810E1799EA2E00817533 /* DFGNaturalLoops.h in Headers */,
86ECA3EA132DEF1C002B2AD7 /* DFGNode.h in Headers */,
0FFB921B16D02F010055A5DB /* DFGNodeAllocator.h in Headers */,
- 70B791931C024A28002481E2 /* GeneratorFrame.h in Headers */,
0FA581BB150E953000B9A2D9 /* DFGNodeFlags.h in Headers */,
0F300B7818AB051100A6D72E /* DFGNodeOrigin.h in Headers */,
0FA581BC150E953000B9A2D9 /* DFGNodeType.h in Headers */,
0FD8A32617D51F5700CA2C40 /* DFGOSREntrypointCreationPhase.h in Headers */,
0FC0976A1468A6F700CF2442 /* DFGOSRExit.h in Headers */,
0F235BEC17178E7300690C7F /* DFGOSRExitBase.h in Headers */,
+ E39D45F51D39005600B3B377 /* InterpreterInlines.h in Headers */,
0FFB921C16D02F110055A5DB /* DFGOSRExitCompilationInfo.h in Headers */,
0FC0977114693AF500CF2442 /* DFGOSRExitCompiler.h in Headers */,
0F7025AA1714B0FC00382C0E /* DFGOSRExitCompilerCommon.h in Headers */,
E33B3E261B7ABD750048DB2E /* InspectorInstrumentationObject.lut.h in Headers */,
A532438C18568335002ED692 /* InspectorProtocolObjects.h in Headers */,
A55D93AC18514F7900400DED /* InspectorProtocolTypes.h in Headers */,
+ E328DAEB1D38D005001A2529 /* BytecodeRewriter.h in Headers */,
A50E4B6218809DD50068A46D /* InspectorRuntimeAgent.h in Headers */,
A593CF831840377100BFCE27 /* InspectorValues.h in Headers */,
969A07990ED1D3AE00F1F681 /* Instruction.h in Headers */,
E33F50811B8429A400413856 /* JSInternalPromise.h in Headers */,
0F61832A1C45BF070072450B /* AirCCallingConvention.h in Headers */,
E33F50791B84225700413856 /* JSInternalPromiseConstructor.h in Headers */,
+ E328DAE91D38D005001A2529 /* BytecodeGraph.h in Headers */,
E33F50871B8449EF00413856 /* JSInternalPromiseConstructor.lut.h in Headers */,
E33F50851B8437A000413856 /* JSInternalPromiseDeferred.h in Headers */,
E33F50751B8421C000413856 /* JSInternalPromisePrototype.h in Headers */,
0FB5467714F59B5C002C2989 /* LazyOperandValueProfile.h in Headers */,
99DA00B01BD5994E00F4575C /* lazywriter.py in Headers */,
BC18C4310E16F5CD00B34460 /* Lexer.h in Headers */,
- 53F40E931D5A4AB30099A1B6 /* WASMB3IRGenerator.h in Headers */,
+ 53F40E931D5A4AB30099A1B6 /* WASMB3IRGenerator.h in Headers */,
BC18C52E0E16FCE100B34460 /* Lexer.lut.h in Headers */,
DCF3D56B1CD29472003D5C65 /* LazyClassStructureInlines.h in Headers */,
FE187A021BFBE5610038BBCA /* JITMulGenerator.h in Headers */,
147F39BD107EC37600427A48 /* ArgList.cpp in Sources */,
0F743BAA16B88249009F9277 /* ARM64Disassembler.cpp in Sources */,
86D3B2C310156BDE002865E7 /* ARMAssembler.cpp in Sources */,
+ E328DAE71D38D004001A2529 /* BytecodeGeneratorification.cpp in Sources */,
65C02850171795E200351E35 /* ARMv7Disassembler.cpp in Sources */,
65C0285C1717966800351E35 /* ARMv7DOpcode.cpp in Sources */,
0F8335B71639C1E6001443B5 /* ArrayAllocationProfile.cpp in Sources */,
0F25F1B1181635F300522F39 /* FTLSlowPathCall.cpp in Sources */,
0F338DF11BE93AD10013C88F /* B3StackmapValue.cpp in Sources */,
0F25F1B3181635F300522F39 /* FTLSlowPathCallKey.cpp in Sources */,
+ E328DAEA1D38D005001A2529 /* BytecodeRewriter.cpp in Sources */,
4319DA031C1BE40A001D260B /* B3LowerMacrosAfterOptimizations.cpp in Sources */,
0FEA0A161706BB9000BB722C /* FTLState.cpp in Sources */,
0F235BE117178E1C00690C7F /* FTLThunks.cpp in Sources */,
79EE0BFF1B4AFB85000385C9 /* VariableEnvironment.cpp in Sources */,
0F6C73501AC9F99F00BE1682 /* VariableWriteFireDetail.cpp in Sources */,
0FE0502C1AA9095600D33B33 /* VarOffset.cpp in Sources */,
- 70B791921C024A23002481E2 /* GeneratorFrame.cpp in Sources */,
0F20C2591A8013AB00DA3229 /* VirtualRegister.cpp in Sources */,
E18E3A590DF9278C00D90B34 /* VM.cpp in Sources */,
FE5932A7183C5A2600A1ECCC /* VMEntryScope.cpp in Sources */,
} else {
try {
generator.@generatorState = @GeneratorStateExecuting;
- value = generator.@generatorNext.@call(generator.@generatorThis, generator, state, sentValue, resumeMode);
+ value = generator.@generatorNext.@call(generator.@generatorThis, generator, state, sentValue, resumeMode, generator.@generatorFrame);
if (generator.@generatorState === @GeneratorStateExecuting) {
generator.@generatorState = @GeneratorStateCompleted;
done = true;
#include "BytecodeBasicBlock.h"
#include "CodeBlock.h"
+#include "InterpreterInlines.h"
#include "JSCInlines.h"
#include "PreciseJumpTargets.h"
void BytecodeBasicBlock::shrinkToFit()
{
- m_bytecodeOffsets.shrinkToFit();
+ m_offsets.shrinkToFit();
m_successors.shrinkToFit();
}
-static bool isBranch(OpcodeID opcodeID)
-{
- switch (opcodeID) {
- case op_jmp:
- case op_jtrue:
- case op_jfalse:
- case op_jeq_null:
- case op_jneq_null:
- case op_jneq_ptr:
- case op_jless:
- case op_jlesseq:
- case op_jgreater:
- case op_jgreatereq:
- case op_jnless:
- case op_jnlesseq:
- case op_jngreater:
- case op_jngreatereq:
- case op_switch_imm:
- case op_switch_char:
- case op_switch_string:
- case op_save:
- return true;
- default:
- return false;
- }
-}
-
-static bool isUnconditionalBranch(OpcodeID opcodeID)
-{
- switch (opcodeID) {
- case op_jmp:
- return true;
- default:
- return false;
- }
-}
-
-static bool isTerminal(OpcodeID opcodeID)
-{
- switch (opcodeID) {
- case op_ret:
- case op_end:
- return true;
- default:
- return false;
- }
-}
-
-static bool isThrow(OpcodeID opcodeID)
-{
- switch (opcodeID) {
- case op_throw:
- case op_throw_static_error:
- return true;
- default:
- return false;
- }
-}
-
static bool isJumpTarget(OpcodeID opcodeID, const Vector<unsigned, 32>& jumpTargets, unsigned bytecodeOffset)
{
if (opcodeID == op_catch)
return std::binary_search(jumpTargets.begin(), jumpTargets.end(), bytecodeOffset);
}
-static void linkBlocks(BytecodeBasicBlock* predecessor, BytecodeBasicBlock* successor)
-{
- predecessor->addSuccessor(successor);
-}
-
-void computeBytecodeBasicBlocks(CodeBlock* codeBlock, Vector<std::unique_ptr<BytecodeBasicBlock>>& basicBlocks)
+template<typename Block, typename Instruction>
+void BytecodeBasicBlock::computeImpl(Block* codeBlock, Instruction* instructionsBegin, unsigned instructionCount, Vector<std::unique_ptr<BytecodeBasicBlock>>& basicBlocks)
{
Vector<unsigned, 32> jumpTargets;
- computePreciseJumpTargets(codeBlock, jumpTargets);
+ computePreciseJumpTargets(codeBlock, instructionsBegin, instructionCount, jumpTargets);
+
+ auto appendBlock = [&] (std::unique_ptr<BytecodeBasicBlock>&& block) {
+ block->m_index = basicBlocks.size();
+ basicBlocks.append(WTFMove(block));
+ };
+
+ auto linkBlocks = [&] (BytecodeBasicBlock* from, BytecodeBasicBlock* to) {
+ from->addSuccessor(to);
+ };
// Create the entry and exit basic blocks.
basicBlocks.reserveCapacity(jumpTargets.size() + 2);
auto firstBlock = std::make_unique<BytecodeBasicBlock>(0, 0);
linkBlocks(entry.get(), firstBlock.get());
- basicBlocks.append(WTFMove(entry));
+ appendBlock(WTFMove(entry));
BytecodeBasicBlock* current = firstBlock.get();
- basicBlocks.append(WTFMove(firstBlock));
+ appendBlock(WTFMove(firstBlock));
auto exit = std::make_unique<BytecodeBasicBlock>(BytecodeBasicBlock::ExitBlock);
bool nextInstructionIsLeader = false;
Interpreter* interpreter = codeBlock->vm()->interpreter;
- Instruction* instructionsBegin = codeBlock->instructions().begin();
- unsigned instructionCount = codeBlock->instructions().size();
for (unsigned bytecodeOffset = 0; bytecodeOffset < instructionCount;) {
- OpcodeID opcodeID = interpreter->getOpcodeID(instructionsBegin[bytecodeOffset].u.opcode);
+ OpcodeID opcodeID = interpreter->getOpcodeID(instructionsBegin[bytecodeOffset]);
unsigned opcodeLength = opcodeLengths[opcodeID];
bool createdBlock = false;
if (isJumpTarget(opcodeID, jumpTargets, bytecodeOffset) || nextInstructionIsLeader) {
auto newBlock = std::make_unique<BytecodeBasicBlock>(bytecodeOffset, opcodeLength);
current = newBlock.get();
- basicBlocks.append(WTFMove(newBlock));
+ appendBlock(WTFMove(newBlock));
createdBlock = true;
nextInstructionIsLeader = false;
bytecodeOffset += opcodeLength;
continue;
// Otherwise, just add to the length of the current block.
- current->addBytecodeLength(opcodeLength);
+ current->addLength(opcodeLength);
bytecodeOffset += opcodeLength;
}
continue;
bool fallsThrough = true;
- for (unsigned bytecodeOffset = block->leaderBytecodeOffset(); bytecodeOffset < block->leaderBytecodeOffset() + block->totalBytecodeLength();) {
- const Instruction& currentInstruction = instructionsBegin[bytecodeOffset];
- OpcodeID opcodeID = interpreter->getOpcodeID(currentInstruction.u.opcode);
+ for (unsigned bytecodeOffset = block->leaderOffset(); bytecodeOffset < block->leaderOffset() + block->totalLength();) {
+ OpcodeID opcodeID = interpreter->getOpcodeID(instructionsBegin[bytecodeOffset]);
unsigned opcodeLength = opcodeLengths[opcodeID];
// If we found a terminal bytecode, link to the exit block.
if (isTerminal(opcodeID)) {
- ASSERT(bytecodeOffset + opcodeLength == block->leaderBytecodeOffset() + block->totalBytecodeLength());
+ ASSERT(bytecodeOffset + opcodeLength == block->leaderOffset() + block->totalLength());
linkBlocks(block, exit.get());
fallsThrough = false;
break;
// If there isn't one, treat this throw as a terminal. This is true even if we have a finally
// block because the finally block will create its own catch, which will generate a HandlerInfo.
if (isThrow(opcodeID)) {
- ASSERT(bytecodeOffset + opcodeLength == block->leaderBytecodeOffset() + block->totalBytecodeLength());
- HandlerInfo* handler = codeBlock->handlerForBytecodeOffset(bytecodeOffset);
+ ASSERT(bytecodeOffset + opcodeLength == block->leaderOffset() + block->totalLength());
+ auto* handler = codeBlock->handlerForBytecodeOffset(bytecodeOffset);
fallsThrough = false;
if (!handler) {
linkBlocks(block, exit.get());
}
for (unsigned i = 0; i < basicBlocks.size(); i++) {
BytecodeBasicBlock* otherBlock = basicBlocks[i].get();
- if (handler->target == otherBlock->leaderBytecodeOffset()) {
+ if (handler->target == otherBlock->leaderOffset()) {
linkBlocks(block, otherBlock);
break;
}
// If we found a branch, link to the block(s) that we jump to.
if (isBranch(opcodeID)) {
- ASSERT(bytecodeOffset + opcodeLength == block->leaderBytecodeOffset() + block->totalBytecodeLength());
+ ASSERT(bytecodeOffset + opcodeLength == block->leaderOffset() + block->totalLength());
Vector<unsigned, 1> bytecodeOffsetsJumpedTo;
- findJumpTargetsForBytecodeOffset(codeBlock, bytecodeOffset, bytecodeOffsetsJumpedTo);
+ findJumpTargetsForBytecodeOffset(codeBlock, instructionsBegin, bytecodeOffset, bytecodeOffsetsJumpedTo);
for (unsigned i = 0; i < basicBlocks.size(); i++) {
BytecodeBasicBlock* otherBlock = basicBlocks[i].get();
- if (bytecodeOffsetsJumpedTo.contains(otherBlock->leaderBytecodeOffset()))
+ if (bytecodeOffsetsJumpedTo.contains(otherBlock->leaderOffset()))
linkBlocks(block, otherBlock);
}
}
}
- basicBlocks.append(WTFMove(exit));
+ appendBlock(WTFMove(exit));
for (auto& basicBlock : basicBlocks)
basicBlock->shrinkToFit();
}
+void BytecodeBasicBlock::compute(CodeBlock* codeBlock, Instruction* instructionsBegin, unsigned instructionCount, Vector<std::unique_ptr<BytecodeBasicBlock>>& basicBlocks)
+{
+ computeImpl(codeBlock, instructionsBegin, instructionCount, basicBlocks);
+}
+
+void BytecodeBasicBlock::compute(UnlinkedCodeBlock* codeBlock, UnlinkedInstruction* instructionsBegin, unsigned instructionCount, Vector<std::unique_ptr<BytecodeBasicBlock>>& basicBlocks)
+{
+ BytecodeBasicBlock::computeImpl(codeBlock, instructionsBegin, instructionCount, basicBlocks);
+}
+
} // namespace JSC
namespace JSC {
class CodeBlock;
+class UnlinkedCodeBlock;
+struct Instruction;
+struct UnlinkedInstruction;
class BytecodeBasicBlock {
WTF_MAKE_FAST_ALLOCATED;
BytecodeBasicBlock(SpecialBlockType);
void shrinkToFit();
- bool isEntryBlock() { return !m_leaderBytecodeOffset && !m_totalBytecodeLength; }
- bool isExitBlock() { return m_leaderBytecodeOffset == UINT_MAX && m_totalBytecodeLength == UINT_MAX; }
+ bool isEntryBlock() { return !m_leaderOffset && !m_totalLength; }
+ bool isExitBlock() { return m_leaderOffset == UINT_MAX && m_totalLength == UINT_MAX; }
- unsigned leaderBytecodeOffset() { return m_leaderBytecodeOffset; }
- unsigned totalBytecodeLength() { return m_totalBytecodeLength; }
+ unsigned leaderOffset() { return m_leaderOffset; }
+ unsigned totalLength() { return m_totalLength; }
- Vector<unsigned>& bytecodeOffsets() { return m_bytecodeOffsets; }
- void addBytecodeLength(unsigned);
+ const Vector<unsigned>& offsets() const { return m_offsets; }
- Vector<BytecodeBasicBlock*>& successors() { return m_successors; }
- void addSuccessor(BytecodeBasicBlock* block) { m_successors.append(block); }
+ const Vector<BytecodeBasicBlock*>& successors() const { return m_successors; }
FastBitVector& in() { return m_in; }
FastBitVector& out() { return m_out; }
+ unsigned index() const { return m_index; }
+
+ static void compute(CodeBlock*, Instruction* instructionsBegin, unsigned instructionCount, Vector<std::unique_ptr<BytecodeBasicBlock>>&);
+ static void compute(UnlinkedCodeBlock*, UnlinkedInstruction* instructionsBegin, unsigned instructionCount, Vector<std::unique_ptr<BytecodeBasicBlock>>&);
+
private:
- unsigned m_leaderBytecodeOffset;
- unsigned m_totalBytecodeLength;
+ template<typename Block, typename Instruction> static void computeImpl(Block* codeBlock, Instruction* instructionsBegin, unsigned instructionCount, Vector<std::unique_ptr<BytecodeBasicBlock>>& basicBlocks);
+
+ void addSuccessor(BytecodeBasicBlock* block) { m_successors.append(block); }
+
+ void addLength(unsigned);
- Vector<unsigned> m_bytecodeOffsets;
+ unsigned m_leaderOffset;
+ unsigned m_totalLength;
+ unsigned m_index;
+
+ Vector<unsigned> m_offsets;
Vector<BytecodeBasicBlock*> m_successors;
FastBitVector m_in;
FastBitVector m_out;
};
-void computeBytecodeBasicBlocks(CodeBlock*, Vector<std::unique_ptr<BytecodeBasicBlock>>&);
-
inline BytecodeBasicBlock::BytecodeBasicBlock(unsigned start, unsigned length)
- : m_leaderBytecodeOffset(start)
- , m_totalBytecodeLength(length)
+ : m_leaderOffset(start)
+ , m_totalLength(length)
{
- m_bytecodeOffsets.append(m_leaderBytecodeOffset);
+ m_offsets.append(m_leaderOffset);
}
inline BytecodeBasicBlock::BytecodeBasicBlock(BytecodeBasicBlock::SpecialBlockType blockType)
- : m_leaderBytecodeOffset(blockType == BytecodeBasicBlock::EntryBlock ? 0 : UINT_MAX)
- , m_totalBytecodeLength(blockType == BytecodeBasicBlock::EntryBlock ? 0 : UINT_MAX)
+ : m_leaderOffset(blockType == BytecodeBasicBlock::EntryBlock ? 0 : UINT_MAX)
+ , m_totalLength(blockType == BytecodeBasicBlock::EntryBlock ? 0 : UINT_MAX)
{
}
-inline void BytecodeBasicBlock::addBytecodeLength(unsigned bytecodeLength)
+inline void BytecodeBasicBlock::addLength(unsigned bytecodeLength)
{
- m_bytecodeOffsets.append(m_leaderBytecodeOffset + m_totalBytecodeLength);
- m_totalBytecodeLength += bytecodeLength;
+ m_offsets.append(m_leaderOffset + m_totalLength);
+ m_totalLength += bytecodeLength;
}
} // namespace JSC
--- /dev/null
+/*
+ * Copyright (C) 2016 Yusuke Suzuki <utatane.tea@gmail.com>
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "BytecodeGeneratorification.h"
+
+#include "BytecodeLivenessAnalysisInlines.h"
+#include "BytecodeRewriter.h"
+#include "BytecodeUseDef.h"
+#include "IdentifierInlines.h"
+#include "InterpreterInlines.h"
+#include "JSCInlines.h"
+#include "JSCJSValueInlines.h"
+#include "JSGeneratorFunction.h"
+#include "StrongInlines.h"
+#include "UnlinkedCodeBlock.h"
+#include <wtf/Optional.h>
+
+namespace JSC {
+
+struct YieldData {
+ size_t point { 0 };
+ int argument { 0 };
+ FastBitVector liveness;
+};
+
+class BytecodeGeneratorification {
+public:
+ typedef Vector<YieldData> Yields;
+
+ BytecodeGeneratorification(UnlinkedCodeBlock* codeBlock, UnlinkedCodeBlock::UnpackedInstructions& instructions, SymbolTable* generatorFrameSymbolTable, int generatorFrameSymbolTableIndex)
+ : m_graph(codeBlock, instructions)
+ , m_generatorFrameSymbolTable(*codeBlock->vm(), generatorFrameSymbolTable)
+ , m_generatorFrameSymbolTableIndex(generatorFrameSymbolTableIndex)
+ {
+ for (BytecodeBasicBlock* block : m_graph) {
+ for (unsigned bytecodeOffset : block->offsets()) {
+ const UnlinkedInstruction* pc = &m_graph.instructions()[bytecodeOffset];
+ switch (pc->u.opcode) {
+ case op_enter: {
+ m_enterPoint = bytecodeOffset;
+ break;
+ }
+
+ case op_yield: {
+ unsigned liveCalleeLocalsIndex = pc[2].u.index;
+ if (liveCalleeLocalsIndex >= m_yields.size())
+ m_yields.resize(liveCalleeLocalsIndex + 1);
+ YieldData& data = m_yields[liveCalleeLocalsIndex];
+ data.point = bytecodeOffset;
+ data.argument = pc[3].u.operand;
+ break;
+ }
+
+ default:
+ break;
+ }
+ }
+ }
+ }
+
+ struct Storage {
+ Identifier identifier;
+ unsigned identifierIndex;
+ ScopeOffset scopeOffset;
+ };
+
+ void run();
+
+ BytecodeGraph<UnlinkedCodeBlock>& graph() { return m_graph; }
+
+ const Yields& yields() const
+ {
+ return m_yields;
+ }
+
+ Yields& yields()
+ {
+ return m_yields;
+ }
+
+ unsigned enterPoint() const
+ {
+ return m_enterPoint;
+ }
+
+private:
+ Storage storageForGeneratorLocal(unsigned index)
+ {
+        // We assign a symbol to each register. There is a one-to-one correspondence between a register and a symbol.
+        // By doing so, we allocate dedicated storage for saving the given register.
+        // This allows us to avoid saving all the live registers even when they have not been overwritten since the previous resume.
+        // It means that a register can be retrieved even if the immediately preceding save sequence did not store it.
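+        // For example (hypothetical index): if the local with index 3 is live across two different yield
+        // points, both save sequences store it through the same identifier and scope offset, and either
+        // resume sequence reads it back from that same offset.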
+
+ if (m_storages.size() <= index)
+ m_storages.resize(index + 1);
+ if (Optional<Storage> storage = m_storages[index])
+ return *storage;
+
+ UnlinkedCodeBlock* codeBlock = m_graph.codeBlock();
+ Identifier identifier = Identifier::fromUid(PrivateName());
+ unsigned identifierIndex = codeBlock->numberOfIdentifiers();
+ codeBlock->addIdentifier(identifier);
+ ScopeOffset scopeOffset = m_generatorFrameSymbolTable->takeNextScopeOffset(NoLockingNecessary);
+ m_generatorFrameSymbolTable->set(NoLockingNecessary, identifier.impl(), SymbolTableEntry(VarOffset(scopeOffset)));
+
+ Storage storage = {
+ identifier,
+ identifierIndex,
+ scopeOffset
+ };
+ m_storages[index] = storage;
+ return storage;
+ }
+
+ unsigned m_enterPoint { 0 };
+ BytecodeGraph<UnlinkedCodeBlock> m_graph;
+ Vector<Optional<Storage>> m_storages;
+ Yields m_yields;
+ Strong<SymbolTable> m_generatorFrameSymbolTable;
+ int m_generatorFrameSymbolTableIndex;
+};
+
+class GeneratorLivenessAnalysis : public BytecodeLivenessPropagation<GeneratorLivenessAnalysis> {
+public:
+ GeneratorLivenessAnalysis(BytecodeGeneratorification& generatorification)
+ : m_generatorification(generatorification)
+ {
+ }
+
+ template<typename Functor>
+ void computeDefsForBytecodeOffset(UnlinkedCodeBlock* codeBlock, OpcodeID opcodeID, UnlinkedInstruction* instruction, FastBitVector&, const Functor& functor)
+ {
+ JSC::computeDefsForBytecodeOffset(codeBlock, opcodeID, instruction, functor);
+ }
+
+ template<typename Functor>
+ void computeUsesForBytecodeOffset(UnlinkedCodeBlock* codeBlock, OpcodeID opcodeID, UnlinkedInstruction* instruction, FastBitVector&, const Functor& functor)
+ {
+ JSC::computeUsesForBytecodeOffset(codeBlock, opcodeID, instruction, functor);
+ }
+
+ void run()
+ {
+        // Perform a modified liveness analysis to determine which locals are live at the merge points.
+        // This produces conservative results for the question, "which variables should be saved and resumed?".
+
+ runLivenessFixpoint(m_generatorification.graph());
+
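+        // The merge point is the offset just after op_yield: execution resumes there, so the locals live
+        // at that offset are exactly the ones that must be saved and restored.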
+ for (YieldData& data : m_generatorification.yields())
+ data.liveness = getLivenessInfoAtBytecodeOffset(m_generatorification.graph(), data.point + opcodeLength(op_yield));
+ }
+
+private:
+ BytecodeGeneratorification& m_generatorification;
+};
+
+void BytecodeGeneratorification::run()
+{
+    // We calculate the liveness at each merge point. This tells us, conservatively, which registers should be saved and resumed.
+
+ {
+ GeneratorLivenessAnalysis pass(*this);
+ pass.run();
+ }
+
+ UnlinkedCodeBlock* codeBlock = m_graph.codeBlock();
+ BytecodeRewriter rewriter(m_graph);
+
+    // Set up the global switch for the generator.
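+    // Entry 0 dispatches the initial resume to the point just after op_enter, and entry (i + 1) dispatches
+    // a resume to the resume sequence emitted for the yield whose index is i.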
+ {
+ unsigned nextToEnterPoint = enterPoint() + opcodeLength(op_enter);
+ unsigned switchTableIndex = m_graph.codeBlock()->numberOfSwitchJumpTables();
+ VirtualRegister state = virtualRegisterForArgument(static_cast<int32_t>(JSGeneratorFunction::GeneratorArgument::State));
+ auto& jumpTable = m_graph.codeBlock()->addSwitchJumpTable();
+ jumpTable.min = 0;
+ jumpTable.branchOffsets.resize(m_yields.size() + 1);
+ jumpTable.branchOffsets.fill(0);
+ jumpTable.add(0, nextToEnterPoint);
+ for (unsigned i = 0; i < m_yields.size(); ++i)
+ jumpTable.add(i + 1, m_yields[i].point);
+
+ rewriter.insertFragmentBefore(nextToEnterPoint, [&](BytecodeRewriter::Fragment& fragment) {
+ fragment.appendInstruction(op_switch_imm, switchTableIndex, nextToEnterPoint, state.offset());
+ });
+ }
+
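+    // For each yield point: before the point, save every live local into the generator frame scope with
+    // op_put_to_scope and return with op_ret; after the point, restore the same locals with
+    // op_get_from_scope; then remove the op_yield itself.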
+ for (const YieldData& data : m_yields) {
+ VirtualRegister scope = virtualRegisterForArgument(static_cast<int32_t>(JSGeneratorFunction::GeneratorArgument::Frame));
+
+ // Emit save sequence.
+ rewriter.insertFragmentBefore(data.point, [&](BytecodeRewriter::Fragment& fragment) {
+ data.liveness.forEachSetBit([&](size_t index) {
+ VirtualRegister operand = virtualRegisterForLocal(index);
+ Storage storage = storageForGeneratorLocal(index);
+
+ fragment.appendInstruction(
+ op_put_to_scope,
+ scope.offset(), // scope
+ storage.identifierIndex, // identifier
+ operand.offset(), // value
+ GetPutInfo(DoNotThrowIfNotFound, LocalClosureVar, InitializationMode::NotInitialization).operand(), // info
+ m_generatorFrameSymbolTableIndex, // symbol table constant index
+ storage.scopeOffset.offset() // scope offset
+ );
+ });
+
+ // Insert op_ret just after save sequence.
+ fragment.appendInstruction(op_ret, data.argument);
+ });
+
+ // Emit resume sequence.
+ rewriter.insertFragmentAfter(data.point, [&](BytecodeRewriter::Fragment& fragment) {
+ data.liveness.forEachSetBit([&](size_t index) {
+ VirtualRegister operand = virtualRegisterForLocal(index);
+ Storage storage = storageForGeneratorLocal(index);
+
+ UnlinkedValueProfile profile = codeBlock->addValueProfile();
+ fragment.appendInstruction(
+ op_get_from_scope,
+ operand.offset(), // dst
+ scope.offset(), // scope
+ storage.identifierIndex, // identifier
+ GetPutInfo(DoNotThrowIfNotFound, LocalClosureVar, InitializationMode::NotInitialization).operand(), // info
+ 0, // local scope depth
+ storage.scopeOffset.offset(), // scope offset
+ profile // profile
+ );
+ });
+ });
+
+        // Clip the now-unnecessary op_yield bytecode.
+ rewriter.removeBytecode(data.point);
+ }
+
+ rewriter.execute();
+}
+
+void performGeneratorification(UnlinkedCodeBlock* codeBlock, UnlinkedCodeBlock::UnpackedInstructions& instructions, SymbolTable* generatorFrameSymbolTable, int generatorFrameSymbolTableIndex)
+{
+ BytecodeGeneratorification pass(codeBlock, instructions, generatorFrameSymbolTable, generatorFrameSymbolTableIndex);
+ pass.run();
+}
+
+} // namespace JSC
--- /dev/null
+/*
+ * Copyright (C) 2016 Yusuke Suzuki <utatane.tea@gmail.com>
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "UnlinkedCodeBlock.h"
+
+namespace JSC {
+
+class SymbolTable;
+
+void performGeneratorification(UnlinkedCodeBlock*, UnlinkedCodeBlock::UnpackedInstructions&, SymbolTable* generatorFrameSymbolTable, int generatorFrameSymbolTableIndex);
+
+} // namespace JSC
--- /dev/null
+/*
+ * Copyright (C) 2016 Yusuke Suzuki <utatane.tea@gmail.com>
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "BytecodeBasicBlock.h"
+#include <wtf/IndexedContainerIterator.h>
+#include <wtf/IteratorRange.h>
+#include <wtf/Vector.h>
+
+namespace JSC {
+
+class BytecodeBasicBlock;
+
+template<typename Block>
+class BytecodeGraph {
+ WTF_MAKE_FAST_ALLOCATED;
+ WTF_MAKE_NONCOPYABLE(BytecodeGraph);
+public:
+ typedef Block CodeBlock;
+ typedef typename Block::Instruction Instruction;
+ typedef Vector<std::unique_ptr<BytecodeBasicBlock>> BasicBlocksVector;
+
+ typedef WTF::IndexedContainerIterator<BytecodeGraph<Block>> iterator;
+
+ inline BytecodeGraph(Block*, typename Block::UnpackedInstructions&);
+
+ Block* codeBlock() const { return m_codeBlock; }
+
+ typename Block::UnpackedInstructions& instructions() { return m_instructions; }
+
+ WTF::IteratorRange<BasicBlocksVector::reverse_iterator> basicBlocksInReverseOrder()
+ {
+ return WTF::makeIteratorRange(m_basicBlocks.rbegin(), m_basicBlocks.rend());
+ }
+
+ static bool blockContainsBytecodeOffset(BytecodeBasicBlock* block, unsigned bytecodeOffset)
+ {
+ unsigned leaderOffset = block->leaderOffset();
+ return bytecodeOffset >= leaderOffset && bytecodeOffset < leaderOffset + block->totalLength();
+ }
+
+ BytecodeBasicBlock* findBasicBlockForBytecodeOffset(unsigned bytecodeOffset)
+ {
+ /*
+ for (unsigned i = 0; i < m_basicBlocks.size(); i++) {
+ if (blockContainsBytecodeOffset(m_basicBlocks[i].get(), bytecodeOffset))
+ return m_basicBlocks[i].get();
+ }
+ return 0;
+ */
+
+ std::unique_ptr<BytecodeBasicBlock>* basicBlock = approximateBinarySearch<std::unique_ptr<BytecodeBasicBlock>, unsigned>(m_basicBlocks, m_basicBlocks.size(), bytecodeOffset, [] (std::unique_ptr<BytecodeBasicBlock>* basicBlock) { return (*basicBlock)->leaderOffset(); });
+ // We found the block we were looking for.
+ if (blockContainsBytecodeOffset((*basicBlock).get(), bytecodeOffset))
+ return (*basicBlock).get();
+
+ // Basic block is to the left of the returned block.
+ if (bytecodeOffset < (*basicBlock)->leaderOffset()) {
+ ASSERT(basicBlock - 1 >= m_basicBlocks.data());
+ ASSERT(blockContainsBytecodeOffset(basicBlock[-1].get(), bytecodeOffset));
+ return basicBlock[-1].get();
+ }
+
+ // Basic block is to the right of the returned block.
+ ASSERT(&basicBlock[1] <= &m_basicBlocks.last());
+ ASSERT(blockContainsBytecodeOffset(basicBlock[1].get(), bytecodeOffset));
+ return basicBlock[1].get();
+ }
+
+ BytecodeBasicBlock* findBasicBlockWithLeaderOffset(unsigned leaderOffset)
+ {
+ return (*tryBinarySearch<std::unique_ptr<BytecodeBasicBlock>, unsigned>(m_basicBlocks, m_basicBlocks.size(), leaderOffset, [] (std::unique_ptr<BytecodeBasicBlock>* basicBlock) { return (*basicBlock)->leaderOffset(); })).get();
+ }
+
+ unsigned size() const { return m_basicBlocks.size(); }
+ BytecodeBasicBlock* at(unsigned index) const { return m_basicBlocks[index].get(); }
+ BytecodeBasicBlock* operator[](unsigned index) const { return at(index); }
+
+ iterator begin() const { return iterator(*this, 0); }
+ iterator end() const { return iterator(*this, size()); }
+ BytecodeBasicBlock* first() { return at(0); }
+ BytecodeBasicBlock* last() { return at(size() - 1); }
+
+private:
+ Block* m_codeBlock;
+ BasicBlocksVector m_basicBlocks;
+ typename Block::UnpackedInstructions& m_instructions;
+};
+
+
+template<typename Block>
+BytecodeGraph<Block>::BytecodeGraph(Block* codeBlock, typename Block::UnpackedInstructions& instructions)
+ : m_codeBlock(codeBlock)
+ , m_instructions(instructions)
+{
+ ASSERT(m_codeBlock);
+ BytecodeBasicBlock::compute(m_codeBlock, instructions.begin(), instructions.size(), m_basicBlocks);
+ ASSERT(m_basicBlocks.size());
+}
+
+} // namespace JSC
{ "name" : "op_assert", "length" : 3 },
{ "name" : "op_create_rest", "length": 4 },
{ "name" : "op_get_rest_length", "length": 3 },
- { "name" : "op_save", "length" : 4 },
- { "name" : "op_resume", "length" : 3 },
+ { "name" : "op_yield", "length" : 4 },
{ "name" : "op_watchdog", "length" : 1 },
{ "name" : "op_log_shadow_chicken_prologue", "length" : 2},
{ "name" : "op_log_shadow_chicken_tail", "length" : 3}
#include "BytecodeUseDef.h"
#include "CodeBlock.h"
#include "FullBytecodeLiveness.h"
-#include "PreciseJumpTargets.h"
+#include "InterpreterInlines.h"
namespace JSC {
BytecodeLivenessAnalysis::BytecodeLivenessAnalysis(CodeBlock* codeBlock)
- : m_codeBlock(codeBlock)
+ : m_graph(codeBlock, codeBlock->instructions())
{
- ASSERT(m_codeBlock);
compute();
}
-static bool isValidRegisterForLiveness(CodeBlock* codeBlock, int operand)
+template<typename Functor>
+void BytecodeLivenessAnalysis::computeDefsForBytecodeOffset(CodeBlock* codeBlock, OpcodeID opcodeID, Instruction* instruction, FastBitVector&, const Functor& functor)
{
- if (codeBlock->isConstantRegisterIndex(operand))
- return false;
-
- VirtualRegister virtualReg(operand);
- return virtualReg.isLocal();
-}
-
-static unsigned getLeaderOffsetForBasicBlock(std::unique_ptr<BytecodeBasicBlock>* basicBlock)
-{
- return (*basicBlock)->leaderBytecodeOffset();
-}
-
-static BytecodeBasicBlock* findBasicBlockWithLeaderOffset(Vector<std::unique_ptr<BytecodeBasicBlock>>& basicBlocks, unsigned leaderOffset)
-{
- return (*tryBinarySearch<std::unique_ptr<BytecodeBasicBlock>, unsigned>(basicBlocks, basicBlocks.size(), leaderOffset, getLeaderOffsetForBasicBlock)).get();
-}
-
-static bool blockContainsBytecodeOffset(BytecodeBasicBlock* block, unsigned bytecodeOffset)
-{
- unsigned leaderOffset = block->leaderBytecodeOffset();
- return bytecodeOffset >= leaderOffset && bytecodeOffset < leaderOffset + block->totalBytecodeLength();
-}
-
-static BytecodeBasicBlock* findBasicBlockForBytecodeOffset(Vector<std::unique_ptr<BytecodeBasicBlock>>& basicBlocks, unsigned bytecodeOffset)
-{
-/*
- for (unsigned i = 0; i < basicBlocks.size(); i++) {
- if (blockContainsBytecodeOffset(basicBlocks[i].get(), bytecodeOffset))
- return basicBlocks[i].get();
- }
- return 0;
-*/
- std::unique_ptr<BytecodeBasicBlock>* basicBlock = approximateBinarySearch<std::unique_ptr<BytecodeBasicBlock>, unsigned>(
- basicBlocks, basicBlocks.size(), bytecodeOffset, getLeaderOffsetForBasicBlock);
- // We found the block we were looking for.
- if (blockContainsBytecodeOffset((*basicBlock).get(), bytecodeOffset))
- return (*basicBlock).get();
-
- // Basic block is to the left of the returned block.
- if (bytecodeOffset < (*basicBlock)->leaderBytecodeOffset()) {
- ASSERT(basicBlock - 1 >= basicBlocks.data());
- ASSERT(blockContainsBytecodeOffset(basicBlock[-1].get(), bytecodeOffset));
- return basicBlock[-1].get();
- }
-
- // Basic block is to the right of the returned block.
- ASSERT(&basicBlock[1] <= &basicBlocks.last());
- ASSERT(blockContainsBytecodeOffset(basicBlock[1].get(), bytecodeOffset));
- return basicBlock[1].get();
-}
-
-// Simplified interface to bytecode use/def, which determines defs first and then uses, and includes
-// exception handlers in the uses.
-template<typename UseFunctor, typename DefFunctor>
-static void stepOverInstruction(CodeBlock* codeBlock, BytecodeBasicBlock* block, Vector<std::unique_ptr<BytecodeBasicBlock>>& basicBlocks, unsigned bytecodeOffset, const UseFunctor& use, const DefFunctor& def)
-{
- // This abstractly execute the instruction in reverse. Instructions logically first use operands and
- // then define operands. This logical ordering is necessary for operations that use and def the same
- // operand, like:
- //
- // op_add loc1, loc1, loc2
- //
- // The use of loc1 happens before the def of loc1. That's a semantic requirement since the add
- // operation cannot travel forward in time to read the value that it will produce after reading that
- // value. Since we are executing in reverse, this means that we must do defs before uses (reverse of
- // uses before defs).
- //
- // Since this is a liveness analysis, this ordering ends up being particularly important: if we did
- // uses before defs, then the add operation above would appear to not have loc1 live, since we'd
- // first add it to the out set (the use), and then we'd remove it (the def).
-
- computeDefsForBytecodeOffset(
- codeBlock, block, bytecodeOffset,
- [&] (CodeBlock* codeBlock, Instruction*, OpcodeID, int operand) {
- if (isValidRegisterForLiveness(codeBlock, operand))
- def(VirtualRegister(operand).toLocal());
- });
-
- computeUsesForBytecodeOffset(
- codeBlock, block, bytecodeOffset,
- [&] (CodeBlock* codeBlock, Instruction*, OpcodeID, int operand) {
- if (isValidRegisterForLiveness(codeBlock, operand))
- use(VirtualRegister(operand).toLocal());
- });
-
- // If we have an exception handler, we want the live-in variables of the
- // exception handler block to be included in the live-in of this particular bytecode.
- if (HandlerInfo* handler = codeBlock->handlerForBytecodeOffset(bytecodeOffset)) {
- // FIXME: This resume check should not be needed.
- // https://bugs.webkit.org/show_bug.cgi?id=159281
- Interpreter* interpreter = codeBlock->vm()->interpreter;
- Instruction* instructionsBegin = codeBlock->instructions().begin();
- Instruction* instruction = &instructionsBegin[bytecodeOffset];
- OpcodeID opcodeID = interpreter->getOpcodeID(instruction->u.opcode);
- if (opcodeID != op_resume) {
- BytecodeBasicBlock* handlerBlock = findBasicBlockWithLeaderOffset(basicBlocks, handler->target);
- ASSERT(handlerBlock);
- handlerBlock->in().forEachSetBit(use);
- }
- }
-}
-
-static void stepOverInstruction(CodeBlock* codeBlock, BytecodeBasicBlock* block, Vector<std::unique_ptr<BytecodeBasicBlock>>& basicBlocks, unsigned bytecodeOffset, FastBitVector& out)
-{
- stepOverInstruction(
- codeBlock, block, basicBlocks, bytecodeOffset,
- [&] (unsigned bitIndex) {
- // This is the use functor, so we set the bit.
- out.set(bitIndex);
- },
- [&] (unsigned bitIndex) {
- // This is the def functor, so we clear the bit.
- out.clear(bitIndex);
- });
+ JSC::computeDefsForBytecodeOffset(codeBlock, opcodeID, instruction, functor);
}
-static void computeLocalLivenessForBytecodeOffset(CodeBlock* codeBlock, BytecodeBasicBlock* block, Vector<std::unique_ptr<BytecodeBasicBlock>>& basicBlocks, unsigned targetOffset, FastBitVector& result)
+template<typename Functor>
+void BytecodeLivenessAnalysis::computeUsesForBytecodeOffset(CodeBlock* codeBlock, OpcodeID opcodeID, Instruction* instruction, FastBitVector&, const Functor& functor)
{
- ASSERT(!block->isExitBlock());
- ASSERT(!block->isEntryBlock());
-
- FastBitVector out = block->out();
-
- for (int i = block->bytecodeOffsets().size() - 1; i >= 0; i--) {
- unsigned bytecodeOffset = block->bytecodeOffsets()[i];
- if (targetOffset > bytecodeOffset)
- break;
-
- stepOverInstruction(codeBlock, block, basicBlocks, bytecodeOffset, out);
- }
-
- result.set(out);
-}
-
-static void computeLocalLivenessForBlock(CodeBlock* codeBlock, BytecodeBasicBlock* block, Vector<std::unique_ptr<BytecodeBasicBlock>>& basicBlocks)
-{
- if (block->isExitBlock() || block->isEntryBlock())
- return;
- computeLocalLivenessForBytecodeOffset(codeBlock, block, basicBlocks, block->leaderBytecodeOffset(), block->in());
-}
-
-void BytecodeLivenessAnalysis::runLivenessFixpoint()
-{
- UnlinkedCodeBlock* unlinkedCodeBlock = m_codeBlock->unlinkedCodeBlock();
- unsigned numberOfVariables = unlinkedCodeBlock->m_numCalleeLocals;
-
- for (unsigned i = 0; i < m_basicBlocks.size(); i++) {
- BytecodeBasicBlock* block = m_basicBlocks[i].get();
- block->in().resize(numberOfVariables);
- block->out().resize(numberOfVariables);
- }
-
- bool changed;
- m_basicBlocks.last()->in().clearAll();
- m_basicBlocks.last()->out().clearAll();
- FastBitVector newOut;
- newOut.resize(m_basicBlocks.last()->out().numBits());
- do {
- changed = false;
- for (unsigned i = m_basicBlocks.size() - 1; i--;) {
- BytecodeBasicBlock* block = m_basicBlocks[i].get();
- newOut.clearAll();
- for (unsigned j = 0; j < block->successors().size(); j++)
- newOut.merge(block->successors()[j]->in());
- bool outDidChange = block->out().setAndCheck(newOut);
- computeLocalLivenessForBlock(m_codeBlock, block, m_basicBlocks);
- changed |= outDidChange;
- }
- } while (changed);
+ JSC::computeUsesForBytecodeOffset(codeBlock, opcodeID, instruction, functor);
}
void BytecodeLivenessAnalysis::getLivenessInfoAtBytecodeOffset(unsigned bytecodeOffset, FastBitVector& result)
{
- BytecodeBasicBlock* block = findBasicBlockForBytecodeOffset(m_basicBlocks, bytecodeOffset);
+ BytecodeBasicBlock* block = m_graph.findBasicBlockForBytecodeOffset(bytecodeOffset);
ASSERT(block);
ASSERT(!block->isEntryBlock());
ASSERT(!block->isExitBlock());
result.resize(block->out().numBits());
- computeLocalLivenessForBytecodeOffset(m_codeBlock, block, m_basicBlocks, bytecodeOffset, result);
+ computeLocalLivenessForBytecodeOffset(m_graph, block, bytecodeOffset, result);
}
bool BytecodeLivenessAnalysis::operandIsLiveAtBytecodeOffset(int operand, unsigned bytecodeOffset)
void BytecodeLivenessAnalysis::computeFullLiveness(FullBytecodeLiveness& result)
{
FastBitVector out;
+ CodeBlock* codeBlock = m_graph.codeBlock();
- result.m_map.resize(m_codeBlock->instructions().size());
+ result.m_map.resize(codeBlock->instructions().size());
- for (unsigned i = m_basicBlocks.size(); i--;) {
- BytecodeBasicBlock* block = m_basicBlocks[i].get();
+ for (std::unique_ptr<BytecodeBasicBlock>& block : m_graph.basicBlocksInReverseOrder()) {
if (block->isEntryBlock() || block->isExitBlock())
continue;
out = block->out();
- for (unsigned i = block->bytecodeOffsets().size(); i--;) {
- unsigned bytecodeOffset = block->bytecodeOffsets()[i];
- stepOverInstruction(m_codeBlock, block, m_basicBlocks, bytecodeOffset, out);
+ for (unsigned i = block->offsets().size(); i--;) {
+ unsigned bytecodeOffset = block->offsets()[i];
+ stepOverInstruction(m_graph, bytecodeOffset, out);
result.m_map[bytecodeOffset] = out;
}
}
{
FastBitVector out;
- result.m_codeBlock = m_codeBlock;
- result.m_killSets = std::make_unique<BytecodeKills::KillSet[]>(m_codeBlock->instructions().size());
+ CodeBlock* codeBlock = m_graph.codeBlock();
+ result.m_codeBlock = codeBlock;
+ result.m_killSets = std::make_unique<BytecodeKills::KillSet[]>(codeBlock->instructions().size());
- for (unsigned i = m_basicBlocks.size(); i--;) {
- BytecodeBasicBlock* block = m_basicBlocks[i].get();
+ for (std::unique_ptr<BytecodeBasicBlock>& block : m_graph.basicBlocksInReverseOrder()) {
if (block->isEntryBlock() || block->isExitBlock())
continue;
out = block->out();
- for (unsigned i = block->bytecodeOffsets().size(); i--;) {
- unsigned bytecodeOffset = block->bytecodeOffsets()[i];
+ for (unsigned i = block->offsets().size(); i--;) {
+ unsigned bytecodeOffset = block->offsets()[i];
stepOverInstruction(
- m_codeBlock, block, m_basicBlocks, bytecodeOffset,
+ m_graph, bytecodeOffset, out,
[&] (unsigned index) {
// This is for uses.
if (out.get(index))
void BytecodeLivenessAnalysis::dumpResults()
{
- dataLog("\nDumping bytecode liveness for ", *m_codeBlock, ":\n");
- Interpreter* interpreter = m_codeBlock->vm()->interpreter;
- Instruction* instructionsBegin = m_codeBlock->instructions().begin();
- for (unsigned i = 0; i < m_basicBlocks.size(); i++) {
- BytecodeBasicBlock* block = m_basicBlocks[i].get();
- dataLogF("\nBytecode basic block %u: %p (offset: %u, length: %u)\n", i, block, block->leaderBytecodeOffset(), block->totalBytecodeLength());
+ CodeBlock* codeBlock = m_graph.codeBlock();
+ dataLog("\nDumping bytecode liveness for ", *codeBlock, ":\n");
+ Interpreter* interpreter = codeBlock->vm()->interpreter;
+ Instruction* instructionsBegin = codeBlock->instructions().begin();
+ unsigned i = 0;
+ for (BytecodeBasicBlock* block : m_graph) {
+ dataLogF("\nBytecode basic block %u: %p (offset: %u, length: %u)\n", i++, block, block->leaderOffset(), block->totalLength());
dataLogF("Successors: ");
for (unsigned j = 0; j < block->successors().size(); j++) {
BytecodeBasicBlock* successor = block->successors()[j];
dataLogF("Exit block: %p\n", block);
continue;
}
- for (unsigned bytecodeOffset = block->leaderBytecodeOffset(); bytecodeOffset < block->leaderBytecodeOffset() + block->totalBytecodeLength();) {
+ for (unsigned bytecodeOffset = block->leaderOffset(); bytecodeOffset < block->leaderOffset() + block->totalLength();) {
const Instruction* currentInstruction = &instructionsBegin[bytecodeOffset];
dataLogF("Live variables: ");
dataLogF("%u ", j);
}
dataLogF("\n");
- m_codeBlock->dumpBytecode(WTF::dataFile(), m_codeBlock->globalObject()->globalExec(), instructionsBegin, currentInstruction);
+ codeBlock->dumpBytecode(WTF::dataFile(), codeBlock->globalObject()->globalExec(), instructionsBegin, currentInstruction);
OpcodeID opcodeID = interpreter->getOpcodeID(instructionsBegin[bytecodeOffset].u.opcode);
unsigned opcodeLength = opcodeLengths[opcodeID];
void BytecodeLivenessAnalysis::compute()
{
- computeBytecodeBasicBlocks(m_codeBlock, m_basicBlocks);
- ASSERT(m_basicBlocks.size());
- runLivenessFixpoint();
+ runLivenessFixpoint(m_graph);
if (Options::dumpBytecodeLivenessResults())
dumpResults();
* THE POSSIBILITY OF SUCH DAMAGE.
*/
-#ifndef BytecodeLivenessAnalysis_h
-#define BytecodeLivenessAnalysis_h
+#pragma once
#include "BytecodeBasicBlock.h"
+#include "BytecodeGraph.h"
+#include "CodeBlock.h"
#include <wtf/FastBitVector.h>
#include <wtf/HashMap.h>
#include <wtf/Vector.h>
namespace JSC {
class BytecodeKills;
-class CodeBlock;
class FullBytecodeLiveness;
-class BytecodeLivenessAnalysis {
+template<typename DerivedAnalysis>
+class BytecodeLivenessPropagation {
+protected:
+ template<typename Graph, typename UseFunctor, typename DefFunctor> void stepOverInstruction(Graph&, unsigned bytecodeOffset, FastBitVector& out, const UseFunctor&, const DefFunctor&);
+
+ template<typename Graph> void stepOverInstruction(Graph&, unsigned bytecodeOffset, FastBitVector& out);
+
+ template<typename Graph> bool computeLocalLivenessForBytecodeOffset(Graph&, BytecodeBasicBlock*, unsigned targetOffset, FastBitVector& result);
+
+ template<typename Graph> bool computeLocalLivenessForBlock(Graph&, BytecodeBasicBlock*);
+
+ template<typename Graph> FastBitVector getLivenessInfoAtBytecodeOffset(Graph&, unsigned bytecodeOffset);
+
+ template<typename Graph> void runLivenessFixpoint(Graph&);
+};
+
+class BytecodeLivenessAnalysis : private BytecodeLivenessPropagation<BytecodeLivenessAnalysis> {
WTF_MAKE_FAST_ALLOCATED;
WTF_MAKE_NONCOPYABLE(BytecodeLivenessAnalysis);
public:
+ friend class BytecodeLivenessPropagation<BytecodeLivenessAnalysis>;
BytecodeLivenessAnalysis(CodeBlock*);
bool operandIsLiveAtBytecodeOffset(int operand, unsigned bytecodeOffset);
private:
void compute();
- void runLivenessFixpoint();
void dumpResults();
void getLivenessInfoAtBytecodeOffset(unsigned bytecodeOffset, FastBitVector&);
- CodeBlock* m_codeBlock;
- Vector<std::unique_ptr<BytecodeBasicBlock>> m_basicBlocks;
+ template<typename Functor> void computeDefsForBytecodeOffset(CodeBlock*, OpcodeID, Instruction*, FastBitVector&, const Functor&);
+ template<typename Functor> void computeUsesForBytecodeOffset(CodeBlock*, OpcodeID, Instruction*, FastBitVector&, const Functor&);
+
+ BytecodeGraph<CodeBlock> m_graph;
};
inline bool operandIsAlwaysLive(int operand);
inline bool operandThatIsNotAlwaysLiveIsLive(const FastBitVector& out, int operand);
inline bool operandIsLive(const FastBitVector& out, int operand);
+inline bool isValidRegisterForLiveness(int operand);
} // namespace JSC
-
-#endif // BytecodeLivenessAnalysis_h
* THE POSSIBILITY OF SUCH DAMAGE.
*/
-#ifndef BytecodeLivenessAnalysisInlines_h
-#define BytecodeLivenessAnalysisInlines_h
+#pragma once
+#include "BytecodeGraph.h"
#include "BytecodeLivenessAnalysis.h"
#include "CodeBlock.h"
+#include "Interpreter.h"
#include "Operations.h"
namespace JSC {
return operandIsAlwaysLive(operand) || operandThatIsNotAlwaysLiveIsLive(out, operand);
}
-} // namespace JSC
+inline bool isValidRegisterForLiveness(int operand)
+{
+ VirtualRegister virtualReg(operand);
+ if (virtualReg.isConstant())
+ return false;
+ return virtualReg.isLocal();
+}
+
+// Simplified interface to bytecode use/def, which determines defs first and then uses, and includes
+// exception handlers in the uses.
+template<typename DerivedAnalysis>
+template<typename Graph, typename UseFunctor, typename DefFunctor>
+inline void BytecodeLivenessPropagation<DerivedAnalysis>::stepOverInstruction(Graph& graph, unsigned bytecodeOffset, FastBitVector& out, const UseFunctor& use, const DefFunctor& def)
+{
+    // This abstractly executes the instruction in reverse. Instructions logically first use operands and
+ // then define operands. This logical ordering is necessary for operations that use and def the same
+ // operand, like:
+ //
+ // op_add loc1, loc1, loc2
+ //
+ // The use of loc1 happens before the def of loc1. That's a semantic requirement since the add
+ // operation cannot travel forward in time to read the value that it will produce after reading that
+ // value. Since we are executing in reverse, this means that we must do defs before uses (reverse of
+ // uses before defs).
+ //
+ // Since this is a liveness analysis, this ordering ends up being particularly important: if we did
+ // uses before defs, then the add operation above would appear to not have loc1 live, since we'd
+ // first add it to the out set (the use), and then we'd remove it (the def).
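+    //
+    // Concretely, for "op_add loc1, loc1, loc2" processed in reverse: the def first clears loc1's bit in
+    // "out", and then the uses set the bits for loc1 and loc2 again, so both are live before the add.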
+
+ auto* codeBlock = graph.codeBlock();
+ Interpreter* interpreter = codeBlock->vm()->interpreter;
+ auto* instructionsBegin = graph.instructions().begin();
+ auto* instruction = &instructionsBegin[bytecodeOffset];
+ OpcodeID opcodeID = interpreter->getOpcodeID(*instruction);
+
+ static_cast<DerivedAnalysis*>(this)->computeDefsForBytecodeOffset(
+ codeBlock, opcodeID, instruction, out,
+ [&] (typename Graph::CodeBlock*, typename Graph::Instruction*, OpcodeID, int operand) {
+ if (isValidRegisterForLiveness(operand))
+ def(VirtualRegister(operand).toLocal());
+ });
+
+ static_cast<DerivedAnalysis*>(this)->computeUsesForBytecodeOffset(
+ codeBlock, opcodeID, instruction, out,
+ [&] (typename Graph::CodeBlock*, typename Graph::Instruction*, OpcodeID, int operand) {
+ if (isValidRegisterForLiveness(operand))
+ use(VirtualRegister(operand).toLocal());
+ });
+
+ // If we have an exception handler, we want the live-in variables of the
+ // exception handler block to be included in the live-in of this particular bytecode.
+ if (auto* handler = codeBlock->handlerForBytecodeOffset(bytecodeOffset)) {
+ BytecodeBasicBlock* handlerBlock = graph.findBasicBlockWithLeaderOffset(handler->target);
+ ASSERT(handlerBlock);
+ handlerBlock->in().forEachSetBit(use);
+ }
+}
+
+template<typename DerivedAnalysis>
+template<typename Graph>
+inline void BytecodeLivenessPropagation<DerivedAnalysis>::stepOverInstruction(Graph& graph, unsigned bytecodeOffset, FastBitVector& out)
+{
+ stepOverInstruction(
+ graph, bytecodeOffset, out,
+ [&] (unsigned bitIndex) {
+ // This is the use functor, so we set the bit.
+ out.set(bitIndex);
+ },
+ [&] (unsigned bitIndex) {
+ // This is the def functor, so we clear the bit.
+ out.clear(bitIndex);
+ });
+}
+
+template<typename DerivedAnalysis>
+template<typename Graph>
+inline bool BytecodeLivenessPropagation<DerivedAnalysis>::computeLocalLivenessForBytecodeOffset(Graph& graph, BytecodeBasicBlock* block, unsigned targetOffset, FastBitVector& result)
+{
+ ASSERT(!block->isExitBlock());
+ ASSERT(!block->isEntryBlock());
-#endif // BytecodeLivenessAnalysisInlines_h
+ FastBitVector out = block->out();
+ for (int i = block->offsets().size() - 1; i >= 0; i--) {
+ unsigned bytecodeOffset = block->offsets()[i];
+ if (targetOffset > bytecodeOffset)
+ break;
+ stepOverInstruction(graph, bytecodeOffset, out);
+ }
+
+ return result.setAndCheck(out);
+}
+
+template<typename DerivedAnalysis>
+template<typename Graph>
+inline bool BytecodeLivenessPropagation<DerivedAnalysis>::computeLocalLivenessForBlock(Graph& graph, BytecodeBasicBlock* block)
+{
+ if (block->isExitBlock() || block->isEntryBlock())
+ return false;
+ return computeLocalLivenessForBytecodeOffset(graph, block, block->leaderOffset(), block->in());
+}
+
+template<typename DerivedAnalysis>
+template<typename Graph>
+inline FastBitVector BytecodeLivenessPropagation<DerivedAnalysis>::getLivenessInfoAtBytecodeOffset(Graph& graph, unsigned bytecodeOffset)
+{
+ BytecodeBasicBlock* block = graph.findBasicBlockForBytecodeOffset(bytecodeOffset);
+ ASSERT(block);
+ ASSERT(!block->isEntryBlock());
+ ASSERT(!block->isExitBlock());
+ FastBitVector out;
+ out.resize(block->out().numBits());
+ computeLocalLivenessForBytecodeOffset(graph, block, bytecodeOffset, out);
+ return out;
+}
+
+template<typename DerivedAnalysis>
+template<typename Graph>
+inline void BytecodeLivenessPropagation<DerivedAnalysis>::runLivenessFixpoint(Graph& graph)
+{
+ auto* codeBlock = graph.codeBlock();
+ unsigned numberOfVariables = codeBlock->numCalleeLocals();
+ for (BytecodeBasicBlock* block : graph) {
+ block->in().resize(numberOfVariables);
+ block->out().resize(numberOfVariables);
+ block->in().clearAll();
+ block->out().clearAll();
+ }
+
+ bool changed;
+ BytecodeBasicBlock* lastBlock = graph.last();
+ lastBlock->in().clearAll();
+ lastBlock->out().clearAll();
+ FastBitVector newOut;
+ newOut.resize(lastBlock->out().numBits());
+ do {
+ changed = false;
+ for (std::unique_ptr<BytecodeBasicBlock>& block : graph.basicBlocksInReverseOrder()) {
+ newOut.clearAll();
+ for (BytecodeBasicBlock* successor : block->successors())
+ newOut.merge(successor->in());
+ block->out().set(newOut);
+ changed |= computeLocalLivenessForBlock(graph, block.get());
+ }
+ } while (changed);
+}
+
+} // namespace JSC
--- /dev/null
+/*
+ * Copyright (C) 2016 Yusuke Suzuki <utatane.tea@gmail.com>
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "BytecodeRewriter.h"
+
+#include "PreciseJumpTargetsInlines.h"
+#include <wtf/BubbleSort.h>
+
+namespace JSC {
+
+void BytecodeRewriter::applyModification()
+{
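+    // m_insertions is sorted by index, so walking it backwards applies the modifications from the end of
+    // the instruction stream toward the front; splicing one insertion therefore does not shift the
+    // bytecode offsets of the insertions that have not been applied yet.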
+ for (size_t insertionIndex = m_insertions.size(); insertionIndex--;) {
+ Insertion& insertion = m_insertions[insertionIndex];
+ if (insertion.type == Insertion::Type::Remove)
+ m_graph.instructions().remove(insertion.index.bytecodeOffset, insertion.length());
+ else {
+ if (insertion.includeBranch == IncludeBranch::Yes) {
+ int finalOffset = insertion.index.bytecodeOffset + calculateDifference(m_insertions.begin(), m_insertions.begin() + insertionIndex);
+ adjustJumpTargetsInFragment(finalOffset, insertion);
+ }
+ m_graph.instructions().insertVector(insertion.index.bytecodeOffset, insertion.instructions);
+ }
+ }
+ m_insertions.clear();
+}
+
+void BytecodeRewriter::execute()
+{
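+    // WTF::bubbleSort is a stable sort, so insertions that share the same InsertionPoint keep the order
+    // in which they were appended.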
+ WTF::bubbleSort(m_insertions.begin(), m_insertions.end(), [] (const Insertion& lhs, const Insertion& rhs) {
+ return lhs.index < rhs.index;
+ });
+
+ UnlinkedCodeBlock* codeBlock = m_graph.codeBlock();
+ codeBlock->applyModification(*this);
+}
+
+void BytecodeRewriter::adjustJumpTargetsInFragment(unsigned finalOffset, Insertion& insertion)
+{
+ auto& fragment = insertion.instructions;
+ UnlinkedInstruction* instructionsBegin = fragment.data();
+ for (unsigned fragmentOffset = 0, fragmentCount = fragment.size(); fragmentOffset < fragmentCount;) {
+ UnlinkedInstruction& instruction = fragment[fragmentOffset];
+ OpcodeID opcodeID = instruction.u.opcode;
+ if (isBranch(opcodeID)) {
+ unsigned bytecodeOffset = finalOffset + fragmentOffset;
+ UnlinkedCodeBlock* codeBlock = m_graph.codeBlock();
+ extractStoredJumpTargetsForBytecodeOffset(codeBlock, codeBlock->vm()->interpreter, instructionsBegin, fragmentOffset, [&](int32_t& label) {
+ int absoluteOffset = adjustAbsoluteOffset(label);
+ label = absoluteOffset - static_cast<int>(bytecodeOffset);
+ });
+ }
+ fragmentOffset += opcodeLength(opcodeID);
+ }
+}
+
+void BytecodeRewriter::insertImpl(InsertionPoint insertionPoint, IncludeBranch includeBranch, Vector<UnlinkedInstruction>&& fragment)
+{
+ ASSERT(insertionPoint.position == Position::Before || insertionPoint.position == Position::After);
+ m_insertions.append(Insertion {
+ insertionPoint,
+ Insertion::Type::Insert,
+ includeBranch,
+ 0,
+ WTFMove(fragment)
+ });
+}
+
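+// Returns the jump distance from startPoint to jumpTargetPoint in the rewritten bytecode stream: the
+// original relative distance adjusted by the net length of the insertions and removals recorded between
+// the two labels.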
+int BytecodeRewriter::adjustJumpTarget(InsertionPoint startPoint, InsertionPoint jumpTargetPoint)
+{
+ if (startPoint < jumpTargetPoint) {
+ int jumpTarget = jumpTargetPoint.bytecodeOffset;
+ auto start = std::lower_bound(m_insertions.begin(), m_insertions.end(), startPoint, [&] (const Insertion& insertion, InsertionPoint startPoint) {
+ return insertion.index < startPoint;
+ });
+ if (start != m_insertions.end()) {
+ auto end = std::lower_bound(m_insertions.begin(), m_insertions.end(), jumpTargetPoint, [&] (const Insertion& insertion, InsertionPoint jumpTargetPoint) {
+ return insertion.index < jumpTargetPoint;
+ });
+ jumpTarget += calculateDifference(start, end);
+ }
+ return jumpTarget - startPoint.bytecodeOffset;
+ }
+
+ if (startPoint == jumpTargetPoint)
+ return 0;
+
+ return -adjustJumpTarget(jumpTargetPoint, startPoint);
+}
+
+} // namespace JSC
--- /dev/null
+/*
+ * Copyright (C) 2016 Yusuke Suzuki <utatane.tea@gmail.com>
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "BytecodeGraph.h"
+#include "Bytecodes.h"
+#include "Opcode.h"
+#include "UnlinkedCodeBlock.h"
+#include <wtf/Insertion.h>
+
+namespace JSC {
+
+// BytecodeRewriter offers the ability to insert and remove bytecodes, including jump operations.
+//
+// We use the original bytecode offsets as labels. When you emit jumps, you can specify the jump target by
+// using the original bytecode offsets. These bytecode offsets are later converted to appropriate values by the
+// rewriter. And we also use the labels to represent the positions where new bytecodes are inserted.
+//
+// | [bytecode] | [bytecode] |
+// offsets A B C
+//
+// We can use the above "A", "B", and "C" offsets as labels. And the rewriter has the ability to insert bytecode fragments
+// before and after the label. For example, if you insert the fragment after "B", the layout becomes like this.
+//
+// | [bytecode] | [fragment] [bytecode] |
+// offsets A B C
+//
+// And even if you remove some original bytecodes, the offsets remain as labels. For example, when you remove A's bytecode,
+// the layout becomes like this.
+//
+// | | [bytecode] |
+// offsets A B C
+//
+// And still you can insert fragments before and after "A".
+//
+// | [fragment] | [bytecode] |
+// offsets A B C
+//
+// We can insert bytecode fragments "Before" and "After" the labels. This insertion position, either "Before" or "After",
+// has an effect when the label is involved in jumps. For example, when you have a jump to the position "B",
+//
+// | [bytecode] | [bytecode] |
+// offsets A B C
+// ^
+// jump to here.
+//
+// and you insert the bytecode before/after "B",
+//
+// | [bytecode] [before] | [after] [bytecode] |
+// offsets A B C
+// ^
+// jump to here.
+//
+// as you can see, execution jumping to "B" does not execute the [before] code.
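+//
+// A minimal usage sketch (hypothetical offsets and operands; see BytecodeGeneratorification::run() for the
+// real call sites):
+//
+//     BytecodeRewriter rewriter(graph);
+//     rewriter.insertFragmentBefore(yieldOffset, [&] (BytecodeRewriter::Fragment& fragment) {
+//         fragment.appendInstruction(op_ret, argumentOffset);
+//     });
+//     rewriter.removeBytecode(yieldOffset);
+//     rewriter.execute();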
+class BytecodeRewriter {
+    WTF_MAKE_NONCOPYABLE(BytecodeRewriter);
+public:
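+    // Position totally orders insertions that share a bytecode offset: "Before" fragments sort ahead of
+    // the label itself, which sorts ahead of "After" fragments and of the original bytecode at that offset.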
+ enum class Position : int8_t {
+ EntryPoint = -2,
+ Before = -1,
+ LabelPoint = 0,
+ After = 1,
+ OriginalBytecodePoint = 2,
+ };
+
+ enum class IncludeBranch : uint8_t {
+ No = 0,
+ Yes = 1,
+ };
+
+ struct InsertionPoint {
+ int bytecodeOffset;
+ Position position;
+
+ InsertionPoint(int offset, Position pos)
+ : bytecodeOffset(offset)
+ , position(pos)
+ {
+ }
+
+ bool operator<(const InsertionPoint& other) const
+ {
+ if (bytecodeOffset == other.bytecodeOffset)
+ return position < other.position;
+ return bytecodeOffset < other.bytecodeOffset;
+ }
+
+ bool operator==(const InsertionPoint& other) const
+ {
+ return bytecodeOffset == other.bytecodeOffset && position == other.position;
+ }
+ };
+
+private:
+ struct Insertion {
+ enum class Type : uint8_t { Insert = 0, Remove = 1, };
+
+ size_t length() const
+ {
+ if (type == Type::Remove)
+ return removeLength;
+ return instructions.size();
+ }
+
+ InsertionPoint index;
+ Type type;
+ IncludeBranch includeBranch;
+ size_t removeLength;
+ Vector<UnlinkedInstruction> instructions;
+ };
+
+public:
+ class Fragment {
+ WTF_MAKE_NONCOPYABLE(Fragment);
+ public:
+ Fragment(Vector<UnlinkedInstruction>& fragment, IncludeBranch& includeBranch)
+ : m_fragment(fragment)
+ , m_includeBranch(includeBranch)
+ {
+ }
+
+ template<class... Args>
+ void appendInstruction(OpcodeID opcodeID, Args... args)
+ {
+ if (isBranch(opcodeID))
+ m_includeBranch = IncludeBranch::Yes;
+
+ UnlinkedInstruction instructions[sizeof...(args) + 1] = {
+ UnlinkedInstruction(opcodeID),
+ UnlinkedInstruction(args)...
+ };
+ m_fragment.append(instructions, sizeof...(args) + 1);
+ }
+
+ private:
+ Vector<UnlinkedInstruction>& m_fragment;
+ IncludeBranch& m_includeBranch;
+ };
+
+ BytecodeRewriter(BytecodeGraph<UnlinkedCodeBlock>& graph)
+ : m_graph(graph)
+ {
+ }
+
+ template<class Function>
+ void insertFragmentBefore(unsigned bytecodeOffset, Function function)
+ {
+ IncludeBranch includeBranch = IncludeBranch::No;
+ Vector<UnlinkedInstruction> instructions;
+ Fragment fragment(instructions, includeBranch);
+ function(fragment);
+ insertImpl(InsertionPoint(bytecodeOffset, Position::Before), includeBranch, WTFMove(instructions));
+ }
+
+ template<class Function>
+ void insertFragmentAfter(unsigned bytecodeOffset, Function function)
+ {
+ IncludeBranch includeBranch = IncludeBranch::No;
+ Vector<UnlinkedInstruction> instructions;
+ Fragment fragment(instructions, includeBranch);
+ function(fragment);
+ insertImpl(InsertionPoint(bytecodeOffset, Position::After), includeBranch, WTFMove(instructions));
+ }
+
+ void removeBytecode(unsigned bytecodeOffset)
+ {
+ m_insertions.append(Insertion { InsertionPoint(bytecodeOffset, Position::OriginalBytecodePoint), Insertion::Type::Remove, IncludeBranch::No, opcodeLength(m_graph.instructions()[bytecodeOffset].u.opcode), { } });
+ }
+
+ void execute();
+
+ BytecodeGraph<UnlinkedCodeBlock>& graph() { return m_graph; }
+
+ int adjustAbsoluteOffset(int absoluteOffset)
+ {
+ return adjustJumpTarget(InsertionPoint(0, Position::EntryPoint), InsertionPoint(absoluteOffset, Position::LabelPoint));
+ }
+
+ int adjustJumpTarget(int originalBytecodeOffset, int originalJumpTarget)
+ {
+ return adjustJumpTarget(InsertionPoint(originalBytecodeOffset, Position::LabelPoint), InsertionPoint(originalJumpTarget, Position::LabelPoint));
+ }
+
+private:
+ void insertImpl(InsertionPoint, IncludeBranch, Vector<UnlinkedInstruction>&& fragment);
+
+ friend class UnlinkedCodeBlock;
+ void applyModification();
+ void adjustJumpTargetsInFragment(unsigned finalOffset, Insertion&);
+
+ int adjustJumpTarget(InsertionPoint startPoint, InsertionPoint jumpTargetPoint);
+ template<typename Iterator> int calculateDifference(Iterator begin, Iterator end);
+
+ BytecodeGraph<UnlinkedCodeBlock>& m_graph;
+ Vector<Insertion, 8> m_insertions;
+};
+
+template<typename Iterator>
+inline int BytecodeRewriter::calculateDifference(Iterator begin, Iterator end)
+{
+ int result = 0;
+ for (; begin != end; ++begin) {
+ if (begin->type == Insertion::Type::Remove)
+ result -= begin->length();
+ else
+ result += begin->length();
+ }
+ return result;
+}
+
+} // namespace JSC
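To make the interface above concrete, here is a minimal, hypothetical usage sketch. It relies only on the API declared in this header; `graph`, `yieldOffset`, and `argumentRegisterIndex` (an int register index) are assumptions supplied by the caller, for example by the generatorification pass, and the emitted fragments are illustrative rather than the exact save/resume sequences the pass produces.

    // Hypothetical sketch: rewrite the bytecode around one op_yield.
    // `graph` is a BytecodeGraph<UnlinkedCodeBlock> built over the unlinked
    // instructions; `yieldOffset` is the bytecode offset of that op_yield.
    BytecodeRewriter rewriter(graph);

    // Insert a fragment just before the yield label. Jumps targeting
    // `yieldOffset` do not execute a "Before" fragment.
    rewriter.insertFragmentBefore(yieldOffset, [&](BytecodeRewriter::Fragment& fragment) {
        // op_ret takes a single operand: the register holding the return value.
        // Appending a branch opcode here would instead mark the fragment IncludeBranch::Yes.
        fragment.appendInstruction(op_ret, argumentRegisterIndex);
    });

    // Insert a fragment just after the yield label; jumps targeting
    // `yieldOffset` do execute an "After" fragment, so this is where a
    // resume sequence would be emitted by the real pass.
    rewriter.insertFragmentAfter(yieldOffset, [&](BytecodeRewriter::Fragment&) {
        // e.g. get_from_scope for each live register (omitted in this sketch).
    });

    // Drop the original op_yield, then apply all recorded insertions and
    // removals, fixing up jump targets and offsets in the process.
    rewriter.removeBytecode(yieldOffset);
    rewriter.execute();

Note that insertFragmentBefore/After and removeBytecode only record modifications; the instruction stream is left untouched until execute() applies them all at once.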
namespace JSC {
-template<typename Functor>
-void computeUsesForBytecodeOffset(
- CodeBlock* codeBlock, BytecodeBasicBlock* block, unsigned bytecodeOffset, const Functor& functor)
+template<typename Block, typename Functor, typename Instruction>
+void computeUsesForBytecodeOffset(Block* codeBlock, OpcodeID opcodeID, Instruction* instruction, const Functor& functor)
{
- Interpreter* interpreter = codeBlock->vm()->interpreter;
- Instruction* instructionsBegin = codeBlock->instructions().begin();
- Instruction* instruction = &instructionsBegin[bytecodeOffset];
- OpcodeID opcodeID = interpreter->getOpcodeID(instruction->u.opcode);
-
if (opcodeID != op_enter && codeBlock->wasCompiledWithDebuggingOpcodes() && codeBlock->scopeRegister().isValid())
functor(codeBlock, instruction, opcodeID, codeBlock->scopeRegister().offset());
case op_jneq_null:
case op_dec:
case op_inc:
- case op_log_shadow_chicken_prologue:
- case op_resume: {
+ case op_log_shadow_chicken_prologue: {
ASSERT(opcodeLengths[opcodeID] > 1);
functor(codeBlock, instruction, opcodeID, instruction[1].u.operand);
return;
functor(codeBlock, instruction, opcodeID, codeBlock->scopeRegister().offset());
return;
}
- case op_save: {
+ case op_yield: {
functor(codeBlock, instruction, opcodeID, instruction[1].u.operand);
- unsigned mergePointBytecodeOffset = bytecodeOffset + instruction[3].u.operand;
- BytecodeBasicBlock* mergePointBlock = nullptr;
- for (BytecodeBasicBlock* successor : block->successors()) {
- if (successor->leaderBytecodeOffset() == mergePointBytecodeOffset) {
- mergePointBlock = successor;
- break;
- }
- }
- ASSERT(mergePointBlock);
- mergePointBlock->in().forEachSetBit([&](unsigned local) {
- functor(codeBlock, instruction, opcodeID, virtualRegisterForLocal(local).offset());
- });
+ functor(codeBlock, instruction, opcodeID, instruction[3].u.operand);
return;
}
default:
}
}
-template<typename Functor>
-void computeDefsForBytecodeOffset(CodeBlock* codeBlock, BytecodeBasicBlock* block, unsigned bytecodeOffset, const Functor& functor)
+template<typename Block, typename Instruction, typename Functor>
+void computeDefsForBytecodeOffset(Block* codeBlock, OpcodeID opcodeID, Instruction* instruction, const Functor& functor)
{
- Interpreter* interpreter = codeBlock->vm()->interpreter;
- Instruction* instructionsBegin = codeBlock->instructions().begin();
- Instruction* instruction = &instructionsBegin[bytecodeOffset];
- OpcodeID opcodeID = interpreter->getOpcodeID(instruction->u.opcode);
switch (opcodeID) {
// These don't define anything.
case op_put_to_scope:
case op_end:
case op_throw:
case op_throw_static_error:
- case op_save:
case op_assert:
case op_debug:
case op_ret:
case op_watchdog:
case op_log_shadow_chicken_prologue:
case op_log_shadow_chicken_tail:
+ case op_yield:
#define LLINT_HELPER_OPCODES(opcode, length) case opcode:
FOR_EACH_LLINT_OPCODE_EXTENSION(LLINT_HELPER_OPCODES);
#undef LLINT_HELPER_OPCODES
functor(codeBlock, instruction, opcodeID, virtualRegisterForLocal(i).offset());
return;
}
- case op_resume: {
- RELEASE_ASSERT(block->successors().size() == 1);
- // FIXME: This is really dirty.
- // https://bugs.webkit.org/show_bug.cgi?id=159281
- block->successors()[0]->in().forEachSetBit([&](unsigned local) {
- functor(codeBlock, instruction, opcodeID, virtualRegisterForLocal(local).offset());
- });
- return;
- }
}
}
#include "ArithProfile.h"
#include "BasicBlockLocation.h"
#include "BytecodeGenerator.h"
+#include "BytecodeLivenessAnalysis.h"
#include "BytecodeUseDef.h"
#include "CallLinkStatus.h"
#include "DFGCapabilities.h"
} while (i < m_rareData->m_stringSwitchJumpTables.size());
}
- if (m_rareData && !m_rareData->m_liveCalleeLocalsAtYield.isEmpty()) {
- out.printf("\nLive Callee Locals:\n");
- unsigned i = 0;
- do {
- const FastBitVector& liveness = m_rareData->m_liveCalleeLocalsAtYield[i];
- out.printf(" live%1u = ", i);
- liveness.dump(out);
- out.printf("\n");
- ++i;
- } while (i < m_rareData->m_liveCalleeLocalsAtYield.size());
- }
-
out.printf("\n");
}
out.printf("%s, %d", debugHookName(debugHookID), hasBreakpointFlag);
break;
}
- case op_save: {
- int generator = (++it)->u.operand;
- unsigned liveCalleeLocalsIndex = (++it)->u.unsignedValue;
- int offset = (++it)->u.operand;
- FastBitVector liveness;
- if (liveCalleeLocalsIndex < m_rareData->m_liveCalleeLocalsAtYield.size())
- liveness = m_rareData->m_liveCalleeLocalsAtYield[liveCalleeLocalsIndex];
- printLocationAndOp(out, exec, location, it, "save");
- out.printf("%s, ", registerName(generator).data());
- liveness.dump(out);
- out.printf("(@live%1u), %d(->%d)", liveCalleeLocalsIndex, offset, location + offset);
- break;
- }
- case op_resume: {
- int generator = (++it)->u.operand;
- unsigned liveCalleeLocalsIndex = (++it)->u.unsignedValue;
- FastBitVector liveness;
- if (liveCalleeLocalsIndex < m_rareData->m_liveCalleeLocalsAtYield.size())
- liveness = m_rareData->m_liveCalleeLocalsAtYield[liveCalleeLocalsIndex];
- printLocationAndOp(out, exec, location, it, "resume");
- out.printf("%s, ", registerName(generator).data());
- liveness.dump(out);
- out.printf("(@live%1u)", liveCalleeLocalsIndex);
- break;
- }
case op_assert: {
int condition = (++it)->u.operand;
int line = (++it)->u.operand;
m_rareData->m_constantBuffers = other.m_rareData->m_constantBuffers;
m_rareData->m_switchJumpTables = other.m_rareData->m_switchJumpTables;
m_rareData->m_stringSwitchJumpTables = other.m_rareData->m_stringSwitchJumpTables;
- m_rareData->m_liveCalleeLocalsAtYield = other.m_rareData->m_liveCalleeLocalsAtYield;
}
heap()->m_codeBlocks.add(this);
UnlinkedStringJumpTable::StringOffsetTable::iterator end = unlinkedCodeBlock->stringSwitchJumpTable(i).offsetTable.end();
for (; ptr != end; ++ptr) {
OffsetLocation offset;
- offset.branchOffset = ptr->value;
+ offset.branchOffset = ptr->value.branchOffset;
m_rareData->m_stringSwitchJumpTables[i].offsetTable.add(ptr->key, offset);
}
}
// Bookkeep the strongly referenced module environments.
HashSet<JSModuleEnvironment*> stronglyReferencedModuleEnvironments;
- // Bookkeep the merge point bytecode offsets.
- Vector<size_t> mergePointBytecodeOffsets;
-
RefCountedArray<Instruction> instructions(instructionCount);
+ unsigned valueProfileCount = 0;
+ auto linkValueProfile = [&](unsigned bytecodeOffset, unsigned opLength) {
+ unsigned valueProfileIndex = valueProfileCount++;
+ ValueProfile* profile = &m_valueProfiles[valueProfileIndex];
+ ASSERT(profile->m_bytecodeOffset == -1);
+ profile->m_bytecodeOffset = bytecodeOffset;
+ instructions[bytecodeOffset + opLength - 1] = profile;
+ };
+
for (unsigned i = 0; !instructionReader.atEnd(); ) {
const UnlinkedInstruction* pc = instructionReader.next();
case op_try_get_by_id:
case op_get_from_arguments:
case op_to_number: {
- ValueProfile* profile = &m_valueProfiles[pc[opLength - 1].u.operand];
- ASSERT(profile->m_bytecodeOffset == -1);
- profile->m_bytecodeOffset = i;
- instructions[i + opLength - 1] = profile;
+ linkValueProfile(i, opLength);
break;
}
case op_put_by_val: {
case op_call:
case op_tail_call:
case op_call_eval: {
- ValueProfile* profile = &m_valueProfiles[pc[opLength - 1].u.operand];
- ASSERT(profile->m_bytecodeOffset == -1);
- profile->m_bytecodeOffset = i;
- instructions[i + opLength - 1] = profile;
+ linkValueProfile(i, opLength);
int arrayProfileIndex = pc[opLength - 2].u.operand;
m_arrayProfiles[arrayProfileIndex] = ArrayProfile(i);
instructions[i + opLength - 2] = &m_arrayProfiles[arrayProfileIndex];
}
case op_construct: {
instructions[i + 5] = &m_llintCallLinkInfos[pc[5].u.operand];
- ValueProfile* profile = &m_valueProfiles[pc[opLength - 1].u.operand];
- ASSERT(profile->m_bytecodeOffset == -1);
- profile->m_bytecodeOffset = i;
- instructions[i + opLength - 1] = profile;
+ linkValueProfile(i, opLength);
break;
}
case op_get_array_length:
}
case op_get_from_scope: {
- ValueProfile* profile = &m_valueProfiles[pc[opLength - 1].u.operand];
- ASSERT(profile->m_bytecodeOffset == -1);
- profile->m_bytecodeOffset = i;
- instructions[i + opLength - 1] = profile;
+ linkValueProfile(i, opLength);
// get_from_scope dst, scope, id, GetPutInfo, Structure, Operand
break;
}
- case op_save: {
- unsigned liveCalleeLocalsIndex = pc[2].u.index;
- int offset = pc[3].u.operand;
- if (liveCalleeLocalsIndex >= mergePointBytecodeOffsets.size())
- mergePointBytecodeOffsets.resize(liveCalleeLocalsIndex + 1);
- mergePointBytecodeOffsets[liveCalleeLocalsIndex] = i + offset;
- break;
- }
-
default:
break;
}
m_instructions = WTFMove(instructions);
- // Perform bytecode liveness analysis to determine which locals are live and should be resumed when executing op_resume.
- if (unlinkedCodeBlock->parseMode() == SourceParseMode::GeneratorBodyMode) {
- if (size_t count = mergePointBytecodeOffsets.size()) {
- createRareDataIfNecessary();
- BytecodeLivenessAnalysis liveness(this);
- m_rareData->m_liveCalleeLocalsAtYield.grow(count);
- size_t liveCalleeLocalsIndex = 0;
- for (size_t bytecodeOffset : mergePointBytecodeOffsets) {
- m_rareData->m_liveCalleeLocalsAtYield[liveCalleeLocalsIndex] = liveness.getLivenessInfoAtBytecodeOffset(bytecodeOffset);
- ++liveCalleeLocalsIndex;
- }
- }
- }
-
// Set optimization thresholds only after m_instructions is initialized, since these
// rely on the instruction count (and are in theory permitted to also inspect the
// instruction stream to more accurately assess the cost of tier-up).
{
if (!m_rareData)
return 0;
-
- Vector<HandlerInfo>& exceptionHandlers = m_rareData->m_exceptionHandlers;
- for (size_t i = 0; i < exceptionHandlers.size(); ++i) {
- HandlerInfo& handler = exceptionHandlers[i];
- if ((requiredHandler == RequiredHandler::CatchHandler) && !handler.isCatchHandler())
- continue;
-
- // Handlers are ordered innermost first, so the first handler we encounter
- // that contains the source address is the correct handler to use.
- // This index used is either the BytecodeOffset or a CallSiteIndex.
- if (handler.start <= index && handler.end > index)
- return &handler;
- }
-
- return 0;
+ return HandlerInfo::handlerForIndex(m_rareData->m_exceptionHandlers, index, requiredHandler);
}
CallSiteIndex CodeBlock::newExceptionHandlingCallSiteIndex(CallSiteIndex originalCallSite)
if (m_rareData) {
m_rareData->m_switchJumpTables.shrinkToFit();
m_rareData->m_stringSwitchJumpTables.shrinkToFit();
- m_rareData->m_liveCalleeLocalsAtYield.shrinkToFit();
}
} // else don't shrink these, because we would have already pointed pointers into these tables.
}
ValueProfile* CodeBlock::valueProfileForBytecodeOffset(int bytecodeOffset)
{
- ValueProfile* result = binarySearch<ValueProfile, int>(
- m_valueProfiles, m_valueProfiles.size(), bytecodeOffset,
- getValueProfileBytecodeOffset<ValueProfile>);
- ASSERT(result->m_bytecodeOffset != -1);
- ASSERT(instructions()[bytecodeOffset + opcodeLength(
- m_vm->interpreter->getOpcodeID(
- instructions()[bytecodeOffset].u.opcode)) - 1].u.profile == result);
- return result;
+ OpcodeID opcodeID = m_vm->interpreter->getOpcodeID(instructions()[bytecodeOffset].u.opcode);
+ unsigned length = opcodeLength(opcodeID);
+ return instructions()[bytecodeOffset + length - 1].u.profile;
}
void CodeBlock::validate()
#endif
}
+BytecodeLivenessAnalysis& CodeBlock::livenessAnalysisSlow()
+{
+ std::unique_ptr<BytecodeLivenessAnalysis> analysis = std::make_unique<BytecodeLivenessAnalysis>(this);
+ {
+ ConcurrentJITLocker locker(m_lock);
+ if (!m_livenessAnalysis)
+ m_livenessAnalysis = WTFMove(analysis);
+ return *m_livenessAnalysis;
+ }
+}
+
+
} // namespace JSC
#include "ArrayProfile.h"
#include "ByValInfo.h"
#include "BytecodeConventions.h"
-#include "BytecodeLivenessAnalysis.h"
#include "CallLinkInfo.h"
#include "CallReturnOffsetToBytecodeOffset.h"
#include "CodeBlockHash.h"
namespace JSC {
+class BytecodeLivenessAnalysis;
class ExecState;
class JITAddGenerator;
class JSModuleEnvironment;
return index >= m_numVars;
}
- enum class RequiredHandler {
- CatchHandler,
- AnyHandler
- };
HandlerInfo* handlerForBytecodeOffset(unsigned bytecodeOffset, RequiredHandler = RequiredHandler::AnyHandler);
HandlerInfo* handlerForIndex(unsigned, RequiredHandler = RequiredHandler::AnyHandler);
void removeExceptionHandlerForCallSite(CallSiteIndex);
return static_cast<Instruction*>(returnAddress) - instructions().begin();
}
+ typedef JSC::Instruction Instruction;
+ typedef RefCountedArray<Instruction>& UnpackedInstructions;
+
unsigned numberOfInstructions() const { return m_instructions.size(); }
RefCountedArray<Instruction>& instructions() { return m_instructions; }
const RefCountedArray<Instruction>& instructions() const { return m_instructions; }
}
WriteBarrier<Unknown>& constantRegister(int index) { return m_constantRegisters[index - FirstConstantRegisterIndex]; }
- ALWAYS_INLINE bool isConstantRegisterIndex(int index) const { return index >= FirstConstantRegisterIndex; }
+ static ALWAYS_INLINE bool isConstantRegisterIndex(int index) { return index >= FirstConstantRegisterIndex; }
ALWAYS_INLINE JSValue getConstant(int index) const { return m_constantRegisters[index - FirstConstantRegisterIndex].get(); }
ALWAYS_INLINE SourceCodeRepresentation constantSourceCodeRepresentation(int index) const { return m_constantsSourceCodeRepresentation[index - FirstConstantRegisterIndex]; }
if (!!m_livenessAnalysis)
return *m_livenessAnalysis;
}
- std::unique_ptr<BytecodeLivenessAnalysis> analysis =
- std::make_unique<BytecodeLivenessAnalysis>(this);
- {
- ConcurrentJITLocker locker(m_lock);
- if (!m_livenessAnalysis)
- m_livenessAnalysis = WTFMove(analysis);
- return *m_livenessAnalysis;
- }
+ return livenessAnalysisSlow();
}
void validate();
StringJumpTable& addStringSwitchJumpTable() { createRareDataIfNecessary(); m_rareData->m_stringSwitchJumpTables.append(StringJumpTable()); return m_rareData->m_stringSwitchJumpTables.last(); }
StringJumpTable& stringSwitchJumpTable(int tableIndex) { RELEASE_ASSERT(m_rareData); return m_rareData->m_stringSwitchJumpTables[tableIndex]; }
- // Live callee registers at yield points.
- const FastBitVector& liveCalleeLocalsAtYield(unsigned index) const
- {
- RELEASE_ASSERT(m_rareData);
- return m_rareData->m_liveCalleeLocalsAtYield[index];
- }
- FastBitVector& liveCalleeLocalsAtYield(unsigned index)
- {
- RELEASE_ASSERT(m_rareData);
- return m_rareData->m_liveCalleeLocalsAtYield[index];
- }
-
EvalCodeCache& evalCodeCache() { createRareDataIfNecessary(); return m_rareData->m_evalCodeCache; }
enum ShrinkMode {
Vector<SimpleJumpTable> m_switchJumpTables;
Vector<StringJumpTable> m_stringSwitchJumpTables;
- Vector<FastBitVector> m_liveCalleeLocalsAtYield;
-
EvalCodeCache m_evalCodeCache;
};
private:
friend class CodeBlockSet;
+
+ BytecodeLivenessAnalysis& livenessAnalysisSlow();
CodeBlock* specialOSREntryBlockOrNull();
#define HandlerInfo_h
#include "CodeLocation.h"
+#include <wtf/Vector.h>
namespace JSC {
SynthesizedFinally = 3
};
+enum class RequiredHandler {
+ CatchHandler,
+ AnyHandler
+};
+
struct HandlerInfoBase {
HandlerType type() const { return static_cast<HandlerType>(typeBits); }
void setType(HandlerType type) { typeBits = static_cast<uint32_t>(type); }
bool isCatchHandler() const { return type() == HandlerType::Catch; }
+ template<typename Handler>
+    static Handler* handlerForIndex(Vector<Handler>& exceptionHandlers, unsigned index, RequiredHandler requiredHandler)
+    {
+        for (Handler& handler : exceptionHandlers) {
+ if ((requiredHandler == RequiredHandler::CatchHandler) && !handler.isCatchHandler())
+ continue;
+
+ // Handlers are ordered innermost first, so the first handler we encounter
+ // that contains the source address is the correct handler to use.
+            // The index used is either the BytecodeOffset or a CallSiteIndex.
+ if (handler.start <= index && handler.end > index)
+ return &handler;
+ }
+
+ return nullptr;
+ }
+
uint32_t start;
uint32_t end;
uint32_t target;
return 0;
}
+inline bool isBranch(OpcodeID opcodeID)
+{
+ switch (opcodeID) {
+ case op_jmp:
+ case op_jtrue:
+ case op_jfalse:
+ case op_jeq_null:
+ case op_jneq_null:
+ case op_jneq_ptr:
+ case op_jless:
+ case op_jlesseq:
+ case op_jgreater:
+ case op_jgreatereq:
+ case op_jnless:
+ case op_jnlesseq:
+ case op_jngreater:
+ case op_jngreatereq:
+ case op_switch_imm:
+ case op_switch_char:
+ case op_switch_string:
+ return true;
+ default:
+ return false;
+ }
+}
+
+inline bool isUnconditionalBranch(OpcodeID opcodeID)
+{
+ switch (opcodeID) {
+ case op_jmp:
+ return true;
+ default:
+ return false;
+ }
+}
+
+inline bool isTerminal(OpcodeID opcodeID)
+{
+ switch (opcodeID) {
+ case op_ret:
+ case op_end:
+ return true;
+ default:
+ return false;
+ }
+}
+
+inline bool isThrow(OpcodeID opcodeID)
+{
+ switch (opcodeID) {
+ case op_throw:
+ case op_throw_static_error:
+ return true;
+ default:
+ return false;
+ }
+}
+
} // namespace JSC
namespace WTF {
#include "config.h"
#include "PreciseJumpTargets.h"
+#include "InterpreterInlines.h"
#include "JSCInlines.h"
+#include "PreciseJumpTargetsInlines.h"
namespace JSC {
-template <size_t vectorSize>
-static void getJumpTargetsForBytecodeOffset(CodeBlock* codeBlock, Interpreter* interpreter, Instruction* instructionsBegin, unsigned bytecodeOffset, Vector<unsigned, vectorSize>& out)
+template <size_t vectorSize, typename Block, typename Instruction>
+static void getJumpTargetsForBytecodeOffset(Block* codeBlock, Interpreter* interpreter, Instruction* instructionsBegin, unsigned bytecodeOffset, Vector<unsigned, vectorSize>& out)
{
- OpcodeID opcodeID = interpreter->getOpcodeID(instructionsBegin[bytecodeOffset].u.opcode);
- Instruction* current = instructionsBegin + bytecodeOffset;
- switch (opcodeID) {
- case op_jmp:
- out.append(bytecodeOffset + current[1].u.operand);
- break;
- case op_jtrue:
- case op_jfalse:
- case op_jeq_null:
- case op_jneq_null:
- out.append(bytecodeOffset + current[2].u.operand);
- break;
- case op_jneq_ptr:
- case op_jless:
- case op_jlesseq:
- case op_jgreater:
- case op_jgreatereq:
- case op_jnless:
- case op_jnlesseq:
- case op_jngreater:
- case op_jngreatereq:
- case op_save: // The jump of op_save is purely for calculating liveness.
- out.append(bytecodeOffset + current[3].u.operand);
- break;
- case op_switch_imm:
- case op_switch_char: {
- SimpleJumpTable& table = codeBlock->switchJumpTable(current[1].u.operand);
- for (unsigned i = table.branchOffsets.size(); i--;)
- out.append(bytecodeOffset + table.branchOffsets[i]);
- out.append(bytecodeOffset + current[2].u.operand);
- break;
- }
- case op_switch_string: {
- StringJumpTable& table = codeBlock->stringSwitchJumpTable(current[1].u.operand);
- StringJumpTable::StringOffsetTable::iterator iter = table.offsetTable.begin();
- StringJumpTable::StringOffsetTable::iterator end = table.offsetTable.end();
- for (; iter != end; ++iter)
- out.append(bytecodeOffset + iter->value.branchOffset);
- out.append(bytecodeOffset + current[2].u.operand);
- break;
- }
- case op_loop_hint:
+ OpcodeID opcodeID = interpreter->getOpcodeID(instructionsBegin[bytecodeOffset]);
+ extractStoredJumpTargetsForBytecodeOffset(codeBlock, interpreter, instructionsBegin, bytecodeOffset, [&](int32_t& relativeOffset) {
+ out.append(bytecodeOffset + relativeOffset);
+ });
+    // op_loop_hint does not have a jump target stored in the bytecode instructions.
+ if (opcodeID == op_loop_hint)
out.append(bytecodeOffset);
- break;
- default:
- break;
- }
}
-void computePreciseJumpTargets(CodeBlock* codeBlock, Vector<unsigned, 32>& out)
+enum class ComputePreciseJumpTargetsMode {
+ FollowCodeBlockClaim,
+ ForceCompute,
+};
+
+template<ComputePreciseJumpTargetsMode Mode, typename Block, typename Instruction, size_t vectorSize>
+void computePreciseJumpTargetsInternal(Block* codeBlock, Instruction* instructionsBegin, unsigned instructionCount, Vector<unsigned, vectorSize>& out)
{
ASSERT(out.isEmpty());
// We will derive a superset of the jump targets that the code block thinks it has.
// So, if the code block claims there are none, then we are done.
- if (!codeBlock->numberOfJumpTargets())
+ if (Mode == ComputePreciseJumpTargetsMode::FollowCodeBlockClaim && !codeBlock->numberOfJumpTargets())
return;
for (unsigned i = codeBlock->numberOfExceptionHandlers(); i--;) {
}
Interpreter* interpreter = codeBlock->vm()->interpreter;
- Instruction* instructionsBegin = codeBlock->instructions().begin();
- unsigned instructionCount = codeBlock->instructions().size();
for (unsigned bytecodeOffset = 0; bytecodeOffset < instructionCount;) {
- OpcodeID opcodeID = interpreter->getOpcodeID(instructionsBegin[bytecodeOffset].u.opcode);
+ OpcodeID opcodeID = interpreter->getOpcodeID(instructionsBegin[bytecodeOffset]);
getJumpTargetsForBytecodeOffset(codeBlock, interpreter, instructionsBegin, bytecodeOffset, out);
bytecodeOffset += opcodeLengths[opcodeID];
}
out.shrinkToFit();
}
-void findJumpTargetsForBytecodeOffset(CodeBlock* codeBlock, unsigned bytecodeOffset, Vector<unsigned, 1>& out)
+void computePreciseJumpTargets(CodeBlock* codeBlock, Vector<unsigned, 32>& out)
{
- Interpreter* interpreter = codeBlock->vm()->interpreter;
- Instruction* instructionsBegin = codeBlock->instructions().begin();
- getJumpTargetsForBytecodeOffset(codeBlock, interpreter, instructionsBegin, bytecodeOffset, out);
+ computePreciseJumpTargetsInternal<ComputePreciseJumpTargetsMode::FollowCodeBlockClaim>(codeBlock, codeBlock->instructions().begin(), codeBlock->instructions().size(), out);
+}
+
+void computePreciseJumpTargets(CodeBlock* codeBlock, Instruction* instructionsBegin, unsigned instructionCount, Vector<unsigned, 32>& out)
+{
+ computePreciseJumpTargetsInternal<ComputePreciseJumpTargetsMode::FollowCodeBlockClaim>(codeBlock, instructionsBegin, instructionCount, out);
+}
+
+void computePreciseJumpTargets(UnlinkedCodeBlock* codeBlock, UnlinkedInstruction* instructionsBegin, unsigned instructionCount, Vector<unsigned, 32>& out)
+{
+ computePreciseJumpTargetsInternal<ComputePreciseJumpTargetsMode::FollowCodeBlockClaim>(codeBlock, instructionsBegin, instructionCount, out);
+}
+
+void recomputePreciseJumpTargets(UnlinkedCodeBlock* codeBlock, UnlinkedInstruction* instructionsBegin, unsigned instructionCount, Vector<unsigned>& out)
+{
+ computePreciseJumpTargetsInternal<ComputePreciseJumpTargetsMode::ForceCompute>(codeBlock, instructionsBegin, instructionCount, out);
+}
+
+void findJumpTargetsForBytecodeOffset(CodeBlock* codeBlock, Instruction* instructionsBegin, unsigned bytecodeOffset, Vector<unsigned, 1>& out)
+{
+ getJumpTargetsForBytecodeOffset(codeBlock, codeBlock->vm()->interpreter, instructionsBegin, bytecodeOffset, out);
+}
+
+void findJumpTargetsForBytecodeOffset(UnlinkedCodeBlock* codeBlock, UnlinkedInstruction* instructionsBegin, unsigned bytecodeOffset, Vector<unsigned, 1>& out)
+{
+ getJumpTargetsForBytecodeOffset(codeBlock, codeBlock->vm()->interpreter, instructionsBegin, bytecodeOffset, out);
}
} // namespace JSC
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
-#ifndef PreciseJumpTargets_h
-#define PreciseJumpTargets_h
+#pragma once
#include "CodeBlock.h"
namespace JSC {
+class UnlinkedCodeBlock;
+struct UnlinkedInstruction;
+
// Return a sorted list of bytecode indices that are the destinations of jumps.
void computePreciseJumpTargets(CodeBlock*, Vector<unsigned, 32>& out);
+void computePreciseJumpTargets(CodeBlock*, Instruction* instructionsBegin, unsigned instructionCount, Vector<unsigned, 32>& out);
+void computePreciseJumpTargets(UnlinkedCodeBlock*, UnlinkedInstruction* instructionsBegin, unsigned instructionCount, Vector<unsigned, 32>& out);
-void findJumpTargetsForBytecodeOffset(CodeBlock*, unsigned bytecodeOffset, Vector<unsigned, 1>& out);
-
-} // namespace JSC
+void recomputePreciseJumpTargets(UnlinkedCodeBlock*, UnlinkedInstruction* instructionsBegin, unsigned instructionCount, Vector<unsigned>& out);
-#endif // PreciseJumpTargets_h
+void findJumpTargetsForBytecodeOffset(CodeBlock*, Instruction* instructionsBegin, unsigned bytecodeOffset, Vector<unsigned, 1>& out);
+void findJumpTargetsForBytecodeOffset(UnlinkedCodeBlock*, UnlinkedInstruction* instructionsBegin, unsigned bytecodeOffset, Vector<unsigned, 1>& out);
+} // namespace JSC
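As a rough illustration of how the new unlinked-code entry points are intended to be called (a sketch only; `unlinkedCodeBlock`, `instructions`, and `someBytecodeOffset` are assumptions supplied by the caller, e.g. while building a BytecodeGraph for generatorification):

    // Hypothetical: collect every jump destination in an unlinked instruction stream.
    Vector<unsigned, 32> jumpTargets;
    computePreciseJumpTargets(unlinkedCodeBlock, instructions.begin(), instructions.size(), jumpTargets);
    // `jumpTargets` now holds a sorted list of bytecode offsets that are jump destinations.

    // Hypothetical: query the jump destinations of the single bytecode at `someBytecodeOffset`.
    Vector<unsigned, 1> targets;
    findJumpTargetsForBytecodeOffset(unlinkedCodeBlock, instructions.begin(), someBytecodeOffset, targets);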
--- /dev/null
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "InterpreterInlines.h"
+#include "Opcode.h"
+#include "PreciseJumpTargets.h"
+
+namespace JSC {
+
+template<typename Block, typename Instruction, typename Function>
+inline void extractStoredJumpTargetsForBytecodeOffset(Block* codeBlock, Interpreter* interpreter, Instruction* instructionsBegin, unsigned bytecodeOffset, Function function)
+{
+ OpcodeID opcodeID = interpreter->getOpcodeID(instructionsBegin[bytecodeOffset]);
+ Instruction* current = instructionsBegin + bytecodeOffset;
+ switch (opcodeID) {
+ case op_jmp:
+ function(current[1].u.operand);
+ break;
+ case op_jtrue:
+ case op_jfalse:
+ case op_jeq_null:
+ case op_jneq_null:
+ function(current[2].u.operand);
+ break;
+ case op_jneq_ptr:
+ case op_jless:
+ case op_jlesseq:
+ case op_jgreater:
+ case op_jgreatereq:
+ case op_jnless:
+ case op_jnlesseq:
+ case op_jngreater:
+ case op_jngreatereq:
+ function(current[3].u.operand);
+ break;
+ case op_switch_imm:
+ case op_switch_char: {
+ auto& table = codeBlock->switchJumpTable(current[1].u.operand);
+ for (unsigned i = table.branchOffsets.size(); i--;)
+ function(table.branchOffsets[i]);
+ function(current[2].u.operand);
+ break;
+ }
+ case op_switch_string: {
+ auto& table = codeBlock->stringSwitchJumpTable(current[1].u.operand);
+ auto iter = table.offsetTable.begin();
+ auto end = table.offsetTable.end();
+ for (; iter != end; ++iter)
+ function(iter->value.branchOffset);
+ function(current[2].u.operand);
+ break;
+ }
+ default:
+ break;
+ }
+}
+
+} // namespace JSC
#include "UnlinkedCodeBlock.h"
#include "BytecodeGenerator.h"
+#include "BytecodeRewriter.h"
#include "ClassInfo.h"
#include "CodeCache.h"
#include "Executable.h"
#include "JSString.h"
#include "JSCInlines.h"
#include "Parser.h"
+#include "PreciseJumpTargetsInlines.h"
#include "SourceProvider.h"
#include "Structure.h"
#include "SymbolTable.h"
return *m_unlinkedInstructions;
}
+UnlinkedHandlerInfo* UnlinkedCodeBlock::handlerForBytecodeOffset(unsigned bytecodeOffset, RequiredHandler requiredHandler)
+{
+ return handlerForIndex(bytecodeOffset, requiredHandler);
+}
+
+UnlinkedHandlerInfo* UnlinkedCodeBlock::handlerForIndex(unsigned index, RequiredHandler requiredHandler)
+{
+ if (!m_rareData)
+ return nullptr;
+ return UnlinkedHandlerInfo::handlerForIndex(m_rareData->m_exceptionHandlers, index, requiredHandler);
+}
+
+void UnlinkedCodeBlock::applyModification(BytecodeRewriter& rewriter)
+{
+ // Before applying the changes, we adjust the jumps based on the original bytecode offset, the offset to the jump target, and
+ // the insertion information.
+
+ BytecodeGraph<UnlinkedCodeBlock>& graph = rewriter.graph();
+ UnlinkedInstruction* instructionsBegin = graph.instructions().begin();
+
+ for (int bytecodeOffset = 0, instructionCount = graph.instructions().size(); bytecodeOffset < instructionCount;) {
+ UnlinkedInstruction* current = instructionsBegin + bytecodeOffset;
+ OpcodeID opcodeID = current[0].u.opcode;
+ extractStoredJumpTargetsForBytecodeOffset(this, vm()->interpreter, instructionsBegin, bytecodeOffset, [&](int32_t& relativeOffset) {
+ relativeOffset = rewriter.adjustJumpTarget(bytecodeOffset, bytecodeOffset + relativeOffset);
+ });
+ bytecodeOffset += opcodeLength(opcodeID);
+ }
+
+ // Then, exception handlers should be adjusted.
+ if (m_rareData) {
+ for (UnlinkedHandlerInfo& handler : m_rareData->m_exceptionHandlers) {
+ handler.target = rewriter.adjustAbsoluteOffset(handler.target);
+ handler.start = rewriter.adjustAbsoluteOffset(handler.start);
+ handler.end = rewriter.adjustAbsoluteOffset(handler.end);
+ }
+
+ for (size_t i = 0; i < m_rareData->m_opProfileControlFlowBytecodeOffsets.size(); ++i)
+ m_rareData->m_opProfileControlFlowBytecodeOffsets[i] = rewriter.adjustAbsoluteOffset(m_rareData->m_opProfileControlFlowBytecodeOffsets[i]);
+
+ if (!m_rareData->m_typeProfilerInfoMap.isEmpty()) {
+ HashMap<unsigned, RareData::TypeProfilerExpressionRange> adjustedTypeProfilerInfoMap;
+ for (auto& entry : m_rareData->m_typeProfilerInfoMap)
+ adjustedTypeProfilerInfoMap.set(rewriter.adjustAbsoluteOffset(entry.key), entry.value);
+ m_rareData->m_typeProfilerInfoMap.swap(adjustedTypeProfilerInfoMap);
+ }
+ }
+
+ for (size_t i = 0; i < m_propertyAccessInstructions.size(); ++i)
+ m_propertyAccessInstructions[i] = rewriter.adjustAbsoluteOffset(m_propertyAccessInstructions[i]);
+
+ for (size_t i = 0; i < m_expressionInfo.size(); ++i)
+ m_expressionInfo[i].instructionOffset = rewriter.adjustAbsoluteOffset(m_expressionInfo[i].instructionOffset);
+
+ // Then, modify the unlinked instructions.
+ rewriter.applyModification();
+
+    // And recompute the jump targets based on the modified unlinked instructions.
+ m_jumpTargets.clear();
+ recomputePreciseJumpTargets(this, graph.instructions().begin(), graph.instructions().size(), m_jumpTargets);
+}
+
}
namespace JSC {
+class BytecodeRewriter;
+class Debugger;
class FunctionMetadataNode;
class FunctionExecutable;
class JSScope;
typedef unsigned UnlinkedLLIntCallLinkInfo;
struct UnlinkedStringJumpTable {
- typedef HashMap<RefPtr<StringImpl>, int32_t> StringOffsetTable;
+ struct OffsetLocation {
+ int32_t branchOffset;
+ };
+
+ typedef HashMap<RefPtr<StringImpl>, OffsetLocation> StringOffsetTable;
StringOffsetTable offsetTable;
inline int32_t offsetForValue(StringImpl* value, int32_t defaultOffset)
StringOffsetTable::const_iterator loc = offsetTable.find(value);
if (loc == end)
return defaultOffset;
- return loc->value;
+ return loc->value.branchOffset;
}
};
} u;
};
+class BytecodeGeneratorification;
+
class UnlinkedCodeBlock : public JSCell {
public:
typedef JSCell Base;
enum { CallFunction, ApplyFunction };
+ typedef UnlinkedInstruction Instruction;
+ typedef Vector<UnlinkedInstruction, 0, UnsafeVectorOverflow> UnpackedInstructions;
+
bool isConstructor() const { return m_isConstructor; }
bool isStrictMode() const { return m_isStrictMode; }
bool usesEval() const { return m_usesEval; }
unsigned jumpTarget(int index) const { return m_jumpTargets[index]; }
unsigned lastJumpTarget() const { return m_jumpTargets.last(); }
+ UnlinkedHandlerInfo* handlerForBytecodeOffset(unsigned bytecodeOffset, RequiredHandler = RequiredHandler::AnyHandler);
+ UnlinkedHandlerInfo* handlerForIndex(unsigned, RequiredHandler = RequiredHandler::AnyHandler);
+
bool isBuiltinFunction() const { return m_isBuiltinFunction; }
ConstructorKind constructorKind() const { return static_cast<ConstructorKind>(m_constructorKind); }
void setInstructions(std::unique_ptr<UnlinkedInstructionStream>);
const UnlinkedInstructionStream& instructions() const;
+ int numCalleeLocals() const { return m_numCalleeLocals; }
+
int m_numVars;
int m_numCapturedVars;
int m_numCalleeLocals;
}
private:
+ friend class BytecodeRewriter;
+ void applyModification(BytecodeRewriter&);
void createRareDataIfNecessary()
{
#ifndef VirtualRegister_h
#define VirtualRegister_h
+#include "BytecodeConventions.h"
#include "CallFrame.h"
-
#include <wtf/PrintStream.h>
namespace JSC {
private:
static const int s_invalidVirtualRegister = 0x3fffffff;
- static const int s_firstConstantRegisterIndex = 0x40000000;
+ static const int s_firstConstantRegisterIndex = FirstConstantRegisterIndex;
static int localToOperand(int local) { return -1 - local; }
static int operandToLocal(int operand) { return -1 - operand; }
#include "ArithProfile.h"
#include "BuiltinExecutables.h"
+#include "BytecodeGeneratorification.h"
#include "BytecodeLivenessAnalysis.h"
#include "Interpreter.h"
#include "JSFunction.h"
m_codeBlock->addExceptionHandler(info);
}
+
+ if (m_codeBlock->parseMode() == SourceParseMode::GeneratorBodyMode)
+ performGeneratorification(m_codeBlock.get(), m_instructions, m_generatorFrameSymbolTable.get(), m_generatorFrameSymbolTableIndex);
+
m_codeBlock->setInstructions(std::make_unique<UnlinkedInstructionStream>(m_instructions));
m_codeBlock->shrinkToFit();
bool shouldCaptureAllOfTheThings = m_shouldEmitDebugHooks || codeBlock->usesEval();
bool needsArguments = (functionNode->usesArguments() || codeBlock->usesEval() || (functionNode->usesArrowFunction() && !codeBlock->isArrowFunction() && isArgumentsUsedInInnerArrowFunction()));
- // Generator never provides "arguments". "arguments" reference will be resolved in an upper generator function scope.
- if (parseMode == SourceParseMode::GeneratorBodyMode)
+ if (parseMode == SourceParseMode::GeneratorBodyMode) {
+        // A generator never provides "arguments". The "arguments" reference will be resolved in an enclosing generator function scope.
needsArguments = false;
+        // A generator uses the var scope to save and resume its variables, so the lexical scope is always instantiated.
+ shouldCaptureSomeOfTheThings = true;
+ }
+
if (parseMode == SourceParseMode::GeneratorWrapperFunctionMode && needsArguments) {
// Generator does not provide "arguments". Instead, wrapping GeneratorFunction provides "arguments".
// This is because arguments of a generator should be evaluated before starting it.
// function *gen(a, b = hello())
// {
// return {
- // @generatorNext: function (@generator, @generatorState, @generatorValue, @generatorResumeMode)
+ // @generatorNext: function (@generator, @generatorState, @generatorValue, @generatorResumeMode, @generatorFrame)
// {
// arguments; // This `arguments` should reference to the gen's arguments.
// ...
return captures(uid) ? VarKind::Scope : VarKind::Stack;
};
- emitEnter();
-
- allocateAndEmitScope();
-
m_calleeRegister.setIndex(CallFrameSlot::callee);
initializeParameters(parameters);
ASSERT(!(isSimpleParameterList && m_restParameter));
- // Before emitting a scope creation, emit a generator prologue that contains jump based on a generator's state.
- if (parseMode == SourceParseMode::GeneratorBodyMode) {
- m_generatorRegister = &m_parameters[1];
-
- // Jump with switch_imm based on @generatorState. We don't take the coroutine styled generator implementation.
- // When calling `next()`, we would like to enter the same prologue instead of jumping based on the saved instruction pointer.
- // It's suitale for inlining, because it just inlines one `next` function implementation.
+ emitEnter();
- beginGenerator(generatorStateRegister());
+ if (parseMode == SourceParseMode::GeneratorBodyMode)
+ m_generatorRegister = &m_parameters[1];
- // Initial state.
- emitGeneratorStateLabel();
- }
+ allocateAndEmitScope();
if (functionNameIsInScope(functionNode->ident(), functionNode->functionMode())) {
ASSERT(parseMode != SourceParseMode::GeneratorBodyMode);
emitLoadNewTargetFromArrowFunctionLexicalEnvironment();
}
+    // Set up the lexical environment scope as the generator frame. We store the saved and resumed generator registers into this scope with symbol keys.
+    // Since they are symbol-keyed, these variables cannot be reached from ordinary user code.
+ if (SourceParseMode::GeneratorBodyMode == parseMode) {
+ ASSERT(m_lexicalEnvironmentRegister);
+ m_generatorFrameSymbolTable.set(*m_vm, functionSymbolTable);
+ m_generatorFrameSymbolTableIndex = symbolTableConstantIndex;
+ emitMove(generatorFrameRegister(), m_lexicalEnvironmentRegister);
+ emitPutById(generatorRegister(), propertyNames().builtinNames().generatorFramePrivateName(), generatorFrameRegister());
+ }
+
bool shouldInitializeBlockScopedFunctions = false; // We generate top-level function declarations in ::generate().
pushLexicalScope(m_scopeNode, TDZCheckOptimization::Optimize, NestedScopeType::IsNotNested, nullptr, shouldInitializeBlockScopedFunctions);
}
if (flipTries) {
while (m_tryContextStack.size() != finallyContext.tryContextStackSize) {
ASSERT(m_tryContextStack.size() > finallyContext.tryContextStackSize);
- TryContext context = m_tryContextStack.last();
- m_tryContextStack.removeLast();
+ TryContext context = m_tryContextStack.takeLast();
TryRange range;
range.start = context.start;
range.end = beforeFinally;
ASSERT(nodes[i]->isString());
StringImpl* clause = static_cast<StringNode*>(nodes[i])->value().impl();
- jumpTable.offsetTable.add(clause, labels[i]->bind(switchAddress, switchAddress + 3));
+ jumpTable.offsetTable.add(clause, UnlinkedStringJumpTable::OffsetLocation { labels[i]->bind(switchAddress, switchAddress + 3) });
}
}
void BytecodeGenerator::emitYieldPoint(RegisterID* argument)
{
RefPtr<Label> mergePoint = newLabel();
- size_t yieldPointIndex = m_generatorResumeLabels.size();
- emitGeneratorStateChange(yieldPointIndex);
- // First yield point is used for initial sequence.
- unsigned liveCalleeLocalsIndex = yieldPointIndex - 1;
- emitSave(mergePoint.get(), liveCalleeLocalsIndex);
- emitReturn(argument);
- emitResume(mergePoint.get(), liveCalleeLocalsIndex);
-}
+ unsigned yieldPointIndex = m_yieldPoints++;
+ emitGeneratorStateChange(yieldPointIndex + 1);
+
+ // Split the try range here.
+ RefPtr<Label> savePoint = emitLabel(newLabel().get());
+ for (unsigned i = m_tryContextStack.size(); i--;) {
+ TryContext& context = m_tryContextStack[i];
+ TryRange range;
+ range.start = context.start;
+ range.end = savePoint;
+ range.tryData = context.tryData;
+ m_tryRanges.append(range);
+
+        // The try range will be restarted at the merge point.
+ context.start = mergePoint;
+ }
+ Vector<TryContext> savedTryContextStack;
+ m_tryContextStack.swap(savedTryContextStack);
-void BytecodeGenerator::emitSave(Label* mergePoint, unsigned liveCalleeLocalsIndex)
-{
- size_t begin = instructions().size();
- emitOpcode(op_save);
- instructions().append(m_generatorRegister->index());
- instructions().append(liveCalleeLocalsIndex);
- instructions().append(mergePoint->bind(begin, instructions().size()));
-}
+ emitOpcode(op_yield);
+ instructions().append(generatorFrameRegister()->index());
+ instructions().append(yieldPointIndex);
+ instructions().append(argument->index());
-void BytecodeGenerator::emitResume(Label* mergePoint, unsigned liveCalleeLocalsIndex)
-{
- emitGeneratorStateLabel();
- emitOpcode(op_resume);
- instructions().append(m_generatorRegister->index());
- instructions().append(liveCalleeLocalsIndex);
- emitLabel(mergePoint);
+    // Restore the try contexts, whose start offsets were updated to the merge point.
+ m_tryContextStack.swap(savedTryContextStack);
+ emitLabel(mergePoint.get());
}
RegisterID* BytecodeGenerator::emitYield(RegisterID* argument)
emitPutById(generatorRegister(), propertyNames().builtinNames().generatorStatePrivateName(), completedState);
}
-void BytecodeGenerator::emitGeneratorStateLabel()
-{
- RefPtr<Label> label = newLabel();
- m_generatorResumeLabels.append(label.get());
- emitLabel(label.get());
-}
-
-void BytecodeGenerator::beginGenerator(RegisterID* state)
-{
- beginSwitch(state, SwitchInfo::SwitchImmediate);
-}
-
-void BytecodeGenerator::endGenerator(Label* defaultLabel)
-{
- SwitchInfo switchInfo = m_switchContextStack.last();
- m_switchContextStack.removeLast();
-
- instructions()[switchInfo.bytecodeOffset + 1] = m_codeBlock->numberOfSwitchJumpTables();
- instructions()[switchInfo.bytecodeOffset + 2] = defaultLabel->bind(switchInfo.bytecodeOffset, switchInfo.bytecodeOffset + 3);
-
- UnlinkedSimpleJumpTable& jumpTable = m_codeBlock->addSwitchJumpTable();
- int32_t switchAddress = switchInfo.bytecodeOffset;
- jumpTable.min = 0;
- jumpTable.branchOffsets.resize(m_generatorResumeLabels.size() + 1);
- jumpTable.branchOffsets.fill(0);
- for (uint32_t i = 0; i < m_generatorResumeLabels.size(); ++i) {
- // We're emitting this after the clause labels should have been fixed, so
- // the labels should not be "forward" references
- ASSERT(!m_generatorResumeLabels[i]->isForward());
- jumpTable.add(i, m_generatorResumeLabels[i]->bind(switchAddress, switchAddress + 3));
- }
-}
-
} // namespace JSC
namespace WTF {
#include "CodeBlock.h"
#include "Instruction.h"
#include "Interpreter.h"
+#include "JSGeneratorFunction.h"
#include "Label.h"
#include "LabelScope.h"
#include "Nodes.h"
void endSwitch(uint32_t clauseCount, RefPtr<Label>*, ExpressionNode**, Label* defaultLabel, int32_t min, int32_t range);
void emitYieldPoint(RegisterID*);
- void emitSave(Label* mergePoint, unsigned liveCalleeLocalsIndex);
- void emitResume(Label* mergePoint, unsigned liveCalleeLocalsIndex);
void emitGeneratorStateLabel();
void emitGeneratorStateChange(int32_t state);
RegisterID* emitYield(RegisterID* argument);
RegisterID* emitDelegateYield(RegisterID* argument, ThrowableExpressionData*);
- void beginGenerator(RegisterID*);
- void endGenerator(Label* defaultLabel);
- RegisterID* generatorStateRegister() { return &m_parameters[2]; }
- RegisterID* generatorValueRegister() { return &m_parameters[3]; }
- RegisterID* generatorResumeModeRegister() { return &m_parameters[4]; }
+ RegisterID* generatorStateRegister() { return &m_parameters[static_cast<int32_t>(JSGeneratorFunction::GeneratorArgument::State)]; }
+ RegisterID* generatorValueRegister() { return &m_parameters[static_cast<int32_t>(JSGeneratorFunction::GeneratorArgument::Value)]; }
+ RegisterID* generatorResumeModeRegister() { return &m_parameters[static_cast<int32_t>(JSGeneratorFunction::GeneratorArgument::ResumeMode)]; }
+ RegisterID* generatorFrameRegister() { return &m_parameters[static_cast<int32_t>(JSGeneratorFunction::GeneratorArgument::Frame)]; }
CodeType codeType() const { return m_codeType; }
Vector<SwitchInfo> m_switchContextStack;
Vector<RefPtr<ForInContext>> m_forInContextStack;
Vector<TryContext> m_tryContextStack;
- Vector<RefPtr<Label>> m_generatorResumeLabels;
+ unsigned m_yieldPoints { 0 };
+
+ Strong<SymbolTable> m_generatorFrameSymbolTable;
+ int m_generatorFrameSymbolTableIndex { 0 };
+
enum FunctionVariableType : uint8_t { NormalFunctionVariable, GlobalFunctionVariable };
Vector<std::pair<FunctionMetadataNode*, FunctionVariableType>> m_functionsToInitialize;
bool m_needToInitializeArguments { false };
RefPtr<Label> done = generator.newLabel();
generator.emitLabel(done.get());
generator.emitReturn(generator.emitLoad(nullptr, jsUndefined()));
- generator.endGenerator(done.get());
break;
}
#include "DirectArguments.h"
#include "FTLForOSREntryJITCode.h"
#include "FTLOSREntry.h"
-#include "HostCallReturnValue.h"
#include "GetterSetter.h"
+#include "HostCallReturnValue.h"
#include "Interpreter.h"
#include "JIT.h"
#include "JITExceptions.h"
return jsString(&vm, builder.toString());
}
-ALWAYS_INLINE static HandlerInfo* findExceptionHandler(StackVisitor& visitor, CodeBlock* codeBlock, CodeBlock::RequiredHandler requiredHandler)
+ALWAYS_INLINE static HandlerInfo* findExceptionHandler(StackVisitor& visitor, CodeBlock* codeBlock, RequiredHandler requiredHandler)
{
ASSERT(codeBlock);
#if ENABLE(DFG_JIT)
if (!codeBlock)
return StackVisitor::Continue;
- m_handler = findExceptionHandler(visitor, codeBlock, CodeBlock::RequiredHandler::CatchHandler);
+ m_handler = findExceptionHandler(visitor, codeBlock, RequiredHandler::CatchHandler);
if (m_handler)
return StackVisitor::Done;
m_handler = nullptr;
if (!m_isTermination) {
if (m_codeBlock && !isWebAssemblyExecutable(m_codeBlock->ownerExecutable()))
- m_handler = findExceptionHandler(visitor, m_codeBlock, CodeBlock::RequiredHandler::AnyHandler);
+ m_handler = findExceptionHandler(visitor, m_codeBlock, RequiredHandler::AnyHandler);
}
if (m_handler)
struct HandlerInfo;
struct Instruction;
struct ProtoCallFrame;
+ struct UnlinkedInstruction;
enum UnwindStart { UnwindFromCurrentFrame, UnwindFromCallerFrame };
return opcode;
#endif
}
-
+
+ OpcodeID getOpcodeID(const Instruction&);
+ OpcodeID getOpcodeID(const UnlinkedInstruction&);
+
bool isOpcode(Opcode);
JSValue execute(ProgramExecutable*, CallFrame*, JSObject* thisObj);
--- /dev/null
+/*
+ * Copyright (C) 2016 Yusuke Suzuki <utatane.tea@gmail.com>
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "Instruction.h"
+#include "Interpreter.h"
+#include "UnlinkedCodeBlock.h"
+
+namespace JSC {
+
+inline OpcodeID Interpreter::getOpcodeID(const Instruction& instruction)
+{
+ return getOpcodeID(instruction.u.opcode);
+}
+
+inline OpcodeID Interpreter::getOpcodeID(const UnlinkedInstruction& instruction)
+{
+ return instruction.u.opcode;
+}
+
+} // namespace JSC
DEFINE_OP(op_get_rest_length)
DEFINE_OP(op_check_tdz)
DEFINE_OP(op_assert)
- DEFINE_OP(op_save)
- DEFINE_OP(op_resume)
DEFINE_OP(op_debug)
DEFINE_OP(op_del_by_id)
DEFINE_OP(op_del_by_val)
void emit_op_get_rest_length(Instruction*);
void emit_op_check_tdz(Instruction*);
void emit_op_assert(Instruction*);
- void emit_op_save(Instruction*);
- void emit_op_resume(Instruction*);
void emit_op_debug(Instruction*);
void emit_op_del_by_id(Instruction*);
void emit_op_del_by_val(Instruction*);
#endif
}
-void JIT::emit_op_save(Instruction* currentInstruction)
-{
- JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_save);
- slowPathCall.call();
-}
-
-void JIT::emit_op_resume(Instruction* currentInstruction)
-{
- JITSlowPathCall slowPathCall(this, currentInstruction, slow_path_resume);
- slowPathCall.call();
-}
-
} // namespace JSC
#endif // ENABLE(JIT)
dispatch(3)
-_llint_op_save:
- traceExecution()
- callOpcodeSlowPath(_slow_path_save)
- dispatch(4)
-
-
-_llint_op_resume:
- traceExecution()
- callOpcodeSlowPath(_slow_path_resume)
- dispatch(3)
+_llint_op_yield:
+ notSupported()
_llint_op_create_lexical_environment:
if (m_lexer->isReparsingFunction()) {
ParserFunctionInfo<ASTBuilder> functionInfo;
if (parseMode == SourceParseMode::GeneratorBodyMode)
- m_parameters = createGeneratorParameters(context);
+ m_parameters = createGeneratorParameters(context, functionInfo.parameterCount);
else
m_parameters = parseFunctionParameters(context, parseMode, functionInfo);
ParserFunctionInfo<TreeBuilder> info;
info.name = &m_vm->propertyNames->nullIdentifier;
- createGeneratorParameters(context);
+ createGeneratorParameters(context, info.parameterCount);
info.startOffset = parametersStart;
info.startLine = tokenLine();
- info.parameterCount = 4; // generator, state, value, resume mode
{
AutoPopScopeRef generatorBodyScope(this, pushScope());
}
template <typename LexerType>
-template <class TreeBuilder> typename TreeBuilder::FormalParameterList Parser<LexerType>::createGeneratorParameters(TreeBuilder& context)
+template <class TreeBuilder> typename TreeBuilder::FormalParameterList Parser<LexerType>::createGeneratorParameters(TreeBuilder& context, unsigned& parameterCount)
{
auto parameters = context.createFormalParameterList();
JSTokenLocation location(tokenLocation());
JSTextPosition position = tokenStartPosition();
- // @generator
- declareParameter(&m_vm->propertyNames->builtinNames().generatorPrivateName());
- auto generator = context.createBindingLocation(location, m_vm->propertyNames->builtinNames().generatorPrivateName(), position, position, AssignmentContext::DeclarationStatement);
- context.appendParameter(parameters, generator, 0);
+ auto addParameter = [&](const Identifier& name) {
+ declareParameter(&name);
+ auto binding = context.createBindingLocation(location, name, position, position, AssignmentContext::DeclarationStatement);
+ context.appendParameter(parameters, binding, 0);
+ ++parameterCount;
+ };
+ // @generator
+ addParameter(m_vm->propertyNames->builtinNames().generatorPrivateName());
// @generatorState
- declareParameter(&m_vm->propertyNames->builtinNames().generatorStatePrivateName());
- auto generatorState = context.createBindingLocation(location, m_vm->propertyNames->builtinNames().generatorStatePrivateName(), position, position, AssignmentContext::DeclarationStatement);
- context.appendParameter(parameters, generatorState, 0);
-
+ addParameter(m_vm->propertyNames->builtinNames().generatorStatePrivateName());
// @generatorValue
- declareParameter(&m_vm->propertyNames->builtinNames().generatorValuePrivateName());
- auto generatorValue = context.createBindingLocation(location, m_vm->propertyNames->builtinNames().generatorValuePrivateName(), position, position, AssignmentContext::DeclarationStatement);
- context.appendParameter(parameters, generatorValue, 0);
-
+ addParameter(m_vm->propertyNames->builtinNames().generatorValuePrivateName());
// @generatorResumeMode
- declareParameter(&m_vm->propertyNames->builtinNames().generatorResumeModePrivateName());
- auto generatorResumeMode = context.createBindingLocation(location, m_vm->propertyNames->builtinNames().generatorResumeModePrivateName(), position, position, AssignmentContext::DeclarationStatement);
- context.appendParameter(parameters, generatorResumeMode, 0);
+ addParameter(m_vm->propertyNames->builtinNames().generatorResumeModePrivateName());
+ // @generatorFrame
+ addParameter(m_vm->propertyNames->builtinNames().generatorFramePrivateName());
return parameters;
}
ALWAYS_INLINE bool isArrowFunctionParameters();
template <class TreeBuilder, class FunctionInfoType> NEVER_INLINE typename TreeBuilder::FormalParameterList parseFunctionParameters(TreeBuilder&, SourceParseMode, FunctionInfoType&);
- template <class TreeBuilder> NEVER_INLINE typename TreeBuilder::FormalParameterList createGeneratorParameters(TreeBuilder&);
+ template <class TreeBuilder> NEVER_INLINE typename TreeBuilder::FormalParameterList createGeneratorParameters(TreeBuilder&, unsigned& parameterCount);
template <class TreeBuilder> NEVER_INLINE TreeClassExpression parseClass(TreeBuilder&, FunctionNameRequirements, ParserClassInfo<TreeBuilder>&);
#include "Error.h"
#include "ErrorHandlingScope.h"
#include "ExceptionFuzz.h"
-#include "GeneratorFrame.h"
#include "GetterSetter.h"
#include "HostCallReturnValue.h"
#include "Interpreter.h"
END();
}
-SLOW_PATH_DECL(slow_path_save)
-{
- // Only save variables and temporary registers. The scope registers are included in them.
- // But parameters are not included. Because the generator implementation replaces the values of parameters on each generator.next() call.
- BEGIN();
- JSValue generator = OP(1).jsValue();
- GeneratorFrame* frame = nullptr;
- JSValue value = generator.get(exec, exec->propertyNames().builtinNames().generatorFramePrivateName());
- if (!value.isNull())
- frame = jsCast<GeneratorFrame*>(value);
- else {
- // FIXME: Once JSGenerator specialized object is introduced, this GeneratorFrame should be embeded into it to avoid allocations.
- // https://bugs.webkit.org/show_bug.cgi?id=151545
- frame = GeneratorFrame::create(exec->vm(), exec->codeBlock()->numCalleeLocals());
- PutPropertySlot slot(generator, true, PutPropertySlot::PutById);
- asObject(generator)->methodTable(exec->vm())->put(asObject(generator), exec, exec->propertyNames().builtinNames().generatorFramePrivateName(), frame, slot);
- }
- unsigned liveCalleeLocalsIndex = pc[2].u.unsignedValue;
- frame->save(exec, exec->codeBlock()->liveCalleeLocalsAtYield(liveCalleeLocalsIndex));
- END();
-}
-
-SLOW_PATH_DECL(slow_path_resume)
-{
- BEGIN();
- JSValue generator = OP(1).jsValue();
- GeneratorFrame* frame = jsCast<GeneratorFrame*>(generator.get(exec, exec->propertyNames().builtinNames().generatorFramePrivateName()));
- unsigned liveCalleeLocalsIndex = pc[2].u.unsignedValue;
- frame->resume(exec, exec->codeBlock()->liveCalleeLocalsAtYield(liveCalleeLocalsIndex));
- END();
-}
-
SLOW_PATH_DECL(slow_path_create_lexical_environment)
{
BEGIN();
SLOW_PATH_HIDDEN_DECL(slow_path_to_index_string);
SLOW_PATH_HIDDEN_DECL(slow_path_profile_type_clear_log);
SLOW_PATH_HIDDEN_DECL(slow_path_assert);
-SLOW_PATH_HIDDEN_DECL(slow_path_save);
-SLOW_PATH_HIDDEN_DECL(slow_path_resume);
SLOW_PATH_HIDDEN_DECL(slow_path_create_lexical_environment);
SLOW_PATH_HIDDEN_DECL(slow_path_push_with_scope);
SLOW_PATH_HIDDEN_DECL(slow_path_resolve_scope);
+++ /dev/null
-/*
- * Copyright (C) 2015 Yusuke Suzuki <utatane.tea@gmail.com>.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
- * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
- * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
- * THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#include "config.h"
-#include "GeneratorFrame.h"
-
-#include "CodeBlock.h"
-#include "HeapIterationScope.h"
-#include "JSCJSValueInlines.h"
-#include "JSCellInlines.h"
-#include "SlotVisitorInlines.h"
-#include "StructureInlines.h"
-
-namespace JSC {
-
-const ClassInfo GeneratorFrame::s_info = { "GeneratorFrame", nullptr, nullptr, CREATE_METHOD_TABLE(GeneratorFrame) };
-
-GeneratorFrame::GeneratorFrame(VM& vm, size_t numberOfCalleeLocals)
- : Base(vm, vm.generatorFrameStructure.get())
- , m_numberOfCalleeLocals(numberOfCalleeLocals)
-{
-}
-
-void GeneratorFrame::finishCreation(VM& vm)
-{
- Base::finishCreation(vm);
- for (size_t i = 0; i < m_numberOfCalleeLocals; ++i)
- localAt(i).clear();
-}
-
-Structure* GeneratorFrame::createStructure(VM& vm, JSGlobalObject* globalObject, JSValue prototype)
-{
- return Structure::create(vm, globalObject, prototype, TypeInfo(CellType, StructureFlags), info());
-}
-
-GeneratorFrame* GeneratorFrame::create(VM& vm, size_t numberOfLocals)
-{
- GeneratorFrame* result =
- new (
- NotNull,
- allocateCell<GeneratorFrame>(vm.heap, allocationSizeForLocals(numberOfLocals)))
- GeneratorFrame(vm, numberOfLocals);
- result->finishCreation(vm);
- return result;
-}
-
-void GeneratorFrame::save(ExecState* exec, const FastBitVector& liveCalleeLocals)
-{
- // Only save callee locals.
- // Every time a generator is called (or resumed), parameters should be replaced.
- ASSERT(liveCalleeLocals.numBits() <= m_numberOfCalleeLocals);
- liveCalleeLocals.forEachSetBit([&](size_t index) {
- localAt(index).set(exec->vm(), this, exec->uncheckedR(virtualRegisterForLocal(index)).jsValue());
- });
-}
-
-void GeneratorFrame::resume(ExecState* exec, const FastBitVector& liveCalleeLocals)
-{
- // Only resume callee locals.
- // Every time a generator is called (or resumed), parameters should be replaced.
- liveCalleeLocals.forEachSetBit([&](size_t index) {
- exec->uncheckedR(virtualRegisterForLocal(index)) = localAt(index).get();
- localAt(index).clear();
- });
-}
-
-void GeneratorFrame::visitChildren(JSCell* cell, SlotVisitor& visitor)
-{
- GeneratorFrame* thisObject = jsCast<GeneratorFrame*>(cell);
- Base::visitChildren(thisObject, visitor);
- // Since only true cell pointers are stored as a cell, we can safely mark them.
- visitor.appendValues(thisObject->locals(), thisObject->m_numberOfCalleeLocals);
-}
-
-} // namespace JSC
+++ /dev/null
-/*
- * Copyright (C) 2015 Yusuke Suzuki <utatane.tea@gmail.com>.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- * notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- * notice, this list of conditions and the following disclaimer in the
- * documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
- * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
- * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
- * THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef GeneratorFrame_h
-#define GeneratorFrame_h
-
-#include "JSCell.h"
-#include <wtf/FastBitVector.h>
-
-namespace JSC {
-
-class GeneratorFrame : public JSCell {
- friend class JIT;
-#if ENABLE(DFG_JIT)
- friend class DFG::SpeculativeJIT;
- friend class DFG::JITCompiler;
-#endif
- friend class VM;
-public:
- typedef JSCell Base;
- static const unsigned StructureFlags = StructureIsImmortal | Base::StructureFlags;
-
- DECLARE_EXPORT_INFO;
-
- static GeneratorFrame* create(VM&, size_t numberOfCalleeLocals);
-
- WriteBarrierBase<Unknown>* locals()
- {
- return bitwise_cast<WriteBarrierBase<Unknown>*>(bitwise_cast<char*>(this) + offsetOfLocals());
- }
-
- WriteBarrierBase<Unknown>& localAt(size_t index)
- {
- ASSERT(index < m_numberOfCalleeLocals);
- return locals()[index];
- }
-
- static size_t offsetOfLocals()
- {
- return WTF::roundUpToMultipleOf<sizeof(WriteBarrier<Unknown>)>(sizeof(GeneratorFrame));
- }
-
- static size_t allocationSizeForLocals(unsigned numberOfLocals)
- {
- return offsetOfLocals() + numberOfLocals * sizeof(WriteBarrier<Unknown>);
- }
-
- static Structure* createStructure(VM&, JSGlobalObject*, JSValue prototype);
-
- void save(ExecState*, const FastBitVector& liveCalleeLocals);
- void resume(ExecState*, const FastBitVector& liveCalleeLocals);
-
-private:
- GeneratorFrame(VM&, size_t numberOfCalleeLocals);
-
- size_t m_numberOfCalleeLocals;
-
- friend class LLIntOffsetsExtractor;
-
- void finishCreation(VM&);
-
-protected:
- static void visitChildren(JSCell*, SlotVisitor&);
-};
-
-} // namespace JSC
-
-#endif // GeneratorFrame_h
Executing = -2,
};
+ // [this], @generator, @generatorState, @generatorValue, @generatorResumeMode, @generatorFrame.
+ enum class GeneratorArgument : int32_t {
+ ThisValue = 0,
+ Generator = 1,
+ State = 2,
+ Value = 3,
+ ResumeMode = 4,
+ Frame = 5,
+ };
+
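As an illustration (not part of the patch): GeneratorArgument gives symbolic indices into the arguments the generator body receives on each next()/throw()/return() call, counted so that |this| occupies slot 0. A hypothetical helper mapping an enum value to the corresponding argument could look like this (generatorArgument is an assumed name, not an API in the tree):
    // Illustration only: ExecState::argument() does not count |this|, hence the -1.
    static JSValue generatorArgument(ExecState* exec, GeneratorArgument argument)
    {
        if (argument == GeneratorArgument::ThisValue)
            return exec->thisValue();
        return exec->uncheckedArgument(static_cast<unsigned>(argument) - 1);
    }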
const static unsigned StructureFlags = Base::StructureFlags;
DECLARE_EXPORT_INFO;
#include "FTLThunks.h"
#include "FunctionConstructor.h"
#include "GCActivityCallback.h"
-#include "GeneratorFrame.h"
#include "GetterSetter.h"
#include "Heap.h"
#include "HeapIterationScope.h"
inferredTypeStructure.set(*this, InferredType::createStructure(*this, 0, jsNull()));
inferredTypeTableStructure.set(*this, InferredTypeTable::createStructure(*this, 0, jsNull()));
functionRareDataStructure.set(*this, FunctionRareData::createStructure(*this, 0, jsNull()));
- generatorFrameStructure.set(*this, GeneratorFrame::createStructure(*this, 0, jsNull()));
exceptionStructure.set(*this, Exception::createStructure(*this, 0, jsNull()));
promiseDeferredStructure.set(*this, JSPromiseDeferred::createStructure(*this, 0, jsNull()));
internalPromiseDeferredStructure.set(*this, JSInternalPromiseDeferred::createStructure(*this, 0, jsNull()));
Strong<Structure> inferredTypeStructure;
Strong<Structure> inferredTypeTableStructure;
Strong<Structure> functionRareDataStructure;
- Strong<Structure> generatorFrameStructure;
Strong<Structure> exceptionStructure;
Strong<Structure> promiseDeferredStructure;
Strong<Structure> internalPromiseDeferredStructure;
+2016-08-25 Yusuke Suzuki <utatane.tea@gmail.com>
+
+ [DFG][FTL] Implement ES6 Generators in DFG / FTL
+ https://bugs.webkit.org/show_bug.cgi?id=152723
+
+ Reviewed by Filip Pizlo.
+
+ Add a move constructor to FastBitVector and initialize m_array and m_numBits with
+ default member initializers, so that bit vectors (for example, the per-op_yield
+ liveness sets computed during generatorification) can be moved without copying
+ their backing storage. Also replace the include guard with #pragma once.
+
+ * wtf/FastBitVector.h:
+ (WTF::FastBitVector::FastBitVector):
+
2016-08-25 JF Bastien <jfbastien@apple.com>
TryGetById should have a ValueProfile so that it can predict its output type
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
-#ifndef FastBitVector_h
-#define FastBitVector_h
+#pragma once
#include <string.h>
#include <wtf/FastMalloc.h>
class FastBitVector {
public:
- FastBitVector()
- : m_array(0)
- , m_numBits(0)
+ FastBitVector() = default;
+
+ FastBitVector(FastBitVector&& other)
+ : m_array(std::exchange(other.m_array, nullptr))
+ , m_numBits(std::exchange(other.m_numBits, 0))
{
}
-
+
FastBitVector(const FastBitVector& other)
: m_array(0)
, m_numBits(0)
static size_t arrayLength(size_t numBits) { return (numBits + 31) >> 5; }
size_t arrayLength() const { return arrayLength(m_numBits); }
- uint32_t* m_array; // No, this can't be an std::unique_ptr<uint32_t[]>.
- size_t m_numBits;
+ uint32_t* m_array { nullptr }; // No, this can't be an std::unique_ptr<uint32_t[]>.
+ size_t m_numBits { 0 };
};
} // namespace WTF
using WTF::FastBitVector;
-
-#endif // FastBitVector_h
-
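Usage sketch for the new move constructor (illustration only; numberOfYieldPoints and computeLivenessAtYield are hypothetical stand-ins, not names from the patch): collecting one liveness set per yield point no longer copies the underlying bit array.
    // Each call returns a FastBitVector by value; Vector::append() move-constructs
    // it into the vector, stealing m_array instead of reallocating and copying it.
    Vector<FastBitVector> liveCalleeLocalsAtYield;
    for (unsigned i = 0; i < numberOfYieldPoints; ++i)
        liveCalleeLocalsAtYield.append(computeLivenessAtYield(i));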