B3 should support tuple types
author: keith_miller@apple.com <keith_miller@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Fri, 2 Aug 2019 21:02:05 +0000 (21:02 +0000)
committer: keith_miller@apple.com <keith_miller@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Fri, 2 Aug 2019 21:02:05 +0000 (21:02 +0000)
https://bugs.webkit.org/show_bug.cgi?id=200327

Reviewed by Filip Pizlo.

As part of the Wasm multi-value proposal, we need to teach B3 that
patchpoints can return more than one value. This is done by
adding a new B3::Type called Tuple. Unlike other B3 types, a
Tuple is actually an encoded index into a vector of numeric
B3::Types on the procedure. This lets us distinguish any two
tuples from each other; moreover, the vector of element types
can be recovered from just the Tuple type and the procedure.
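The encoding can be pictured with a standalone sketch (the names and layout here are illustrative stand-ins, not the actual JSC definitions): a Type is a small tagged value, and a tuple Type carries an index into a per-procedure table of element-type lists.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch of the encoding: numeric kinds are stored directly,
// while a tuple Type stores an index into the procedure's tuple table.
enum class Kind : uint8_t { Void, Int32, Int64, Float, Double, Tuple };

struct Type {
    Kind kind;
    uint32_t tupleIndex; // only meaningful when kind == Kind::Tuple

    bool isTuple() const { return kind == Kind::Tuple; }
    bool operator==(const Type& other) const
    {
        // Two tuple Types compare equal only if they name the same entry
        // in the procedure's table, so distinct tuples stay distinct.
        return kind == other.kind && (!isTuple() || tupleIndex == other.tupleIndex);
    }
};

struct Procedure {
    std::vector<std::vector<Type>> tuples;

    Type addTuple(std::vector<Type> elements)
    {
        tuples.push_back(std::move(elements));
        return Type { Kind::Tuple, static_cast<uint32_t>(tuples.size() - 1) };
    }

    // Recover the element types from just the Type and the procedure.
    const std::vector<Type>& tupleForType(Type type) const
    {
        assert(type.isTuple());
        return tuples[type.tupleIndex];
    }
};
```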

Since most B3 operations expect to see only a single numeric
child, there is a new opcode, Extract, that yields a fixed entry
from a tuple value. Extract would be the only other change
needed to make tuples work in B3, except that some optimizations
expect to be able to take any non-Void value and stick it into a
Variable of the same type. This means both Get and Set on a
Variable have to support Tuples as well. For simplicity and
consistency, the ability to accept tuples is also extended to Phi and Upsilon.
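The semantics of Extract can be modeled in a few lines (a sketch, not the JSC classes): the multi-result producer is the only value with tuple type, and every consumer reaches one result through an Extract node whose index is fixed at construction, mirroring ExtractValue's immutable m_index.

```cpp
#include <cstdint>
#include <vector>

// Illustrative model: a tuple-producing value (e.g. a patchpoint) yields
// several results; Extract projects out one fixed, statically known entry.
struct TupleValue {
    std::vector<int64_t> results; // stand-in for the producer's results
};

struct ExtractValue {
    const TupleValue* tuple;
    int32_t index; // fixed when the node is built, never recomputed

    int64_t evaluate() const { return tuple->results[index]; }
};
```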

In order to lower a Tuple, B3LowerToAir needs a Tmp for each
nested type in the Tuple. While we could reuse the existing
IndexedTables to hold the extra information we need to lower
Tuples, we instead use two new HashTables for Value->Tmp(s) and
Phi->Tmp(s). It's expected that Tuples will be sufficiently
uncommon that the overhead of tracking everything together would
be prohibitive. On the other hand, we don't worry about this for
Variables because we don't expect them to survive to lowering.

* JavaScriptCore.xcodeproj/project.pbxproj:
* Sources.txt:
* b3/B3Bank.h:
(JSC::B3::bankForType):
* b3/B3CheckValue.cpp:
(JSC::B3::CheckValue::CheckValue):
* b3/B3ExtractValue.cpp: Copied from Source/JavaScriptCore/b3/B3ProcedureInlines.h.
(JSC::B3::ExtractValue::~ExtractValue):
(JSC::B3::ExtractValue::dumpMeta const):
* b3/B3ExtractValue.h: Copied from Source/JavaScriptCore/b3/B3FixSSA.h.
* b3/B3FixSSA.h:
* b3/B3LowerMacros.cpp:
* b3/B3LowerMacrosAfterOptimizations.cpp:
* b3/B3LowerToAir.cpp:
* b3/B3NativeTraits.h:
* b3/B3Opcode.cpp:
(JSC::B3::invertedCompare):
(WTF::printInternal):
* b3/B3Opcode.h:
(JSC::B3::opcodeForConstant):
* b3/B3PatchpointSpecial.cpp:
(JSC::B3::PatchpointSpecial::forEachArg):
(JSC::B3::PatchpointSpecial::isValid):
(JSC::B3::PatchpointSpecial::admitsStack):
(JSC::B3::PatchpointSpecial::generate):
* b3/B3PatchpointValue.cpp:
(JSC::B3::PatchpointValue::dumpMeta const):
(JSC::B3::PatchpointValue::PatchpointValue):
* b3/B3PatchpointValue.h:
* b3/B3Procedure.cpp:
(JSC::B3::Procedure::addTuple):
(JSC::B3::Procedure::isValidTuple const):
(JSC::B3::Procedure::tupleForType const):
(JSC::B3::Procedure::addIntConstant):
(JSC::B3::Procedure::addConstant):
* b3/B3Procedure.h:
(JSC::B3::Procedure::returnCount const):
* b3/B3ProcedureInlines.h:
(JSC::B3::Procedure::extractFromTuple const):
* b3/B3ReduceStrength.cpp:
* b3/B3StackmapSpecial.cpp:
(JSC::B3::StackmapSpecial::isValidImpl):
(JSC::B3::StackmapSpecial::isArgValidForType):
(JSC::B3::StackmapSpecial::isArgValidForRep):
(JSC::B3::StackmapSpecial::isArgValidForValue): Deleted.
* b3/B3StackmapSpecial.h:
* b3/B3StackmapValue.h:
* b3/B3Type.cpp:
(WTF::printInternal):
* b3/B3Type.h:
(JSC::B3::Type::Type):
(JSC::B3::Type::tupleFromIndex):
(JSC::B3::Type::kind const):
(JSC::B3::Type::tupleIndex const):
(JSC::B3::Type::hash const):
(JSC::B3::Type::operator== const):
(JSC::B3::Type::operator!= const):
(JSC::B3::Type::isInt const):
(JSC::B3::Type::isFloat const):
(JSC::B3::Type::isNumeric const):
(JSC::B3::Type::isTuple const):
(JSC::B3::sizeofType):
(JSC::B3::isInt): Deleted.
(JSC::B3::isFloat): Deleted.
* b3/B3TypeMap.h:
(JSC::B3::TypeMap::at):
* b3/B3Validate.cpp:
* b3/B3Value.cpp:
(JSC::B3::Value::isRounded const):
(JSC::B3::Value::effects const):
(JSC::B3::Value::typeFor):
* b3/B3Value.h:
* b3/B3ValueInlines.h:
* b3/B3ValueKey.cpp:
(JSC::B3::ValueKey::intConstant):
* b3/B3ValueKey.h:
(JSC::B3::ValueKey::hash const):
* b3/B3ValueRep.h:
* b3/B3Width.h:
(JSC::B3::widthForType):
* b3/air/AirArg.cpp:
(JSC::B3::Air::Arg::canRepresent const):
* b3/air/AirArg.h:
* b3/air/AirCCallingConvention.cpp:
(JSC::B3::Air::cCallResult):
* b3/air/AirLowerMacros.cpp:
(JSC::B3::Air::lowerMacros):
* b3/testb3.h:
(populateWithInterestingValues):
* b3/testb3_1.cpp:
(run):
* b3/testb3_3.cpp:
(testStorePartial8BitRegisterOnX86):
* b3/testb3_5.cpp:
(testPatchpointWithRegisterResult):
(testPatchpointWithStackArgumentResult):
(testPatchpointWithAnyResult):
* b3/testb3_6.cpp:
(testPatchpointDoubleRegs):
(testSomeEarlyRegister):
* b3/testb3_7.cpp:
(testShuffleDoesntTrashCalleeSaves):
(testReportUsedRegistersLateUseFollowedByEarlyDefDoesNotMarkUseAsDead):
(testSimpleTuplePair):
(testSimpleTuplePairUnused):
(testSimpleTuplePairStack):
(tailDupedTuplePair):
(tuplePairVariableLoop):
(tupleNestedLoop):
(addTupleTests):
* b3/testb3_8.cpp:
(testLoad):
(addLoadTests):
* ftl/FTLAbbreviatedTypes.h:
* ftl/FTLLowerDFGToB3.cpp:
(JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstruct):
(JSC::FTL::DFG::LowerDFGToB3::compileDirectCallOrConstruct):
(JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargsSpread):
(JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargs):
(JSC::FTL::DFG::LowerDFGToB3::compileCallEval):
(JSC::FTL::DFG::LowerDFGToB3::compileCPUIntrinsic):
(JSC::FTL::DFG::LowerDFGToB3::compileInstanceOf):
(JSC::FTL::DFG::LowerDFGToB3::compileCallDOMGetter):
(JSC::FTL::DFG::LowerDFGToB3::emitBinarySnippet):
(JSC::FTL::DFG::LowerDFGToB3::emitBinaryBitOpSnippet):
(JSC::FTL::DFG::LowerDFGToB3::emitRightShiftSnippet):
(JSC::FTL::DFG::LowerDFGToB3::allocateHeapCell):
* wasm/WasmAirIRGenerator.cpp:
(JSC::Wasm::AirIRGenerator::emitPatchpoint):
* wasm/WasmB3IRGenerator.cpp:
(JSC::Wasm::B3IRGenerator::B3IRGenerator):
* wasm/WasmCallingConvention.h:
(JSC::Wasm::CallingConvention::marshallArgument const):
(JSC::Wasm::CallingConvention::setupFrameInPrologue const):
(JSC::Wasm::CallingConvention::setupCall const):
(JSC::Wasm::CallingConventionAir::setupCall const):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@248178 268f45cc-cd09-0410-ab3c-d52691b4dbfc

51 files changed:
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/Sources.txt
Source/JavaScriptCore/b3/B3Bank.h
Source/JavaScriptCore/b3/B3CheckValue.cpp
Source/JavaScriptCore/b3/B3ExtractValue.cpp [new file with mode: 0644]
Source/JavaScriptCore/b3/B3ExtractValue.h [new file with mode: 0644]
Source/JavaScriptCore/b3/B3FixSSA.h
Source/JavaScriptCore/b3/B3LowerMacros.cpp
Source/JavaScriptCore/b3/B3LowerMacrosAfterOptimizations.cpp
Source/JavaScriptCore/b3/B3LowerToAir.cpp
Source/JavaScriptCore/b3/B3NativeTraits.h
Source/JavaScriptCore/b3/B3Opcode.cpp
Source/JavaScriptCore/b3/B3Opcode.h
Source/JavaScriptCore/b3/B3PatchpointSpecial.cpp
Source/JavaScriptCore/b3/B3PatchpointValue.cpp
Source/JavaScriptCore/b3/B3PatchpointValue.h
Source/JavaScriptCore/b3/B3Procedure.cpp
Source/JavaScriptCore/b3/B3Procedure.h
Source/JavaScriptCore/b3/B3ProcedureInlines.h
Source/JavaScriptCore/b3/B3ReduceStrength.cpp
Source/JavaScriptCore/b3/B3StackmapSpecial.cpp
Source/JavaScriptCore/b3/B3StackmapSpecial.h
Source/JavaScriptCore/b3/B3StackmapValue.h
Source/JavaScriptCore/b3/B3Type.cpp
Source/JavaScriptCore/b3/B3Type.h
Source/JavaScriptCore/b3/B3TypeMap.h
Source/JavaScriptCore/b3/B3Validate.cpp
Source/JavaScriptCore/b3/B3Value.cpp
Source/JavaScriptCore/b3/B3Value.h
Source/JavaScriptCore/b3/B3ValueInlines.h
Source/JavaScriptCore/b3/B3ValueKey.cpp
Source/JavaScriptCore/b3/B3ValueKey.h
Source/JavaScriptCore/b3/B3ValueRep.h
Source/JavaScriptCore/b3/B3Width.h
Source/JavaScriptCore/b3/air/AirArg.cpp
Source/JavaScriptCore/b3/air/AirArg.h
Source/JavaScriptCore/b3/air/AirCCallingConvention.cpp
Source/JavaScriptCore/b3/air/AirLowerMacros.cpp
Source/JavaScriptCore/b3/testb3.h
Source/JavaScriptCore/b3/testb3_1.cpp
Source/JavaScriptCore/b3/testb3_3.cpp
Source/JavaScriptCore/b3/testb3_5.cpp
Source/JavaScriptCore/b3/testb3_6.cpp
Source/JavaScriptCore/b3/testb3_7.cpp
Source/JavaScriptCore/b3/testb3_8.cpp
Source/JavaScriptCore/ftl/FTLAbbreviatedTypes.h
Source/JavaScriptCore/ftl/FTLLowerDFGToB3.cpp
Source/JavaScriptCore/wasm/WasmAirIRGenerator.cpp
Source/JavaScriptCore/wasm/WasmB3IRGenerator.cpp
Source/JavaScriptCore/wasm/WasmCallingConvention.h

index 0f62242..4d18e53 100644 (file)
@@ -1,3 +1,173 @@
+2019-08-01  Keith Miller  <keith_miller@apple.com>
+
+        B3 should support tuple types
+        https://bugs.webkit.org/show_bug.cgi?id=200327
+
+        Reviewed by Filip Pizlo.
+
+        As part of the Wasm multi-value proposal, we need to teach B3 that
+        patchpoints can return more than one value. This is done by
+        adding a new B3::Type called Tuple. Unlike other B3 types, a
+        Tuple is actually an encoded index into a vector of numeric
+        B3::Types on the procedure. This lets us distinguish any two
+        tuples from each other; moreover, the vector of element types
+        can be recovered from just the Tuple type and the procedure.
+
+        Since most B3 operations expect to see only a single numeric
+        child, there is a new opcode, Extract, that yields a fixed entry
+        from a tuple value. Extract would be the only other change
+        needed to make tuples work in B3, except that some optimizations
+        expect to be able to take any non-Void value and stick it into a
+        Variable of the same type. This means both Get and Set on a
+        Variable have to support Tuples as well. For simplicity and
+        consistency, the ability to accept tuples is also extended to Phi and Upsilon.
+
+        In order to lower a Tuple, B3LowerToAir needs a Tmp for each
+        nested type in the Tuple. While we could reuse the existing
+        IndexedTables to hold the extra information we need to lower
+        Tuples, we instead use two new HashTables for Value->Tmp(s) and
+        Phi->Tmp(s). It's expected that Tuples will be sufficiently
+        uncommon that the overhead of tracking everything together would
+        be prohibitive. On the other hand, we don't worry about this for
+        Variables because we don't expect them to survive to lowering.
+
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * Sources.txt:
+        * b3/B3Bank.h:
+        (JSC::B3::bankForType):
+        * b3/B3CheckValue.cpp:
+        (JSC::B3::CheckValue::CheckValue):
+        * b3/B3ExtractValue.cpp: Copied from Source/JavaScriptCore/b3/B3ProcedureInlines.h.
+        (JSC::B3::ExtractValue::~ExtractValue):
+        (JSC::B3::ExtractValue::dumpMeta const):
+        * b3/B3ExtractValue.h: Copied from Source/JavaScriptCore/b3/B3FixSSA.h.
+        * b3/B3FixSSA.h:
+        * b3/B3LowerMacros.cpp:
+        * b3/B3LowerMacrosAfterOptimizations.cpp:
+        * b3/B3LowerToAir.cpp:
+        * b3/B3NativeTraits.h:
+        * b3/B3Opcode.cpp:
+        (JSC::B3::invertedCompare):
+        (WTF::printInternal):
+        * b3/B3Opcode.h:
+        (JSC::B3::opcodeForConstant):
+        * b3/B3PatchpointSpecial.cpp:
+        (JSC::B3::PatchpointSpecial::forEachArg):
+        (JSC::B3::PatchpointSpecial::isValid):
+        (JSC::B3::PatchpointSpecial::admitsStack):
+        (JSC::B3::PatchpointSpecial::generate):
+        * b3/B3PatchpointValue.cpp:
+        (JSC::B3::PatchpointValue::dumpMeta const):
+        (JSC::B3::PatchpointValue::PatchpointValue):
+        * b3/B3PatchpointValue.h:
+        * b3/B3Procedure.cpp:
+        (JSC::B3::Procedure::addTuple):
+        (JSC::B3::Procedure::isValidTuple const):
+        (JSC::B3::Procedure::tupleForType const):
+        (JSC::B3::Procedure::addIntConstant):
+        (JSC::B3::Procedure::addConstant):
+        * b3/B3Procedure.h:
+        (JSC::B3::Procedure::returnCount const):
+        * b3/B3ProcedureInlines.h:
+        (JSC::B3::Procedure::extractFromTuple const):
+        * b3/B3ReduceStrength.cpp:
+        * b3/B3StackmapSpecial.cpp:
+        (JSC::B3::StackmapSpecial::isValidImpl):
+        (JSC::B3::StackmapSpecial::isArgValidForType):
+        (JSC::B3::StackmapSpecial::isArgValidForRep):
+        (JSC::B3::StackmapSpecial::isArgValidForValue): Deleted.
+        * b3/B3StackmapSpecial.h:
+        * b3/B3StackmapValue.h:
+        * b3/B3Type.cpp:
+        (WTF::printInternal):
+        * b3/B3Type.h:
+        (JSC::B3::Type::Type):
+        (JSC::B3::Type::tupleFromIndex):
+        (JSC::B3::Type::kind const):
+        (JSC::B3::Type::tupleIndex const):
+        (JSC::B3::Type::hash const):
+        (JSC::B3::Type::operator== const):
+        (JSC::B3::Type::operator!= const):
+        (JSC::B3::Type::isInt const):
+        (JSC::B3::Type::isFloat const):
+        (JSC::B3::Type::isNumeric const):
+        (JSC::B3::Type::isTuple const):
+        (JSC::B3::sizeofType):
+        (JSC::B3::isInt): Deleted.
+        (JSC::B3::isFloat): Deleted.
+        * b3/B3TypeMap.h:
+        (JSC::B3::TypeMap::at):
+        * b3/B3Validate.cpp:
+        * b3/B3Value.cpp:
+        (JSC::B3::Value::isRounded const):
+        (JSC::B3::Value::effects const):
+        (JSC::B3::Value::typeFor):
+        * b3/B3Value.h:
+        * b3/B3ValueInlines.h:
+        * b3/B3ValueKey.cpp:
+        (JSC::B3::ValueKey::intConstant):
+        * b3/B3ValueKey.h:
+        (JSC::B3::ValueKey::hash const):
+        * b3/B3ValueRep.h:
+        * b3/B3Width.h:
+        (JSC::B3::widthForType):
+        * b3/air/AirArg.cpp:
+        (JSC::B3::Air::Arg::canRepresent const):
+        * b3/air/AirArg.h:
+        * b3/air/AirCCallingConvention.cpp:
+        (JSC::B3::Air::cCallResult):
+        * b3/air/AirLowerMacros.cpp:
+        (JSC::B3::Air::lowerMacros):
+        * b3/testb3.h:
+        (populateWithInterestingValues):
+        * b3/testb3_1.cpp:
+        (run):
+        * b3/testb3_3.cpp:
+        (testStorePartial8BitRegisterOnX86):
+        * b3/testb3_5.cpp:
+        (testPatchpointWithRegisterResult):
+        (testPatchpointWithStackArgumentResult):
+        (testPatchpointWithAnyResult):
+        * b3/testb3_6.cpp:
+        (testPatchpointDoubleRegs):
+        (testSomeEarlyRegister):
+        * b3/testb3_7.cpp:
+        (testShuffleDoesntTrashCalleeSaves):
+        (testReportUsedRegistersLateUseFollowedByEarlyDefDoesNotMarkUseAsDead):
+        (testSimpleTuplePair):
+        (testSimpleTuplePairUnused):
+        (testSimpleTuplePairStack):
+        (tailDupedTuplePair):
+        (tuplePairVariableLoop):
+        (tupleNestedLoop):
+        (addTupleTests):
+        * b3/testb3_8.cpp:
+        (testLoad):
+        (addLoadTests):
+        * ftl/FTLAbbreviatedTypes.h:
+        * ftl/FTLLowerDFGToB3.cpp:
+        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstruct):
+        (JSC::FTL::DFG::LowerDFGToB3::compileDirectCallOrConstruct):
+        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargsSpread):
+        (JSC::FTL::DFG::LowerDFGToB3::compileCallOrConstructVarargs):
+        (JSC::FTL::DFG::LowerDFGToB3::compileCallEval):
+        (JSC::FTL::DFG::LowerDFGToB3::compileCPUIntrinsic):
+        (JSC::FTL::DFG::LowerDFGToB3::compileInstanceOf):
+        (JSC::FTL::DFG::LowerDFGToB3::compileCallDOMGetter):
+        (JSC::FTL::DFG::LowerDFGToB3::emitBinarySnippet):
+        (JSC::FTL::DFG::LowerDFGToB3::emitBinaryBitOpSnippet):
+        (JSC::FTL::DFG::LowerDFGToB3::emitRightShiftSnippet):
+        (JSC::FTL::DFG::LowerDFGToB3::allocateHeapCell):
+        * wasm/WasmAirIRGenerator.cpp:
+        (JSC::Wasm::AirIRGenerator::emitPatchpoint):
+        * wasm/WasmB3IRGenerator.cpp:
+        (JSC::Wasm::B3IRGenerator::B3IRGenerator):
+        * wasm/WasmCallingConvention.h:
+        (JSC::Wasm::CallingConvention::marshallArgument const):
+        (JSC::Wasm::CallingConvention::setupFrameInPrologue const):
+        (JSC::Wasm::CallingConvention::setupCall const):
+        (JSC::Wasm::CallingConventionAir::setupCall const):
+
 2019-08-02  Yusuke Suzuki  <ysuzuki@apple.com>
 
         [JSC] Use "destroy" function directly for JSWebAssemblyCodeBlock and WebAssemblyFunction
index faf259e..58f1a79 100644 (file)
                530FDE7521FAB00600059D65 /* testIncludes.m in Sources */ = {isa = PBXBuildFile; fileRef = 530FDE7321FAAFC600059D65 /* testIncludes.m */; };
                5311BD4B1EA581E500525281 /* WasmOMGPlan.h in Headers */ = {isa = PBXBuildFile; fileRef = 5311BD491EA581E500525281 /* WasmOMGPlan.h */; };
                531374BD1D5CE67600AF7A0B /* WasmPlan.h in Headers */ = {isa = PBXBuildFile; fileRef = 531374BC1D5CE67600AF7A0B /* WasmPlan.h */; };
+               5318045C22EAAC4B004A7342 /* B3ExtractValue.h in Headers */ = {isa = PBXBuildFile; fileRef = 5318045B22EAAC4B004A7342 /* B3ExtractValue.h */; };
                5333BBDB2110F7D2007618EC /* DFGSpeculativeJIT32_64.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 86880F1B14328BB900B08D42 /* DFGSpeculativeJIT32_64.cpp */; };
                5333BBDC2110F7D9007618EC /* DFGSpeculativeJIT.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 86EC9DC21328DF82002B2AD7 /* DFGSpeculativeJIT.cpp */; };
                5333BBDD2110F7E1007618EC /* DFGSpeculativeJIT64.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 86880F4C14353B2100B08D42 /* DFGSpeculativeJIT64.cpp */; };
                5311BD491EA581E500525281 /* WasmOMGPlan.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WasmOMGPlan.h; sourceTree = "<group>"; };
                531374BC1D5CE67600AF7A0B /* WasmPlan.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WasmPlan.h; sourceTree = "<group>"; };
                531374BE1D5CE95000AF7A0B /* WasmPlan.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = WasmPlan.cpp; sourceTree = "<group>"; };
+               5318045B22EAAC4B004A7342 /* B3ExtractValue.h */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.c.h; name = B3ExtractValue.h; path = b3/B3ExtractValue.h; sourceTree = "<group>"; };
+               5318045D22EAAF0F004A7342 /* B3ExtractValue.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; name = B3ExtractValue.cpp; path = b3/B3ExtractValue.cpp; sourceTree = "<group>"; };
                531D4E191F59CDD200EC836C /* testapi.cpp */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.cpp.cpp; name = testapi.cpp; path = API/tests/testapi.cpp; sourceTree = "<group>"; };
                532631B3218777A5007B8191 /* JavaScriptCore.modulemap */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = "sourcecode.module-map"; path = JavaScriptCore.modulemap; sourceTree = "<group>"; };
                533B15DE1DC7F463004D500A /* WasmOps.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WasmOps.h; sourceTree = "<group>"; };
                                3395C70522555F6D00BDBFAD /* B3EliminateDeadCode.h */,
                                0F5BF16E1F23A5A10029D91D /* B3EnsureLoopPreHeaders.cpp */,
                                0F5BF16F1F23A5A10029D91D /* B3EnsureLoopPreHeaders.h */,
+                               5318045D22EAAF0F004A7342 /* B3ExtractValue.cpp */,
+                               5318045B22EAAC4B004A7342 /* B3ExtractValue.h */,
                                0F6971E81D92F42100BA02A5 /* B3FenceValue.cpp */,
                                0F6971E91D92F42100BA02A5 /* B3FenceValue.h */,
                                0F6B8AE01C4EFE1700969052 /* B3FixSSA.cpp */,
                                0F725CA81C503DED00AD943A /* B3EliminateCommonSubexpressions.h in Headers */,
                                3395C70722555F6D00BDBFAD /* B3EliminateDeadCode.h in Headers */,
                                0F5BF1711F23A5A10029D91D /* B3EnsureLoopPreHeaders.h in Headers */,
+                               5318045C22EAAC4B004A7342 /* B3ExtractValue.h in Headers */,
                                0F6971EA1D92F42400BA02A5 /* B3FenceValue.h in Headers */,
                                0F6B8AE51C4EFE1700969052 /* B3FixSSA.h in Headers */,
                                0F725CB01C506D3B00AD943A /* B3FoldPathConstants.h in Headers */,
index ee54ad9..87a28fb 100644 (file)
@@ -127,6 +127,7 @@ b3/B3Effects.cpp
 b3/B3EliminateCommonSubexpressions.cpp
 b3/B3EliminateDeadCode.cpp
 b3/B3EnsureLoopPreHeaders.cpp
+b3/B3ExtractValue.cpp
 b3/B3FenceValue.cpp
 b3/B3FixSSA.cpp
 b3/B3FoldPathConstants.cpp
index 6b569a3..d87e1e7 100644 (file)
@@ -47,8 +47,9 @@ void forEachBank(const Func& func)
 
 inline Bank bankForType(Type type)
 {
-    switch (type) {
+    switch (type.kind()) {
     case Void:
+    case Tuple:
         ASSERT_NOT_REACHED();
         return GP;
     case Int32:
index c117137..68b8647 100644 (file)
@@ -44,7 +44,7 @@ void CheckValue::convertToAdd()
 CheckValue::CheckValue(Kind kind, Origin origin, Value* left, Value* right)
     : StackmapValue(CheckedOpcode, kind, left->type(), origin)
 {
-    ASSERT(B3::isInt(type()));
+    ASSERT(type().isInt());
     ASSERT(left->type() == right->type());
     ASSERT(kind == CheckAdd || kind == CheckSub || kind == CheckMul);
     append(ConstrainedValue(left, ValueRep::WarmAny));
diff --git a/Source/JavaScriptCore/b3/B3ExtractValue.cpp b/Source/JavaScriptCore/b3/B3ExtractValue.cpp
new file mode 100644 (file)
index 0000000..95fef67
--- /dev/null
@@ -0,0 +1,44 @@
+/*
+ * Copyright (C) 2019 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "B3ExtractValue.h"
+
+#if ENABLE(B3_JIT)
+
+namespace JSC { namespace B3 {
+
+ExtractValue::~ExtractValue()
+{
+}
+
+void ExtractValue::dumpMeta(CommaPrinter& comma, PrintStream& out) const
+{
+    out.print(comma, "<<", m_index);
+}
+
+} } // namespace JSC::B3
+
+#endif // ENABLE(B3_JIT)
diff --git a/Source/JavaScriptCore/b3/B3ExtractValue.h b/Source/JavaScriptCore/b3/B3ExtractValue.h
new file mode 100644 (file)
index 0000000..99384d9
--- /dev/null
@@ -0,0 +1,65 @@
+/*
+ * Copyright (C) 2019 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#if ENABLE(B3_JIT)
+
+#include "B3Value.h"
+
+namespace JSC { namespace B3 {
+
+class JS_EXPORT_PRIVATE ExtractValue final : public Value {
+public:
+    static bool accepts(Kind kind) { return kind == Extract; }
+
+    ~ExtractValue();
+
+    int32_t index() const { return m_index; }
+
+    B3_SPECIALIZE_VALUE_FOR_FIXED_CHILDREN(1)
+    B3_SPECIALIZE_VALUE_FOR_FINAL_SIZE_FIXED_CHILDREN
+
+protected:
+    void dumpMeta(CommaPrinter&, PrintStream&) const override;
+
+    static Opcode opcodeFromConstructor(Origin, Type, Value*, int32_t) { return Extract; }
+
+    ExtractValue(Origin origin, Type type, Value* tuple, int32_t index)
+        : Value(CheckedOpcode, Extract, type, One, origin, tuple)
+        , m_index(index)
+    {
+    }
+
+private:
+    friend class Procedure;
+    friend class Value;
+
+    int32_t m_index;
+};
+
+} } // namespace JSC::B3
+
+#endif // ENABLE(B3_JIT)
index d0d594e..95dc9ce 100644 (file)
@@ -41,7 +41,7 @@ JS_EXPORT_PRIVATE void demoteValues(Procedure&, const IndexSet<Value*>&);
 
 // This fixes SSA for you. Use this after you have done demoteValues() and you have performed
 // whatever evil transformation you needed.
-bool fixSSA(Procedure&);
+JS_EXPORT_PRIVATE bool fixSSA(Procedure&);
 
 } } // namespace JSC::B3
 
index 4d4eb17..1a8c80e 100644 (file)
@@ -418,7 +418,7 @@ private:
         zeroDenCase->setSuccessors(FrequentedBlock(m_block));
 
         int64_t badNumeratorConst = 0;
-        switch (m_value->type()) {
+        switch (m_value->type().kind()) {
         case Int32:
             badNumeratorConst = std::numeric_limits<int32_t>::min();
             break;
index b2c5bc6..c10bd64 100644 (file)
@@ -138,7 +138,7 @@ private:
                 break;
             }
             case Neg: {
-                if (!isFloat(m_value->type()))
+                if (!m_value->type().isFloat())
                     break;
                 
                 // X86 is odd in that it requires this.
index 1df12f6..2c9c3b0 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2019 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -43,6 +43,7 @@
 #include "B3CheckSpecial.h"
 #include "B3Commutativity.h"
 #include "B3Dominators.h"
+#include "B3ExtractValue.h"
 #include "B3FenceValue.h"
 #include "B3MemoryValueInlines.h"
 #include "B3PatchpointSpecial.h"
@@ -114,15 +115,38 @@ public:
         using namespace Air;
         for (B3::BasicBlock* block : m_procedure)
             m_blockToBlock[block] = m_code.addBlock(block->frequency());
-        
+
+        auto ensureTupleTmps = [&] (Value* tupleValue, auto& hashTable) {
+            hashTable.ensure(tupleValue, [&] {
+                const auto tuple = m_procedure.tupleForType(tupleValue->type());
+                Vector<Tmp> tmps(tuple.size());
+
+                for (unsigned i = 0; i < tuple.size(); ++i)
+                    tmps[i] = tmpForType(tuple[i]);
+                return tmps;
+            });
+        };
+
         for (Value* value : m_procedure.values()) {
             switch (value->opcode()) {
             case Phi: {
+                if (value->type().isTuple()) {
+                    ensureTupleTmps(value, m_tuplePhiToTmps);
+                    ensureTupleTmps(value, m_tupleValueToTmps);
+                    break;
+                }
+
                 m_phiToTmp[value] = m_code.newTmp(value->resultBank());
                 if (B3LowerToAirInternal::verbose)
                     dataLog("Phi tmp for ", *value, ": ", m_phiToTmp[value], "\n");
                 break;
             }
+            case Get:
+            case Patchpoint: {
+                if (value->type().isTuple())
+                    ensureTupleTmps(value, m_tupleValueToTmps);
+                break;
+            }
             default:
                 break;
             }
@@ -130,8 +154,12 @@ public:
 
         for (B3::StackSlot* stack : m_procedure.stackSlots())
             m_stackToStack.add(stack, m_code.addStackSlot(stack));
-        for (Variable* variable : m_procedure.variables())
-            m_variableToTmp.add(variable, m_code.newTmp(variable->bank()));
+        for (Variable* variable : m_procedure.variables()) {
+            auto addResult = m_variableToTmps.add(variable, Vector<Tmp, 1>(m_procedure.returnCount(variable->type())));
+            ASSERT(addResult.isNewEntry);
+            for (unsigned i = 0; i < m_procedure.returnCount(variable->type()); ++i)
+                addResult.iterator->value[i] = tmpForType(variable->type().isNumeric() ? variable->type() : m_procedure.extractFromTuple(variable->type(), i));
+        }
 
         // Figure out which blocks are not rare.
         m_fastWorklist.push(m_procedure[0]);
@@ -397,6 +425,29 @@ private:
         return ArgPromise::tmp(value);
     }
 
+    Tmp tmpForType(Type type)
+    {
+        return m_code.newTmp(bankForType(type));
+    }
+
+    const Vector<Tmp>& tmpsForTuple(Value* tupleValue)
+    {
+        ASSERT(tupleValue->type().isTuple());
+
+        switch (tupleValue->opcode()) {
+        case Phi:
+        case Patchpoint: {
+            return m_tupleValueToTmps.find(tupleValue)->value;
+        }
+        case Get:
+        case Set:
+            return m_variableToTmps.find(tupleValue->as<VariableValue>()->variable())->value;
+        default:
+            break;
+        }
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+
     bool canBeInternal(Value* value)
     {
         // If one of the internal things has already been computed, then we don't want to cause
@@ -657,12 +708,27 @@ private:
         return tmp(value);
     }
 
+    template<typename Functor>
+    void forEachImmOrTmp(Value* value, const Functor& func)
+    {
+        ASSERT(value->type() != Void);
+        if (!value->type().isTuple()) {
+            func(immOrTmp(value), value->type(), 0);
+            return;
+        }
+
+        const Vector<Type>& tuple = m_procedure.tupleForType(value->type());
+        const auto& tmps = tmpsForTuple(value);
+        for (unsigned i = 0; i < tuple.size(); ++i)
+            func(tmps[i], tuple[i], i);
+    }
+
     // By convention, we use Oops to mean "I don't know".
     Air::Opcode tryOpcodeForType(
         Air::Opcode opcode32, Air::Opcode opcode64, Air::Opcode opcodeDouble, Air::Opcode opcodeFloat, Type type)
     {
         Air::Opcode opcode;
-        switch (type) {
+        switch (type.kind()) {
         case Int32:
             opcode = opcode32;
             break;
@@ -1110,7 +1176,7 @@ private:
     Air::Opcode moveForType(Type type)
     {
         using namespace Air;
-        switch (type) {
+        switch (type.kind()) {
         case Int32:
             return Move32;
         case Int64:
@@ -1121,6 +1187,7 @@ private:
         case Double:
             return MoveDouble;
         case Void:
+        case Tuple:
             break;
         }
         RELEASE_ASSERT_NOT_REACHED();
@@ -1130,7 +1197,7 @@ private:
     Air::Opcode relaxedMoveForType(Type type)
     {
         using namespace Air;
-        switch (type) {
+        switch (type.kind()) {
         case Int32:
         case Int64:
             // For Int32, we could return Move or Move32. It's a trade-off.
@@ -1155,6 +1222,7 @@ private:
         case Double:
             return MoveDouble;
         case Void:
+        case Tuple:
             break;
         }
         RELEASE_ASSERT_NOT_REACHED();
@@ -1461,7 +1529,7 @@ private:
             Value* left = value->child(0);
             Value* right = value->child(1);
 
-            if (isInt(value->child(0)->type())) {
+            if (value->child(0)->type().isInt()) {
                 Arg rightImm = imm(right);
 
                 auto tryCompare = [&] (
@@ -2125,7 +2193,7 @@ private:
         using namespace Air;
         Air::Opcode convertToDoubleWord;
         Air::Opcode div;
-        switch (m_value->type()) {
+        switch (m_value->type().kind()) {
         case Int32:
             convertToDoubleWord = X86ConvertToDoubleWord32;
             div = X86Div32;
@@ -2449,7 +2517,7 @@ private:
                 if (isX86())
                     kind.effects = true;
                 else {
-                    switch (memory->type()) {
+                    switch (memory->type().kind()) {
                     case Int32:
                         kind = LoadAcq32;
                         break;
@@ -2631,23 +2699,23 @@ private:
         case Div: {
             if (m_value->isChill())
                 RELEASE_ASSERT(isARM64());
-            if (isInt(m_value->type()) && isX86()) {
+            if (m_value->type().isInt() && isX86()) {
                 appendX86Div(Div);
                 return;
             }
-            ASSERT(!isX86() || isFloat(m_value->type()));
+            ASSERT(!isX86() || m_value->type().isFloat());
 
             appendBinOp<Div32, Div64, DivDouble, DivFloat>(m_value->child(0), m_value->child(1));
             return;
         }
 
         case UDiv: {
-            if (isInt(m_value->type()) && isX86()) {
+            if (m_value->type().isInt() && isX86()) {
                 appendX86UDiv(UDiv);
                 return;
             }
 
-            ASSERT(!isX86() && !isFloat(m_value->type()));
+            ASSERT(!isX86() && !m_value->type().isFloat());
 
             appendBinOp<UDiv32, UDiv64, Air::Oops, Air::Oops>(m_value->child(0), m_value->child(1));
             return;
@@ -3010,7 +3078,7 @@ private:
 
         case Select: {
             MoveConditionallyConfig config;
-            if (isInt(m_value->type())) {
+            if (m_value->type().isInt()) {
                 config.moveConditionally32 = MoveConditionally32;
                 config.moveConditionally64 = MoveConditionally64;
                 config.moveConditionallyTest32 = MoveConditionallyTest32;
@@ -3075,39 +3143,45 @@ private:
             Inst inst(Patch, patchpointValue, Arg::special(m_patchpointSpecial));
 
             Vector<Inst> after;
-            if (patchpointValue->type() != Void) {
-                switch (patchpointValue->resultConstraint.kind()) {
+            auto generateResultOperand = [&] (Type type, ValueRep rep, Tmp tmp) {
+                switch (rep.kind()) {
                 case ValueRep::WarmAny:
                 case ValueRep::ColdAny:
                 case ValueRep::LateColdAny:
                 case ValueRep::SomeRegister:
                 case ValueRep::SomeEarlyRegister:
-                    inst.args.append(tmp(patchpointValue));
-                    break;
+                case ValueRep::SomeLateRegister:
+                    inst.args.append(tmp);
+                    return;
                 case ValueRep::Register: {
-                    Tmp reg = Tmp(patchpointValue->resultConstraint.reg());
+                    Tmp reg = Tmp(rep.reg());
                     inst.args.append(reg);
-                    after.append(Inst(
-                        relaxedMoveForType(patchpointValue->type()), m_value, reg, tmp(patchpointValue)));
-                    break;
+                    after.append(Inst(relaxedMoveForType(type), m_value, reg, tmp));
+                    return;
                 }
                 case ValueRep::StackArgument: {
-                    Arg arg = Arg::callArg(patchpointValue->resultConstraint.offsetFromSP());
+                    Arg arg = Arg::callArg(rep.offsetFromSP());
                     inst.args.append(arg);
-                    after.append(Inst(
-                        moveForType(patchpointValue->type()), m_value, arg, tmp(patchpointValue)));
-                    break;
+                    after.append(Inst(moveForType(type), m_value, arg, tmp));
+                    return;
                 }
                 default:
                     RELEASE_ASSERT_NOT_REACHED();
-                    break;
+                    return;
                 }
+            };
+
+            if (patchpointValue->type() != Void) {
+                forEachImmOrTmp(patchpointValue, [&] (Arg arg, Type type, unsigned index) {
+                    generateResultOperand(type, patchpointValue->resultConstraints[index], arg.tmp());
+                });
             }
             
             fillStackmap(inst, patchpointValue, 0);
-            
-            if (patchpointValue->resultConstraint.isReg())
-                patchpointValue->lateClobbered().clear(patchpointValue->resultConstraint.reg());
+            for (auto& constraint : patchpointValue->resultConstraints) {
+                if (constraint.isReg())
+                    patchpointValue->lateClobbered().clear(constraint.reg());
+            }
 
             for (unsigned i = patchpointValue->numGPScratchRegisters; i--;)
                 inst.args.append(m_code.newTmp(GP));
@@ -3119,6 +3193,15 @@ private:
             return;
         }
 
+        case Extract: {
+            Value* tupleValue = m_value->child(0);
+            unsigned index = m_value->as<ExtractValue>()->index();
+
+            const auto& tmps = tmpsForTuple(tupleValue);
+            append(relaxedMoveForType(m_value->type()), tmps[index], tmp(m_value));
+            return;
+        }
+
         case CheckAdd:
         case CheckSub:
         case CheckMul: {
@@ -3291,9 +3374,18 @@ private:
 
         case Upsilon: {
             Value* value = m_value->child(0);
-            append(
-                relaxedMoveForType(value->type()), immOrTmp(value),
-                m_phiToTmp[m_value->as<UpsilonValue>()->phi()]);
+            Value* phi = m_value->as<UpsilonValue>()->phi();
+            if (value->type().isNumeric()) {
+                append(relaxedMoveForType(value->type()), immOrTmp(value), m_phiToTmp[phi]);
+                return;
+            }
+
+            const Vector<Type>& tuple = m_procedure.tupleForType(value->type());
+            const auto& valueTmps = tmpsForTuple(value);
+            const auto& phiTmps = m_tuplePhiToTmps.find(phi)->value;
+            ASSERT(valueTmps.size() == phiTmps.size());
+            for (unsigned i = 0; i < valueTmps.size(); ++i)
+                append(relaxedMoveForType(tuple[i]), valueTmps[i], phiTmps[i]);
             return;
         }
 
@@ -3303,22 +3395,39 @@ private:
             // Upsilon(@x, ^a)
             // @a => this should get the value of the Phi before the Upsilon, i.e. not @x.
 
-            append(relaxedMoveForType(m_value->type()), m_phiToTmp[m_value], tmp(m_value));
+            if (m_value->type().isNumeric()) {
+                append(relaxedMoveForType(m_value->type()), m_phiToTmp[m_value], tmp(m_value));
+                return;
+            }
+
+            const Vector<Type>& tuple = m_procedure.tupleForType(m_value->type());
+            const auto& valueTmps = tmpsForTuple(m_value);
+            const auto& phiTmps = m_tuplePhiToTmps.find(m_value)->value;
+            ASSERT(valueTmps.size() == phiTmps.size());
+            for (unsigned i = 0; i < valueTmps.size(); ++i)
+                append(relaxedMoveForType(tuple[i]), phiTmps[i], valueTmps[i]);
             return;
         }
 
         case Set: {
             Value* value = m_value->child(0);
-            append(
-                relaxedMoveForType(value->type()), immOrTmp(value),
-                m_variableToTmp.get(m_value->as<VariableValue>()->variable()));
+            const Vector<Tmp>& variableTmps = m_variableToTmps.get(m_value->as<VariableValue>()->variable());
+            forEachImmOrTmp(value, [&] (Arg immOrTmp, Type type, unsigned index) {
+                append(relaxedMoveForType(type), immOrTmp, variableTmps[index]);
+            });
             return;
         }
 
         case Get: {
-            append(
-                relaxedMoveForType(m_value->type()),
-                m_variableToTmp.get(m_value->as<VariableValue>()->variable()), tmp(m_value));
+            // Snapshot the value of the Get. It may change under us because you could do:
+            // a = Get(var)
+            // Set(@x, var)
+            // @a => this should get the value of the Get before the Set, i.e. not @x.
+
+            const Vector<Tmp>& variableTmps = m_variableToTmps.get(m_value->as<VariableValue>()->variable());
+            forEachImmOrTmp(m_value, [&] (Arg tmp, Type type, unsigned index) {
+                append(relaxedMoveForType(type), variableTmps[index], tmp.tmp());
+            });
             return;
         }
 
@@ -3463,8 +3572,9 @@ private:
             Value* value = m_value->child(0);
             Tmp returnValueGPR = Tmp(GPRInfo::returnValueGPR);
             Tmp returnValueFPR = Tmp(FPRInfo::returnValueFPR);
-            switch (value->type()) {
+            switch (value->type().kind()) {
             case Void:
+            case Tuple:
                 // It's impossible for a void value to be used as a child. We use RetVoid
                 // for void returns.
                 RELEASE_ASSERT_NOT_REACHED();
@@ -3584,9 +3694,11 @@ private:
     IndexSet<Value*> m_locked; // These are values that will have no Tmp in Air.
     IndexMap<Value*, Tmp> m_valueToTmp; // These are values that must have a Tmp in Air. We say that a Value* with a non-null Tmp is "pinned".
     IndexMap<Value*, Tmp> m_phiToTmp; // Each Phi gets its own Tmp.
+    HashMap<Value*, Vector<Tmp>> m_tupleValueToTmps; // This is the same as m_valueToTmp for Values that are Tuples.
+    HashMap<Value*, Vector<Tmp>> m_tuplePhiToTmps; // This is the same as m_phiToTmp for Phis that are Tuples.
     IndexMap<B3::BasicBlock*, Air::BasicBlock*> m_blockToBlock;
     HashMap<B3::StackSlot*, Air::StackSlot*> m_stackToStack;
-    HashMap<Variable*, Tmp> m_variableToTmp;
+    HashMap<Variable*, Vector<Tmp>> m_variableToTmps;
 
     UseCounts m_useCounts;
     PhiChildren m_phiChildren;
index 5b1787d..a1bb2a7 100644 (file)
@@ -39,70 +39,70 @@ template<> struct NativeTraits<int8_t> {
     typedef int32_t CanonicalType;
     static const Bank bank = GP;
     static const Width width = Width8;
-    static const Type type = Int32;
+    static constexpr Type type = Int32;
 };
 
 template<> struct NativeTraits<uint8_t> {
     typedef int32_t CanonicalType;
     static const Bank bank = GP;
     static const Width width = Width8;
-    static const Type type = Int32;
+    static constexpr Type type = Int32;
 };
 
 template<> struct NativeTraits<int16_t> {
     typedef int32_t CanonicalType;
     static const Bank bank = GP;
     static const Width width = Width16;
-    static const Type type = Int32;
+    static constexpr Type type = Int32;
 };
 
 template<> struct NativeTraits<uint16_t> {
     typedef int32_t CanonicalType;
     static const Bank bank = GP;
     static const Width width = Width16;
-    static const Type type = Int32;
+    static constexpr Type type = Int32;
 };
 
 template<> struct NativeTraits<int32_t> {
     typedef int32_t CanonicalType;
     static const Bank bank = GP;
     static const Width width = Width32;
-    static const Type type = Int32;
+    static constexpr Type type = Int32;
 };
 
 template<> struct NativeTraits<uint32_t> {
     typedef int32_t CanonicalType;
     static const Bank bank = GP;
     static const Width width = Width32;
-    static const Type type = Int32;
+    static constexpr Type type = Int32;
 };
 
 template<> struct NativeTraits<int64_t> {
     typedef int64_t CanonicalType;
     static const Bank bank = GP;
     static const Width width = Width64;
-    static const Type type = Int64;
+    static constexpr Type type = Int64;
 };
 
 template<> struct NativeTraits<uint64_t> {
     typedef int64_t CanonicalType;
     static const Bank bank = GP;
     static const Width width = Width64;
-    static const Type type = Int64;
+    static constexpr Type type = Int64;
 };
 
 template<> struct NativeTraits<float> {
     typedef float CanonicalType;
     static const Bank bank = FP;
     static const Width width = Width32;
-    static const Type type = Float;
+    static constexpr Type type = Float;
 };
 
 template<> struct NativeTraits<double> {
     typedef double CanonicalType;
     static const Bank bank = FP;
     static const Width width = Width64;
-    static const Type type = Double;
+    static constexpr Type type = Double;
 };
 
 } } // namespace JSC::B3
index 90937df..96c0b9a 100644 (file)
@@ -44,19 +44,19 @@ Optional<Opcode> invertedCompare(Opcode opcode, Type type)
     case NotEqual:
         return Equal;
     case LessThan:
-        if (isInt(type))
+        if (type.isInt())
             return GreaterEqual;
         return WTF::nullopt;
     case GreaterThan:
-        if (isInt(type))
+        if (type.isInt())
             return LessEqual;
         return WTF::nullopt;
     case LessEqual:
-        if (isInt(type))
+        if (type.isInt())
             return GreaterThan;
         return WTF::nullopt;
     case GreaterEqual:
-        if (isInt(type))
+        if (type.isInt())
             return LessThan;
         return WTF::nullopt;
     case Above:
@@ -327,6 +327,9 @@ void printInternal(PrintStream& out, Opcode opcode)
     case Patchpoint:
         out.print("Patchpoint");
         return;
+    case Extract:
+        out.print("Extract");
+        return;
     case CheckAdd:
         out.print("CheckAdd");
         return;
index 7aa6de3..bb57dce 100644 (file)
@@ -296,6 +296,10 @@ enum Opcode : uint8_t {
     // stack.
     Patchpoint,
 
+    // This is a projection out of a tuple. Currently only patchpoints can generate a tuple. It's assumed that
+    // each entry in a tuple has a fixed numeric B3 type (i.e. not Void or Tuple).
+    Extract,
+
     // Checked math. Use the CheckValue class. Like a Patchpoint, this takes a code generation
     // callback. That callback gets to emit some code after the epilogue, and gets to link the jump
     // from the check, and the choice of registers. You also get to supply a stackmap. Note that you
@@ -398,7 +402,7 @@ inline bool isConstant(Opcode opcode)
 
 inline Opcode opcodeForConstant(Type type)
 {
-    switch (type) {
+    switch (type.kind()) {
     case Int32: return Const32;
     case Int64: return Const64;
     case Float: return ConstFloat;
index 1532edf..a86abcf 100644 (file)
@@ -28,6 +28,7 @@
 
 #if ENABLE(B3_JIT)
 
+#include "AirCode.h"
 #include "AirGenerationContext.h"
 #include "B3StackmapGenerationParams.h"
 #include "B3ValueInlines.h"
@@ -47,17 +48,20 @@ PatchpointSpecial::~PatchpointSpecial()
 
 void PatchpointSpecial::forEachArg(Inst& inst, const ScopedLambda<Inst::EachArgCallback>& callback)
 {
+    const Procedure& procedure = code().proc();
     PatchpointValue* patchpoint = inst.origin->as<PatchpointValue>();
     unsigned argIndex = 1;
 
-    if (patchpoint->type() != Void) {
+    Type type = patchpoint->type();
+    for (; argIndex <= procedure.returnCount(type); ++argIndex) {
         Arg::Role role;
-        if (patchpoint->resultConstraint.kind() == ValueRep::SomeEarlyRegister)
+        if (patchpoint->resultConstraints[argIndex - 1].kind() == ValueRep::SomeEarlyRegister)
             role = Arg::EarlyDef;
         else
             role = Arg::Def;
-        
-        callback(inst.args[argIndex++], role, inst.origin->resultBank(), inst.origin->resultWidth());
+
+        Type argType = type.isTuple() ? procedure.extractFromTuple(type, argIndex - 1) : type;
+        callback(inst.args[argIndex], role, bankForType(argType), widthForType(argType));
     }
 
     forEachArgImpl(0, argIndex, inst, SameAsRep, WTF::nullopt, callback, WTF::nullopt);
@@ -71,18 +75,19 @@ void PatchpointSpecial::forEachArg(Inst& inst, const ScopedLambda<Inst::EachArgC
 
 bool PatchpointSpecial::isValid(Inst& inst)
 {
+    const Procedure& procedure = code().proc();
     PatchpointValue* patchpoint = inst.origin->as<PatchpointValue>();
     unsigned argIndex = 1;
 
-    if (inst.origin->type() != Void) {
+    Type type = patchpoint->type();
+    for (; argIndex <= procedure.returnCount(type); ++argIndex) {
         if (argIndex >= inst.args.size())
             return false;
         
-        if (!isArgValidForValue(inst.args[argIndex], patchpoint))
+        if (!isArgValidForType(inst.args[argIndex], type.isTuple() ? procedure.extractFromTuple(type, argIndex - 1) : type))
             return false;
-        if (!isArgValidForRep(code(), inst.args[argIndex], patchpoint->resultConstraint))
+        if (!isArgValidForRep(code(), inst.args[argIndex], patchpoint->resultConstraints[argIndex - 1]))
             return false;
-        argIndex++;
     }
 
     if (!isValidImpl(0, argIndex, inst))
@@ -109,11 +114,13 @@ bool PatchpointSpecial::isValid(Inst& inst)
 
 bool PatchpointSpecial::admitsStack(Inst& inst, unsigned argIndex)
 {
-    if (inst.origin->type() == Void)
-        return admitsStackImpl(0, 1, inst, argIndex);
+    ASSERT(argIndex);
 
-    if (argIndex == 1) {
-        switch (inst.origin->as<PatchpointValue>()->resultConstraint.kind()) {
+    Type type = inst.origin->type();
+    unsigned returnCount = code().proc().returnCount(type);
+
+    if (argIndex <= returnCount) {
+        switch (inst.origin->as<PatchpointValue>()->resultConstraints[argIndex - 1].kind()) {
         case ValueRep::WarmAny:
         case ValueRep::StackArgument:
             return true;
@@ -130,7 +137,7 @@ bool PatchpointSpecial::admitsStack(Inst& inst, unsigned argIndex)
         }
     }
 
-    return admitsStackImpl(0, 2, inst, argIndex);
+    return admitsStackImpl(0, returnCount + 1, inst, argIndex);
 }
 
 bool PatchpointSpecial::admitsExtendedOffsetAddr(Inst& inst, unsigned argIndex)
@@ -140,12 +147,15 @@ bool PatchpointSpecial::admitsExtendedOffsetAddr(Inst& inst, unsigned argIndex)
 
 CCallHelpers::Jump PatchpointSpecial::generate(Inst& inst, CCallHelpers& jit, Air::GenerationContext& context)
 {
+    const Procedure& procedure = code().proc();
     PatchpointValue* value = inst.origin->as<PatchpointValue>();
     ASSERT(value);
 
     Vector<ValueRep> reps;
     unsigned offset = 1;
-    if (inst.origin->type() != Void)
+
+    Type type = value->type();
+    while (offset <= procedure.returnCount(type))
         reps.append(repForArg(*context.code, inst.args[offset++]));
     reps.appendVector(repsImpl(context, 0, offset, inst));
     offset += value->numChildren();
index c7f7678..217a896 100644 (file)
@@ -37,7 +37,14 @@ PatchpointValue::~PatchpointValue()
 void PatchpointValue::dumpMeta(CommaPrinter& comma, PrintStream& out) const
 {
     Base::dumpMeta(comma, out);
-    out.print(comma, "resultConstraint = ", resultConstraint);
+    out.print(comma, "resultConstraints = ");
+    out.print(resultConstraints.size() > 1 ? "[" : "");
+
+    CommaPrinter constraintComma;
+    for (const auto& constraint : resultConstraints)
+        out.print(constraintComma, constraint);
+    out.print(resultConstraints.size() > 1 ? "]" : "");
+
     if (numGPScratchRegisters)
         out.print(comma, "numGPScratchRegisters = ", numGPScratchRegisters);
     if (numFPScratchRegisters)
@@ -47,8 +54,9 @@ void PatchpointValue::dumpMeta(CommaPrinter& comma, PrintStream& out) const
 PatchpointValue::PatchpointValue(Type type, Origin origin)
     : Base(CheckedOpcode, Patchpoint, type, origin)
     , effects(Effects::forCall())
-    , resultConstraint(type == Void ? ValueRep::WarmAny : ValueRep::SomeRegister)
 {
+    if (!type.isTuple())
+        resultConstraints.append(type == Void ? ValueRep::WarmAny : ValueRep::SomeRegister);
 }
 
 } } // namespace JSC::B3
index 42b5471..f3a41a4 100644 (file)
@@ -52,8 +52,9 @@ public:
 
     // The input representation (i.e. constraint) of the return value. This defaults to WarmAny if the
     // type is Void and it defaults to SomeRegister otherwise. It's illegal to mess with this if the type
-    // is Void. Otherwise you can set this to any input constraint.
-    ValueRep resultConstraint;
+    // is Void. Otherwise you can set this to any input constraint. If the type of the patchpoint is a tuple,
+    // the constraints must be set explicitly.
+    Vector<ValueRep, 1> resultConstraints;
 
     // The number of scratch registers that this patchpoint gets. The scratch register is guaranteed
     // to be different from any input register and the destination register. It's also guaranteed not
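To make the new constraint behavior concrete, here is a minimal stand-alone sketch (names are hypothetical, not JSC's) of what the `PatchpointValue` constructor change above does: a non-tuple patchpoint gets one default constraint, while a tuple patchpoint starts with none and the client must append one constraint per tuple entry:

```cpp
#include <cassert>
#include <vector>

// Simplified stand-in for B3's ValueRep constraint kinds.
enum class RepKind { WarmAny, SomeRegister, Register, StackArgument };

// Minimal model of PatchpointValue's constraint defaulting: non-tuple
// patchpoints get one default constraint (WarmAny for Void, SomeRegister
// otherwise); tuple patchpoints start empty and the client appends a
// constraint for each tuple entry explicitly.
struct PatchpointModel {
    std::vector<RepKind> resultConstraints;

    PatchpointModel(bool isTuple, bool isVoid)
    {
        if (!isTuple)
            resultConstraints.push_back(isVoid ? RepKind::WarmAny : RepKind::SomeRegister);
    }
};
```

A tuple-typed patchpoint would then push one `RepKind` per tuple entry before generation, matching the invariant the lowering code relies on (`resultConstraints.size() == returnCount`).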
index 9f816c9..ffcb2d4 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2015-2017 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2019 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -85,6 +85,24 @@ Variable* Procedure::addVariable(Type type)
     return m_variables.addNew(type); 
 }
 
+Type Procedure::addTuple(Vector<Type>&& types)
+{
+    Type result = Type::tupleFromIndex(m_tuples.size());
+    m_tuples.append(WTFMove(types));
+    ASSERT(result.isTuple());
+    return result;
+}
+
+bool Procedure::isValidTuple(Type tuple) const
+{
+    return tuple.tupleIndex() < m_tuples.size();
+}
+
+const Vector<Type>& Procedure::tupleForType(Type tuple) const
+{
+    return m_tuples[tuple.tupleIndex()];
+}
+
 Value* Procedure::clone(Value* value)
 {
     std::unique_ptr<Value> clone(value->cloneImpl());
@@ -95,7 +113,7 @@ Value* Procedure::clone(Value* value)
 
 Value* Procedure::addIntConstant(Origin origin, Type type, int64_t value)
 {
-    switch (type) {
+    switch (type.kind()) {
     case Int32:
         return add<Const32Value>(origin, static_cast<int32_t>(value));
     case Int64:
@@ -117,7 +135,7 @@ Value* Procedure::addIntConstant(Value* likeValue, int64_t value)
 
 Value* Procedure::addConstant(Origin origin, Type type, uint64_t bits)
 {
-    switch (type) {
+    switch (type.kind()) {
     case Int32:
         return add<Const32Value>(origin, static_cast<int32_t>(bits));
     case Int64:
index e2106ec..d87c2f7 100644 (file)
@@ -111,7 +111,14 @@ public:
 
     JS_EXPORT_PRIVATE StackSlot* addStackSlot(unsigned byteSize);
     JS_EXPORT_PRIVATE Variable* addVariable(Type);
-    
+
+    JS_EXPORT_PRIVATE Type addTuple(Vector<Type>&& types);
+    bool isValidTuple(Type tuple) const;
+    Type extractFromTuple(Type tuple, unsigned index) const;
+    const Vector<Type>& tupleForType(Type tuple) const;
+
+    unsigned returnCount(Type type) const { return type.isTuple() ? tupleForType(type).size() : type.isNumeric(); }
+
     template<typename ValueType, typename... Arguments>
     ValueType* add(Arguments...);
 
@@ -273,6 +280,7 @@ private:
 
     SparseCollection<StackSlot> m_stackSlots;
     SparseCollection<Variable> m_variables;
+    Vector<Vector<Type>> m_tuples;
     Vector<std::unique_ptr<BasicBlock>> m_blocks;
     SparseCollection<Value> m_values;
     std::unique_ptr<CFG> m_cfg;
index 1156a55..5030148 100644 (file)
@@ -39,6 +39,13 @@ ValueType* Procedure::add(Arguments... arguments)
     return static_cast<ValueType*>(addValueImpl(Value::allocate<ValueType>(arguments...)));
 }
 
+inline Type Procedure::extractFromTuple(Type tuple, unsigned index) const
+{
+    ASSERT(tuple.tupleIndex() < m_tuples.size());
+    ASSERT(index < m_tuples[tuple.tupleIndex()].size());
+    return m_tuples[tuple.tupleIndex()][index];
+}
+
 } } // namespace JSC::B3
 
 #endif // ENABLE(B3_JIT)
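The `addTuple`/`tupleForType`/`extractFromTuple`/`returnCount` additions above can be sketched as a stand-alone model (hypothetical names; `TypeBits` stands in for the real `Type` class) showing how a tuple type is just a tagged index into a per-procedure table of type vectors:

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// The same bit layout as the patch: the high bit marks a tuple, the low
// bits are the index into the procedure's tuple table.
using TypeBits = uint32_t;
constexpr TypeBits tupleFlag = 1u << 31;
constexpr TypeBits tupleIndexMask = tupleFlag - 1;
enum : TypeBits { Void = 0, Int32, Int64, Float, Double };

struct ProcedureModel {
    std::vector<std::vector<TypeBits>> tuples;

    // Mirrors Procedure::addTuple: record the type vector and hand back
    // an encoded type whose index points at it.
    TypeBits addTuple(std::vector<TypeBits>&& types)
    {
        TypeBits result = static_cast<TypeBits>(tuples.size()) | tupleFlag;
        tuples.push_back(std::move(types));
        return result;
    }

    bool isTuple(TypeBits type) const { return type & tupleFlag; }

    const std::vector<TypeBits>& tupleForType(TypeBits type) const
    {
        return tuples[type & tupleIndexMask];
    }

    TypeBits extractFromTuple(TypeBits type, unsigned index) const
    {
        return tupleForType(type)[index];
    }

    // Mirrors Procedure::returnCount: one result per tuple entry, zero
    // for Void, one for any numeric type.
    unsigned returnCount(TypeBits type) const
    {
        if (isTuple(type))
            return static_cast<unsigned>(tupleForType(type).size());
        return type != Void ? 1 : 0;
    }
};
```

This is why the commit message notes that the `Tuple` type plus the procedure is enough to recover the full vector of element types.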
index 9dc2633..fdc6ffd 100644 (file)
@@ -115,7 +115,7 @@ public:
 
     static IntRange top(Type type)
     {
-        switch (type) {
+        switch (type.kind()) {
         case Int32:
             return top<int32_t>();
         case Int64:
@@ -136,7 +136,7 @@ public:
 
     static IntRange rangeForMask(int64_t mask, Type type)
     {
-        switch (type) {
+        switch (type.kind()) {
         case Int32:
             return rangeForMask<int32_t>(static_cast<int32_t>(mask));
         case Int64:
@@ -158,7 +158,7 @@ public:
 
     static IntRange rangeForZShr(int32_t shiftAmount, Type type)
     {
-        switch (type) {
+        switch (type.kind()) {
         case Int32:
             return rangeForZShr<int32_t>(shiftAmount);
         case Int64:
@@ -188,7 +188,7 @@ public:
 
     bool couldOverflowAdd(const IntRange& other, Type type)
     {
-        switch (type) {
+        switch (type.kind()) {
         case Int32:
             return couldOverflowAdd<int32_t>(other);
         case Int64:
@@ -209,7 +209,7 @@ public:
 
     bool couldOverflowSub(const IntRange& other, Type type)
     {
-        switch (type) {
+        switch (type.kind()) {
         case Int32:
             return couldOverflowSub<int32_t>(other);
         case Int64:
@@ -230,7 +230,7 @@ public:
 
     bool couldOverflowMul(const IntRange& other, Type type)
     {
-        switch (type) {
+        switch (type.kind()) {
         case Int32:
             return couldOverflowMul<int32_t>(other);
         case Int64:
@@ -256,7 +256,7 @@ public:
 
     IntRange shl(int32_t shiftAmount, Type type)
     {
-        switch (type) {
+        switch (type.kind()) {
         case Int32:
             return shl<int32_t>(shiftAmount);
         case Int64:
@@ -278,7 +278,7 @@ public:
 
     IntRange sShr(int32_t shiftAmount, Type type)
     {
-        switch (type) {
+        switch (type.kind()) {
         case Int32:
             return sShr<int32_t>(shiftAmount);
         case Int64:
@@ -311,7 +311,7 @@ public:
 
     IntRange zShr(int32_t shiftAmount, Type type)
     {
-        switch (type) {
+        switch (type.kind()) {
         case Int32:
             return zShr<int32_t>(shiftAmount);
         case Int64:
@@ -332,7 +332,7 @@ public:
 
     IntRange add(const IntRange& other, Type type)
     {
-        switch (type) {
+        switch (type.kind()) {
         case Int32:
             return add<int32_t>(other);
         case Int64:
@@ -353,7 +353,7 @@ public:
 
     IntRange sub(const IntRange& other, Type type)
     {
-        switch (type) {
+        switch (type.kind()) {
         case Int32:
             return sub<int32_t>(other);
         case Int64:
@@ -380,7 +380,7 @@ public:
 
     IntRange mul(const IntRange& other, Type type)
     {
-        switch (type) {
+        switch (type.kind()) {
         case Int32:
             return mul<int32_t>(other);
         case Int64:
index 298c94c..7ccc3b2 100644 (file)
@@ -174,7 +174,7 @@ bool StackmapSpecial::isValidImpl(
         Value* child = value->child(i + numIgnoredB3Args);
         Arg& arg = inst.args[i + numIgnoredAirArgs];
 
-        if (!isArgValidForValue(arg, child))
+        if (!isArgValidForType(arg, child->type()))
             return false;
     }
 
@@ -228,7 +228,7 @@ Vector<ValueRep> StackmapSpecial::repsImpl(Air::GenerationContext& context, unsi
     return result;
 }
 
-bool StackmapSpecial::isArgValidForValue(const Air::Arg& arg, Value* value)
+bool StackmapSpecial::isArgValidForType(const Air::Arg& arg, Type type)
 {
     switch (arg.kind()) {
     case Arg::Tmp:
@@ -241,7 +241,7 @@ bool StackmapSpecial::isArgValidForValue(const Air::Arg& arg, Value* value)
         break;
     }
 
-    return arg.canRepresent(value);
+    return arg.canRepresent(type);
 }
 
 bool StackmapSpecial::isArgValidForRep(Air::Code& code, const Air::Arg& arg, const ValueRep& rep)
@@ -250,7 +250,7 @@ bool StackmapSpecial::isArgValidForRep(Air::Code& code, const Air::Arg& arg, con
     case ValueRep::WarmAny:
     case ValueRep::ColdAny:
     case ValueRep::LateColdAny:
-        // We already verified by isArgValidForValue().
+        // We already verified by isArgValidForType().
         return true;
     case ValueRep::SomeRegister:
     case ValueRep::SomeRegisterWithClobber:
index 00e9bd2..58b8c96 100644 (file)
@@ -73,7 +73,7 @@ protected:
     Vector<ValueRep> repsImpl(
         Air::GenerationContext&, unsigned numIgnoredB3Args, unsigned numIgnoredAirArgs, Air::Inst&);
 
-    static bool isArgValidForValue(const Air::Arg&, Value*);
+    static bool isArgValidForType(const Air::Arg&, Type);
     static bool isArgValidForRep(Air::Code&, const Air::Arg&, const ValueRep&);
     static ValueRep repForArg(Air::Code&, const Air::Arg&);
 };
index 1828e50..70c49c8 100644 (file)
@@ -63,6 +63,7 @@ public:
     // Use this to add children.
     void append(const ConstrainedValue& value)
     {
+        ASSERT(value.value()->type().isNumeric());
         append(value.value(), value.rep());
     }
 
index 0057eaf..67841bd 100644 (file)
@@ -36,7 +36,7 @@ using namespace JSC::B3;
 
 void printInternal(PrintStream& out, Type type)
 {
-    switch (type) {
+    switch (type.kind()) {
     case Void:
         out.print("Void");
         return;
@@ -52,10 +52,15 @@ void printInternal(PrintStream& out, Type type)
     case Double:
         out.print("Double");
         return;
+    case Tuple:
+        out.print("Tuple");
+        return;
     }
     RELEASE_ASSERT_NOT_REACHED();
 }
 
+static_assert(std::is_pod_v<JSC::B3::TypeKind>);
 } // namespace WTF
 
+
 #endif // ENABLE(B3_JIT)
index 4cd4710..647f8da 100644 (file)
@@ -36,22 +36,70 @@ IGNORE_RETURN_TYPE_WARNINGS_BEGIN
 
 namespace JSC { namespace B3 {
 
-enum Type : int8_t {
+static constexpr uint32_t tupleFlag = 1ul << (std::numeric_limits<uint32_t>::digits - 1);
+static constexpr uint32_t tupleIndexMask = tupleFlag - 1;
+
+enum TypeKind : uint32_t {
     Void,
     Int32,
     Int64,
     Float,
     Double,
+
+    // Tuples are represented as the tupleFlag bitwise-OR'd with the tuple's index into Procedure's m_tuples table.
+    Tuple = tupleFlag,
 };
 
-inline bool isInt(Type type)
+class Type {
+public:
+    constexpr Type() = default;
+    constexpr Type(const Type&) = default;
+    constexpr Type(TypeKind kind)
+        : m_kind(kind)
+    { }
+
+    ~Type() = default;
+
+    static Type tupleFromIndex(unsigned index) { ASSERT(!(index & tupleFlag)); return static_cast<TypeKind>(index | tupleFlag); }
+
+    TypeKind kind() const { return m_kind & tupleFlag ? Tuple : m_kind; }
+    uint32_t tupleIndex() const { ASSERT(m_kind & tupleFlag); return m_kind & tupleIndexMask; }
+    uint32_t hash() const { return m_kind; }
+
+    inline bool isInt() const;
+    inline bool isFloat() const;
+    inline bool isNumeric() const;
+    inline bool isTuple() const;
+
+    bool operator==(const TypeKind& otherKind) const { return kind() == otherKind; }
+    bool operator==(const Type& type) const { return m_kind == type.m_kind; }
+    bool operator!=(const TypeKind& otherKind) const { return !(*this == otherKind); }
+    bool operator!=(const Type& type) const { return !(*this == type); }
+
+private:
+    TypeKind m_kind { Void };
+};
+
+static_assert(sizeof(TypeKind) == sizeof(Type));
+
+inline bool Type::isInt() const
+{
+    return kind() == Int32 || kind() == Int64;
+}
+
+inline bool Type::isFloat() const
+{
+    return kind() == Float || kind() == Double;
+}
+
+inline bool Type::isNumeric() const
 {
-    return type == Int32 || type == Int64;
+    return isInt() || isFloat();
 }
 
-inline bool isFloat(Type type)
+inline bool Type::isTuple() const
 {
-    return type == Float || type == Double;
+    return kind() == Tuple;
 }
 
 inline Type pointerType()
@@ -63,8 +111,9 @@ inline Type pointerType()
 
 inline size_t sizeofType(Type type)
 {
-    switch (type) {
+    switch (type.kind()) {
     case Void:
+    case Tuple:
         return 0;
     case Int32:
     case Float:
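The encoding introduced above can be sketched in isolation. The following is a hypothetical re-creation for illustration only: the real `tupleFlag` constant is defined earlier in B3Type.h and is assumed here to be the top bit of a `uint32_t`.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the Type encoding, assuming tupleFlag is the top bit (the actual
// constant lives earlier in B3Type.h).
constexpr uint32_t tupleFlag = 1u << 31;
constexpr uint32_t tupleIndexMask = tupleFlag - 1;

enum TypeKind : uint32_t { Void, Int32, Int64, Float, Double, Tuple = tupleFlag };

class Type {
public:
    constexpr Type(TypeKind kind) : m_kind(kind) { }

    // A tuple type is just tupleFlag | index, so any index below tupleFlag fits.
    static Type tupleFromIndex(uint32_t index)
    {
        assert(!(index & tupleFlag));
        return static_cast<TypeKind>(index | tupleFlag);
    }

    // Every tuple, whatever its index, reports the same kind: Tuple.
    TypeKind kind() const { return (m_kind & tupleFlag) ? Tuple : m_kind; }
    uint32_t tupleIndex() const { assert(m_kind & tupleFlag); return m_kind & tupleIndexMask; }

private:
    TypeKind m_kind;
};
```

This is why two distinct tuples compare unequal as `Type`s (different indices) while both answer `Tuple` for `kind()`.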
index 16d807e..d06e661 100644 (file)
@@ -50,7 +50,7 @@ public:
     
     T& at(Type type)
     {
-        switch (type) {
+        switch (type.kind()) {
         case Void:
             return m_void;
         case Int32:
@@ -61,6 +61,8 @@ public:
             return m_float;
         case Double:
             return m_double;
+        case Tuple:
+            return m_tuple;
         }
         ASSERT_NOT_REACHED();
     }
@@ -96,6 +98,7 @@ private:
     T m_int64;
     T m_float;
     T m_double;
+    T m_tuple;
 };
 
 } } // namespace JSC::B3
index f7b3c55..16cf043 100644 (file)
@@ -72,6 +72,7 @@ public:
         HashMap<Value*, unsigned> valueInBlock;
         HashMap<Value*, BasicBlock*> valueOwner;
         HashMap<Value*, unsigned> valueIndex;
+        HashMap<Value*, Vector<Optional<Type>>> extractions;
 
         for (BasicBlock* block : m_procedure) {
             blocks.add(block);
@@ -204,7 +205,7 @@ public:
                 case Mod:
                     if (value->isChill()) {
                         VALIDATE(value->opcode() == Div || value->opcode() == Mod, ("At ", *value));
-                        VALIDATE(isInt(value->type()), ("At ", *value));
+                        VALIDATE(value->type().isInt(), ("At ", *value));
                     }
                     break;
                 default:
@@ -214,13 +215,13 @@ public:
                 VALIDATE(value->numChildren() == 2, ("At ", *value));
                 VALIDATE(value->type() == value->child(0)->type(), ("At ", *value));
                 VALIDATE(value->type() == value->child(1)->type(), ("At ", *value));
-                VALIDATE(value->type() != Void, ("At ", *value));
+                VALIDATE(value->type().isNumeric(), ("At ", *value));
                 break;
             case Neg:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
                 VALIDATE(value->numChildren() == 1, ("At ", *value));
                 VALIDATE(value->type() == value->child(0)->type(), ("At ", *value));
-                VALIDATE(value->type() != Void, ("At ", *value));
+                VALIDATE(value->type().isNumeric(), ("At ", *value));
                 break;
             case Shl:
             case SShr:
@@ -231,7 +232,7 @@ public:
                 VALIDATE(value->numChildren() == 2, ("At ", *value));
                 VALIDATE(value->type() == value->child(0)->type(), ("At ", *value));
                 VALIDATE(value->child(1)->type() == Int32, ("At ", *value));
-                VALIDATE(isInt(value->type()), ("At ", *value));
+                VALIDATE(value->type().isInt(), ("At ", *value));
                 break;
             case BitwiseCast:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
@@ -261,8 +262,8 @@ public:
             case Clz:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
                 VALIDATE(value->numChildren() == 1, ("At ", *value));
-                VALIDATE(isInt(value->child(0)->type()), ("At ", *value));
-                VALIDATE(isInt(value->type()), ("At ", *value));
+                VALIDATE(value->child(0)->type().isInt(), ("At ", *value));
+                VALIDATE(value->type().isInt(), ("At ", *value));
                 break;
             case Trunc:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
@@ -278,19 +279,19 @@ public:
             case Sqrt:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
                 VALIDATE(value->numChildren() == 1, ("At ", *value));
-                VALIDATE(isFloat(value->child(0)->type()), ("At ", *value));
-                VALIDATE(isFloat(value->type()), ("At ", *value));
+                VALIDATE(value->child(0)->type().isFloat(), ("At ", *value));
+                VALIDATE(value->type().isFloat(), ("At ", *value));
                 break;
             case IToD:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
                 VALIDATE(value->numChildren() == 1, ("At ", *value));
-                VALIDATE(isInt(value->child(0)->type()), ("At ", *value));
+                VALIDATE(value->child(0)->type().isInt(), ("At ", *value));
                 VALIDATE(value->type() == Double, ("At ", *value));
                 break;
             case IToF:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
                 VALIDATE(value->numChildren() == 1, ("At ", *value));
-                VALIDATE(isInt(value->child(0)->type()), ("At ", *value));
+                VALIDATE(value->child(0)->type().isInt(), ("At ", *value));
                 VALIDATE(value->type() == Float, ("At ", *value));
                 break;
             case FloatToDouble:
@@ -323,20 +324,20 @@ public:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
                 VALIDATE(value->numChildren() == 2, ("At ", *value));
                 VALIDATE(value->child(0)->type() == value->child(1)->type(), ("At ", *value));
-                VALIDATE(isInt(value->child(0)->type()), ("At ", *value));
+                VALIDATE(value->child(0)->type().isInt(), ("At ", *value));
                 VALIDATE(value->type() == Int32, ("At ", *value));
                 break;
             case EqualOrUnordered:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
                 VALIDATE(value->numChildren() == 2, ("At ", *value));
                 VALIDATE(value->child(0)->type() == value->child(1)->type(), ("At ", *value));
-                VALIDATE(isFloat(value->child(0)->type()), ("At ", *value));
+                VALIDATE(value->child(0)->type().isFloat(), ("At ", *value));
                 VALIDATE(value->type() == Int32, ("At ", *value));
                 break;
             case Select:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
                 VALIDATE(value->numChildren() == 3, ("At ", *value));
-                VALIDATE(isInt(value->child(0)->type()), ("At ", *value));
+                VALIDATE(value->child(0)->type().isInt(), ("At ", *value));
                 VALIDATE(value->type() == value->child(1)->type(), ("At ", *value));
                 VALIDATE(value->type() == value->child(2)->type(), ("At ", *value));
                 break;
@@ -355,7 +356,7 @@ public:
                 VALIDATE(!value->kind().isChill(), ("At ", *value));
                 VALIDATE(value->numChildren() == 1, ("At ", *value));
                 VALIDATE(value->child(0)->type() == pointerType(), ("At ", *value));
-                VALIDATE(value->type() != Void, ("At ", *value));
+                VALIDATE(value->type().isNumeric(), ("At ", *value));
                 validateFence(value);
                 validateStackAccess(value);
                 break;
@@ -382,7 +383,7 @@ public:
                 VALIDATE(value->numChildren() == 3, ("At ", *value));
                 VALIDATE(value->type() == Int32, ("At ", *value));
                 VALIDATE(value->child(0)->type() == value->child(1)->type(), ("At ", *value));
-                VALIDATE(isInt(value->child(0)->type()), ("At ", *value));
+                VALIDATE(value->child(0)->type().isInt(), ("At ", *value));
                 VALIDATE(value->child(2)->type() == pointerType(), ("At ", *value));
                 validateAtomic(value);
                 validateStackAccess(value);
@@ -392,7 +393,7 @@ public:
                 VALIDATE(value->numChildren() == 3, ("At ", *value));
                 VALIDATE(value->type() == value->child(0)->type(), ("At ", *value));
                 VALIDATE(value->type() == value->child(1)->type(), ("At ", *value));
-                VALIDATE(isInt(value->type()), ("At ", *value));
+                VALIDATE(value->type().isInt(), ("At ", *value));
                 VALIDATE(value->child(2)->type() == pointerType(), ("At ", *value));
                 validateAtomic(value);
                 validateStackAccess(value);
@@ -406,7 +407,7 @@ public:
                 VALIDATE(!value->kind().isChill(), ("At ", *value));
                 VALIDATE(value->numChildren() == 2, ("At ", *value));
                 VALIDATE(value->type() == value->child(0)->type(), ("At ", *value));
-                VALIDATE(isInt(value->type()), ("At ", *value));
+                VALIDATE(value->type().isInt(), ("At ", *value));
                 VALIDATE(value->child(1)->type() == pointerType(), ("At ", *value));
                 validateAtomic(value);
                 validateStackAccess(value);
@@ -415,7 +416,7 @@ public:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
                 VALIDATE(value->numChildren() == 1, ("At ", *value));
                 VALIDATE(value->type() == value->child(0)->type(), ("At ", *value));
-                VALIDATE(isInt(value->type()), ("At ", *value));
+                VALIDATE(value->type().isInt(), ("At ", *value));
                 break;
             case WasmAddress:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
@@ -430,19 +431,35 @@ public:
                 break;
             case Patchpoint:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
-                if (value->type() == Void)
-                    VALIDATE(value->as<PatchpointValue>()->resultConstraint == ValueRep::WarmAny, ("At ", *value));
-                else
-                    validateStackmapConstraint(value, ConstrainedValue(value, value->as<PatchpointValue>()->resultConstraint), ConstraintRole::Def);
+                if (value->type() == Void) {
+                    VALIDATE(value->as<PatchpointValue>()->resultConstraints.size() == 1, ("At ", *value));
+                    VALIDATE(value->as<PatchpointValue>()->resultConstraints[0] == ValueRep::WarmAny, ("At ", *value));
+                } else {
+                    if (value->type().isNumeric()) {
+                        VALIDATE(value->as<PatchpointValue>()->resultConstraints.size() == 1, ("At ", *value));
+                        validateStackmapConstraint(value, ConstrainedValue(value, value->as<PatchpointValue>()->resultConstraints[0]), ConstraintRole::Def);
+                    } else {
+                        VALIDATE(m_procedure.isValidTuple(value->type()), ("At ", *value));
+                        VALIDATE(value->as<PatchpointValue>()->resultConstraints.size() == m_procedure.tupleForType(value->type()).size(), ("At ", *value));
+                        for (unsigned i = 0; i < value->as<PatchpointValue>()->resultConstraints.size(); ++i)
+                            validateStackmapConstraint(value, ConstrainedValue(value, value->as<PatchpointValue>()->resultConstraints[i]), ConstraintRole::Def, i);
+                    }
+                }
                 validateStackmap(value);
                 break;
+            case Extract: {
+                VALIDATE(value->numChildren() == 1, ("At ", *value));
+                VALIDATE(value->child(0)->type() == Tuple, ("At ", *value));
+                VALIDATE(value->type().isNumeric(), ("At ", *value));
+                break;
+            }
             case CheckAdd:
             case CheckSub:
             case CheckMul:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
                 VALIDATE(value->numChildren() >= 2, ("At ", *value));
-                VALIDATE(isInt(value->child(0)->type()), ("At ", *value));
-                VALIDATE(isInt(value->child(1)->type()), ("At ", *value));
+                VALIDATE(value->child(0)->type().isInt(), ("At ", *value));
+                VALIDATE(value->child(1)->type().isInt(), ("At ", *value));
                 VALIDATE(value->as<StackmapValue>()->constrainedChild(0).rep() == ValueRep::WarmAny, ("At ", *value));
                 VALIDATE(value->as<StackmapValue>()->constrainedChild(1).rep() == ValueRep::WarmAny, ("At ", *value));
                 validateStackmap(value);
@@ -450,7 +467,7 @@ public:
             case Check:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
                 VALIDATE(value->numChildren() >= 1, ("At ", *value));
-                VALIDATE(isInt(value->child(0)->type()), ("At ", *value));
+                VALIDATE(value->child(0)->type().isInt(), ("At ", *value));
                 VALIDATE(value->as<StackmapValue>()->constrainedChild(0).rep() == ValueRep::WarmAny, ("At ", *value));
                 validateStackmap(value);
                 break;
@@ -472,6 +489,7 @@ public:
                 VALIDATE(value->numChildren() == 1, ("At ", *value));
                 VALIDATE(value->as<UpsilonValue>()->phi(), ("At ", *value));
                 VALIDATE(value->as<UpsilonValue>()->phi()->opcode() == Phi, ("At ", *value));
+                VALIDATE(value->child(0)->type() != Void, ("At ", *value));
                 VALIDATE(value->child(0)->type() == value->as<UpsilonValue>()->phi()->type(), ("At ", *value));
                 VALIDATE(valueInProc.contains(value->as<UpsilonValue>()->phi()), ("At ", *value));
                 break;
@@ -501,14 +519,14 @@ public:
             case Branch:
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
                 VALIDATE(value->numChildren() == 1, ("At ", *value));
-                VALIDATE(isInt(value->child(0)->type()), ("At ", *value));
+                VALIDATE(value->child(0)->type().isInt(), ("At ", *value));
                 VALIDATE(value->type() == Void, ("At ", *value));
                 VALIDATE(valueOwner.get(value)->numSuccessors() == 2, ("At ", *value));
                 break;
             case Switch: {
                 VALIDATE(!value->kind().hasExtraBits(), ("At ", *value));
                 VALIDATE(value->numChildren() == 1, ("At ", *value));
-                VALIDATE(isInt(value->child(0)->type()), ("At ", *value));
+                VALIDATE(value->child(0)->type().isInt(), ("At ", *value));
                 VALIDATE(value->type() == Void, ("At ", *value));
                 VALIDATE(value->as<SwitchValue>()->hasFallThrough(valueOwner.get(value)), ("At ", *value));
                 // This validates the same thing as hasFallThrough, but more explicitly. We want to
@@ -560,7 +578,7 @@ private:
         Use,
         Def
     };
-    void validateStackmapConstraint(Value* context, const ConstrainedValue& value, ConstraintRole role = ConstraintRole::Use)
+    void validateStackmapConstraint(Value* context, const ConstrainedValue& value, ConstraintRole role = ConstraintRole::Use, unsigned tupleIndex = 0)
     {
         switch (value.rep().kind()) {
         case ValueRep::WarmAny:
@@ -583,10 +601,17 @@ private:
         case ValueRep::SomeLateRegister:
             if (value.rep().kind() == ValueRep::LateRegister)
                 VALIDATE(role == ConstraintRole::Use, ("At ", *context, ": ", value));
-            if (value.rep().reg().isGPR())
-                VALIDATE(isInt(value.value()->type()), ("At ", *context, ": ", value));
-            else
-                VALIDATE(isFloat(value.value()->type()), ("At ", *context, ": ", value));
+            if (value.rep().reg().isGPR()) {
+                if (value.value()->type().isTuple())
+                    VALIDATE(m_procedure.extractFromTuple(value.value()->type(), tupleIndex).isInt(), ("At ", *context, ": ", value));
+                else
+                    VALIDATE(value.value()->type().isInt(), ("At ", *context, ": ", value));
+            } else {
+                if (value.value()->type().isTuple())
+                    VALIDATE(m_procedure.extractFromTuple(value.value()->type(), tupleIndex).isFloat(), ("At ", *context, ": ", value));
+                else
+                    VALIDATE(value.value()->type().isFloat(), ("At ", *context, ": ", value));
+            }
             break;
         default:
             VALIDATE(false, ("At ", *context, ": ", value));
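The Patchpoint arm above enforces a simple shape invariant: the number of result constraints must match the result type. A distilled sketch of that rule follows; the names (`ResultShape`, `resultConstraintCountIsValid`) are illustrative and not part of the B3 API.

```cpp
#include <cassert>
#include <cstddef>

// Distilled form of the validator's rule: Void and numeric patchpoints carry
// exactly one result constraint; a tuple patchpoint carries one constraint
// per element of its tuple.
enum class ResultShape { Void, Numeric, Tuple };

bool resultConstraintCountIsValid(ResultShape shape, size_t constraintCount, size_t tupleSize)
{
    switch (shape) {
    case ResultShape::Void:
    case ResultShape::Numeric:
        return constraintCount == 1;
    case ResultShape::Tuple:
        return constraintCount == tupleSize;
    }
    return false;
}
```

In the diff itself, the Void case additionally requires that single constraint to be `ValueRep::WarmAny`, and each tuple element's constraint is checked against the element type via `extractFromTuple`.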
index a80d661..a4faa6c 100644 (file)
@@ -445,7 +445,7 @@ Value* Value::invertedCompare(Procedure& proc) const
 
 bool Value::isRounded() const
 {
-    ASSERT(isFloat(type()));
+    ASSERT(type().isFloat());
     switch (opcode()) {
     case Floor:
     case Ceil:
@@ -578,6 +578,7 @@ Effects Value::effects() const
     case EqualOrUnordered:
     case Select:
     case Depend:
+    case Extract:
         break;
     case Div:
     case UDiv:
@@ -861,7 +862,7 @@ Type Value::typeFor(Kind kind, Value* firstChild, Value* secondChild)
     case IToF:
         return Float;
     case BitwiseCast:
-        switch (firstChild->type()) {
+        switch (firstChild->type().kind()) {
         case Int64:
             return Double;
         case Double:
@@ -871,6 +872,7 @@ Type Value::typeFor(Kind kind, Value* firstChild, Value* secondChild)
         case Float:
             return Int32;
         case Void:
+        case Tuple:
             ASSERT_NOT_REACHED();
         }
         return Void;
index 49f0c54..0100bb4 100644 (file)
@@ -423,6 +423,7 @@ protected:
         case Load:
         case Switch:
         case Upsilon:
+        case Extract:
         case Set:
         case WasmAddress:
         case WasmBoundsCheck:
@@ -474,8 +475,10 @@ protected:
         case CheckMul:
         case Patchpoint:
             return sizeof(Vector<Value*, 3>);
+#ifdef NDEBUG
         default:
             break;
+#endif
         }
         RELEASE_ASSERT_NOT_REACHED();
         return 0;
index 5c3c47f..b1cd0a6 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2015-2016 Apple Inc. All rights reserved.
+ * Copyright (C) 2015-2019 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -35,6 +35,7 @@
 #include "B3Const64Value.h"
 #include "B3ConstDoubleValue.h"
 #include "B3ConstFloatValue.h"
+#include "B3ExtractValue.h"
 #include "B3FenceValue.h"
 #include "B3MemoryValue.h"
 #include "B3PatchpointValue.h"
@@ -138,6 +139,8 @@ namespace JSC { namespace B3 {
         return MACRO(SwitchValue); \
     case Upsilon: \
         return MACRO(UpsilonValue); \
+    case Extract: \
+        return MACRO(ExtractValue); \
     case WasmAddress: \
         return MACRO(WasmAddressValue); \
     case WasmBoundsCheck: \
index b839a3f..b4f09e7 100644 (file)
@@ -38,7 +38,7 @@ namespace JSC { namespace B3 {
 
 ValueKey ValueKey::intConstant(Type type, int64_t value)
 {
-    switch (type) {
+    switch (type.kind()) {
     case Int32:
         return ValueKey(Const32, Int32, value);
     case Int64:
index 18b092c..cd47a60 100644 (file)
@@ -108,7 +108,7 @@ public:
 
     unsigned hash() const
     {
-        return m_kind.hash() + m_type + WTF::IntHash<int32_t>::hash(u.indices[0]) + u.indices[1] + u.indices[2];
+        return m_kind.hash() + m_type.hash() + WTF::IntHash<int32_t>::hash(u.indices[0]) + u.indices[1] + u.indices[2];
     }
 
     explicit operator bool() const { return *this != ValueKey(); }
index 0ff39d9..7f96e7d 100644 (file)
@@ -94,7 +94,7 @@ public:
         Stack,
 
         // As an input representation, this forces the value to end up in the argument area at some
-        // offset.
+        // offset. As an output representation, this tells us which offset from SP B3 picked.
         StackArgument,
 
         // As an output representation, this tells us that B3 constant-folded the value.
index e92adda..ca23dc4 100644 (file)
@@ -59,8 +59,9 @@ inline Width pointerWidth()
 
 inline Width widthForType(Type type)
 {
-    switch (type) {
+    switch (type.kind()) {
     case Void:
+    case Tuple:
         ASSERT_NOT_REACHED();
         return Width8;
     case Int32:
index 2dac2ea..8fa0d50 100644 (file)
@@ -71,9 +71,14 @@ bool Arg::usesTmp(Air::Tmp tmp) const
     return uses;
 }
 
+bool Arg::canRepresent(Type type) const
+{
+    return isBank(bankForType(type));
+}
+
 bool Arg::canRepresent(Value* value) const
 {
-    return isBank(bankForType(value->type()));
+    return canRepresent(value->type());
 }
 
 bool Arg::isCompatibleBank(const Arg& other) const
index 1f925c0..82413c8 100644 (file)
@@ -1104,6 +1104,7 @@ public:
         ASSERT_NOT_REACHED();
     }
 
+    bool canRepresent(Type) const;
     bool canRepresent(Value* value) const;
 
     bool isCompatibleBank(const Arg& other) const;
index 0abe1b3..e1329a4 100644 (file)
@@ -91,7 +91,7 @@ Vector<Arg> computeCCallingConvention(Code& code, CCallValue* value)
 
 Tmp cCallResult(Type type)
 {
-    switch (type) {
+    switch (type.kind()) {
     case Void:
         return Tmp();
     case Int32:
@@ -100,6 +100,8 @@ Tmp cCallResult(Type type)
     case Float:
     case Double:
         return Tmp(FPRInfo::returnValueFPR);
+    case Tuple:
+        break;
     }
 
     RELEASE_ASSERT_NOT_REACHED();
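The rule the hunk above encodes: a C call's return value lands in the return register of its bank, and a tuple has no single return register at all. A standalone sketch, with a simplified `ReturnBank` enum standing in for B3's `Tmp`/register types:

```cpp
#include <cassert>
#include <cstdint>

// Simplified sketch of cCallResult()'s dispatch: each numeric kind maps to
// the return register of its bank, Void maps to no register, and Tuple falls
// through to the unreachable path because a C call cannot return a tuple.
enum TypeKind : uint32_t { Void, Int32, Int64, Float, Double, Tuple };
enum class ReturnBank { None, GP, FP, Invalid };

ReturnBank cCallReturnBank(TypeKind kind)
{
    switch (kind) {
    case Void:
        return ReturnBank::None;
    case Int32:
    case Int64:
        return ReturnBank::GP; // returnValueGPR in the real code
    case Float:
    case Double:
        return ReturnBank::FP; // returnValueFPR in the real code
    case Tuple:
        break; // RELEASE_ASSERT_NOT_REACHED() in the real code
    }
    return ReturnBank::Invalid;
}
```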
index 5bc28f0..56ab6cf 100644 (file)
@@ -99,8 +99,9 @@ void lowerMacros(Code& code)
                     inst.kind.effects = true;
 
                 Tmp result = cCallResult(value->type());
-                switch (value->type()) {
+                switch (value->type().kind()) {
                 case Void:
+                case Tuple:
                     break;
                 case Float:
                     insertionSet.insert(instIndex + 1, MoveFloat, value, result, resultDst);
index 687262a..155ca4a 100644 (file)
@@ -270,6 +270,8 @@ struct Operand {
 typedef Operand<int64_t> Int64Operand;
 typedef Operand<int32_t> Int32Operand;
 
+#define MAKE_OPERAND(value) Operand<decltype(value)> { #value, value }
+
 template<typename FloatType>
 void populateWithInterestingValues(Vector<Operand<FloatType>>& operands)
 {
@@ -279,8 +281,8 @@ void populateWithInterestingValues(Vector<Operand<FloatType>>& operands)
     operands.append({ "-0.4", static_cast<FloatType>(-0.5) });
     operands.append({ "0.5", static_cast<FloatType>(0.5) });
     operands.append({ "-0.5", static_cast<FloatType>(-0.5) });
-    operands.append({ "0.6", static_cast<FloatType>(0.5) });
-    operands.append({ "-0.6", static_cast<FloatType>(-0.5) });
+    operands.append({ "0.6", static_cast<FloatType>(0.6) });
+    operands.append({ "-0.6", static_cast<FloatType>(-0.6) });
     operands.append({ "1.", static_cast<FloatType>(1.) });
     operands.append({ "-1.", static_cast<FloatType>(-1.) });
     operands.append({ "2.", static_cast<FloatType>(2.) });
@@ -1011,6 +1013,7 @@ void addSShrShTests(const char* filter, Deque<RefPtr<SharedTask<void()>>>&);
 void addShrTests(const char* filter, Deque<RefPtr<SharedTask<void()>>>&);
 void addAtomicTests(const char* filter, Deque<RefPtr<SharedTask<void()>>>&);
 void addLoadTests(const char* filter, Deque<RefPtr<SharedTask<void()>>>&);
+void addTupleTests(const char* filter, Deque<RefPtr<SharedTask<void()>>>&);
 
 bool shouldRun(const char* filter, const char* testName);
 
index 82dea35..20f3843 100644 (file)
@@ -535,6 +535,7 @@ void run(const char* filter)
     RUN(testEqualDouble(PNaN, PNaN, false));
 
     addLoadTests(filter, tasks);
+    addTupleTests(filter, tasks);
 
     RUN(testSpillGP());
     RUN(testSpillFP());
@@ -903,7 +904,7 @@ int main(int argc, char** argv)
 
     JSC::initializeThreading();
     
-    for (unsigned i = 0; i <= 2; ++i) {
+    for (unsigned i = 1; i <= 2; ++i) {
         JSC::Options::defaultB3OptLevel() = i;
         run(filter);
     }
index d706210..336c822 100644 (file)
@@ -2793,7 +2793,7 @@ void testStorePartial8BitRegisterOnX86()
     patchpoint->append(ConstrainedValue(whereToStore, ValueRep(GPRInfo::regT2)));
 
     // We'll produce EDI.
-    patchpoint->resultConstraint = ValueRep::reg(GPRInfo::regT6);
+    patchpoint->resultConstraints = { ValueRep::reg(GPRInfo::regT6) };
 
     // Give the allocator a good reason not to use any other register.
     RegisterSet clobberSet = RegisterSet::allGPRs();
index c96909b..dd407d7 100644 (file)
@@ -65,7 +65,7 @@ void testPatchpointWithRegisterResult()
     PatchpointValue* patchpoint = root->appendNew<PatchpointValue>(proc, Int32, Origin());
     patchpoint->append(ConstrainedValue(arg1, ValueRep::SomeRegister));
     patchpoint->append(ConstrainedValue(arg2, ValueRep::SomeRegister));
-    patchpoint->resultConstraint = ValueRep::reg(GPRInfo::nonArgGPR0);
+    patchpoint->resultConstraints = { ValueRep::reg(GPRInfo::nonArgGPR0) };
     patchpoint->setGenerator(
         [&] (CCallHelpers& jit, const StackmapGenerationParams& params) {
             AllowMacroScratchRegisterUsage allowScratch(jit);
@@ -89,7 +89,7 @@ void testPatchpointWithStackArgumentResult()
     PatchpointValue* patchpoint = root->appendNew<PatchpointValue>(proc, Int32, Origin());
     patchpoint->append(ConstrainedValue(arg1, ValueRep::SomeRegister));
     patchpoint->append(ConstrainedValue(arg2, ValueRep::SomeRegister));
-    patchpoint->resultConstraint = ValueRep::stackArgument(0);
+    patchpoint->resultConstraints = { ValueRep::stackArgument(0) };
     patchpoint->clobber(RegisterSet::macroScratchRegisters());
     patchpoint->setGenerator(
         [&] (CCallHelpers& jit, const StackmapGenerationParams& params) {
@@ -115,7 +115,7 @@ void testPatchpointWithAnyResult()
     PatchpointValue* patchpoint = root->appendNew<PatchpointValue>(proc, Double, Origin());
     patchpoint->append(ConstrainedValue(arg1, ValueRep::SomeRegister));
     patchpoint->append(ConstrainedValue(arg2, ValueRep::SomeRegister));
-    patchpoint->resultConstraint = ValueRep::WarmAny;
+    patchpoint->resultConstraints = { ValueRep::WarmAny };
     patchpoint->clobberLate(RegisterSet::allFPRs());
     patchpoint->clobber(RegisterSet::macroScratchRegisters());
     patchpoint->clobber(RegisterSet(GPRInfo::regT0));
index d316cb7..05c7158 100644 (file)
@@ -1420,7 +1420,7 @@ void testPatchpointDoubleRegs()
 
     PatchpointValue* patchpoint = root->appendNew<PatchpointValue>(proc, Double, Origin());
     patchpoint->append(arg, ValueRep(FPRInfo::fpRegT0));
-    patchpoint->resultConstraint = ValueRep(FPRInfo::fpRegT0);
+    patchpoint->resultConstraints = { ValueRep(FPRInfo::fpRegT0) };
 
     unsigned numCalls = 0;
     patchpoint->setGenerator(
@@ -2273,7 +2273,7 @@ void testSomeEarlyRegister()
         BasicBlock* root = proc.addBlock();
     
         PatchpointValue* patchpoint = root->appendNew<PatchpointValue>(proc, Int32, Origin());
-        patchpoint->resultConstraint = ValueRep::reg(GPRInfo::returnValueGPR);
+        patchpoint->resultConstraints = { ValueRep::reg(GPRInfo::returnValueGPR) };
         bool ranFirstPatchpoint = false;
         patchpoint->setGenerator(
             [&] (CCallHelpers&, const StackmapGenerationParams& params) {
@@ -2286,7 +2286,7 @@ void testSomeEarlyRegister()
         patchpoint = root->appendNew<PatchpointValue>(proc, Int32, Origin());
         patchpoint->appendSomeRegister(arg);
         if (succeed)
-            patchpoint->resultConstraint = ValueRep::SomeEarlyRegister;
+            patchpoint->resultConstraints = { ValueRep::SomeEarlyRegister };
         bool ranSecondPatchpoint = false;
         unsigned optLevel = proc.optLevel();
         patchpoint->setGenerator(
index 7dece0d..1462843 100644 (file)
@@ -1363,7 +1363,7 @@ void testShuffleDoesntTrashCalleeSaves()
         PatchpointValue* patchpoint = root->appendNew<PatchpointValue>(proc, Int32, Origin());
         patchpoint->clobber(RegisterSet::macroScratchRegisters());
         RELEASE_ASSERT(reg.isGPR());
-        patchpoint->resultConstraint = ValueRep::reg(reg.gpr());
+        patchpoint->resultConstraints = { ValueRep::reg(reg.gpr()) };
         patchpoint->setGenerator(
             [=] (CCallHelpers& jit, const StackmapGenerationParams& params) {
                 AllowMacroScratchRegisterUsage allowScratch(jit);
@@ -1383,7 +1383,7 @@ void testShuffleDoesntTrashCalleeSaves()
 
     PatchpointValue* ptr = root->appendNew<PatchpointValue>(proc, Int64, Origin());
     ptr->clobber(RegisterSet::macroScratchRegisters());
-    ptr->resultConstraint = ValueRep::reg(GPRInfo::regCS0);
+    ptr->resultConstraints = { ValueRep::reg(GPRInfo::regCS0) };
     ptr->appendSomeRegister(arg1);
     ptr->setGenerator(
         [=] (CCallHelpers& jit, const StackmapGenerationParams& params) {
@@ -1491,7 +1491,7 @@ void testReportUsedRegistersLateUseFollowedByEarlyDefDoesNotMarkUseAsDead()
 
     {
         PatchpointValue* patchpoint = root->appendNew<PatchpointValue>(proc, Int32, Origin());
-        patchpoint->resultConstraint = ValueRep::SomeEarlyRegister;
+        patchpoint->resultConstraints = { ValueRep::SomeEarlyRegister };
         patchpoint->setGenerator([&] (CCallHelpers&, const StackmapGenerationParams& params) {
             RELEASE_ASSERT(allRegs.contains(params[0].gpr()));
         });
@@ -1547,4 +1547,277 @@ void testInfiniteLoopDoesntCauseBadHoisting()
     invoke<void>(*code, static_cast<uint64_t>(55)); // Shouldn't crash dereferencing 55.
 }
 
+static void testSimpleTuplePair(unsigned first, int64_t second)
+{
+    Procedure proc;
+    BasicBlock* root = proc.addBlock();
+
+    PatchpointValue* patchpoint = root->appendNew<PatchpointValue>(proc, proc.addTuple({ Int32, Int64 }), Origin());
+    patchpoint->clobber(RegisterSet::macroScratchRegisters());
+    patchpoint->resultConstraints = { ValueRep::SomeRegister, ValueRep::SomeRegister };
+    patchpoint->setGenerator([&] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+        AllowMacroScratchRegisterUsage allowScratch(jit);
+        jit.move(CCallHelpers::TrustedImm32(first), params[0].gpr());
+        jit.move(CCallHelpers::TrustedImm64(second), params[1].gpr());
+    });
+    Value* i32 = root->appendNew<Value>(proc, ZExt32, Origin(),
+        root->appendNew<ExtractValue>(proc, Origin(), Int32, patchpoint, 0));
+    Value* i64 = root->appendNew<ExtractValue>(proc, Origin(), Int64, patchpoint, 1);
+    root->appendNew<Value>(proc, Return, Origin(), root->appendNew<Value>(proc, Add, Origin(), i32, i64));
+
+    CHECK_EQ(compileAndRun<int64_t>(proc), first + second);
+}
+
+static void testSimpleTuplePairUnused(unsigned first, int64_t second)
+{
+    Procedure proc;
+    BasicBlock* root = proc.addBlock();
+
+    PatchpointValue* patchpoint = root->appendNew<PatchpointValue>(proc, proc.addTuple({ Int32, Int64, Double }), Origin());
+    patchpoint->clobber(RegisterSet::macroScratchRegisters());
+    patchpoint->resultConstraints = { ValueRep::SomeRegister, ValueRep::SomeRegister, ValueRep::SomeRegister };
+    patchpoint->setGenerator([&] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+        AllowMacroScratchRegisterUsage allowScratch(jit);
+        jit.move(CCallHelpers::TrustedImm32(first), params[0].gpr());
+        jit.move(CCallHelpers::TrustedImm64(second), params[1].gpr());
+        jit.moveDouble(CCallHelpers::Imm64(bitwise_cast<uint64_t>(0.0)), params[2].fpr());
+    });
+    Value* i32 = root->appendNew<Value>(proc, ZExt32, Origin(),
+        root->appendNew<ExtractValue>(proc, Origin(), Int32, patchpoint, 0));
+    Value* i64 = root->appendNew<ExtractValue>(proc, Origin(), Int64, patchpoint, 1);
+    root->appendNew<Value>(proc, Return, Origin(), root->appendNew<Value>(proc, Add, Origin(), i32, i64));
+
+    CHECK_EQ(compileAndRun<int64_t>(proc), first + second);
+}
+
+static void testSimpleTuplePairStack(unsigned first, int64_t second)
+{
+    Procedure proc;
+    BasicBlock* root = proc.addBlock();
+
+    PatchpointValue* patchpoint = root->appendNew<PatchpointValue>(proc, proc.addTuple({ Int32, Int64 }), Origin());
+    patchpoint->clobber(RegisterSet::macroScratchRegisters());
+    patchpoint->resultConstraints = { ValueRep::SomeRegister, ValueRep::stackArgument(0) };
+    patchpoint->setGenerator([&] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+        AllowMacroScratchRegisterUsage allowScratch(jit);
+        jit.move(CCallHelpers::TrustedImm32(first), params[0].gpr());
+        jit.store64(CCallHelpers::TrustedImm64(second), CCallHelpers::Address(CCallHelpers::framePointerRegister, params[1].offsetFromFP()));
+    });
+    Value* i32 = root->appendNew<Value>(proc, ZExt32, Origin(),
+        root->appendNew<ExtractValue>(proc, Origin(), Int32, patchpoint, 0));
+    Value* i64 = root->appendNew<ExtractValue>(proc, Origin(), Int64, patchpoint, 1);
+    root->appendNew<Value>(proc, Return, Origin(), root->appendNew<Value>(proc, Add, Origin(), i32, i64));
+
+    CHECK_EQ(compileAndRun<int64_t>(proc), first + second);
+}
+
+template<bool shouldFixSSA>
+static void tailDupedTuplePair(unsigned first, double second)
+{
+    Procedure proc;
+    BasicBlock* root = proc.addBlock();
+    BasicBlock* truthy = proc.addBlock();
+    BasicBlock* falsey = proc.addBlock();
+
+    Type tupleType = proc.addTuple({ Int32, Double });
+    Variable* var = proc.addVariable(tupleType);
+
+    Value* test = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR0);
+    PatchpointValue* patchpoint = root->appendNew<PatchpointValue>(proc, tupleType, Origin());
+    patchpoint->clobber(RegisterSet::macroScratchRegisters());
+    patchpoint->resultConstraints = { ValueRep::SomeRegister, ValueRep::stackArgument(0) };
+    patchpoint->setGenerator([&] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+        AllowMacroScratchRegisterUsage allowScratch(jit);
+        jit.move(CCallHelpers::TrustedImm32(first), params[0].gpr());
+        jit.store64(CCallHelpers::TrustedImm64(bitwise_cast<uint64_t>(second)), CCallHelpers::Address(CCallHelpers::framePointerRegister, params[1].offsetFromFP()));
+    });
+    root->appendNew<VariableValue>(proc, Set, Origin(), var, patchpoint);
+    root->appendNewControlValue(proc, Branch, Origin(), test, FrequentedBlock(truthy), FrequentedBlock(falsey));
+
+    auto addDup = [&] (BasicBlock* block) {
+        Value* tuple = block->appendNew<VariableValue>(proc, B3::Get, Origin(), var);
+        Value* i32 = block->appendNew<Value>(proc, ZExt32, Origin(),
+            block->appendNew<ExtractValue>(proc, Origin(), Int32, tuple, 0));
+        i32 = block->appendNew<Value>(proc, IToD, Origin(), i32);
+        Value* f64 = block->appendNew<ExtractValue>(proc, Origin(), Double, tuple, 1);
+        block->appendNew<Value>(proc, Return, Origin(), block->appendNew<Value>(proc, Add, Origin(), i32, f64));
+    };
+
+    addDup(truthy);
+    addDup(falsey);
+
+    proc.resetReachability();
+    if (shouldFixSSA)
+        fixSSA(proc);
+    CHECK_EQ(compileAndRun<double>(proc, first), first + second);
+}
+
+template<bool shouldFixSSA>
+static void tuplePairVariableLoop(unsigned first, uint64_t second)
+{
+    Procedure proc;
+    BasicBlock* root = proc.addBlock();
+    BasicBlock* body = proc.addBlock();
+    BasicBlock* exit = proc.addBlock();
+
+    Type tupleType = proc.addTuple({ Int32, Int64 });
+    Variable* var = proc.addVariable(tupleType);
+
+    {
+        Value* first = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR0);
+        Value* second = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR1);
+        PatchpointValue* patchpoint = root->appendNew<PatchpointValue>(proc, tupleType, Origin());
+        patchpoint->append({ first, ValueRep::SomeRegister });
+        patchpoint->append({ second, ValueRep::SomeRegister });
+        patchpoint->resultConstraints = { ValueRep::SomeEarlyRegister, ValueRep::SomeEarlyRegister };
+        patchpoint->setGenerator([&] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+            jit.move(params[2].gpr(), params[0].gpr());
+            jit.move(params[3].gpr(), params[1].gpr());
+        });
+        root->appendNew<VariableValue>(proc, Set, Origin(), var, patchpoint);
+        root->appendNewControlValue(proc, Jump, Origin(), body);
+    }
+
+    {
+        Value* tuple = body->appendNew<VariableValue>(proc, B3::Get, Origin(), var);
+        Value* first = body->appendNew<ExtractValue>(proc, Origin(), Int32, tuple, 0);
+        Value* second = body->appendNew<ExtractValue>(proc, Origin(), Int64, tuple, 1);
+        PatchpointValue* patchpoint = body->appendNew<PatchpointValue>(proc, tupleType, Origin());
+        patchpoint->clobber(RegisterSet::macroScratchRegisters());
+        patchpoint->append({ first, ValueRep::SomeRegister });
+        patchpoint->append({ second, ValueRep::SomeRegister });
+        patchpoint->resultConstraints = { ValueRep::SomeEarlyRegister, ValueRep::stackArgument(0) };
+        patchpoint->setGenerator([&] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+            AllowMacroScratchRegisterUsage allowScratch(jit);
+            CHECK(params[3].gpr() != params[0].gpr());
+            CHECK(params[2].gpr() != params[0].gpr());
+            jit.add64(CCallHelpers::TrustedImm32(1), params[3].gpr(), params[0].gpr());
+            jit.store64(params[0].gpr(), CCallHelpers::Address(CCallHelpers::framePointerRegister, params[1].offsetFromFP()));
+
+            jit.move(params[2].gpr(), params[0].gpr());
+            jit.urshift32(CCallHelpers::TrustedImm32(1), params[0].gpr());
+        });
+        body->appendNew<VariableValue>(proc, Set, Origin(), var, patchpoint);
+        Value* condition = body->appendNew<ExtractValue>(proc, Origin(), Int32, patchpoint, 0);
+        body->appendNewControlValue(proc, Branch, Origin(), condition, FrequentedBlock(body), FrequentedBlock(exit));
+    }
+
+    {
+        Value* tuple = exit->appendNew<VariableValue>(proc, B3::Get, Origin(), var);
+        Value* second = exit->appendNew<ExtractValue>(proc, Origin(), Int64, tuple, 1);
+        exit->appendNew<Value>(proc, Return, Origin(), second);
+    }
+
+    proc.resetReachability();
+    validate(proc);
+    if (shouldFixSSA)
+        fixSSA(proc);
+    CHECK_EQ(compileAndRun<uint64_t>(proc, first, second), second + (first ? getMSBSet(first) : first) + 1);
+}
+
+template<bool shouldFixSSA>
+static void tupleNestedLoop(int32_t first, double second)
+{
+    Procedure proc;
+    BasicBlock* root = proc.addBlock();
+    BasicBlock* outerLoop = proc.addBlock();
+    BasicBlock* innerLoop = proc.addBlock();
+    BasicBlock* outerContinuation = proc.addBlock();
+
+    Type tupleType = proc.addTuple({ Int32, Double, Int32 });
+    Variable* varOuter = proc.addVariable(tupleType);
+    Variable* varInner = proc.addVariable(tupleType);
+    Variable* tookInner = proc.addVariable(Int32);
+
+    {
+        Value* first = root->appendNew<ArgumentRegValue>(proc, Origin(), GPRInfo::argumentGPR0);
+        Value* second = root->appendNew<ArgumentRegValue>(proc, Origin(), FPRInfo::argumentFPR0);
+        PatchpointValue* patchpoint = root->appendNew<PatchpointValue>(proc, tupleType, Origin());
+        patchpoint->append({ first, ValueRep::SomeRegisterWithClobber });
+        patchpoint->append({ second, ValueRep::SomeRegisterWithClobber });
+        patchpoint->resultConstraints = { ValueRep::SomeRegister, ValueRep::SomeRegister, ValueRep::SomeEarlyRegister };
+        patchpoint->setGenerator([&] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+            jit.move(params[3].gpr(), params[0].gpr());
+            jit.move(params[0].gpr(), params[2].gpr());
+            jit.move(params[4].fpr(), params[1].fpr());
+        });
+        root->appendNew<VariableValue>(proc, Set, Origin(), varOuter, patchpoint);
+        root->appendNew<VariableValue>(proc, Set, Origin(), tookInner, root->appendIntConstant(proc, Origin(), Int32, 0));
+        root->appendNewControlValue(proc, Jump, Origin(), outerLoop);
+    }
+
+    {
+        Value* tuple = outerLoop->appendNew<VariableValue>(proc, B3::Get, Origin(), varOuter);
+        Value* first = outerLoop->appendNew<ExtractValue>(proc, Origin(), Int32, tuple, 0);
+        Value* second = outerLoop->appendNew<ExtractValue>(proc, Origin(), Double, tuple, 1);
+        Value* third = outerLoop->appendNew<VariableValue>(proc, B3::Get, Origin(), tookInner);
+        PatchpointValue* patchpoint = outerLoop->appendNew<PatchpointValue>(proc, tupleType, Origin());
+        patchpoint->clobber(RegisterSet::macroScratchRegisters());
+        patchpoint->append({ first, ValueRep::SomeRegisterWithClobber });
+        patchpoint->append({ second, ValueRep::SomeRegisterWithClobber });
+        patchpoint->append({ third, ValueRep::SomeRegisterWithClobber });
+        patchpoint->resultConstraints = { ValueRep::SomeRegister, ValueRep::SomeRegister, ValueRep::SomeRegister };
+        patchpoint->setGenerator([&] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+            AllowMacroScratchRegisterUsage allowScratch(jit);
+            jit.move(params[3].gpr(), params[0].gpr());
+            jit.moveConditionally32(CCallHelpers::Equal, params[5].gpr(), CCallHelpers::TrustedImm32(0), params[0].gpr(), params[5].gpr(), params[2].gpr());
+            jit.move(params[4].fpr(), params[1].fpr());
+        });
+        outerLoop->appendNew<VariableValue>(proc, Set, Origin(), varOuter, patchpoint);
+        outerLoop->appendNew<VariableValue>(proc, Set, Origin(), varInner, patchpoint);
+        Value* condition = outerLoop->appendNew<ExtractValue>(proc, Origin(), Int32, patchpoint, 2);
+        outerLoop->appendNewControlValue(proc, Branch, Origin(), condition, FrequentedBlock(outerContinuation), FrequentedBlock(innerLoop));
+    }
+
+    {
+        Value* tuple = innerLoop->appendNew<VariableValue>(proc, B3::Get, Origin(), varInner);
+        Value* first = innerLoop->appendNew<ExtractValue>(proc, Origin(), Int32, tuple, 0);
+        Value* second = innerLoop->appendNew<ExtractValue>(proc, Origin(), Double, tuple, 1);
+        PatchpointValue* patchpoint = innerLoop->appendNew<PatchpointValue>(proc, tupleType, Origin());
+        patchpoint->clobber(RegisterSet::macroScratchRegisters());
+        patchpoint->append({ first, ValueRep::SomeRegisterWithClobber });
+        patchpoint->append({ second, ValueRep::SomeRegisterWithClobber });
+        patchpoint->resultConstraints = { ValueRep::SomeRegister, ValueRep::SomeRegister, ValueRep::SomeEarlyRegister };
+        patchpoint->setGenerator([&] (CCallHelpers& jit, const StackmapGenerationParams& params) {
+            AllowMacroScratchRegisterUsage allowScratch(jit);
+            jit.move(params[3].gpr(), params[0].gpr());
+            jit.move(CCallHelpers::TrustedImm32(0), params[2].gpr());
+            jit.move(params[4].fpr(), params[1].fpr());
+        });
+        innerLoop->appendNew<VariableValue>(proc, Set, Origin(), varOuter, patchpoint);
+        innerLoop->appendNew<VariableValue>(proc, Set, Origin(), varInner, patchpoint);
+        Value* condition = innerLoop->appendNew<ExtractValue>(proc, Origin(), Int32, patchpoint, 2);
+        innerLoop->appendNew<VariableValue>(proc, Set, Origin(), tookInner, innerLoop->appendIntConstant(proc, Origin(), Int32, 1));
+        innerLoop->appendNewControlValue(proc, Branch, Origin(), condition, FrequentedBlock(innerLoop), FrequentedBlock(outerLoop));
+    }
+
+    {
+        Value* tuple = outerContinuation->appendNew<VariableValue>(proc, B3::Get, Origin(), varInner);
+        Value* first = outerContinuation->appendNew<ExtractValue>(proc, Origin(), Int32, tuple, 0);
+        Value* second = outerContinuation->appendNew<ExtractValue>(proc, Origin(), Double, tuple, 1);
+        Value* result = outerContinuation->appendNew<Value>(proc, Add, Origin(), second, outerContinuation->appendNew<Value>(proc, IToD, Origin(), first));
+        outerContinuation->appendNewControlValue(proc, Return, Origin(), result);
+    }
+
+    proc.resetReachability();
+    validate(proc);
+    if (shouldFixSSA)
+        fixSSA(proc);
+    CHECK_EQ(compileAndRun<double>(proc, first, second), first + second);
+}
+
+void addTupleTests(const char* filter, Deque<RefPtr<SharedTask<void()>>>& tasks)
+{
+    RUN_BINARY(testSimpleTuplePair, int32Operands(), int64Operands());
+    RUN_BINARY(testSimpleTuplePairUnused, int32Operands(), int64Operands());
+    RUN_BINARY(testSimpleTuplePairStack, int32Operands(), int64Operands());
+    // Use int64 operands for the double-typed second argument: checking for NaN equality is annoying and doesn't really matter for these tests.
+    RUN_BINARY(tailDupedTuplePair<true>, int32Operands(), int64Operands());
+    RUN_BINARY(tailDupedTuplePair<false>, int32Operands(), int64Operands());
+    RUN_BINARY(tuplePairVariableLoop<true>, int32Operands(), int64Operands());
+    RUN_BINARY(tuplePairVariableLoop<false>, int32Operands(), int64Operands());
+    RUN_BINARY(tupleNestedLoop<true>, int32Operands(), int64Operands());
+    RUN_BINARY(tupleNestedLoop<false>, int32Operands(), int64Operands());
+}
+
 #endif // ENABLE(B3_JIT)
index 9f6727c..76c8366 100644
@@ -31,8 +31,8 @@
 template<typename T>
 void testAtomicWeakCAS()
 {
-    Type type = NativeTraits<T>::type;
-    Width width = NativeTraits<T>::width;
+    constexpr Type type = NativeTraits<T>::type;
+    constexpr Width width = NativeTraits<T>::width;
 
     auto checkMyDisassembly = [&] (Compilation& compilation, bool fenced) {
         if (isX86()) {
@@ -278,8 +278,8 @@ void testAtomicWeakCAS()
 template<typename T>
 void testAtomicStrongCAS()
 {
-    Type type = NativeTraits<T>::type;
-    Width width = NativeTraits<T>::width;
+    constexpr Type type = NativeTraits<T>::type;
+    constexpr Width width = NativeTraits<T>::width;
 
     auto checkMyDisassembly = [&] (Compilation& compilation, bool fenced) {
         if (isX86()) {
@@ -547,8 +547,8 @@ void testAtomicStrongCAS()
 template<typename T>
 void testAtomicXchg(B3::Opcode opcode)
 {
-    Type type = NativeTraits<T>::type;
-    Width width = NativeTraits<T>::width;
+    constexpr Type type = NativeTraits<T>::type;
+    constexpr Width width = NativeTraits<T>::width;
 
     auto doTheMath = [&] (T& memory, T operand) -> T {
         T oldValue = memory;
@@ -716,8 +716,8 @@ void addAtomicTests(const char* filter, Deque<RefPtr<SharedTask<void()>>>& tasks
     RUN(testAtomicXchg<int64_t>(AtomicXchg));
 }
 
-template<B3::Type type, typename CType, typename InputType>
-void testLoad(B3::Opcode opcode, InputType value)
+template<typename CType, typename InputType>
+void testLoad(B3::Type type, B3::Opcode opcode, InputType value)
 {
     // Simple load from an absolute address.
     {
@@ -806,29 +806,29 @@ void testLoad(B3::Opcode opcode, InputType value)
 template<typename T>
 void testLoad(B3::Opcode opcode, int32_t value)
 {
-    return testLoad<Int32, T>(opcode, value);
+    return testLoad<T>(B3::Int32, opcode, value);
 }
 
-template<B3::Type type, typename T>
-void testLoad(T value)
+template<typename T>
+void testLoad(B3::Type type, T value)
 {
-    return testLoad<type, T>(Load, value);
+    return testLoad<T>(type, Load, value);
 }
 
 void addLoadTests(const char* filter, Deque<RefPtr<SharedTask<void()>>>& tasks)
 {
-    RUN(testLoad<Int32>(60));
-    RUN(testLoad<Int32>(-60));
-    RUN(testLoad<Int32>(1000));
-    RUN(testLoad<Int32>(-1000));
-    RUN(testLoad<Int32>(1000000));
-    RUN(testLoad<Int32>(-1000000));
-    RUN(testLoad<Int32>(1000000000));
-    RUN(testLoad<Int32>(-1000000000));
-    RUN_UNARY(testLoad<Int64>, int64Operands());
-    RUN_UNARY(testLoad<Float>, floatingPointOperands<float>());
-    RUN_UNARY(testLoad<Double>, floatingPointOperands<double>());
-    
+    RUN(testLoad(Int32, 60));
+    RUN(testLoad(Int32, -60));
+    RUN(testLoad(Int32, 1000));
+    RUN(testLoad(Int32, -1000));
+    RUN(testLoad(Int32, 1000000));
+    RUN(testLoad(Int32, -1000000));
+    RUN(testLoad(Int32, 1000000000));
+    RUN(testLoad(Int32, -1000000000));
+    RUN_BINARY(testLoad, { MAKE_OPERAND(Int64) }, int64Operands());
+    RUN_BINARY(testLoad, { MAKE_OPERAND(Float) }, floatingPointOperands<float>());
+    RUN_BINARY(testLoad, { MAKE_OPERAND(Double) }, floatingPointOperands<double>());
+
     RUN(testLoad<int8_t>(Load8S, 60));
     RUN(testLoad<int8_t>(Load8S, -60));
     RUN(testLoad<int8_t>(Load8S, 1000));
index 7a64d4d..b4184e7 100644
@@ -32,7 +32,7 @@
 namespace JSC { namespace B3 {
 class BasicBlock;
 class Value;
-enum Type : int8_t;
+class Type;
 } }
 
 namespace JSC { namespace FTL {
index 8213570..ac1adbb 100644
@@ -7890,7 +7890,7 @@ private:
         patchpoint->append(m_tagTypeNumber, ValueRep::reg(GPRInfo::tagTypeNumberRegister));
         patchpoint->clobber(RegisterSet::macroScratchRegisters());
         patchpoint->clobberLate(RegisterSet::volatileRegistersForJSCall());
-        patchpoint->resultConstraint = ValueRep::reg(GPRInfo::returnValueGPR);
+        patchpoint->resultConstraints = { ValueRep::reg(GPRInfo::returnValueGPR) };
 
         CodeOrigin codeOrigin = codeOriginDescriptionOfCallSite();
         State* state = &m_ftlState;
@@ -8009,7 +8009,7 @@ private:
         patchpoint->clobber(RegisterSet::macroScratchRegisters());
         if (!isTail) {
             patchpoint->clobberLate(RegisterSet::volatileRegistersForJSCall());
-            patchpoint->resultConstraint = ValueRep::reg(GPRInfo::returnValueGPR);
+            patchpoint->resultConstraints = { ValueRep::reg(GPRInfo::returnValueGPR) };
         }
         
         CodeOrigin codeOrigin = codeOriginDescriptionOfCallSite();
@@ -8330,7 +8330,7 @@ private:
 
         patchpoint->clobber(RegisterSet::macroScratchRegisters());
         patchpoint->clobber(RegisterSet::volatileRegistersForJSCall()); // No inputs will be in a volatile register.
-        patchpoint->resultConstraint = ValueRep::reg(GPRInfo::returnValueGPR);
+        patchpoint->resultConstraints = { ValueRep::reg(GPRInfo::returnValueGPR) };
 
         patchpoint->numGPScratchRegisters = 0;
 
@@ -8632,7 +8632,7 @@ private:
 
         patchpoint->clobber(RegisterSet::macroScratchRegisters());
         patchpoint->clobberLate(RegisterSet::volatileRegistersForJSCall());
-        patchpoint->resultConstraint = ValueRep::reg(GPRInfo::returnValueGPR);
+        patchpoint->resultConstraints = { ValueRep::reg(GPRInfo::returnValueGPR) };
 
         // This is the minimum amount of call arg area stack space that all JS->JS calls always have.
         unsigned minimumJSCallAreaSize =
@@ -8889,7 +8889,7 @@ private:
         patchpoint->append(m_tagTypeNumber, ValueRep::reg(GPRInfo::tagTypeNumberRegister));
         patchpoint->clobber(RegisterSet::macroScratchRegisters());
         patchpoint->clobberLate(RegisterSet::volatileRegistersForJSCall());
-        patchpoint->resultConstraint = ValueRep::reg(GPRInfo::returnValueGPR);
+        patchpoint->resultConstraints = { ValueRep::reg(GPRInfo::returnValueGPR) };
         
         CodeOrigin codeOrigin = codeOriginDescriptionOfCallSite();
         State* state = &m_ftlState;
@@ -9543,7 +9543,7 @@ private:
             patchpoint->effects = Effects::forCall();
             patchpoint->clobber(RegisterSet { X86Registers::eax, X86Registers::edx });
             // The low 32-bits of rdtsc go into rax.
-            patchpoint->resultConstraint = ValueRep::reg(X86Registers::eax);
+            patchpoint->resultConstraints = { ValueRep::reg(X86Registers::eax) };
             patchpoint->setGenerator( [=] (CCallHelpers& jit, const B3::StackmapGenerationParams&) {
                 jit.rdtsc();
             });
@@ -10653,7 +10653,7 @@ private:
         patchpoint->append(m_tagMask, ValueRep::lateReg(GPRInfo::tagMaskRegister));
         patchpoint->append(m_tagTypeNumber, ValueRep::lateReg(GPRInfo::tagTypeNumberRegister));
         patchpoint->numGPScratchRegisters = 2;
-        patchpoint->resultConstraint = ValueRep::SomeEarlyRegister;
+        patchpoint->resultConstraints = { ValueRep::SomeEarlyRegister };
         patchpoint->clobber(RegisterSet::macroScratchRegisters());
         
         RefPtr<PatchpointExceptionHandle> exceptionHandle =
@@ -12635,7 +12635,7 @@ private:
         patchpoint->clobber(RegisterSet::macroScratchRegisters());
         patchpoint->numGPScratchRegisters = domJIT->numGPScratchRegisters;
         patchpoint->numFPScratchRegisters = domJIT->numFPScratchRegisters;
-        patchpoint->resultConstraint = ValueRep::SomeEarlyRegister;
+        patchpoint->resultConstraints = { ValueRep::SomeEarlyRegister };
 
         State* state = &m_ftlState;
         Node* node = m_node;
@@ -13241,7 +13241,7 @@ private:
         if (scratchFPRUsage == NeedScratchFPR)
             patchpoint->numFPScratchRegisters++;
         patchpoint->clobber(RegisterSet::macroScratchRegisters());
-        patchpoint->resultConstraint = ValueRep::SomeEarlyRegister;
+        patchpoint->resultConstraints = { ValueRep::SomeEarlyRegister };
         State* state = &m_ftlState;
         patchpoint->setGenerator(
             [=] (CCallHelpers& jit, const StackmapGenerationParams& params) {
@@ -13304,7 +13304,7 @@ private:
             preparePatchpointForExceptions(patchpoint);
         patchpoint->numGPScratchRegisters = 1;
         patchpoint->clobber(RegisterSet::macroScratchRegisters());
-        patchpoint->resultConstraint = ValueRep::SomeEarlyRegister;
+        patchpoint->resultConstraints = { ValueRep::SomeEarlyRegister };
         State* state = &m_ftlState;
         patchpoint->setGenerator(
             [=] (CCallHelpers& jit, const StackmapGenerationParams& params) {
@@ -13360,7 +13360,7 @@ private:
         patchpoint->numGPScratchRegisters = 1;
         patchpoint->numFPScratchRegisters = 1;
         patchpoint->clobber(RegisterSet::macroScratchRegisters());
-        patchpoint->resultConstraint = ValueRep::SomeEarlyRegister;
+        patchpoint->resultConstraints = { ValueRep::SomeEarlyRegister };
         State* state = &m_ftlState;
         patchpoint->setGenerator(
             [=] (CCallHelpers& jit, const StackmapGenerationParams& params) {
@@ -13441,7 +13441,7 @@ private:
         else
             patchpoint->appendSomeRegisterWithClobber(allocator);
         patchpoint->numGPScratchRegisters++;
-        patchpoint->resultConstraint = ValueRep::SomeEarlyRegister;
+        patchpoint->resultConstraints = { ValueRep::SomeEarlyRegister };
         
         m_out.appendSuccessor(usually(continuation));
         m_out.appendSuccessor(rarely(slowPath));
index 91e8e59..24f18ad 100644
@@ -423,10 +423,10 @@ private:
         Inst resultMov;
         if (result) {
             ASSERT(patch->type() != B3::Void);
-            switch (patch->resultConstraint.kind()) {
+            switch (patch->resultConstraints[0].kind()) {
             case B3::ValueRep::Register:
-                inst.args.append(Tmp(patch->resultConstraint.reg()));
-                resultMov = Inst(result.isGP() ? Move : MoveDouble, nullptr, Tmp(patch->resultConstraint.reg()), result);
+                inst.args.append(Tmp(patch->resultConstraints[0].reg()));
+                resultMov = Inst(result.isGP() ? Move : MoveDouble, nullptr, Tmp(patch->resultConstraints[0].reg()), result);
                 break;
             case B3::ValueRep::SomeRegister:
                 inst.args.append(result);
@@ -464,8 +464,8 @@ private:
             }
         }
 
-        if (patch->resultConstraint.isReg())
-            patch->lateClobbered().clear(patch->resultConstraint.reg());
+        if (patch->resultConstraints[0].isReg())
+            patch->lateClobbered().clear(patch->resultConstraints[0].reg());
         for (unsigned i = patch->numGPScratchRegisters; i--;)
             inst.args.append(g64().tmp());
         for (unsigned i = patch->numFPScratchRegisters; i--;)
index 99967fd..2094177 100644
@@ -409,7 +409,7 @@ B3IRGenerator::B3IRGenerator(const ModuleInformation& info, Procedure& procedure
             // This prevents us from using ArgumentReg to this (logically) immutable pinned register.
             stackOverflowCheck->effects.writesPinned = false;
             stackOverflowCheck->effects.readsPinned = true;
-            stackOverflowCheck->resultConstraint = ValueRep::reg(m_wasmContextInstanceGPR);
+            stackOverflowCheck->resultConstraints = { ValueRep::reg(m_wasmContextInstanceGPR) };
         }
         stackOverflowCheck->numGPScratchRegisters = 2;
         stackOverflowCheck->setGenerator([=] (CCallHelpers& jit, const B3::StackmapGenerationParams& params) {
index 8192621..b442b61 100644
@@ -71,7 +71,7 @@ private:
 
     B3::ValueRep marshallArgument(B3::Type type, size_t& gpArgumentCount, size_t& fpArgumentCount, size_t& stackOffset) const
     {
-        switch (type) {
+        switch (type.kind()) {
         case B3::Int32:
         case B3::Int64:
             return marshallArgumentImpl(m_gprArgs, gpArgumentCount, stackOffset);
@@ -79,7 +79,9 @@ private:
         case B3::Double:
             return marshallArgumentImpl(m_fprArgs, fpArgumentCount, stackOffset);
         case B3::Void:
+        case B3::Tuple:
             break;
+
         }
         RELEASE_ASSERT_NOT_REACHED();
     }
@@ -92,7 +94,7 @@ public:
         static_assert(CallFrameSlot::codeBlock * sizeof(Register) < headerSize, "We rely on this here for now.");
 
         B3::PatchpointValue* getCalleePatchpoint = block->appendNew<B3::PatchpointValue>(proc, B3::Int64, origin);
-        getCalleePatchpoint->resultConstraint = B3::ValueRep::SomeRegister;
+        getCalleePatchpoint->resultConstraints = { B3::ValueRep::SomeRegister };
         getCalleePatchpoint->effects = B3::Effects::none();
         getCalleePatchpoint->setGenerator(
             [=] (CCallHelpers& jit, const B3::StackmapGenerationParams& params) {
@@ -169,16 +171,19 @@ public:
         patchpointFunctor(patchpoint);
         patchpoint->appendVector(constrainedArguments);
 
-        switch (returnType) {
+        switch (returnType.kind()) {
         case B3::Void:
             return nullptr;
         case B3::Float:
         case B3::Double:
-            patchpoint->resultConstraint = B3::ValueRep::reg(FPRInfo::returnValueFPR);
+            patchpoint->resultConstraints = { B3::ValueRep::reg(FPRInfo::returnValueFPR) };
             break;
         case B3::Int32:
         case B3::Int64:
-            patchpoint->resultConstraint = B3::ValueRep::reg(GPRInfo::returnValueGPR);
+            patchpoint->resultConstraints = { B3::ValueRep::reg(GPRInfo::returnValueGPR) };
+            break;
+        case B3::Tuple:
+            RELEASE_ASSERT_NOT_REACHED();
             break;
         }
         return patchpoint;
@@ -297,13 +302,13 @@ public:
             break;
         case Type::F32:
         case Type::F64:
-            patchpoint->resultConstraint = B3::ValueRep::reg(FPRInfo::returnValueFPR);
+            patchpoint->resultConstraints = { B3::ValueRep::reg(FPRInfo::returnValueFPR) };
             break;
         case Type::I32:
         case Type::I64:
         case Type::Anyref:
         case Wasm::Funcref:
-            patchpoint->resultConstraint = B3::ValueRep::reg(GPRInfo::returnValueGPR);
+            patchpoint->resultConstraints = { B3::ValueRep::reg(GPRInfo::returnValueGPR) };
             break;
         default:
             RELEASE_ASSERT_NOT_REACHED();