Polymorphic call inlining should be based on polymorphic call inline caching rather than logging
author     fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>    Mon, 2 Feb 2015 18:38:08 +0000 (18:38 +0000)
committer  fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>    Mon, 2 Feb 2015 18:38:08 +0000 (18:38 +0000)
https://bugs.webkit.org/show_bug.cgi?id=140660

Reviewed by Geoffrey Garen.

When we first implemented polymorphic call inlining, we did the profiling based on a call
edge log. The idea was to store each call edge (a tuple of call site and callee) into a
global log that was processed lazily. Processing the log would give precise counts of call
edges, and could be used to drive well-informed inlining decisions - polymorphic or not.
This was a speed-up on throughput tests but a slow-down for latency tests. It was a net win
nonetheless.
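
(For illustration only: a minimal sketch of such a call edge log, using made-up names rather than the actual JSC CallEdgeLog/CallEdgeProfile types. The fast path just appends to a buffer; the counting happens lazily, off the hot path.)

    // Hypothetical sketch of a lazily processed call edge log.
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct CallSite {};
    struct Callee {};

    class CallEdgeLog {
    public:
        // Called on each profiled call: record the edge and move on.
        void record(CallSite* site, Callee* callee) { m_log.push_back({ site, callee }); }

        // Called lazily (for example when the log fills up): fold the raw
        // entries into per-site callee counts that inlining decisions can read.
        void process()
        {
            for (const Entry& entry : m_log)
                m_counts[entry.site][entry.callee]++;
            m_log.clear();
        }

        uint32_t count(CallSite* site, Callee* callee) { return m_counts[site][callee]; }

    private:
        struct Entry {
            CallSite* site;
            Callee* callee;
        };
        std::vector<Entry> m_log;
        std::unordered_map<CallSite*, std::unordered_map<Callee*, uint32_t>> m_counts;
    };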

Experience with this code shows three things. First, the call edge profiler is buggy and
complex. It would take work to fix the bugs. Second, the call edge profiler incurs lots of
overhead for latency code that we care deeply about. Third, it's not at all clear that
having call edge counts for every possible callee is any better than just having call edge
counts for the limited number of callees that an inline cache would catch.

So, this patch removes the call edge profiler and replaces it with a polymorphic call inline
cache. If we miss the basic call inline cache, we inflate the cache to be a jump to an
out-of-line stub that cases on the previously known callees. If that misses again, then we
rewrite that stub to include the new callee. We do this up to some number of callees. If we
hit the limit then we switch to using a plain virtual call.
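
(Again purely illustrative: this models the decisions described above in plain C++ rather than as the machine-code stubs that jit/Repatch.cpp's linkPolymorphicCall actually emits. callDirect, callVirtual, and the callee limit of 10 are stand-ins, not JSC API.)

    // Sketch of the cache's state machine: hit an existing case, grow the stub
    // with a new case, or give up and go virtual once the callee limit is hit.
    #include <cstdio>
    #include <vector>

    struct Callee { const char* name; };

    // Stand-ins for jumping to a callee's compiled code and for the generic
    // virtual call path; in the real JIT both are machine-code targets.
    static void callDirect(Callee* callee)  { std::printf("direct call to %s\n", callee->name); }
    static void callVirtual(Callee* callee) { std::printf("virtual call to %s\n", callee->name); }

    class PolymorphicCallCache {
    public:
        static constexpr unsigned maxCallees = 10; // "some number of callees"; assumed value

        void call(Callee* callee)
        {
            if (m_gaveUp) {                          // limit was hit earlier: plain virtual call from now on
                callVirtual(callee);
                return;
            }
            for (Callee* known : m_callees) {
                if (known == callee) {               // hit one of the stub's existing cases
                    callDirect(callee);
                    return;
                }
            }
            if (m_callees.size() >= maxCallees) {    // too many distinct callees: switch to virtual
                m_gaveUp = true;
                callVirtual(callee);
                return;
            }
            m_callees.push_back(callee);             // "rewrite the stub" to include the new callee
            callDirect(callee);
        }

    private:
        std::vector<Callee*> m_callees;
        bool m_gaveUp { false };
    };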

Substantial speed-up on V8Spider; undoes the slow-down that the original call edge profiler
caused. Might be a SunSpider speed-up (below 1%), depending on hardware.

Rolling this back in after fixing https://bugs.webkit.org/show_bug.cgi?id=141107.

* CMakeLists.txt:
* JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
* JavaScriptCore.xcodeproj/project.pbxproj:
* bytecode/CallEdge.h:
(JSC::CallEdge::count):
(JSC::CallEdge::CallEdge):
* bytecode/CallEdgeProfile.cpp: Removed.
* bytecode/CallEdgeProfile.h: Removed.
* bytecode/CallEdgeProfileInlines.h: Removed.
* bytecode/CallLinkInfo.cpp:
(JSC::CallLinkInfo::unlink):
(JSC::CallLinkInfo::visitWeak):
* bytecode/CallLinkInfo.h:
* bytecode/CallLinkStatus.cpp:
(JSC::CallLinkStatus::CallLinkStatus):
(JSC::CallLinkStatus::computeFor):
(JSC::CallLinkStatus::computeFromCallLinkInfo):
(JSC::CallLinkStatus::isClosureCall):
(JSC::CallLinkStatus::makeClosureCall):
(JSC::CallLinkStatus::dump):
(JSC::CallLinkStatus::computeFromCallEdgeProfile): Deleted.
* bytecode/CallLinkStatus.h:
(JSC::CallLinkStatus::CallLinkStatus):
(JSC::CallLinkStatus::isSet):
(JSC::CallLinkStatus::variants):
(JSC::CallLinkStatus::size):
(JSC::CallLinkStatus::at):
(JSC::CallLinkStatus::operator[]):
(JSC::CallLinkStatus::canOptimize):
(JSC::CallLinkStatus::edges): Deleted.
(JSC::CallLinkStatus::canTrustCounts): Deleted.
* bytecode/CallVariant.cpp:
(JSC::variantListWithVariant):
(JSC::despecifiedVariantList):
* bytecode/CallVariant.h:
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::~CodeBlock):
(JSC::CodeBlock::linkIncomingPolymorphicCall):
(JSC::CodeBlock::unlinkIncomingCalls):
(JSC::CodeBlock::noticeIncomingCall):
* bytecode/CodeBlock.h:
(JSC::CodeBlock::isIncomingCallAlreadyLinked): Deleted.
* dfg/DFGAbstractInterpreterInlines.h:
(JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::addCallWithoutSettingResult):
(JSC::DFG::ByteCodeParser::handleCall):
(JSC::DFG::ByteCodeParser::handleInlining):
* dfg/DFGClobberize.h:
(JSC::DFG::clobberize):
* dfg/DFGConstantFoldingPhase.cpp:
(JSC::DFG::ConstantFoldingPhase::foldConstants):
* dfg/DFGDoesGC.cpp:
(JSC::DFG::doesGC):
* dfg/DFGDriver.cpp:
(JSC::DFG::compileImpl):
* dfg/DFGFixupPhase.cpp:
(JSC::DFG::FixupPhase::fixupNode):
* dfg/DFGNode.h:
(JSC::DFG::Node::hasHeapPrediction):
* dfg/DFGNodeType.h:
* dfg/DFGOperations.cpp:
* dfg/DFGPredictionPropagationPhase.cpp:
(JSC::DFG::PredictionPropagationPhase::propagate):
* dfg/DFGSafeToExecute.h:
(JSC::DFG::safeToExecute):
* dfg/DFGSpeculativeJIT32_64.cpp:
(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGSpeculativeJIT64.cpp:
(JSC::DFG::SpeculativeJIT::emitCall):
(JSC::DFG::SpeculativeJIT::compile):
* dfg/DFGTierUpCheckInjectionPhase.cpp:
(JSC::DFG::TierUpCheckInjectionPhase::run):
(JSC::DFG::TierUpCheckInjectionPhase::removeFTLProfiling): Deleted.
* ftl/FTLCapabilities.cpp:
(JSC::FTL::canCompile):
* heap/Heap.cpp:
(JSC::Heap::collect):
* jit/BinarySwitch.h:
* jit/ClosureCallStubRoutine.cpp: Removed.
* jit/ClosureCallStubRoutine.h: Removed.
* jit/JITCall.cpp:
(JSC::JIT::compileOpCall):
* jit/JITCall32_64.cpp:
(JSC::JIT::compileOpCall):
* jit/JITOperations.cpp:
* jit/JITOperations.h:
(JSC::operationLinkPolymorphicCallFor):
(JSC::operationLinkClosureCallFor): Deleted.
* jit/JITStubRoutine.h:
* jit/JITWriteBarrier.h:
* jit/PolymorphicCallStubRoutine.cpp: Added.
(JSC::PolymorphicCallNode::~PolymorphicCallNode):
(JSC::PolymorphicCallNode::unlink):
(JSC::PolymorphicCallCase::dump):
(JSC::PolymorphicCallStubRoutine::PolymorphicCallStubRoutine):
(JSC::PolymorphicCallStubRoutine::~PolymorphicCallStubRoutine):
(JSC::PolymorphicCallStubRoutine::variants):
(JSC::PolymorphicCallStubRoutine::edges):
(JSC::PolymorphicCallStubRoutine::visitWeak):
(JSC::PolymorphicCallStubRoutine::markRequiredObjectsInternal):
* jit/PolymorphicCallStubRoutine.h: Added.
(JSC::PolymorphicCallNode::PolymorphicCallNode):
(JSC::PolymorphicCallCase::PolymorphicCallCase):
(JSC::PolymorphicCallCase::variant):
(JSC::PolymorphicCallCase::codeBlock):
* jit/Repatch.cpp:
(JSC::linkSlowFor):
(JSC::linkFor):
(JSC::revertCall):
(JSC::unlinkFor):
(JSC::linkVirtualFor):
(JSC::linkPolymorphicCall):
(JSC::linkClosureCall): Deleted.
* jit/Repatch.h:
* jit/ThunkGenerators.cpp:
(JSC::linkPolymorphicCallForThunkGenerator):
(JSC::linkPolymorphicCallThunkGenerator):
(JSC::linkPolymorphicCallThatPreservesRegsThunkGenerator):
(JSC::linkClosureCallForThunkGenerator): Deleted.
(JSC::linkClosureCallThunkGenerator): Deleted.
(JSC::linkClosureCallThatPreservesRegsThunkGenerator): Deleted.
* jit/ThunkGenerators.h:
(JSC::linkPolymorphicCallThunkGeneratorFor):
(JSC::linkClosureCallThunkGeneratorFor): Deleted.
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::jitCompileAndSetHeuristics):
* runtime/Options.h:
* runtime/VM.cpp:
(JSC::VM::prepareToDiscardCode):
(JSC::VM::ensureCallEdgeLog): Deleted.
* runtime/VM.h:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@179478 268f45cc-cd09-0410-ab3c-d52691b4dbfc

53 files changed:
Makefile.shared
Source/JavaScriptCore/CMakeLists.txt
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.vcxproj/JavaScriptCore.vcxproj
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/bytecode/CallEdge.h
Source/JavaScriptCore/bytecode/CallEdgeProfile.cpp [deleted file]
Source/JavaScriptCore/bytecode/CallEdgeProfile.h [deleted file]
Source/JavaScriptCore/bytecode/CallEdgeProfileInlines.h [deleted file]
Source/JavaScriptCore/bytecode/CallLinkInfo.cpp
Source/JavaScriptCore/bytecode/CallLinkInfo.h
Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
Source/JavaScriptCore/bytecode/CallLinkStatus.h
Source/JavaScriptCore/bytecode/CallVariant.cpp
Source/JavaScriptCore/bytecode/CallVariant.h
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/bytecode/CodeBlock.h
Source/JavaScriptCore/dfg/DFGAbstractInterpreterInlines.h
Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
Source/JavaScriptCore/dfg/DFGClobberize.h
Source/JavaScriptCore/dfg/DFGConstantFoldingPhase.cpp
Source/JavaScriptCore/dfg/DFGDoesGC.cpp
Source/JavaScriptCore/dfg/DFGDriver.cpp
Source/JavaScriptCore/dfg/DFGFixupPhase.cpp
Source/JavaScriptCore/dfg/DFGNode.h
Source/JavaScriptCore/dfg/DFGNodeType.h
Source/JavaScriptCore/dfg/DFGOperations.cpp
Source/JavaScriptCore/dfg/DFGPredictionPropagationPhase.cpp
Source/JavaScriptCore/dfg/DFGSafeToExecute.h
Source/JavaScriptCore/dfg/DFGSpeculativeJIT32_64.cpp
Source/JavaScriptCore/dfg/DFGSpeculativeJIT64.cpp
Source/JavaScriptCore/dfg/DFGTierUpCheckInjectionPhase.cpp
Source/JavaScriptCore/ftl/FTLCapabilities.cpp
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/jit/BinarySwitch.h
Source/JavaScriptCore/jit/ClosureCallStubRoutine.cpp [deleted file]
Source/JavaScriptCore/jit/ClosureCallStubRoutine.h [deleted file]
Source/JavaScriptCore/jit/JITCall.cpp
Source/JavaScriptCore/jit/JITCall32_64.cpp
Source/JavaScriptCore/jit/JITOperations.cpp
Source/JavaScriptCore/jit/JITOperations.h
Source/JavaScriptCore/jit/JITStubRoutine.h
Source/JavaScriptCore/jit/JITWriteBarrier.h
Source/JavaScriptCore/jit/PolymorphicCallStubRoutine.cpp [new file with mode: 0644]
Source/JavaScriptCore/jit/PolymorphicCallStubRoutine.h [new file with mode: 0644]
Source/JavaScriptCore/jit/Repatch.cpp
Source/JavaScriptCore/jit/Repatch.h
Source/JavaScriptCore/jit/ThunkGenerators.cpp
Source/JavaScriptCore/jit/ThunkGenerators.h
Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
Source/JavaScriptCore/runtime/Options.h
Source/JavaScriptCore/runtime/VM.cpp
Source/JavaScriptCore/runtime/VM.h

index 61c066d..22faa94 100644
@@ -12,6 +12,8 @@ ifneq (,$(ARCHS))
        XCODE_OPTIONS += ONLY_ACTIVE_ARCH=NO
 endif
 
+XCODE_OPTIONS += TOOLCHAINS=com.apple.dt.toolchain.OSX10_11
+
 DEFAULT_VERBOSITY := $(shell defaults read org.webkit.BuildConfiguration BuildTranscriptVerbosity 2>/dev/null || echo "default")
 VERBOSITY ?= $(DEFAULT_VERBOSITY)
 
index 34e2461..8c7b0f0 100644
@@ -66,7 +66,6 @@ set(JavaScriptCore_SOURCES
     bytecode/BytecodeBasicBlock.cpp
     bytecode/BytecodeLivenessAnalysis.cpp
     bytecode/CallEdge.cpp
-    bytecode/CallEdgeProfile.cpp
     bytecode/CallLinkInfo.cpp
     bytecode/CallLinkStatus.cpp
     bytecode/CallVariant.cpp
@@ -327,7 +326,6 @@ set(JavaScriptCore_SOURCES
     jit/AssemblyHelpers.cpp
     jit/ArityCheckFailReturnThunks.cpp
     jit/BinarySwitch.cpp
-    jit/ClosureCallStubRoutine.cpp
     jit/ExecutableAllocator.cpp
     jit/ExecutableAllocatorFixedVMPool.cpp
     jit/GCAwareJITStubRoutine.cpp
@@ -350,6 +348,7 @@ set(JavaScriptCore_SOURCES
     jit/JITStubs.cpp
     jit/JITThunks.cpp
     jit/JITToDFGDeferredCompilationCallback.cpp
+    jit/PolymorphicCallStubRoutine.cpp
     jit/Reg.cpp
     jit/RegisterPreservationWrapperGenerator.cpp
     jit/RegisterSet.cpp
index 39c60fe..1c9eb52 100644
@@ -1,3 +1,168 @@
+2015-01-28  Filip Pizlo  <fpizlo@apple.com>
+
+        Polymorphic call inlining should be based on polymorphic call inline caching rather than logging
+        https://bugs.webkit.org/show_bug.cgi?id=140660
+
+        Reviewed by Geoffrey Garen.
+        
+        When we first implemented polymorphic call inlining, we did the profiling based on a call
+        edge log. The idea was to store each call edge (a tuple of call site and callee) into a
+        global log that was processed lazily. Processing the log would give precise counts of call
+        edges, and could be used to drive well-informed inlining decisions - polymorphic or not.
+        This was a speed-up on throughput tests but a slow-down for latency tests. It was a net win
+        nonetheless.
+        
+        Experience with this code shows three things. First, the call edge profiler is buggy and
+        complex. It would take work to fix the bugs. Second, the call edge profiler incurs lots of
+        overhead for latency code that we care deeply about. Third, it's not at all clear that
+        having call edge counts for every possible callee is any better than just having call edge
+        counts for the limited number of callees that an inline cache would catch.
+        
+        So, this patch removes the call edge profiler and replaces it with a polymorphic call inline
+        cache. If we miss the basic call inline cache, we inflate the cache to be a jump to an
+        out-of-line stub that cases on the previously known callees. If that misses again, then we
+        rewrite that stub to include the new callee. We do this up to some number of callees. If we
+        hit the limit then we switch to using a plain virtual call.
+        
+        Substantial speed-up on V8Spider; undoes the slow-down that the original call edge profiler
+        caused. Might be a SunSpider speed-up (below 1%), depending on hardware.
+        
+        Rolling this back in after fixing https://bugs.webkit.org/show_bug.cgi?id=141107.
+
+        * CMakeLists.txt:
+        * JavaScriptCore.vcxproj/JavaScriptCore.vcxproj:
+        * JavaScriptCore.xcodeproj/project.pbxproj:
+        * bytecode/CallEdge.h:
+        (JSC::CallEdge::count):
+        (JSC::CallEdge::CallEdge):
+        * bytecode/CallEdgeProfile.cpp: Removed.
+        * bytecode/CallEdgeProfile.h: Removed.
+        * bytecode/CallEdgeProfileInlines.h: Removed.
+        * bytecode/CallLinkInfo.cpp:
+        (JSC::CallLinkInfo::unlink):
+        (JSC::CallLinkInfo::visitWeak):
+        * bytecode/CallLinkInfo.h:
+        * bytecode/CallLinkStatus.cpp:
+        (JSC::CallLinkStatus::CallLinkStatus):
+        (JSC::CallLinkStatus::computeFor):
+        (JSC::CallLinkStatus::computeFromCallLinkInfo):
+        (JSC::CallLinkStatus::isClosureCall):
+        (JSC::CallLinkStatus::makeClosureCall):
+        (JSC::CallLinkStatus::dump):
+        (JSC::CallLinkStatus::computeFromCallEdgeProfile): Deleted.
+        * bytecode/CallLinkStatus.h:
+        (JSC::CallLinkStatus::CallLinkStatus):
+        (JSC::CallLinkStatus::isSet):
+        (JSC::CallLinkStatus::variants):
+        (JSC::CallLinkStatus::size):
+        (JSC::CallLinkStatus::at):
+        (JSC::CallLinkStatus::operator[]):
+        (JSC::CallLinkStatus::canOptimize):
+        (JSC::CallLinkStatus::edges): Deleted.
+        (JSC::CallLinkStatus::canTrustCounts): Deleted.
+        * bytecode/CallVariant.cpp:
+        (JSC::variantListWithVariant):
+        (JSC::despecifiedVariantList):
+        * bytecode/CallVariant.h:
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::~CodeBlock):
+        (JSC::CodeBlock::linkIncomingPolymorphicCall):
+        (JSC::CodeBlock::unlinkIncomingCalls):
+        (JSC::CodeBlock::noticeIncomingCall):
+        * bytecode/CodeBlock.h:
+        (JSC::CodeBlock::isIncomingCallAlreadyLinked): Deleted.
+        * dfg/DFGAbstractInterpreterInlines.h:
+        (JSC::DFG::AbstractInterpreter<AbstractStateType>::executeEffects):
+        * dfg/DFGByteCodeParser.cpp:
+        (JSC::DFG::ByteCodeParser::addCallWithoutSettingResult):
+        (JSC::DFG::ByteCodeParser::handleCall):
+        (JSC::DFG::ByteCodeParser::handleInlining):
+        * dfg/DFGClobberize.h:
+        (JSC::DFG::clobberize):
+        * dfg/DFGConstantFoldingPhase.cpp:
+        (JSC::DFG::ConstantFoldingPhase::foldConstants):
+        * dfg/DFGDoesGC.cpp:
+        (JSC::DFG::doesGC):
+        * dfg/DFGDriver.cpp:
+        (JSC::DFG::compileImpl):
+        * dfg/DFGFixupPhase.cpp:
+        (JSC::DFG::FixupPhase::fixupNode):
+        * dfg/DFGNode.h:
+        (JSC::DFG::Node::hasHeapPrediction):
+        * dfg/DFGNodeType.h:
+        * dfg/DFGOperations.cpp:
+        * dfg/DFGPredictionPropagationPhase.cpp:
+        (JSC::DFG::PredictionPropagationPhase::propagate):
+        * dfg/DFGSafeToExecute.h:
+        (JSC::DFG::safeToExecute):
+        * dfg/DFGSpeculativeJIT32_64.cpp:
+        (JSC::DFG::SpeculativeJIT::emitCall):
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGSpeculativeJIT64.cpp:
+        (JSC::DFG::SpeculativeJIT::emitCall):
+        (JSC::DFG::SpeculativeJIT::compile):
+        * dfg/DFGTierUpCheckInjectionPhase.cpp:
+        (JSC::DFG::TierUpCheckInjectionPhase::run):
+        (JSC::DFG::TierUpCheckInjectionPhase::removeFTLProfiling): Deleted.
+        * ftl/FTLCapabilities.cpp:
+        (JSC::FTL::canCompile):
+        * heap/Heap.cpp:
+        (JSC::Heap::collect):
+        * jit/BinarySwitch.h:
+        * jit/ClosureCallStubRoutine.cpp: Removed.
+        * jit/ClosureCallStubRoutine.h: Removed.
+        * jit/JITCall.cpp:
+        (JSC::JIT::compileOpCall):
+        * jit/JITCall32_64.cpp:
+        (JSC::JIT::compileOpCall):
+        * jit/JITOperations.cpp:
+        * jit/JITOperations.h:
+        (JSC::operationLinkPolymorphicCallFor):
+        (JSC::operationLinkClosureCallFor): Deleted.
+        * jit/JITStubRoutine.h:
+        * jit/JITWriteBarrier.h:
+        * jit/PolymorphicCallStubRoutine.cpp: Added.
+        (JSC::PolymorphicCallNode::~PolymorphicCallNode):
+        (JSC::PolymorphicCallNode::unlink):
+        (JSC::PolymorphicCallCase::dump):
+        (JSC::PolymorphicCallStubRoutine::PolymorphicCallStubRoutine):
+        (JSC::PolymorphicCallStubRoutine::~PolymorphicCallStubRoutine):
+        (JSC::PolymorphicCallStubRoutine::variants):
+        (JSC::PolymorphicCallStubRoutine::edges):
+        (JSC::PolymorphicCallStubRoutine::visitWeak):
+        (JSC::PolymorphicCallStubRoutine::markRequiredObjectsInternal):
+        * jit/PolymorphicCallStubRoutine.h: Added.
+        (JSC::PolymorphicCallNode::PolymorphicCallNode):
+        (JSC::PolymorphicCallCase::PolymorphicCallCase):
+        (JSC::PolymorphicCallCase::variant):
+        (JSC::PolymorphicCallCase::codeBlock):
+        * jit/Repatch.cpp:
+        (JSC::linkSlowFor):
+        (JSC::linkFor):
+        (JSC::revertCall):
+        (JSC::unlinkFor):
+        (JSC::linkVirtualFor):
+        (JSC::linkPolymorphicCall):
+        (JSC::linkClosureCall): Deleted.
+        * jit/Repatch.h:
+        * jit/ThunkGenerators.cpp:
+        (JSC::linkPolymorphicCallForThunkGenerator):
+        (JSC::linkPolymorphicCallThunkGenerator):
+        (JSC::linkPolymorphicCallThatPreservesRegsThunkGenerator):
+        (JSC::linkClosureCallForThunkGenerator): Deleted.
+        (JSC::linkClosureCallThunkGenerator): Deleted.
+        (JSC::linkClosureCallThatPreservesRegsThunkGenerator): Deleted.
+        * jit/ThunkGenerators.h:
+        (JSC::linkPolymorphicCallThunkGeneratorFor):
+        (JSC::linkClosureCallThunkGeneratorFor): Deleted.
+        * llint/LLIntSlowPaths.cpp:
+        (JSC::LLInt::jitCompileAndSetHeuristics):
+        * runtime/Options.h:
+        * runtime/VM.cpp:
+        (JSC::VM::prepareToDiscardCode):
+        (JSC::VM::ensureCallEdgeLog): Deleted.
+        * runtime/VM.h:
+
 2015-01-30  Filip Pizlo  <fpizlo@apple.com>
 
         Converting Flushes and PhantomLocals to Phantoms requires an OSR availability analysis rather than just using the SetLocal's child
index ba18a37..6d00715 100644
     <ClCompile Include="..\bytecode\BytecodeBasicBlock.cpp" />
     <ClCompile Include="..\bytecode\BytecodeLivenessAnalysis.cpp" />
     <ClCompile Include="..\bytecode\CallEdge.cpp" />
-    <ClCompile Include="..\bytecode\CallEdgeProfile.cpp" />
     <ClCompile Include="..\bytecode\CallLinkInfo.cpp" />
     <ClCompile Include="..\bytecode\CallLinkStatus.cpp" />
     <ClCompile Include="..\bytecode\CallVariant.cpp" />
     <ClCompile Include="..\jit\ArityCheckFailReturnThunks.cpp" />
     <ClCompile Include="..\jit\AssemblyHelpers.cpp" />
     <ClCompile Include="..\jit\BinarySwitch.cpp" />
-    <ClCompile Include="..\jit\ClosureCallStubRoutine.cpp" />
     <ClCompile Include="..\jit\ExecutableAllocator.cpp" />
     <ClCompile Include="..\jit\GCAwareJITStubRoutine.cpp" />
     <ClCompile Include="..\jit\HostCallReturnValue.cpp" />
     <ClCompile Include="..\jit\JITStubs.cpp" />
     <ClCompile Include="..\jit\JITThunks.cpp" />
     <ClCompile Include="..\jit\JITToDFGDeferredCompilationCallback.cpp" />
+    <ClCompile Include="..\jit\PolymorphicCallStubRoutine.cpp" />
     <ClCompile Include="..\jit\Reg.cpp" />
     <ClCompile Include="..\jit\RegisterPreservationWrapperGenerator.cpp" />
     <ClCompile Include="..\jit\RegisterSet.cpp" />
     <ClInclude Include="..\bytecode\BytecodeLivenessAnalysis.h" />
     <ClInclude Include="..\bytecode\BytecodeUseDef.h" />
     <ClInclude Include="..\bytecode\CallEdge.h" />
-    <ClInclude Include="..\bytecode\CallEdgeProfile.h" />
-    <ClInclude Include="..\bytecode\CallEdgeProfileInlines.h" />
     <ClInclude Include="..\bytecode\CallLinkInfo.h" />
     <ClInclude Include="..\bytecode\CallLinkStatus.h" />
     <ClInclude Include="..\bytecode\CallReturnOffsetToBytecodeOffset.h" />
     <ClInclude Include="..\jit\AssemblyHelpers.h" />
     <ClInclude Include="..\jit\BinarySwitch.h" />
     <ClInclude Include="..\jit\CCallHelpers.h" />
-    <ClInclude Include="..\jit\ClosureCallStubRoutine.h" />
     <ClInclude Include="..\jit\CompactJITCodeMap.h" />
     <ClInclude Include="..\jit\ExecutableAllocator.h" />
     <ClInclude Include="..\jit\FPRInfo.h" />
     <ClInclude Include="..\jit\JITToDFGDeferredCompilationCallback.h" />
     <ClInclude Include="..\jit\JITWriteBarrier.h" />
     <ClInclude Include="..\jit\JSInterfaceJIT.h" />
+    <ClInclude Include="..\jit\PolymorphicCallStubRoutine.h" />
     <ClInclude Include="..\jit\Reg.h" />
     <ClInclude Include="..\jit\RegisterPreservationWrapperGenerator.h" />
     <ClInclude Include="..\jit\RegisterSet.h" />
index 253df2d..23940aa 100644
                0F3B3A281544C997003ED0FF /* DFGCFGSimplificationPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3B3A251544C991003ED0FF /* DFGCFGSimplificationPhase.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F3B3A2B15475000003ED0FF /* DFGValidate.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F3B3A2915474FF4003ED0FF /* DFGValidate.cpp */; };
                0F3B3A2C15475002003ED0FF /* DFGValidate.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3B3A2A15474FF4003ED0FF /* DFGValidate.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               0F3B7E2619A11B8000D9BC56 /* CallEdge.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3B7E2019A11B8000D9BC56 /* CallEdge.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               0F3B7E2719A11B8000D9BC56 /* CallEdgeProfile.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F3B7E2119A11B8000D9BC56 /* CallEdgeProfile.cpp */; };
-               0F3B7E2819A11B8000D9BC56 /* CallEdgeProfile.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3B7E2219A11B8000D9BC56 /* CallEdgeProfile.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               0F3B7E2919A11B8000D9BC56 /* CallEdgeProfileInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3B7E2319A11B8000D9BC56 /* CallEdgeProfileInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F3B7E2A19A11B8000D9BC56 /* CallVariant.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F3B7E2419A11B8000D9BC56 /* CallVariant.cpp */; };
                0F3B7E2B19A11B8000D9BC56 /* CallVariant.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3B7E2519A11B8000D9BC56 /* CallVariant.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               0F3B7E2D19A12AAE00D9BC56 /* CallEdge.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F3B7E2C19A12AAE00D9BC56 /* CallEdge.cpp */; };
                0F3D0BBC194A414300FC9CF9 /* ConstantStructureCheck.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F3D0BBA194A414300FC9CF9 /* ConstantStructureCheck.cpp */; };
                0F3D0BBD194A414300FC9CF9 /* ConstantStructureCheck.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F3D0BBB194A414300FC9CF9 /* ConstantStructureCheck.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F3E01AA19D353A500F61B7F /* DFGPrePostNumbering.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F3E01A819D353A500F61B7F /* DFGPrePostNumbering.cpp */; };
                0F63948515E4811B006A597C /* DFGArrayMode.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F63948215E48114006A597C /* DFGArrayMode.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F64B2711A784BAF006E4E66 /* BinarySwitch.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F64B26F1A784BAF006E4E66 /* BinarySwitch.cpp */; };
                0F64B2721A784BAF006E4E66 /* BinarySwitch.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F64B2701A784BAF006E4E66 /* BinarySwitch.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0F64B2791A7957B2006E4E66 /* CallEdge.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F64B2771A7957B2006E4E66 /* CallEdge.cpp */; };
+               0F64B27A1A7957B2006E4E66 /* CallEdge.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F64B2781A7957B2006E4E66 /* CallEdge.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F666EC0183566F900D017F1 /* BytecodeLivenessAnalysisInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F666EBE183566F900D017F1 /* BytecodeLivenessAnalysisInlines.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F666EC1183566F900D017F1 /* FullBytecodeLiveness.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F666EBF183566F900D017F1 /* FullBytecodeLiveness.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F666EC61835672B00D017F1 /* DFGAvailability.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F666EC21835672B00D017F1 /* DFGAvailability.cpp */; };
                0F7025AA1714B0FC00382C0E /* DFGOSRExitCompilerCommon.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7025A81714B0F800382C0E /* DFGOSRExitCompilerCommon.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F714CA416EA92F000F3EBEB /* DFGBackwardsPropagationPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F714CA116EA92ED00F3EBEB /* DFGBackwardsPropagationPhase.cpp */; };
                0F714CA516EA92F200F3EBEB /* DFGBackwardsPropagationPhase.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F714CA216EA92ED00F3EBEB /* DFGBackwardsPropagationPhase.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               0F73D7AE165A142D00ACAB71 /* ClosureCallStubRoutine.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F73D7AB165A142A00ACAB71 /* ClosureCallStubRoutine.cpp */; };
-               0F73D7AF165A143000ACAB71 /* ClosureCallStubRoutine.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F73D7AC165A142A00ACAB71 /* ClosureCallStubRoutine.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0F743BAA16B88249009F9277 /* ARM64Disassembler.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 652A3A201651C66100A80AFE /* ARM64Disassembler.cpp */; };
                0F7576D218E1FEE9002EF4CD /* AccessorCallJITStubRoutine.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7576D018E1FEE9002EF4CD /* AccessorCallJITStubRoutine.cpp */; };
                0F7576D318E1FEE9002EF4CD /* AccessorCallJITStubRoutine.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7576D118E1FEE9002EF4CD /* AccessorCallJITStubRoutine.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FE228EE1436AB2C00196C48 /* Options.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FE228EA1436AB2300196C48 /* Options.cpp */; };
                0FE7211D193B9C590031F6ED /* DFGTransition.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FE7211B193B9C590031F6ED /* DFGTransition.cpp */; };
                0FE7211E193B9C590031F6ED /* DFGTransition.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FE7211C193B9C590031F6ED /* DFGTransition.h */; settings = {ATTRIBUTES = (Private, ); }; };
+               0FE834171A6EF97B00D04847 /* PolymorphicCallStubRoutine.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FE834151A6EF97B00D04847 /* PolymorphicCallStubRoutine.cpp */; };
+               0FE834181A6EF97B00D04847 /* PolymorphicCallStubRoutine.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FE834161A6EF97B00D04847 /* PolymorphicCallStubRoutine.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FE8534B1723CDA500B618F5 /* DFGDesiredWatchpoints.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FE853491723CDA500B618F5 /* DFGDesiredWatchpoints.cpp */; };
                0FE8534C1723CDA500B618F5 /* DFGDesiredWatchpoints.h in Headers */ = {isa = PBXBuildFile; fileRef = 0FE8534A1723CDA500B618F5 /* DFGDesiredWatchpoints.h */; settings = {ATTRIBUTES = (Private, ); }; };
                0FE95F7918B5694700B531FB /* FTLDataSection.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0FE95F7718B5694700B531FB /* FTLDataSection.cpp */; };
                0F3B3A251544C991003ED0FF /* DFGCFGSimplificationPhase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGCFGSimplificationPhase.h; path = dfg/DFGCFGSimplificationPhase.h; sourceTree = "<group>"; };
                0F3B3A2915474FF4003ED0FF /* DFGValidate.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGValidate.cpp; path = dfg/DFGValidate.cpp; sourceTree = "<group>"; };
                0F3B3A2A15474FF4003ED0FF /* DFGValidate.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGValidate.h; path = dfg/DFGValidate.h; sourceTree = "<group>"; };
-               0F3B7E2019A11B8000D9BC56 /* CallEdge.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallEdge.h; sourceTree = "<group>"; };
-               0F3B7E2119A11B8000D9BC56 /* CallEdgeProfile.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CallEdgeProfile.cpp; sourceTree = "<group>"; };
-               0F3B7E2219A11B8000D9BC56 /* CallEdgeProfile.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallEdgeProfile.h; sourceTree = "<group>"; };
-               0F3B7E2319A11B8000D9BC56 /* CallEdgeProfileInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallEdgeProfileInlines.h; sourceTree = "<group>"; };
                0F3B7E2419A11B8000D9BC56 /* CallVariant.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CallVariant.cpp; sourceTree = "<group>"; };
                0F3B7E2519A11B8000D9BC56 /* CallVariant.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallVariant.h; sourceTree = "<group>"; };
-               0F3B7E2C19A12AAE00D9BC56 /* CallEdge.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CallEdge.cpp; sourceTree = "<group>"; };
                0F3D0BBA194A414300FC9CF9 /* ConstantStructureCheck.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ConstantStructureCheck.cpp; sourceTree = "<group>"; };
                0F3D0BBB194A414300FC9CF9 /* ConstantStructureCheck.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ConstantStructureCheck.h; sourceTree = "<group>"; };
                0F3E01A819D353A500F61B7F /* DFGPrePostNumbering.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGPrePostNumbering.cpp; path = dfg/DFGPrePostNumbering.cpp; sourceTree = "<group>"; };
                0F63948215E48114006A597C /* DFGArrayMode.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGArrayMode.h; path = dfg/DFGArrayMode.h; sourceTree = "<group>"; };
                0F64B26F1A784BAF006E4E66 /* BinarySwitch.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = BinarySwitch.cpp; sourceTree = "<group>"; };
                0F64B2701A784BAF006E4E66 /* BinarySwitch.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BinarySwitch.h; sourceTree = "<group>"; };
+               0F64B2771A7957B2006E4E66 /* CallEdge.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = CallEdge.cpp; sourceTree = "<group>"; };
+               0F64B2781A7957B2006E4E66 /* CallEdge.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CallEdge.h; sourceTree = "<group>"; };
                0F666EBE183566F900D017F1 /* BytecodeLivenessAnalysisInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BytecodeLivenessAnalysisInlines.h; sourceTree = "<group>"; };
                0F666EBF183566F900D017F1 /* FullBytecodeLiveness.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = FullBytecodeLiveness.h; sourceTree = "<group>"; };
                0F666EC21835672B00D017F1 /* DFGAvailability.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGAvailability.cpp; path = dfg/DFGAvailability.cpp; sourceTree = "<group>"; };
                0F7025A81714B0F800382C0E /* DFGOSRExitCompilerCommon.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGOSRExitCompilerCommon.h; path = dfg/DFGOSRExitCompilerCommon.h; sourceTree = "<group>"; };
                0F714CA116EA92ED00F3EBEB /* DFGBackwardsPropagationPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGBackwardsPropagationPhase.cpp; path = dfg/DFGBackwardsPropagationPhase.cpp; sourceTree = "<group>"; };
                0F714CA216EA92ED00F3EBEB /* DFGBackwardsPropagationPhase.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGBackwardsPropagationPhase.h; path = dfg/DFGBackwardsPropagationPhase.h; sourceTree = "<group>"; };
-               0F73D7AB165A142A00ACAB71 /* ClosureCallStubRoutine.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = ClosureCallStubRoutine.cpp; sourceTree = "<group>"; };
-               0F73D7AC165A142A00ACAB71 /* ClosureCallStubRoutine.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ClosureCallStubRoutine.h; sourceTree = "<group>"; };
                0F7576D018E1FEE9002EF4CD /* AccessorCallJITStubRoutine.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = AccessorCallJITStubRoutine.cpp; sourceTree = "<group>"; };
                0F7576D118E1FEE9002EF4CD /* AccessorCallJITStubRoutine.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AccessorCallJITStubRoutine.h; sourceTree = "<group>"; };
                0F766D1C15A5028D008F363E /* JITStubRoutine.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JITStubRoutine.h; sourceTree = "<group>"; };
                0FE228EB1436AB2300196C48 /* Options.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Options.h; sourceTree = "<group>"; };
                0FE7211B193B9C590031F6ED /* DFGTransition.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGTransition.cpp; path = dfg/DFGTransition.cpp; sourceTree = "<group>"; };
                0FE7211C193B9C590031F6ED /* DFGTransition.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGTransition.h; path = dfg/DFGTransition.h; sourceTree = "<group>"; };
+               0FE834151A6EF97B00D04847 /* PolymorphicCallStubRoutine.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = PolymorphicCallStubRoutine.cpp; sourceTree = "<group>"; };
+               0FE834161A6EF97B00D04847 /* PolymorphicCallStubRoutine.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = PolymorphicCallStubRoutine.h; sourceTree = "<group>"; };
                0FE853491723CDA500B618F5 /* DFGDesiredWatchpoints.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGDesiredWatchpoints.cpp; path = dfg/DFGDesiredWatchpoints.cpp; sourceTree = "<group>"; };
                0FE8534A1723CDA500B618F5 /* DFGDesiredWatchpoints.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGDesiredWatchpoints.h; path = dfg/DFGDesiredWatchpoints.h; sourceTree = "<group>"; };
                0FE95F7718B5694700B531FB /* FTLDataSection.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = FTLDataSection.cpp; path = ftl/FTLDataSection.cpp; sourceTree = "<group>"; };
                                0F64B26F1A784BAF006E4E66 /* BinarySwitch.cpp */,
                                0F64B2701A784BAF006E4E66 /* BinarySwitch.h */,
                                0F24E53D17EA9F5900ABB217 /* CCallHelpers.h */,
-                               0F73D7AB165A142A00ACAB71 /* ClosureCallStubRoutine.cpp */,
-                               0F73D7AC165A142A00ACAB71 /* ClosureCallStubRoutine.h */,
                                0FD82E37141AB14200179C94 /* CompactJITCodeMap.h */,
                                A7B48DB60EE74CFC00DCBDB6 /* ExecutableAllocator.cpp */,
                                A7B48DB50EE74CFC00DCBDB6 /* ExecutableAllocator.h */,
                                0FC712E117CD878F008CC93C /* JITToDFGDeferredCompilationCallback.h */,
                                A76F54A213B28AAB00EF2BCE /* JITWriteBarrier.h */,
                                A76C51741182748D00715B05 /* JSInterfaceJIT.h */,
+                               0FE834151A6EF97B00D04847 /* PolymorphicCallStubRoutine.cpp */,
+                               0FE834161A6EF97B00D04847 /* PolymorphicCallStubRoutine.h */,
                                0FA7A8E918B413C80052371D /* Reg.cpp */,
                                0FA7A8EA18B413C80052371D /* Reg.h */,
                                0F6B1CBB1861246A00845D97 /* RegisterPreservationWrapperGenerator.cpp */,
                                0F666EBE183566F900D017F1 /* BytecodeLivenessAnalysisInlines.h */,
                                0F885E101849A3BE00F1E3FA /* BytecodeUseDef.h */,
                                0F8023E91613832300A0BA45 /* ByValInfo.h */,
-                               0F3B7E2C19A12AAE00D9BC56 /* CallEdge.cpp */,
-                               0F3B7E2019A11B8000D9BC56 /* CallEdge.h */,
-                               0F3B7E2119A11B8000D9BC56 /* CallEdgeProfile.cpp */,
-                               0F3B7E2219A11B8000D9BC56 /* CallEdgeProfile.h */,
-                               0F3B7E2319A11B8000D9BC56 /* CallEdgeProfileInlines.h */,
+                               0F64B2771A7957B2006E4E66 /* CallEdge.cpp */,
+                               0F64B2781A7957B2006E4E66 /* CallEdge.h */,
                                0F0B83AE14BCF71400885B4F /* CallLinkInfo.cpp */,
                                0F0B83AF14BCF71400885B4F /* CallLinkInfo.h */,
                                0F93329314CA7DC10085F3C6 /* CallLinkStatus.cpp */,
                                0F69CC89193AC60A0045759E /* DFGFrozenValue.h in Headers */,
                                0F24E54217EA9F5900ABB217 /* CCallHelpers.h in Headers */,
                                BC6AAAE50E1F426500AD87D8 /* ClassInfo.h in Headers */,
-                               0F73D7AF165A143000ACAB71 /* ClosureCallStubRoutine.h in Headers */,
                                969A07970ED1D3AE00F1F681 /* CodeBlock.h in Headers */,
                                0F8F94411667633200D61971 /* CodeBlockHash.h in Headers */,
                                0FC97F34182020D7002C9B26 /* CodeBlockJettisoningWatchpoint.h in Headers */,
                                BCD2034A0E17135E002C7E82 /* DateConstructor.h in Headers */,
                                41359CF30FDD89AD00206180 /* DateConversion.h in Headers */,
                                BC1166020E1997B4008066DD /* DateInstance.h in Headers */,
+                               0F64B27A1A7957B2006E4E66 /* CallEdge.h in Headers */,
                                14A1563210966365006FA260 /* DateInstanceCache.h in Headers */,
                                BCD2034C0E17135E002C7E82 /* DatePrototype.h in Headers */,
                                BCD203E80E1718F4002C7E82 /* DatePrototype.lut.h in Headers */,
                                0FFFC95A14EF90A900C72532 /* DFGCSEPhase.h in Headers */,
                                0F2FC77316E12F740038D976 /* DFGDCEPhase.h in Headers */,
                                0F8F2B9A172F0501007DBDA5 /* DFGDesiredIdentifiers.h in Headers */,
-                               0F3B7E2819A11B8000D9BC56 /* CallEdgeProfile.h in Headers */,
                                C2C0F7CE17BBFC5B00464FE4 /* DFGDesiredTransitions.h in Headers */,
                                0FE8534C1723CDA500B618F5 /* DFGDesiredWatchpoints.h in Headers */,
                                C2981FD917BAEE4B00A3BC98 /* DFGDesiredWeakReferences.h in Headers */,
                                A76F54A313B28AAB00EF2BCE /* JITWriteBarrier.h in Headers */,
                                BC18C4160E16F5CD00B34460 /* JSLexicalEnvironment.h in Headers */,
                                840480131021A1D9008E7F01 /* JSAPIValueWrapper.h in Headers */,
-                               0F3B7E2919A11B8000D9BC56 /* CallEdgeProfileInlines.h in Headers */,
                                C2CF39C216E15A8100DD69BE /* JSAPIWrapperObject.h in Headers */,
                                A76140D2182982CB00750624 /* JSArgumentsIterator.h in Headers */,
                                BC18C4170E16F5CD00B34460 /* JSArray.h in Headers */,
                                E49DC16C12EF294E00184A1F /* SourceProviderCache.h in Headers */,
                                E49DC16D12EF295300184A1F /* SourceProviderCacheItem.h in Headers */,
                                0FB7F39E15ED8E4600F167B2 /* SparseArrayValueMap.h in Headers */,
-                               0F3B7E2619A11B8000D9BC56 /* CallEdge.h in Headers */,
                                A7386554118697B400540279 /* SpecializedThunkJIT.h in Headers */,
                                0F5541B21613C1FB00CE3E25 /* SpecialPointer.h in Headers */,
                                0FD82E54141DAEEE00179C94 /* SpeculatedType.h in Headers */,
                                0FF42749158EBE91004CB9FF /* udis86_types.h in Headers */,
                                A7E5AB391799E4B200D2833D /* UDis86Disassembler.h in Headers */,
                                A7A8AF4117ADB5F3005AB174 /* Uint16Array.h in Headers */,
+                               0FE834181A6EF97B00D04847 /* PolymorphicCallStubRoutine.h in Headers */,
                                866739D313BFDE710023D87C /* Uint16WithFraction.h in Headers */,
                                A7A8AF4217ADB5F3005AB174 /* Uint32Array.h in Headers */,
                                A7A8AF3F17ADB5F3005AB174 /* Uint8Array.h in Headers */,
                                0F0B83B014BCF71600885B4F /* CallLinkInfo.cpp in Sources */,
                                0F93329D14CA7DC30085F3C6 /* CallLinkStatus.cpp in Sources */,
                                0F2B9CE419D0BA7D00B1D1B5 /* DFGInsertOSRHintsForUpdate.cpp in Sources */,
-                               0F73D7AE165A142D00ACAB71 /* ClosureCallStubRoutine.cpp in Sources */,
                                969A07960ED1D3AE00F1F681 /* CodeBlock.cpp in Sources */,
                                0F8F94401667633000D61971 /* CodeBlockHash.cpp in Sources */,
                                0FC97F33182020D7002C9B26 /* CodeBlockJettisoningWatchpoint.cpp in Sources */,
                                0F9C5E5E18E35F5E00D431C3 /* FTLDWARFRegister.cpp in Sources */,
                                A709F2F217A0AC2A00512E98 /* CommonSlowPaths.cpp in Sources */,
                                6553A33117A1F1EE008CF6F3 /* CommonSlowPathsExceptions.cpp in Sources */,
+                               0F64B2791A7957B2006E4E66 /* CallEdge.cpp in Sources */,
                                A7E5A3A71797432D00E893C0 /* CompilationResult.cpp in Sources */,
                                147F39C2107EC37600427A48 /* Completion.cpp in Sources */,
                                146B16D812EB5B59001BEC1B /* ConservativeRoots.cpp in Sources */,
                                0F235BD517178E1C00690C7F /* FTLExitArgumentForOperand.cpp in Sources */,
                                0F235BD817178E1C00690C7F /* FTLExitThunkGenerator.cpp in Sources */,
                                0F235BDA17178E1C00690C7F /* FTLExitValue.cpp in Sources */,
-                               0F3B7E2719A11B8000D9BC56 /* CallEdgeProfile.cpp in Sources */,
                                A7F2996B17A0BB670010417A /* FTLFail.cpp in Sources */,
+                               0FE834171A6EF97B00D04847 /* PolymorphicCallStubRoutine.cpp in Sources */,
                                0FD8A31917D51F2200CA2C40 /* FTLForOSREntryJITCode.cpp in Sources */,
                                0F25F1AF181635F300522F39 /* FTLInlineCacheSize.cpp in Sources */,
                                0FEA0A281709623B00BB722C /* FTLIntrinsicRepository.cpp in Sources */,
                                0F4680D214BBD16500BFE272 /* LLIntData.cpp in Sources */,
                                0F38B01117CF078000B144D3 /* LLIntEntrypoint.cpp in Sources */,
                                0F4680A814BA7FAB00BFE272 /* LLIntExceptions.cpp in Sources */,
-                               0F3B7E2D19A12AAE00D9BC56 /* CallEdge.cpp in Sources */,
                                0F4680A414BA7F8D00BFE272 /* LLIntSlowPaths.cpp in Sources */,
                                0F0B839C14BCF46300885B4F /* LLIntThunks.cpp in Sources */,
                                0FCEFACD1805E75500472CE4 /* LLVMAPI.cpp in Sources */,
index 7288492..3045209 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 
 namespace JSC {
 
-typedef uint16_t CallEdgeCountType;
-
 class CallEdge {
 public:
     CallEdge();
-    CallEdge(CallVariant, CallEdgeCountType);
+    CallEdge(CallVariant, uint32_t);
     
     bool operator!() const { return !m_callee; }
     
     CallVariant callee() const { return m_callee; }
-    CallEdgeCountType count() const { return m_count; }
+    uint32_t count() const { return m_count; }
     
     CallEdge despecifiedClosure() const
     {
@@ -49,12 +47,12 @@ public:
     
     void dump(PrintStream&) const;
     
-public:
+private:
     CallVariant m_callee;
-    CallEdgeCountType m_count;
+    uint32_t m_count;
 };
 
-inline CallEdge::CallEdge(CallVariant callee, CallEdgeCountType count)
+inline CallEdge::CallEdge(CallVariant callee, uint32_t count)
     : m_callee(callee)
     , m_count(count)
 {
diff --git a/Source/JavaScriptCore/bytecode/CallEdgeProfile.cpp b/Source/JavaScriptCore/bytecode/CallEdgeProfile.cpp
deleted file mode 100644
index c20b5c4..0000000
+++ /dev/null
@@ -1,360 +0,0 @@
-/*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#include "config.h"
-#include "CallEdgeProfile.h"
-
-#include "CCallHelpers.h"
-#include "CallEdgeProfileInlines.h"
-#include "JITOperations.h"
-#include "JSCInlines.h"
-
-namespace JSC {
-
-CallEdgeList CallEdgeProfile::callEdges() const
-{
-    ConcurrentJITLocker locker(m_lock);
-    
-    CallEdgeList result;
-    
-    CallVariant primaryCallee = m_primaryCallee;
-    CallEdgeCountType numCallsToPrimary = m_numCallsToPrimary;
-    // Defend against races. These fields are modified by the log processor without locking.
-    if (!!primaryCallee && numCallsToPrimary)
-        result.append(CallEdge(primaryCallee, numCallsToPrimary));
-    
-    if (m_otherCallees) {
-        // Make sure that if the primary thread had just created a m_otherCalles while log
-        // processing, we see a consistently created one. The lock being held is insufficient for
-        // this, since the log processor will only grab the lock when merging the secondary
-        // spectrum into the primary one but may still create the data structure without holding
-        // locks.
-        WTF::loadLoadFence();
-        for (CallEdge& entry : m_otherCallees->m_processed) {
-            // Defend against the possibility that the primary duplicates an entry in the secondary
-            // spectrum. That can happen when the GC removes the primary. We could have the GC fix
-            // the situation by changing the primary to be something from the secondary spectrum,
-            // but this fix seems simpler to implement and also cheaper.
-            if (entry.callee() == result[0].callee()) {
-                result[0] = CallEdge(result[0].callee(), entry.count() + result[0].count());
-                continue;
-            }
-            
-            result.append(entry);
-        }
-    }
-    
-    std::sort(result.begin(), result.end(), [] (const CallEdge& a, const CallEdge& b) -> bool {
-            return a.count() > b.count();
-        });
-    
-    if (result.size() >= 2)
-        ASSERT(result[0].count() >= result.last().count());
-    
-    return result;
-}
-
-CallEdgeCountType CallEdgeProfile::numCallsToKnownCells() const
-{
-    CallEdgeCountType result = 0;
-    for (CallEdge& edge : callEdges())
-        result += edge.count();
-    return result;
-}
-
-static bool worthDespecifying(const CallVariant& variant)
-{
-    return !Heap::isMarked(variant.rawCalleeCell())
-        && Heap::isMarked(variant.despecifiedClosure().rawCalleeCell());
-}
-
-bool CallEdgeProfile::worthDespecifying()
-{
-    if (m_closuresAreDespecified)
-        return false;
-    
-    bool didSeeEntry = false;
-    
-    if (!!m_primaryCallee) {
-        didSeeEntry = true;
-        if (!JSC::worthDespecifying(m_primaryCallee))
-            return false;
-    }
-    
-    if (m_otherCallees) {
-        for (unsigned i = m_otherCallees->m_processed.size(); i--;) {
-            didSeeEntry = true;
-            if (!JSC::worthDespecifying(m_otherCallees->m_processed[i].callee()))
-                return false;
-        }
-    }
-    
-    return didSeeEntry;
-}
-
-void CallEdgeProfile::visitWeak()
-{
-    if (!m_primaryCallee && !m_otherCallees)
-        return;
-    
-    ConcurrentJITLocker locker(m_lock);
-    
-    // See if anything is dead and if that can be rectified by despecifying.
-    if (worthDespecifying()) {
-        CallSpectrum newSpectrum;
-        
-        if (!!m_primaryCallee)
-            newSpectrum.add(m_primaryCallee.despecifiedClosure(), m_numCallsToPrimary);
-        
-        if (m_otherCallees) {
-            for (unsigned i = m_otherCallees->m_processed.size(); i--;) {
-                newSpectrum.add(
-                    m_otherCallees->m_processed[i].callee().despecifiedClosure(),
-                    m_otherCallees->m_processed[i].count());
-            }
-        }
-        
-        Vector<CallSpectrum::KeyAndCount> list = newSpectrum.buildList();
-        RELEASE_ASSERT(list.size());
-        m_primaryCallee = list.last().key;
-        m_numCallsToPrimary = list.last().count;
-        
-        if (m_otherCallees) {
-            m_otherCallees->m_processed.clear();
-
-            // We could have a situation where the GC clears the primary and then log processing
-            // reinstates it without ever doing an addSlow and subsequent mergeBack. In such a case
-            // the primary could duplicate an entry in otherCallees, which means that even though we
-            // had an otherCallees object, the list size is just 1.
-            if (list.size() >= 2) {
-                for (unsigned i = list.size() - 1; i--;)
-                    m_otherCallees->m_processed.append(CallEdge(list[i].key, list[i].count));
-            }
-        }
-        
-        m_closuresAreDespecified = true;
-        
-        return;
-    }
-    
-    if (!!m_primaryCallee && !Heap::isMarked(m_primaryCallee.rawCalleeCell())) {
-        m_numCallsToUnknownCell += m_numCallsToPrimary;
-        
-        m_primaryCallee = CallVariant();
-        m_numCallsToPrimary = 0;
-    }
-    
-    if (m_otherCallees) {
-        for (unsigned i = 0; i < m_otherCallees->m_processed.size(); i++) {
-            if (Heap::isMarked(m_otherCallees->m_processed[i].callee().rawCalleeCell()))
-                continue;
-            
-            m_numCallsToUnknownCell += m_otherCallees->m_processed[i].count();
-            m_otherCallees->m_processed[i--] = m_otherCallees->m_processed.last();
-            m_otherCallees->m_processed.removeLast();
-        }
-        
-        // Only exists while we are processing the log.
-        RELEASE_ASSERT(!m_otherCallees->m_temporarySpectrum);
-    }
-}
-
-void CallEdgeProfile::addSlow(CallVariant callee, CallEdgeProfileVector& mergeBackLog)
-{
-    // This exists to handle cases where the spectrum wasn't created yet, or we're storing to a
-    // particular spectrum for the first time during a log processing iteration.
-    
-    if (!m_otherCallees) {
-        m_otherCallees = std::make_unique<Secondary>();
-        // If a compiler thread notices the m_otherCallees being non-null, we want to make sure
-        // that it sees a fully created one.
-        WTF::storeStoreFence();
-    }
-    
-    if (!m_otherCallees->m_temporarySpectrum) {
-        m_otherCallees->m_temporarySpectrum = std::make_unique<CallSpectrum>();
-        for (unsigned i = m_otherCallees->m_processed.size(); i--;) {
-            m_otherCallees->m_temporarySpectrum->add(
-                m_otherCallees->m_processed[i].callee(),
-                m_otherCallees->m_processed[i].count());
-        }
-        
-        // This means that this is the first time we're seeing this profile during this log
-        // processing iteration.
-        mergeBackLog.append(this);
-    }
-    
-    m_otherCallees->m_temporarySpectrum->add(callee);
-}
-
-void CallEdgeProfile::mergeBack()
-{
-    ConcurrentJITLocker locker(m_lock);
-    
-    RELEASE_ASSERT(m_otherCallees);
-    RELEASE_ASSERT(m_otherCallees->m_temporarySpectrum);
-    
-    if (!!m_primaryCallee)
-        m_otherCallees->m_temporarySpectrum->add(m_primaryCallee, m_numCallsToPrimary);
-    
-    if (!m_closuresAreDespecified) {
-        CallSpectrum newSpectrum;
-        for (auto& entry : *m_otherCallees->m_temporarySpectrum)
-            newSpectrum.add(entry.key.despecifiedClosure(), entry.value);
-        
-        if (newSpectrum.size() < m_otherCallees->m_temporarySpectrum->size()) {
-            *m_otherCallees->m_temporarySpectrum = newSpectrum;
-            m_closuresAreDespecified = true;
-        }
-    }
-    
-    Vector<CallSpectrum::KeyAndCount> list = m_otherCallees->m_temporarySpectrum->buildList();
-    m_otherCallees->m_temporarySpectrum = nullptr;
-    
-    m_primaryCallee = list.last().key;
-    m_numCallsToPrimary = list.last().count;
-    list.removeLast();
-    
-    m_otherCallees->m_processed.clear();
-    for (unsigned count = maxKnownCallees; count-- && !list.isEmpty();) {
-        m_otherCallees->m_processed.append(CallEdge(list.last().key, list.last().count));
-        list.removeLast();
-    }
-    
-    for (unsigned i = list.size(); i--;)
-        m_numCallsToUnknownCell += list[i].count;
-}
-
-void CallEdgeProfile::fadeByHalf()
-{
-    m_numCallsToPrimary >>= 1;
-    m_numCallsToNotCell >>= 1;
-    m_numCallsToUnknownCell >>= 1;
-    m_totalCount >>= 1;
-    
-    if (m_otherCallees) {
-        for (unsigned i = m_otherCallees->m_processed.size(); i--;) {
-            m_otherCallees->m_processed[i] = CallEdge(
-                m_otherCallees->m_processed[i].callee(),
-                m_otherCallees->m_processed[i].count() >> 1);
-        }
-        
-        if (m_otherCallees->m_temporarySpectrum) {
-            for (auto& entry : *m_otherCallees->m_temporarySpectrum)
-                entry.value >>= 1;
-        }
-    }
-}
-
-CallEdgeLog::CallEdgeLog()
-    : m_scaledLogIndex(logSize * sizeof(Entry))
-{
-    ASSERT(!(m_scaledLogIndex % sizeof(Entry)));
-}
-
-CallEdgeLog::~CallEdgeLog() { }
-
-bool CallEdgeLog::isEnabled()
-{
-    return Options::enableCallEdgeProfiling() && Options::useFTLJIT();
-}
-
-#if ENABLE(JIT)
-
-extern "C" JIT_OPERATION void operationProcessCallEdgeLog(CallEdgeLog*) WTF_INTERNAL;
-extern "C" JIT_OPERATION void operationProcessCallEdgeLog(CallEdgeLog* log)
-{
-    log->processLog();
-}
-
-void CallEdgeLog::emitLogCode(CCallHelpers& jit, CallEdgeProfile& profile, JSValueRegs calleeRegs)
-{
-    const unsigned numberOfArguments = 1;
-    
-    GPRReg scratchGPR;
-    if (!calleeRegs.uses(GPRInfo::regT0))
-        scratchGPR = GPRInfo::regT0;
-    else if (!calleeRegs.uses(GPRInfo::regT1))
-        scratchGPR = GPRInfo::regT1;
-    else
-        scratchGPR = GPRInfo::regT2;
-    
-    jit.load32(&m_scaledLogIndex, scratchGPR);
-    
-    CCallHelpers::Jump ok = jit.branchTest32(CCallHelpers::NonZero, scratchGPR);
-    
-    ASSERT_UNUSED(numberOfArguments, stackAlignmentRegisters() >= 1 + numberOfArguments);
-    
-    jit.subPtr(CCallHelpers::TrustedImm32(stackAlignmentBytes()), CCallHelpers::stackPointerRegister);
-    
-    jit.storeValue(calleeRegs, CCallHelpers::Address(CCallHelpers::stackPointerRegister, sizeof(JSValue)));
-    jit.setupArguments(CCallHelpers::TrustedImmPtr(this));
-    jit.move(CCallHelpers::TrustedImmPtr(bitwise_cast<void*>(operationProcessCallEdgeLog)), GPRInfo::nonArgGPR0);
-    jit.call(GPRInfo::nonArgGPR0);
-    jit.loadValue(CCallHelpers::Address(CCallHelpers::stackPointerRegister, sizeof(JSValue)), calleeRegs);
-    
-    jit.addPtr(CCallHelpers::TrustedImm32(stackAlignmentBytes()), CCallHelpers::stackPointerRegister);
-    
-    jit.move(CCallHelpers::TrustedImm32(logSize * sizeof(Entry)), scratchGPR);
-    ok.link(&jit);
-
-    jit.sub32(CCallHelpers::TrustedImm32(sizeof(Entry)), scratchGPR);
-    jit.store32(scratchGPR, &m_scaledLogIndex);
-    jit.addPtr(CCallHelpers::TrustedImmPtr(m_log), scratchGPR);
-    jit.storeValue(calleeRegs, CCallHelpers::Address(scratchGPR, OBJECT_OFFSETOF(Entry, m_value)));
-    jit.storePtr(CCallHelpers::TrustedImmPtr(&profile), CCallHelpers::Address(scratchGPR, OBJECT_OFFSETOF(Entry, m_profile)));
-}
-
-void CallEdgeLog::emitLogCode(
-    CCallHelpers& jit, OwnPtr<CallEdgeProfile>& profilePointer, JSValueRegs calleeRegs)
-{
-    if (!isEnabled())
-        return;
-    
-    profilePointer.createTransactionally();
-    emitLogCode(jit, *profilePointer, calleeRegs);
-}
-
-#endif // ENABLE(JIT)
-
-void CallEdgeLog::processLog()
-{
-    ASSERT(!(m_scaledLogIndex % sizeof(Entry)));
-    
-    if (Options::callEdgeProfileReallyProcessesLog()) {
-        CallEdgeProfileVector mergeBackLog;
-        
-        for (unsigned i = m_scaledLogIndex / sizeof(Entry); i < logSize; ++i)
-            m_log[i].m_profile->add(m_log[i].m_value, mergeBackLog);
-        
-        for (unsigned i = mergeBackLog.size(); i--;)
-            mergeBackLog[i]->mergeBack();
-    }
-    
-    m_scaledLogIndex = logSize * sizeof(Entry);
-}
-
-} // namespace JSC
-
diff --git a/Source/JavaScriptCore/bytecode/CallEdgeProfile.h b/Source/JavaScriptCore/bytecode/CallEdgeProfile.h
deleted file mode 100644
index 4aee627..0000000
+++ /dev/null
@@ -1,131 +0,0 @@
-/*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#ifndef CallEdgeProfile_h
-#define CallEdgeProfile_h
-
-#include "CallEdge.h"
-#include "CallVariant.h"
-#include "ConcurrentJITLock.h"
-#include "GPRInfo.h"
-#include "JSCell.h"
-#include <wtf/OwnPtr.h>
-
-namespace JSC {
-
-class CCallHelpers;
-class LLIntOffsetsExtractor;
-
-class CallEdgeLog;
-class CallEdgeProfile;
-typedef Vector<CallEdgeProfile*, 10> CallEdgeProfileVector;
-
-class CallEdgeProfile {
-public:
-    CallEdgeProfile();
-    
-    CallEdgeCountType numCallsToNotCell() const { return m_numCallsToNotCell; }
-    CallEdgeCountType numCallsToUnknownCell() const { return m_numCallsToUnknownCell; }
-    CallEdgeCountType numCallsToKnownCells() const;
-    
-    CallEdgeCountType totalCalls() const { return m_totalCount; }
-    
-    // Call while holding the owning CodeBlock's lock.
-    CallEdgeList callEdges() const;
-    
-    void visitWeak();
-    
-private:
-    friend class CallEdgeLog;
-    
-    static const unsigned maxKnownCallees = 5;
-    
-    void add(JSValue, CallEdgeProfileVector& mergeBackLog);
-    
-    bool worthDespecifying();
-    void addSlow(CallVariant, CallEdgeProfileVector& mergeBackLog);
-    void mergeBack();
-    void fadeByHalf();
-    
-    // It's cheaper to let this have its own lock. It needs to be able to find which lock to
-    // lock. Normally it would lock the owning CodeBlock's lock, but that would require a
-    // pointer-width word to point at the CodeBlock. Having the byte-sized lock here is
-    // cheaper. However, this means that the relationship with the CodeBlock lock is:
-    // acquire the CodeBlock lock before this one.
-    mutable ConcurrentJITLock m_lock;
-    
-    bool m_closuresAreDespecified;
-    
-    CallEdgeCountType m_numCallsToPrimary;
-    CallEdgeCountType m_numCallsToNotCell;
-    CallEdgeCountType m_numCallsToUnknownCell;
-    CallEdgeCountType m_totalCount;
-    CallVariant m_primaryCallee;
-    
-    typedef Spectrum<CallVariant, CallEdgeCountType> CallSpectrum;
-    
-    struct Secondary {
-        Vector<CallEdge> m_processed; // Processed but not necessarily sorted.
-        std::unique_ptr<CallSpectrum> m_temporarySpectrum;
-    };
-    
-    std::unique_ptr<Secondary> m_otherCallees;
-};
-
-class CallEdgeLog {
-public:
-    CallEdgeLog();
-    ~CallEdgeLog();
-
-    static bool isEnabled();
-    
-#if ENABLE(JIT)
-    void emitLogCode(CCallHelpers&, CallEdgeProfile&, JSValueRegs calleeRegs); // Assumes that stack is aligned, all volatile registers - other than calleeGPR - are clobberable, and the parameter space is in use.
-    
-    // Same as above but creates a CallEdgeProfile instance if one did not already exist. Does
-    // this in a thread-safe manner by calling OwnPtr::createTransactionally.
-    void emitLogCode(CCallHelpers&, OwnPtr<CallEdgeProfile>&, JSValueRegs calleeRegs);
-#endif // ENABLE(JIT)
-    
-    void processLog();
-    
-private:
-    friend class LLIntOffsetsExtractor;
-
-    static const unsigned logSize = 10000;
-    
-    struct Entry {
-        JSValue m_value;
-        CallEdgeProfile* m_profile;
-    };
-    
-    unsigned m_scaledLogIndex;
-    Entry m_log[logSize];
-};
-
-} // namespace JSC
-
-#endif // CallEdgeProfile_h
-
diff --git a/Source/JavaScriptCore/bytecode/CallEdgeProfileInlines.h b/Source/JavaScriptCore/bytecode/CallEdgeProfileInlines.h
deleted file mode 100644
index e6ea320..0000000
+++ /dev/null
@@ -1,91 +0,0 @@
-/*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#ifndef CallEdgeProfileInlines_h
-#define CallEdgeProfileInlines_h
-
-#include "CallEdgeProfile.h"
-
-namespace JSC {
-
-inline CallEdgeProfile::CallEdgeProfile()
-    : m_closuresAreDespecified(false)
-    , m_numCallsToPrimary(0)
-    , m_numCallsToNotCell(0)
-    , m_numCallsToUnknownCell(0)
-    , m_totalCount(0)
-    , m_primaryCallee(nullptr)
-{
-}
-
-ALWAYS_INLINE void CallEdgeProfile::add(JSValue value, CallEdgeProfileVector& mergeBackLog)
-{
-    unsigned newTotalCount = m_totalCount + 1;
-    if (UNLIKELY(!newTotalCount)) {
-        fadeByHalf(); // Tackle overflows by dividing all counts by two.
-        newTotalCount = m_totalCount + 1;
-    }
-    ASSERT(newTotalCount);
-    m_totalCount = newTotalCount;
-    
-    if (UNLIKELY(!value.isCell())) {
-        m_numCallsToNotCell++;
-        return;
-    }
-
-    CallVariant callee = CallVariant(value.asCell());
-    
-    if (m_closuresAreDespecified)
-        callee = callee.despecifiedClosure();
-    
-    if (UNLIKELY(!m_primaryCallee)) {
-        m_primaryCallee = callee;
-        m_numCallsToPrimary = 1;
-        return;
-    }
-        
-    if (LIKELY(m_primaryCallee == callee)) {
-        m_numCallsToPrimary++;
-        return;
-    }
-        
-    if (UNLIKELY(!m_otherCallees)) {
-        addSlow(callee, mergeBackLog);
-        return;
-    }
-        
-    CallSpectrum* secondary = m_otherCallees->m_temporarySpectrum.get();
-    if (!secondary) {
-        addSlow(callee, mergeBackLog);
-        return;
-    }
-        
-    secondary->add(callee);
-}
-
-} // namespace JSC
-
-#endif // CallEdgeProfileInlines_h
-
index c284c5b..ad88905 100644
@@ -29,7 +29,9 @@
 #include "DFGOperations.h"
 #include "DFGThunks.h"
 #include "JSCInlines.h"
+#include "Repatch.h"
 #include "RepatchBuffer.h"
+#include <wtf/ListDump.h>
 #include <wtf/NeverDestroyed.h>
 
 #if ENABLE(JIT)
@@ -37,20 +39,17 @@ namespace JSC {
 
 void CallLinkInfo::unlink(RepatchBuffer& repatchBuffer)
 {
-    ASSERT(isLinked());
+    if (!isLinked()) {
+        // We could be called even if we're not linked anymore because of how polymorphic calls
+        // work. Each callsite within the polymorphic call stub may separately ask us to unlink().
+        RELEASE_ASSERT(!isOnList());
+        return;
+    }
     
-    if (Options::showDisassembly())
-        dataLog("Unlinking call from ", callReturnLocation, " to ", pointerDump(repatchBuffer.codeBlock()), "\n");
-
-    repatchBuffer.revertJumpReplacementToBranchPtrWithPatch(RepatchBuffer::startOfBranchPtrWithPatchOnRegister(hotPathBegin), static_cast<MacroAssembler::RegisterID>(calleeGPR), 0);
-    repatchBuffer.relink(
-        callReturnLocation,
-        repatchBuffer.codeBlock()->vm()->getCTIStub(linkThunkGeneratorFor(
-            (callType == Construct || callType == ConstructVarargs)? CodeForConstruct : CodeForCall,
-            isFTL ? MustPreserveRegisters : RegisterPreservationNotRequired)).code());
-    hasSeenShouldRepatch = false;
-    callee.clear();
-    stub.clear();
+    unlinkFor(
+        repatchBuffer, *this,
+        (callType == Construct || callType == ConstructVarargs)? CodeForConstruct : CodeForCall,
+        isFTL ? MustPreserveRegisters : RegisterPreservationNotRequired);
 
     // It will be on a list if the callee has a code block.
     if (isOnList())
@@ -61,12 +60,12 @@ void CallLinkInfo::visitWeak(RepatchBuffer& repatchBuffer)
 {
     if (isLinked()) {
         if (stub) {
-            if (!Heap::isMarked(stub->executable())) {
+            if (!stub->visitWeak(repatchBuffer)) {
                 if (Options::verboseOSR()) {
                     dataLog(
                         "Clearing closure call from ", *repatchBuffer.codeBlock(), " to ",
-                        stub->executable()->hashFor(specializationKind()),
-                        ", stub routine ", RawPointer(stub.get()), ".\n");
+                        listDump(stub->variants()), ", stub routine ", RawPointer(stub.get()),
+                        ".\n");
                 }
                 unlink(repatchBuffer);
             }
@@ -83,11 +82,6 @@ void CallLinkInfo::visitWeak(RepatchBuffer& repatchBuffer)
     }
     if (!!lastSeenCallee && !Heap::isMarked(lastSeenCallee.get()))
         lastSeenCallee.clear();
-    
-    if (callEdgeProfile) {
-        WTF::loadLoadFence();
-        callEdgeProfile->visitWeak();
-    }
 }
 
 CallLinkInfo& CallLinkInfo::dummy()
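
The unlink() change above exists because every PolymorphicCallNode generated for a stub can independently ask its CallLinkInfo to unlink, so the operation has to be idempotent. A minimal standalone sketch of that shape, using a hypothetical stand-in type rather than the real classes (not part of the patch):

    #include <cassert>

    // Hypothetical stand-in for CallLinkInfo. unlink() may be requested once per
    // PolymorphicCallNode that references it, so a second request must be a no-op.
    struct FakeCallLinkInfo {
        bool linked = true;
        bool onList = true; // membership in the owning CodeBlock's incoming-call list

        void unlink()
        {
            if (!linked) {
                // Already unlinked by a sibling node of the same polymorphic stub;
                // the only invariant left to check is that we are off the list.
                assert(!onList);
                return;
            }
            linked = false;
            onList = false; // the real unlink also repatches the call site
        }
    };

    int main()
    {
        FakeCallLinkInfo info;
        info.unlink();
        info.unlink(); // a second request from another node of the stub is harmless
        return 0;
    }
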
index 3a65ef5..829a076 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012, 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 #ifndef CallLinkInfo_h
 #define CallLinkInfo_h
 
-#include "CallEdgeProfile.h"
-#include "ClosureCallStubRoutine.h"
 #include "CodeLocation.h"
 #include "CodeSpecializationKind.h"
 #include "JITWriteBarrier.h"
 #include "JSFunction.h"
 #include "Opcode.h"
+#include "PolymorphicCallStubRoutine.h"
 #include "WriteBarrier.h"
 #include <wtf/OwnPtr.h>
 #include <wtf/SentinelLinkedList.h>
@@ -82,15 +81,14 @@ struct CallLinkInfo : public BasicRawSentinelNode<CallLinkInfo> {
     CodeLocationNearCall hotPathOther;
     JITWriteBarrier<JSFunction> callee;
     WriteBarrier<JSFunction> lastSeenCallee;
-    RefPtr<ClosureCallStubRoutine> stub;
+    RefPtr<PolymorphicCallStubRoutine> stub;
     bool isFTL : 1;
     bool hasSeenShouldRepatch : 1;
     bool hasSeenClosure : 1;
     unsigned callType : 5; // CallType
     unsigned calleeGPR : 8;
-    unsigned slowPathCount;
+    uint32_t slowPathCount;
     CodeOrigin codeOrigin;
-    OwnPtr<CallEdgeProfile> callEdgeProfile;
 
     bool isLinked() { return stub || callee; }
     void unlink(RepatchBuffer&);
index b971a60..c8271e0 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -47,7 +47,7 @@ CallLinkStatus::CallLinkStatus(JSValue value)
         return;
     }
     
-    m_edges.append(CallEdge(CallVariant(value.asCell()), 1));
+    m_variants.append(CallVariant(value.asCell()));
 }
 
 CallLinkStatus CallLinkStatus::computeFromLLInt(const ConcurrentJITLocker& locker, CodeBlock* profiledBlock, unsigned bytecodeIndex)
@@ -129,23 +129,6 @@ CallLinkStatus CallLinkStatus::computeFor(
     // We don't really need this, but anytime we have to debug this code, it becomes indispensable.
     UNUSED_PARAM(profiledBlock);
     
-    if (Options::callStatusShouldUseCallEdgeProfile()) {
-        // Always trust the call edge profile over anything else since this has precise counts.
-        // It can make the best possible decision because it never "forgets" what happened for any
-        // call, with the exception of fading out the counts of old calls (for example if the
-        // counter type is 16-bit then calls that happened more than 2^16 calls ago are given half
-        // weight, and this compounds for every 2^15 [sic] calls after that). The combination of
-        // high fidelity for recent calls and fading for older calls makes this the most useful
-        // mechamism of choosing how to optimize future calls.
-        CallEdgeProfile* edgeProfile = callLinkInfo.callEdgeProfile.get();
-        WTF::loadLoadFence();
-        if (edgeProfile) {
-            CallLinkStatus result = computeFromCallEdgeProfile(edgeProfile);
-            if (!!result)
-                return result;
-        }
-    }
-    
     return computeFromCallLinkInfo(locker, callLinkInfo);
 }
 
@@ -165,12 +148,68 @@ CallLinkStatus CallLinkStatus::computeFromCallLinkInfo(
     // that is still marginally valid (i.e. the pointers ain't stale). This kind of raciness
     // is probably OK for now.
     
+    // PolymorphicCallStubRoutine is a GCAwareJITStubRoutine, so if non-null, it will stay alive
+    // until next GC even if the CallLinkInfo is concurrently cleared. Also, the variants list is
+    // never mutated after the PolymorphicCallStubRoutine is instantiated. We have some conservative
+    // fencing in place to make sure that we see the variants list after construction.
+    if (PolymorphicCallStubRoutine* stub = callLinkInfo.stub.get()) {
+        WTF::loadLoadFence();
+        
+        CallEdgeList edges = stub->edges();
+        
+        // Now that we've loaded the edges list, there are no further concurrency concerns. We will
+        // just manipulate and prune this list to our liking - mostly removing entries that are too
+        // infrequent and ensuring that it's sorted in descending order of frequency.
+        
+        RELEASE_ASSERT(edges.size());
+        
+        std::sort(
+            edges.begin(), edges.end(),
+            [] (CallEdge a, CallEdge b) {
+                return a.count() > b.count();
+            });
+        RELEASE_ASSERT(edges.first().count() >= edges.last().count());
+        
+        double totalCallsToKnown = 0;
+        double totalCallsToUnknown = callLinkInfo.slowPathCount;
+        CallVariantList variants;
+        for (size_t i = 0; i < edges.size(); ++i) {
+            CallEdge edge = edges[i];
+            // If the call is at the tail of the distribution, then we don't optimize it and we
+            // treat it as if it was a call to something unknown. We define the tail as being either
+            // a call that doesn't belong to the N most frequent callees (N =
+            // maxPolymorphicCallVariantsForInlining) or that has a total call count that is too
+            // small.
+            if (i >= Options::maxPolymorphicCallVariantsForInlining()
+                || edge.count() < Options::frequentCallThreshold())
+                totalCallsToUnknown += edge.count();
+            else {
+                totalCallsToKnown += edge.count();
+                variants.append(edge.callee());
+            }
+        }
+        
+        // Bail if we didn't find any calls that qualified.
+        RELEASE_ASSERT(!!totalCallsToKnown == !!variants.size());
+        if (variants.isEmpty())
+            return takesSlowPath();
+        
+        // We require that the distribution of callees is skewed towards a handful of common ones.
+        if (totalCallsToKnown / totalCallsToUnknown < Options::minimumCallToKnownRate())
+            return takesSlowPath();
+        
+        RELEASE_ASSERT(totalCallsToKnown);
+        RELEASE_ASSERT(variants.size());
+        
+        CallLinkStatus result;
+        result.m_variants = variants;
+        result.m_couldTakeSlowPath = !!totalCallsToUnknown;
+        return result;
+    }
+    
     if (callLinkInfo.slowPathCount >= Options::couldTakeSlowCaseMinimumCount())
         return takesSlowPath();
     
-    if (ClosureCallStubRoutine* stub = callLinkInfo.stub.get())
-        return CallLinkStatus(stub->executable());
-    
     JSFunction* target = callLinkInfo.lastSeenCallee.get();
     if (!target)
         return takesSlowPath();
@@ -181,34 +220,6 @@ CallLinkStatus CallLinkStatus::computeFromCallLinkInfo(
     return CallLinkStatus(target);
 }
 
-CallLinkStatus CallLinkStatus::computeFromCallEdgeProfile(CallEdgeProfile* edgeProfile)
-{
-    // In cases where the call edge profile saw nothing, use the CallLinkInfo instead.
-    if (!edgeProfile->totalCalls())
-        return CallLinkStatus();
-    
-    // To do anything meaningful, we require that the majority of calls are to something we
-    // know how to handle.
-    unsigned numCallsToKnown = edgeProfile->numCallsToKnownCells();
-    unsigned numCallsToUnknown = edgeProfile->numCallsToNotCell() + edgeProfile->numCallsToUnknownCell();
-    
-    // We require that the majority of calls were to something that we could possibly inline.
-    if (numCallsToKnown <= numCallsToUnknown)
-        return takesSlowPath();
-    
-    // We require that the number of such calls is greater than some minimal threshold, so that we
-    // avoid inlining completely cold calls.
-    if (numCallsToKnown < Options::frequentCallThreshold())
-        return takesSlowPath();
-    
-    CallLinkStatus result;
-    result.m_edges = edgeProfile->callEdges();
-    result.m_couldTakeSlowPath = !!numCallsToUnknown;
-    result.m_canTrustCounts = true;
-    
-    return result;
-}
-
 CallLinkStatus CallLinkStatus::computeFor(
     const ConcurrentJITLocker& locker, CodeBlock* profiledBlock, CallLinkInfo& callLinkInfo,
     ExitSiteData exitSiteData)
@@ -282,8 +293,8 @@ CallLinkStatus CallLinkStatus::computeFor(
 
 bool CallLinkStatus::isClosureCall() const
 {
-    for (unsigned i = m_edges.size(); i--;) {
-        if (m_edges[i].callee().isClosureCall())
+    for (unsigned i = m_variants.size(); i--;) {
+        if (m_variants[i].isClosureCall())
             return true;
     }
     return false;
@@ -291,18 +302,7 @@ bool CallLinkStatus::isClosureCall() const
 
 void CallLinkStatus::makeClosureCall()
 {
-    ASSERT(!m_isProved);
-    for (unsigned i = m_edges.size(); i--;)
-        m_edges[i] = m_edges[i].despecifiedClosure();
-    
-    if (!ASSERT_DISABLED) {
-        // Doing this should not have created duplicates, because the CallEdgeProfile
-        // should despecify closures if doing so would reduce the number of known callees.
-        for (unsigned i = 0; i < m_edges.size(); ++i) {
-            for (unsigned j = i + 1; j < m_edges.size(); ++j)
-                ASSERT(m_edges[i].callee() != m_edges[j].callee());
-        }
-    }
+    m_variants = despecifiedVariantList(m_variants);
 }
 
 void CallLinkStatus::dump(PrintStream& out) const
@@ -320,7 +320,8 @@ void CallLinkStatus::dump(PrintStream& out) const
     if (m_couldTakeSlowPath)
         out.print(comma, "Could Take Slow Path");
     
-    out.print(listDump(m_edges));
+    if (!m_variants.isEmpty())
+        out.print(comma, listDump(m_variants));
 }
 
 } // namespace JSC
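
A note on the heuristic introduced in computeFromCallLinkInfo() above: the stub's edge list is sorted by observed count, the tail of the distribution is folded into the "unknown" bucket, and optimization is only attempted when the known callees clearly dominate. The following self-contained sketch restates that logic with a hypothetical Edge type and hard-coded stand-ins for the Options::maxPolymorphicCallVariantsForInlining(), Options::frequentCallThreshold() and Options::minimumCallToKnownRate() knobs; it is an illustration, not part of the patch:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <string>
    #include <vector>

    // Hypothetical stand-in for a CallEdge: a callee name plus its observed count.
    struct Edge {
        std::string callee;
        uint32_t count;
    };

    // Hypothetical result: the callees worth speculating on, plus whether the
    // generic slow path must remain reachable.
    struct Decision {
        std::vector<std::string> variants;
        bool couldTakeSlowPath;
    };

    constexpr std::size_t maxVariants = 5;    // ~ maxPolymorphicCallVariantsForInlining()
    constexpr uint32_t frequentThreshold = 2; // ~ frequentCallThreshold()
    constexpr double minimumKnownRate = 5;    // ~ minimumCallToKnownRate()

    Decision prune(std::vector<Edge> edges, uint32_t slowPathCount)
    {
        // Most frequently seen callees first.
        std::sort(edges.begin(), edges.end(),
            [] (const Edge& a, const Edge& b) { return a.count > b.count; });

        double callsToKnown = 0;
        double callsToUnknown = slowPathCount;
        Decision result = { {}, false };
        for (std::size_t i = 0; i < edges.size(); ++i) {
            // The tail of the distribution, i.e. anything beyond the N hottest
            // callees or anything too cold, counts as a call to something unknown.
            if (i >= maxVariants || edges[i].count < frequentThreshold)
                callsToUnknown += edges[i].count;
            else {
                callsToKnown += edges[i].count;
                result.variants.push_back(edges[i].callee);
            }
        }

        // Give up unless the distribution is skewed towards the callees we kept.
        if (result.variants.empty()
            || (callsToUnknown && callsToKnown / callsToUnknown < minimumKnownRate))
            return { {}, true }; // behave like takesSlowPath()

        result.couldTakeSlowPath = callsToUnknown > 0;
        return result;
    }

For example, prune({{"f", 100}, {"g", 40}, {"h", 1}}, 3) keeps f and g as variants and still leaves the slow path reachable on account of h and the three recorded slow-path hits.
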
index 6a3d388..545c1bc 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2012-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -27,6 +27,7 @@
 #define CallLinkStatus_h
 
 #include "CallLinkInfo.h"
+#include "CallVariant.h"
 #include "CodeOrigin.h"
 #include "CodeSpecializationKind.h"
 #include "ConcurrentJITLock.h"
@@ -48,7 +49,6 @@ public:
     CallLinkStatus()
         : m_couldTakeSlowPath(false)
         , m_isProved(false)
-        , m_canTrustCounts(false)
     {
     }
     
@@ -62,10 +62,9 @@ public:
     explicit CallLinkStatus(JSValue);
     
     CallLinkStatus(CallVariant variant)
-        : m_edges(1, CallEdge(variant, 1))
+        : m_variants(1, variant)
         , m_couldTakeSlowPath(false)
         , m_isProved(false)
-        , m_canTrustCounts(false)
     {
     }
     
@@ -109,19 +108,18 @@ public:
     static CallLinkStatus computeFor(
         CodeBlock*, CodeOrigin, const CallLinkInfoMap&, const ContextMap&);
     
-    bool isSet() const { return !m_edges.isEmpty() || m_couldTakeSlowPath; }
+    bool isSet() const { return !m_variants.isEmpty() || m_couldTakeSlowPath; }
     
     bool operator!() const { return !isSet(); }
     
     bool couldTakeSlowPath() const { return m_couldTakeSlowPath; }
     
-    CallEdgeList edges() const { return m_edges; }
-    unsigned size() const { return m_edges.size(); }
-    CallEdge at(unsigned i) const { return m_edges[i]; }
-    CallEdge operator[](unsigned i) const { return at(i); }
+    CallVariantList variants() const { return m_variants; }
+    unsigned size() const { return m_variants.size(); }
+    CallVariant at(unsigned i) const { return m_variants[i]; }
+    CallVariant operator[](unsigned i) const { return at(i); }
     bool isProved() const { return m_isProved; }
-    bool canOptimize() const { return !m_edges.isEmpty(); }
-    bool canTrustCounts() const { return m_canTrustCounts; }
+    bool canOptimize() const { return !m_variants.isEmpty(); }
     
     bool isClosureCall() const; // Returns true if any callee is a closure call.
     
@@ -132,15 +130,13 @@ private:
     
     static CallLinkStatus computeFromLLInt(const ConcurrentJITLocker&, CodeBlock*, unsigned bytecodeIndex);
 #if ENABLE(JIT)
-    static CallLinkStatus computeFromCallEdgeProfile(CallEdgeProfile*);
     static CallLinkStatus computeFromCallLinkInfo(
         const ConcurrentJITLocker&, CallLinkInfo&);
 #endif
     
-    CallEdgeList m_edges;
+    CallVariantList m_variants;
     bool m_couldTakeSlowPath;
     bool m_isProved;
-    bool m_canTrustCounts;
 };
 
 } // namespace JSC
index 5fe0f74..d7b736a 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -50,5 +50,34 @@ void CallVariant::dump(PrintStream& out) const
     out.print("Executable: ", *executable());
 }
 
+CallVariantList variantListWithVariant(const CallVariantList& list, CallVariant variantToAdd)
+{
+    ASSERT(variantToAdd);
+    CallVariantList result;
+    for (CallVariant variant : list) {
+        ASSERT(variant);
+        if (!!variantToAdd) {
+            if (variant == variantToAdd)
+                variantToAdd = CallVariant();
+            else if (variant.despecifiedClosure() == variantToAdd.despecifiedClosure()) {
+                variant = variant.despecifiedClosure();
+                variantToAdd = CallVariant();
+            }
+        }
+        result.append(variant);
+    }
+    if (!!variantToAdd)
+        result.append(variantToAdd);
+    return result;
+}
+
+CallVariantList despecifiedVariantList(const CallVariantList& list)
+{
+    CallVariantList result;
+    for (CallVariant variant : list)
+        result = variantListWithVariant(result, variant.despecifiedClosure());
+    return result;
+}
+
 } // namespace JSC
 
index 3657918..2514f72 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -56,11 +56,8 @@ namespace JSC {
 //     than the fact that they don't fall into any of the above categories.
 //
 // This class serves as a kind of union over these four things. It does so by just holding a
-// JSCell*. We determine which of the modes its in by doing type checks on the cell. Note that there
-// is no lifecycle management for the cell because this class is always used in contexts where we
-// either do custom weak reference logic over instances of this class (see CallEdgeProfile), or we
-// are inside the compiler and we assume that the compiler runs in between collections and so can
-// touch the heap without notifying anyone.
+// JSCell*. We determine which of the modes it's in by doing type checks on the cell. Note that we
+// cannot use WriteBarrier<> here because this gets used inside the compiler.
 
 class CallVariant {
 public:
@@ -181,6 +178,13 @@ struct CallVariantHash {
 
 typedef Vector<CallVariant, 1> CallVariantList;
 
+// Returns a new variant list by attempting to either append the given variant or merge it with one
+// of the variants we already have by despecifying closures.
+CallVariantList variantListWithVariant(const CallVariantList&, CallVariant);
+
+// Returns a new list where every element is despecified, and the list is deduplicated.
+CallVariantList despecifiedVariantList(const CallVariantList&);
+
 } // namespace JSC
 
 namespace WTF {
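
On the two helpers declared above: a despecified variant stands for "any closure of this executable", so merging a variant into a list may collapse two entries into a single despecified one. The following standalone model illustrates that behaviour with a hypothetical Variant type in place of CallVariant; it is a sketch, not part of the patch:

    #include <string>
    #include <utility>
    #include <vector>

    // Hypothetical model of CallVariant: a closure is identified by its executable
    // plus a closure id; despecifying drops the id, so all closures of a given
    // executable compare equal once despecified.
    struct Variant {
        std::string executable;
        int closureId = -1; // -1 means "already despecified"

        Variant despecified() const { return { executable, -1 }; }
        bool operator==(const Variant& other) const
        {
            return executable == other.executable && closureId == other.closureId;
        }
    };

    // Mirrors variantListWithVariant(): append the variant, unless an entry for the
    // same executable already exists, in which case merge by despecifying that entry.
    std::vector<Variant> withVariant(std::vector<Variant> list, const Variant& toAdd)
    {
        for (Variant& variant : list) {
            if (variant == toAdd)
                return list; // exact duplicate, nothing to do
            if (variant.executable == toAdd.executable) {
                variant = variant.despecified(); // merge by despecifying
                return list;
            }
        }
        list.push_back(toAdd);
        return list;
    }

    // Mirrors despecifiedVariantList(): despecify every element and deduplicate.
    std::vector<Variant> despecifiedList(const std::vector<Variant>& list)
    {
        std::vector<Variant> result;
        for (const Variant& variant : list)
            result = withVariant(std::move(result), variant.despecified());
        return result;
    }

In this model, despecifiedList({{"f", 0}, {"f", 1}, {"g", 0}}) collapses the two f closures into one despecified f entry and keeps g as is.
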
index e217256..0edafc3 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2008, 2009, 2010, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2008-2010, 2012-2015 Apple Inc. All rights reserved.
  * Copyright (C) 2008 Cameron Zwarich <cwzwarich@uwaterloo.ca>
  *
  * Redistribution and use in source and binary forms, with or without
@@ -2174,6 +2174,8 @@ CodeBlock::~CodeBlock()
     // destructor will try to remove nodes from our (no longer valid) linked list.
     while (m_incomingCalls.begin() != m_incomingCalls.end())
         m_incomingCalls.begin()->remove();
+    while (m_incomingPolymorphicCalls.begin() != m_incomingPolymorphicCalls.end())
+        m_incomingPolymorphicCalls.begin()->remove();
     
     // Note that our outgoing calls will be removed from other CodeBlocks'
     // m_incomingCalls linked lists through the execution of the ~CallLinkInfo
@@ -3040,6 +3042,12 @@ void CodeBlock::linkIncomingCall(ExecState* callerFrame, CallLinkInfo* incoming)
     noticeIncomingCall(callerFrame);
     m_incomingCalls.push(incoming);
 }
+
+void CodeBlock::linkIncomingPolymorphicCall(ExecState* callerFrame, PolymorphicCallNode* incoming)
+{
+    noticeIncomingCall(callerFrame);
+    m_incomingPolymorphicCalls.push(incoming);
+}
 #endif // ENABLE(JIT)
 
 void CodeBlock::unlinkIncomingCalls()
@@ -3047,11 +3055,13 @@ void CodeBlock::unlinkIncomingCalls()
     while (m_incomingLLIntCalls.begin() != m_incomingLLIntCalls.end())
         m_incomingLLIntCalls.begin()->unlink();
 #if ENABLE(JIT)
-    if (m_incomingCalls.isEmpty())
+    if (m_incomingCalls.isEmpty() && m_incomingPolymorphicCalls.isEmpty())
         return;
     RepatchBuffer repatchBuffer(this);
     while (m_incomingCalls.begin() != m_incomingCalls.end())
         m_incomingCalls.begin()->unlink(repatchBuffer);
+    while (m_incomingPolymorphicCalls.begin() != m_incomingPolymorphicCalls.end())
+        m_incomingPolymorphicCalls.begin()->unlink(repatchBuffer);
 #endif // ENABLE(JIT)
 }
 
@@ -3245,12 +3255,19 @@ void CodeBlock::noticeIncomingCall(ExecState* callerFrame)
     CodeBlock* callerCodeBlock = callerFrame->codeBlock();
     
     if (Options::verboseCallLink())
-        dataLog("Noticing call link from ", *callerCodeBlock, " to ", *this, "\n");
+        dataLog("Noticing call link from ", pointerDump(callerCodeBlock), " to ", *this, "\n");
     
+#if ENABLE(DFG_JIT)
     if (!m_shouldAlwaysBeInlined)
         return;
+    
+    if (!callerCodeBlock) {
+        m_shouldAlwaysBeInlined = false;
+        if (Options::verboseCallLink())
+            dataLog("    Clearing SABI because caller is native.\n");
+        return;
+    }
 
-#if ENABLE(DFG_JIT)
     if (!hasBaselineJITProfiling())
         return;
 
@@ -3278,6 +3295,13 @@ void CodeBlock::noticeIncomingCall(ExecState* callerFrame)
         return;
     }
     
+    if (JITCode::isOptimizingJIT(callerCodeBlock->jitType())) {
+        m_shouldAlwaysBeInlined = false;
+        if (Options::verboseCallLink())
+            dataLog("    Clearing SABI bcause caller was already optimized.\n");
+        return;
+    }
+    
     if (callerCodeBlock->codeType() != FunctionCode) {
         // If the caller is either eval or global code, assume that that won't be
         // optimized anytime soon. For eval code this is particularly true since we
@@ -3298,8 +3322,11 @@ void CodeBlock::noticeIncomingCall(ExecState* callerFrame)
         m_shouldAlwaysBeInlined = false;
         return;
     }
-
-    RELEASE_ASSERT(callerCodeBlock->m_capabilityLevelState != DFG::CapabilityLevelNotSet);
+    
+    if (callerCodeBlock->m_capabilityLevelState == DFG::CapabilityLevelNotSet) {
+        dataLog("In call from ", *callerCodeBlock, " ", callerFrame->codeOrigin(), " to ", *this, ": caller's DFG capability level is not set.\n");
+        CRASH();
+    }
     
     if (canCompile(callerCodeBlock->m_capabilityLevelState))
         return;
index 5f3d3c6..6580557 100644
@@ -229,11 +229,7 @@ public:
     void unlinkCalls();
         
     void linkIncomingCall(ExecState* callerFrame, CallLinkInfo*);
-        
-    bool isIncomingCallAlreadyLinked(CallLinkInfo* incoming)
-    {
-        return m_incomingCalls.isOnList(incoming);
-    }
+    void linkIncomingPolymorphicCall(ExecState* callerFrame, PolymorphicCallNode*);
 #endif // ENABLE(JIT)
 
     void linkIncomingCall(ExecState* callerFrame, LLIntCallLinkInfo*);
@@ -1077,6 +1073,7 @@ private:
     Vector<ByValInfo> m_byValInfos;
     Bag<CallLinkInfo> m_callLinkInfos;
     SentinelLinkedList<CallLinkInfo, BasicRawSentinelNode<CallLinkInfo>> m_incomingCalls;
+    SentinelLinkedList<PolymorphicCallNode, BasicRawSentinelNode<PolymorphicCallNode>> m_incomingPolymorphicCalls;
 #endif
     std::unique_ptr<CompactJITCodeMap> m_jitCodeMap;
 #if ENABLE(DFG_JIT)
index c7e6a46..d7b5b0f 100644
@@ -741,7 +741,7 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
         
     case ArithSin: {
         JSValue child = forNode(node->child1()).value();
-        if (child && child.isNumber()) {
+        if (false && child && child.isNumber()) {
             setConstant(node, jsDoubleNumber(sin(child.asNumber())));
             break;
         }
@@ -751,7 +751,7 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
     
     case ArithCos: {
         JSValue child = forNode(node->child1()).value();
-        if (child && child.isNumber()) {
+        if (false && child && child.isNumber()) {
             setConstant(node, jsDoubleNumber(cos(child.asNumber())));
             break;
         }
@@ -1968,14 +1968,6 @@ bool AbstractInterpreter<AbstractStateType>::executeEffects(unsigned clobberLimi
         forNode(node).makeHeapTop();
         break;
 
-    case ProfiledCall:
-    case ProfiledConstruct:
-        if (forNode(m_graph.varArgChild(node, 0)).m_value)
-            m_state.setFoundConstants(true);
-        clobberWorld(node->origin.semantic, clobberLimit);
-        forNode(node).makeHeapTop();
-        break;
-
     case ForceOSRExit:
     case CheckBadCell:
         m_state.setIsValid(false);
index ed809f2..60e61be 100644
@@ -1,5 +1,5 @@
  /*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -675,7 +675,7 @@ private:
         if (parameterSlots > m_parameterSlots)
             m_parameterSlots = parameterSlots;
 
-        int dummyThisArgument = op == Call || op == NativeCall || op == ProfiledCall ? 0 : 1;
+        int dummyThisArgument = op == Call || op == NativeCall ? 0 : 1;
         for (int i = 0 + dummyThisArgument; i < argCount; ++i)
             addVarArgChild(get(virtualRegisterForArgument(i, registerOffset)));
 
@@ -1044,17 +1044,8 @@ void ByteCodeParser::handleCall(
     if (callTarget->hasConstant())
         callLinkStatus = CallLinkStatus(callTarget->asJSValue()).setIsProved(true);
     
-    if ((!callLinkStatus.canOptimize() || callLinkStatus.size() != 1)
-        && !isFTL(m_graph.m_plan.mode) && Options::useFTLJIT()
-        && InlineCallFrame::isNormalCall(kind)
-        && CallEdgeLog::isEnabled()
-        && Options::dfgDoesCallEdgeProfiling()) {
-        ASSERT(op == Call || op == Construct);
-        if (op == Call)
-            op = ProfiledCall;
-        else
-            op = ProfiledConstruct;
-    }
+    if (Options::verboseDFGByteCodeParsing())
+        dataLog("    Handling call at ", currentCodeOrigin(), ": ", callLinkStatus, "\n");
     
     if (!callLinkStatus.canOptimize()) {
         // Oddly, this conflates calls that haven't executed with calls that behaved sufficiently polymorphically
@@ -1076,17 +1067,17 @@ void ByteCodeParser::handleCall(
     
 #if ENABLE(FTL_NATIVE_CALL_INLINING)
     if (isFTL(m_graph.m_plan.mode) && Options::optimizeNativeCalls() && callLinkStatus.size() == 1 && !callLinkStatus.couldTakeSlowPath()) {
-        CallVariant callee = callLinkStatus[0].callee();
+        CallVariant callee = callLinkStatus[0];
         JSFunction* function = callee.function();
         CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
         if (function && function->isHostFunction()) {
             emitFunctionChecks(callee, callTarget, registerOffset, specializationKind);
             callOpInfo = OpInfo(m_graph.freeze(function));
 
-            if (op == Call || op == ProfiledCall)
+            if (op == Call)
                 op = NativeCall;
             else {
-                ASSERT(op == Construct || op == ProfiledConstruct);
+                ASSERT(op == Construct);
                 op = NativeConstruct;
             }
         }
@@ -1426,13 +1417,13 @@ bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, con
     // this in cases where we don't need control flow diamonds to check the callee.
     if (!callLinkStatus.couldTakeSlowPath() && callLinkStatus.size() == 1) {
         emitFunctionChecks(
-            callLinkStatus[0].callee(), callTargetNode, registerOffset, specializationKind);
+            callLinkStatus[0], callTargetNode, registerOffset, specializationKind);
         bool result = attemptToInlineCall(
-            callTargetNode, resultOperand, callLinkStatus[0].callee(), registerOffset,
+            callTargetNode, resultOperand, callLinkStatus[0], registerOffset,
             argumentCountIncludingThis, nextOffset, kind, CallerDoesNormalLinking, prediction,
             inliningBalance);
         if (!result && !callLinkStatus.isProved())
-            undoFunctionChecks(callLinkStatus[0].callee());
+            undoFunctionChecks(callLinkStatus[0]);
         if (verbose) {
             dataLog("Done inlining (simple).\n");
             dataLog("Stack: ", currentCodeOrigin(), "\n");
@@ -1462,7 +1453,7 @@ bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, con
     bool allAreClosureCalls = true;
     bool allAreDirectCalls = true;
     for (unsigned i = callLinkStatus.size(); i--;) {
-        if (callLinkStatus[i].callee().isClosureCall())
+        if (callLinkStatus[i].isClosureCall())
             allAreDirectCalls = false;
         else
             allAreClosureCalls = false;
@@ -1475,9 +1466,8 @@ bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, con
         thingToSwitchOn = addToGraph(GetExecutable, callTargetNode);
     else {
         // FIXME: We should be able to handle this case, but it's tricky and we don't know of cases
-        // where it would be beneficial. Also, CallLinkStatus would make all callees appear like
-        // closure calls if any calls were closure calls - except for calls to internal functions.
-        // So this will only arise if some callees are internal functions and others are closures.
+        // where it would be beneficial. It might be best to handle these cases as if all calls were
+        // closure calls.
         // https://bugs.webkit.org/show_bug.cgi?id=136020
         if (verbose) {
             dataLog("Bailing inlining (mix).\n");
@@ -1517,7 +1507,7 @@ bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, con
     // to the continuation block, which we create last.
     Vector<BasicBlock*> landingBlocks;
     
-    // We make force this true if we give up on inlining any of the edges.
+    // We may force this true if we give up on inlining any of the edges.
     bool couldTakeSlowPath = callLinkStatus.couldTakeSlowPath();
     
     if (verbose)
@@ -1533,7 +1523,7 @@ bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, con
         Node* myCallTargetNode = getDirect(calleeReg);
         
         bool inliningResult = attemptToInlineCall(
-            myCallTargetNode, resultOperand, callLinkStatus[i].callee(), registerOffset,
+            myCallTargetNode, resultOperand, callLinkStatus[i], registerOffset,
             argumentCountIncludingThis, nextOffset, kind, CallerLinksManually, prediction,
             inliningBalance);
         
@@ -1552,10 +1542,10 @@ bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, con
         
         JSCell* thingToCaseOn;
         if (allAreDirectCalls)
-            thingToCaseOn = callLinkStatus[i].callee().nonExecutableCallee();
+            thingToCaseOn = callLinkStatus[i].nonExecutableCallee();
         else {
             ASSERT(allAreClosureCalls);
-            thingToCaseOn = callLinkStatus[i].callee().executable();
+            thingToCaseOn = callLinkStatus[i].executable();
         }
         data.cases.append(SwitchCase(m_graph.freeze(thingToCaseOn), block.get()));
         m_currentIndex = nextOffset;
@@ -1567,7 +1557,7 @@ bool ByteCodeParser::handleInlining(Node* callTargetNode, int resultOperand, con
         landingBlocks.append(m_currentBlock);
 
         if (verbose)
-            dataLog("Finished inlining ", callLinkStatus[i].callee(), " at ", currentCodeOrigin(), ".\n");
+            dataLog("Finished inlining ", callLinkStatus[i], " at ", currentCodeOrigin(), ".\n");
     }
     
     RefPtr<BasicBlock> slowPathBlock = adoptRef(
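
For readers following the handleInlining() changes above: for a polymorphic site the parser emits a switch on either the callee cell (when every variant is a direct call) or its executable (when every variant is a closure call), one block per inlined callee, a shared slow-path block holding a generic call, and a common continuation that every case jumps to. A toy model of that diamond, with hypothetical names and std::function bodies standing in for DFG basic blocks (not part of the patch):

    #include <functional>
    #include <string>
    #include <unordered_map>

    // Toy model of the control-flow diamond: the key plays the role of the Switch
    // on callee cell or executable, the map values play the role of the per-callee
    // inlined blocks, and slowPath plays the role of the block with a generic Call.
    int dispatchCall(
        const std::string& calleeKey,
        const std::unordered_map<std::string, std::function<int()>>& inlinedCases,
        const std::function<int()>& slowPath)
    {
        int result;
        auto it = inlinedCases.find(calleeKey);
        if (it != inlinedCases.end())
            result = it->second(); // the block that inlined this particular callee
        else
            result = slowPath();   // default case: leave it as a normal call
        return result;             // every case lands on this continuation point
    }
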
index 6c6927b..3b4d48e 100644
@@ -363,8 +363,6 @@ void clobberize(Graph& graph, Node* node, const ReadFunctor& read, const WriteFu
     case ArrayPop:
     case Call:
     case Construct:
-    case ProfiledCall:
-    case ProfiledConstruct:
     case NativeCall:
     case NativeConstruct:
     case ToPrimitive:
index 630f30c..31916d8 100644
@@ -420,20 +420,6 @@ private:
                 break;
             }
                 
-            case ProfiledCall:
-            case ProfiledConstruct: {
-                if (!m_state.forNode(m_graph.varArgChild(node, 0)).m_value)
-                    break;
-                
-                // If we were able to prove that the callee is a constant then the normal call
-                // inline cache will record this callee. This means that there is no need to do any
-                // additional profiling.
-                m_interpreter.execute(indexInBlock);
-                node->setOp(node->op() == ProfiledCall ? Call : Construct);
-                eliminated = true;
-                break;
-            }
-
             default:
                 break;
             }
index d635775..7c6090a 100644
@@ -118,8 +118,6 @@ bool doesGC(Graph& graph, Node* node)
     case Construct:
     case NativeCall:
     case NativeConstruct:
-    case ProfiledCall:
-    case ProfiledConstruct:
     case Breakpoint:
     case ProfileWillCall:
     case ProfileDidCall:
index e6c8e8b..a822079 100644
@@ -79,20 +79,17 @@ static CompilationResult compileImpl(
     if (mode == DFGMode) {
         vm.getCTIStub(linkCallThunkGenerator);
         vm.getCTIStub(linkConstructThunkGenerator);
-        vm.getCTIStub(linkClosureCallThunkGenerator);
+        vm.getCTIStub(linkPolymorphicCallThunkGenerator);
         vm.getCTIStub(virtualCallThunkGenerator);
         vm.getCTIStub(virtualConstructThunkGenerator);
     } else {
         vm.getCTIStub(linkCallThatPreservesRegsThunkGenerator);
         vm.getCTIStub(linkConstructThatPreservesRegsThunkGenerator);
-        vm.getCTIStub(linkClosureCallThatPreservesRegsThunkGenerator);
+        vm.getCTIStub(linkPolymorphicCallThatPreservesRegsThunkGenerator);
         vm.getCTIStub(virtualCallThatPreservesRegsThunkGenerator);
         vm.getCTIStub(virtualConstructThatPreservesRegsThunkGenerator);
     }
     
-    if (CallEdgeLog::isEnabled())
-        vm.ensureCallEdgeLog().processLog();
-
     if (vm.typeProfiler())
         vm.typeProfilerLog()->processLogEntries(ASCIILiteral("Preparing for DFG compilation."));
     
index 5208b52..f50cb69 100644
@@ -1202,8 +1202,6 @@ private:
         case AllocationProfileWatchpoint:
         case Call:
         case Construct:
-        case ProfiledCall:
-        case ProfiledConstruct:
         case ProfileControlFlow:
         case NativeCall:
         case NativeConstruct:
index 4606720..e884f5e 100644
@@ -1064,8 +1064,6 @@ struct Node {
         case GetMyArgumentByValSafe:
         case Call:
         case Construct:
-        case ProfiledCall:
-        case ProfiledConstruct:
         case NativeCall:
         case NativeConstruct:
         case GetByOffset:
index 8e79564..f4cfe75 100644
@@ -217,8 +217,6 @@ namespace JSC { namespace DFG {
     /* Calls. */\
     macro(Call, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \
     macro(Construct, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \
-    macro(ProfiledCall, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \
-    macro(ProfiledConstruct, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \
     macro(NativeCall, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \
     macro(NativeConstruct, NodeResultJS | NodeMustGenerate | NodeHasVarArgs | NodeClobbersWorld) \
     \
index 92e9cc2..33641e9 100644
@@ -1203,6 +1203,11 @@ void JIT_OPERATION triggerTierUpNow(ExecState* exec)
     DeferGC deferGC(vm->heap);
     CodeBlock* codeBlock = exec->codeBlock();
     
+    if (codeBlock->jitType() != JITCode::DFGJIT) {
+        dataLog("Unexpected code block in DFG->FTL tier-up: ", *codeBlock, "\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+    
     JITCode* jitCode = codeBlock->jitCode()->dfg();
     
     if (Options::verboseOSR()) {
@@ -1222,6 +1227,11 @@ char* JIT_OPERATION triggerOSREntryNow(
     DeferGC deferGC(vm->heap);
     CodeBlock* codeBlock = exec->codeBlock();
     
+    if (codeBlock->jitType() != JITCode::DFGJIT) {
+        dataLog("Unexpected code block in DFG->FTL tier-up: ", *codeBlock, "\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
+    
     JITCode* jitCode = codeBlock->jitCode()->dfg();
     
     if (Options::verboseOSR()) {
index 9d52d4f..b5b3ab5 100644
@@ -188,8 +188,6 @@ private:
         case GetDirectPname:
         case Call:
         case Construct:
-        case ProfiledCall:
-        case ProfiledConstruct:
         case NativeCall:
         case NativeConstruct:
         case GetGlobalVar:
index 32c3a2b..8e8ae07 100644
@@ -189,8 +189,6 @@ bool safeToExecute(AbstractStateType& state, Graph& graph, Node* node)
     case CompareStrictEq:
     case Call:
     case Construct:
-    case ProfiledCall:
-    case ProfiledConstruct:
     case NewObject:
     case NewArray:
     case NewArrayWithSize:
index 7a504dd..02186f8 100644
@@ -638,9 +638,9 @@ void SpeculativeJIT::compileMiscStrictEq(Node* node)
 
 void SpeculativeJIT::emitCall(Node* node)
 {
-    bool isCall = node->op() == Call || node->op() == ProfiledCall;
+    bool isCall = node->op() == Call;
     if (!isCall)
-        ASSERT(node->op() == Construct || node->op() == ProfiledConstruct);
+        ASSERT(node->op() == Construct);
 
     // For constructors, the this argument is not passed but we have to make space
     // for it.
@@ -689,11 +689,6 @@ void SpeculativeJIT::emitCall(Node* node)
     
     CallLinkInfo* info = m_jit.codeBlock()->addCallLinkInfo();
 
-    if (node->op() == ProfiledCall || node->op() == ProfiledConstruct) {
-        m_jit.vm()->callEdgeLog->emitLogCode(
-            m_jit, info->callEdgeProfile, callee.jsValueRegs());
-    }
-    
     slowPath.append(branchNotCell(callee.jsValueRegs()));
     slowPath.append(m_jit.branchPtrWithPatch(MacroAssembler::NotEqual, calleePayloadGPR, targetToCheck));
 
@@ -4169,8 +4164,6 @@ void SpeculativeJIT::compile(Node* node)
 
     case Call:
     case Construct:
-    case ProfiledCall:
-    case ProfiledConstruct:
         emitCall(node);
         break;
 
index 6bfa562..803d6c5 100644
@@ -624,9 +624,9 @@ void SpeculativeJIT::compileMiscStrictEq(Node* node)
 
 void SpeculativeJIT::emitCall(Node* node)
 {
-    bool isCall = node->op() == Call || node->op() == ProfiledCall;
+    bool isCall = node->op() == Call;
     if (!isCall)
-        DFG_ASSERT(m_jit.graph(), node, node->op() == Construct || node->op() == ProfiledConstruct);
+        DFG_ASSERT(m_jit.graph(), node, node->op() == Construct);
     
     // For constructors, the this argument is not passed but we have to make space
     // for it.
@@ -669,11 +669,6 @@ void SpeculativeJIT::emitCall(Node* node)
     
     CallLinkInfo* callLinkInfo = m_jit.codeBlock()->addCallLinkInfo();
     
-    if (node->op() == ProfiledCall || node->op() == ProfiledConstruct) {
-        m_jit.vm()->callEdgeLog->emitLogCode(
-            m_jit, callLinkInfo->callEdgeProfile, JSValueRegs(calleeGPR));
-    }
-
     slowPath = m_jit.branchPtrWithPatch(MacroAssembler::NotEqual, calleeGPR, targetToCheck, MacroAssembler::TrustedImmPtr(0));
 
     JITCompiler::Call fastCall = m_jit.nearCall();
@@ -4236,8 +4231,6 @@ void SpeculativeJIT::compile(Node* node)
 
     case Call:
     case Construct:
-    case ProfiledCall:
-    case ProfiledConstruct:
         emitCall(node);
         break;
         
index d1b6e7a..09c22b8 100644
@@ -50,17 +50,13 @@ public:
         if (!Options::useFTLJIT())
             return false;
         
-        if (m_graph.m_profiledBlock->m_didFailFTLCompilation) {
-            removeFTLProfiling();
+        if (m_graph.m_profiledBlock->m_didFailFTLCompilation)
             return false;
-        }
         
 #if ENABLE(FTL_JIT)
         FTL::CapabilityLevel level = FTL::canCompile(m_graph);
-        if (level == FTL::CannotCompile) {
-            removeFTLProfiling();
+        if (level == FTL::CannotCompile)
             return false;
-        }
         
         if (!Options::enableOSREntryToFTL())
             level = FTL::CanCompile;
@@ -122,32 +118,6 @@ public:
         return false;
 #endif // ENABLE(FTL_JIT)
     }
-
-private:
-    void removeFTLProfiling()
-    {
-        for (BlockIndex blockIndex = m_graph.numBlocks(); blockIndex--;) {
-            BasicBlock* block = m_graph.block(blockIndex);
-            if (!block)
-                continue;
-            
-            for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
-                Node* node = block->at(nodeIndex);
-                switch (node->op()) {
-                case ProfiledCall:
-                    node->setOp(Call);
-                    break;
-                    
-                case ProfiledConstruct:
-                    node->setOp(Construct);
-                    break;
-                    
-                default:
-                    break;
-                }
-            }
-        }
-    }
 };
 
 bool performTierUpCheckInjection(Graph& graph)
index 61b31db..6d90d69 100644
@@ -176,11 +176,6 @@ inline CapabilityLevel canCompile(Node* node)
     case MaterializeNewObject:
         // These are OK.
         break;
-    case ProfiledCall:
-    case ProfiledConstruct:
-        // These are OK not because the FTL can support them, but because if the DFG sees one of
-        // these then the FTL will see a normal Call/Construct.
-        break;
     case Identity:
         // No backend handles this because it will be optimized out. But we may check
         // for capabilities before optimization. It would be a deep error to remove this
index 134232a..035dc78 100644
@@ -992,11 +992,6 @@ void Heap::collect(HeapOperation collectionType)
         vm()->typeProfilerLog()->processLogEntries(ASCIILiteral("GC"));
     }
     
-    if (vm()->callEdgeLog) {
-        DeferGCForAWhile awhile(*this);
-        vm()->callEdgeLog->processLog();
-    }
-    
     RELEASE_ASSERT(!m_deferralDepth);
     ASSERT(vm()->currentThreadIsHoldingAPILock());
     RELEASE_ASSERT(vm()->atomicStringTable() == wtfThreadData().atomicStringTable());
index e65ed0a..f6a4967 100644
@@ -54,6 +54,7 @@ namespace JSC {
 //     int value = switch.caseValue();
 //     unsigned index = switch.caseIndex(); // index into casesVector, above
 //     ... // generate code for this case
+//     ... = jit.jump(); // you have to jump out yourself; falling through causes undefined behavior
 // }
 // switch.fallThrough().link(&jit);
 
diff --git a/Source/JavaScriptCore/jit/ClosureCallStubRoutine.cpp b/Source/JavaScriptCore/jit/ClosureCallStubRoutine.cpp
deleted file mode 100644
index b4b3352..0000000
+++ /dev/null
@@ -1,60 +0,0 @@
-/*
- * Copyright (C) 2012, 2014, 2015 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#include "config.h"
-#include "ClosureCallStubRoutine.h"
-
-#if ENABLE(JIT)
-
-#include "Executable.h"
-#include "Heap.h"
-#include "VM.h"
-#include "JSCInlines.h"
-#include "SlotVisitor.h"
-#include "Structure.h"
-
-namespace JSC {
-
-ClosureCallStubRoutine::ClosureCallStubRoutine(
-    const MacroAssemblerCodeRef& code, VM& vm, const JSCell* owner,
-    ExecutableBase* executable)
-    : GCAwareJITStubRoutine(code, vm)
-    , m_executable(vm, owner, executable)
-{
-}
-
-ClosureCallStubRoutine::~ClosureCallStubRoutine()
-{
-}
-
-void ClosureCallStubRoutine::markRequiredObjectsInternal(SlotVisitor& visitor)
-{
-    visitor.append(&m_executable);
-}
-
-} // namespace JSC
-
-#endif // ENABLE(JIT)
-
diff --git a/Source/JavaScriptCore/jit/ClosureCallStubRoutine.h b/Source/JavaScriptCore/jit/ClosureCallStubRoutine.h
deleted file mode 100644
index 4e93bf6..0000000
+++ /dev/null
@@ -1,58 +0,0 @@
-/*
- * Copyright (C) 2012, 2014, 2015 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
- * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
- * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
- * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
- * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
- * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
- * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
- * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
- */
-
-#ifndef ClosureCallStubRoutine_h
-#define ClosureCallStubRoutine_h
-
-#if ENABLE(JIT)
-
-#include "CodeOrigin.h"
-#include "GCAwareJITStubRoutine.h"
-
-namespace JSC {
-
-class ClosureCallStubRoutine : public GCAwareJITStubRoutine {
-public:
-    ClosureCallStubRoutine(
-        const MacroAssemblerCodeRef&, VM&, const JSCell* owner,
-        ExecutableBase*);
-    
-    virtual ~ClosureCallStubRoutine();
-    
-    ExecutableBase* executable() const { return m_executable.get(); }
-
-protected:
-    virtual void markRequiredObjectsInternal(SlotVisitor&) override;
-
-private:
-    WriteBarrier<ExecutableBase> m_executable;
-};
-
-} // namespace JSC
-
-#endif // ENABLE(JIT)
-
-#endif // ClosureCallStubRoutine_h
-
index 0abdf78..218d1e6 100644 (file)
@@ -214,10 +214,6 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca
     
     CallLinkInfo* info = m_codeBlock->addCallLinkInfo();
 
-    if (CallEdgeLog::isEnabled() && shouldEmitProfiling()
-        && Options::baselineDoesCallEdgeProfiling())
-        m_vm->ensureCallEdgeLog().emitLogCode(*this, info->callEdgeProfile, JSValueRegs(regT0));
-
     if (opcodeID == op_call_eval) {
         compileCallEval(instruction);
         return;
index 5c5e0e7..713803f 100644 (file)
@@ -278,12 +278,6 @@ void JIT::compileOpCall(OpcodeID opcodeID, Instruction* instruction, unsigned ca
 
     CallLinkInfo* info = m_codeBlock->addCallLinkInfo();
 
-    if (CallEdgeLog::isEnabled() && shouldEmitProfiling()
-        && Options::baselineDoesCallEdgeProfiling()) {
-        m_vm->ensureCallEdgeLog().emitLogCode(
-            *this, info->callEdgeProfile, JSValueRegs(regT1, regT0));
-    }
-
     if (opcodeID == op_call_eval) {
         compileCallEval(instruction);
         return;
index 3885c06..7222cd4 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -770,47 +770,12 @@ inline char* virtualFor(
     return virtualForWithFunction(execCallee, kind, registers, calleeAsFunctionCellIgnored);
 }
 
-static bool attemptToOptimizeClosureCall(
-    ExecState* execCallee, RegisterPreservationMode registers, JSCell* calleeAsFunctionCell,
-    CallLinkInfo& callLinkInfo)
-{
-    if (!calleeAsFunctionCell)
-        return false;
-    
-    JSFunction* callee = jsCast<JSFunction*>(calleeAsFunctionCell);
-    JSFunction* oldCallee = callLinkInfo.callee.get();
-    
-    if (!oldCallee || oldCallee->executable() != callee->executable())
-        return false;
-    
-    ASSERT(callee->executable()->hasJITCodeForCall());
-    MacroAssemblerCodePtr codePtr =
-        callee->executable()->generatedJITCodeForCall()->addressForCall(
-            *execCallee->callerFrame()->codeBlock()->vm(), callee->executable(),
-            ArityCheckNotRequired, registers);
-    
-    CodeBlock* codeBlock;
-    if (callee->executable()->isHostFunction())
-        codeBlock = 0;
-    else {
-        codeBlock = jsCast<FunctionExecutable*>(callee->executable())->codeBlockForCall();
-        if (execCallee->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo.callType == CallLinkInfo::CallVarargs || callLinkInfo.callType == CallLinkInfo::ConstructVarargs)
-            return false;
-    }
-    
-    linkClosureCall(
-        execCallee, callLinkInfo, codeBlock, callee->executable(), codePtr, registers);
-    
-    return true;
-}
-
-char* JIT_OPERATION operationLinkClosureCall(ExecState* execCallee, CallLinkInfo* callLinkInfo)
+char* JIT_OPERATION operationLinkPolymorphicCall(ExecState* execCallee, CallLinkInfo* callLinkInfo)
 {
     JSCell* calleeAsFunctionCell;
     char* result = virtualForWithFunction(execCallee, CodeForCall, RegisterPreservationNotRequired, calleeAsFunctionCell);
 
-    if (!attemptToOptimizeClosureCall(execCallee, RegisterPreservationNotRequired, calleeAsFunctionCell, *callLinkInfo))
-        linkSlowFor(execCallee, *callLinkInfo, CodeForCall, RegisterPreservationNotRequired);
+    linkPolymorphicCall(execCallee, *callLinkInfo, CallVariant(calleeAsFunctionCell), RegisterPreservationNotRequired);
     
     return result;
 }
@@ -825,13 +790,12 @@ char* JIT_OPERATION operationVirtualConstruct(ExecState* execCallee, CallLinkInf
     return virtualFor(execCallee, CodeForConstruct, RegisterPreservationNotRequired);
 }
 
-char* JIT_OPERATION operationLinkClosureCallThatPreservesRegs(ExecState* execCallee, CallLinkInfo* callLinkInfo)
+char* JIT_OPERATION operationLinkPolymorphicCallThatPreservesRegs(ExecState* execCallee, CallLinkInfo* callLinkInfo)
 {
     JSCell* calleeAsFunctionCell;
     char* result = virtualForWithFunction(execCallee, CodeForCall, MustPreserveRegisters, calleeAsFunctionCell);
 
-    if (!attemptToOptimizeClosureCall(execCallee, MustPreserveRegisters, calleeAsFunctionCell, *callLinkInfo))
-        linkSlowFor(execCallee, *callLinkInfo, CodeForCall, MustPreserveRegisters);
+    linkPolymorphicCall(execCallee, *callLinkInfo, CallVariant(calleeAsFunctionCell), MustPreserveRegisters);
     
     return result;
 }
@@ -1027,6 +991,10 @@ SlowPathReturnType JIT_OPERATION operationOptimize(ExecState* exec, int32_t byte
     DeferGCForAWhile deferGC(vm.heap);
     
     CodeBlock* codeBlock = exec->codeBlock();
+    if (codeBlock->jitType() != JITCode::BaselineJIT) {
+        dataLog("Unexpected code block in Baseline->DFG tier-up: ", *codeBlock, "\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
     
     if (bytecodeIndex) {
         // If we're attempting to OSR from a loop, assume that this should be
index 804291e..c43bf7b 100644 (file)
@@ -246,12 +246,12 @@ void JIT_OPERATION operationPutByValGeneric(ExecState*, EncodedJSValue, EncodedJ
 void JIT_OPERATION operationDirectPutByValGeneric(ExecState*, EncodedJSValue, EncodedJSValue, EncodedJSValue) WTF_INTERNAL;
 EncodedJSValue JIT_OPERATION operationCallEval(ExecState*, ExecState*) WTF_INTERNAL;
 char* JIT_OPERATION operationLinkCall(ExecState*, CallLinkInfo*) WTF_INTERNAL;
-char* JIT_OPERATION operationLinkClosureCall(ExecState*, CallLinkInfo*) WTF_INTERNAL;
+char* JIT_OPERATION operationLinkPolymorphicCall(ExecState*, CallLinkInfo*) WTF_INTERNAL;
 char* JIT_OPERATION operationVirtualCall(ExecState*, CallLinkInfo*) WTF_INTERNAL;
 char* JIT_OPERATION operationVirtualConstruct(ExecState*, CallLinkInfo*) WTF_INTERNAL;
 char* JIT_OPERATION operationLinkConstruct(ExecState*, CallLinkInfo*) WTF_INTERNAL;
 char* JIT_OPERATION operationLinkCallThatPreservesRegs(ExecState*, CallLinkInfo*) WTF_INTERNAL;
-char* JIT_OPERATION operationLinkClosureCallThatPreservesRegs(ExecState*, CallLinkInfo*) WTF_INTERNAL;
+char* JIT_OPERATION operationLinkPolymorphicCallThatPreservesRegs(ExecState*, CallLinkInfo*) WTF_INTERNAL;
 char* JIT_OPERATION operationVirtualCallThatPreservesRegs(ExecState*, CallLinkInfo*) WTF_INTERNAL;
 char* JIT_OPERATION operationVirtualConstructThatPreservesRegs(ExecState*, CallLinkInfo*) WTF_INTERNAL;
 char* JIT_OPERATION operationLinkConstructThatPreservesRegs(ExecState*, CallLinkInfo*) WTF_INTERNAL;
@@ -391,13 +391,13 @@ inline P_JITOperation_ECli operationVirtualFor(
     return 0;
 }
 
-inline P_JITOperation_ECli operationLinkClosureCallFor(RegisterPreservationMode registers)
+inline P_JITOperation_ECli operationLinkPolymorphicCallFor(RegisterPreservationMode registers)
 {
     switch (registers) {
     case RegisterPreservationNotRequired:
-        return operationLinkClosureCall;
+        return operationLinkPolymorphicCall;
     case MustPreserveRegisters:
-        return operationLinkClosureCallThatPreservesRegs;
+        return operationLinkPolymorphicCallThatPreservesRegs;
     }
     RELEASE_ASSERT_NOT_REACHED();
     return 0;
index 163705d..849492b 100644 (file)
@@ -141,6 +141,9 @@ public:
         return true;
     }
     
+    // Return true if this routine is still valid after visiting weak references; return false if it
+    // is now invalid. If you return false, you usually do not need to do any clearing, because the
+    // expectation is that the routine will simply be destroyed.
     virtual bool visitWeak(RepatchBuffer&);
 
 protected:
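
This contract is what the PolymorphicCallStubRoutine::visitWeak added later in this patch follows. As a hedged illustration only, a subclass honoring it would look roughly like this; the class and member names are made up for the sketch:

    // Sketch: a stub routine that weakly references some cells.
    bool MyStubRoutine::visitWeak(RepatchBuffer&)
    {
        for (auto& cell : m_weaklyHeldCells) { // hypothetical member holding WriteBarrier<JSCell>s
            if (!Heap::isMarked(cell.get()))
                return false; // now invalid; no clearing needed, the routine will simply be destroyed
        }
        return true; // still valid
    }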
index 34c20db..b9da498 100644 (file)
@@ -31,6 +31,7 @@
 #include "MacroAssembler.h"
 #include "SlotVisitor.h"
 #include "UnusedPointer.h"
+#include "VM.h"
 #include "WriteBarrier.h"
 
 namespace JSC {
diff --git a/Source/JavaScriptCore/jit/PolymorphicCallStubRoutine.cpp b/Source/JavaScriptCore/jit/PolymorphicCallStubRoutine.cpp
new file mode 100644 (file)
index 0000000..a9382da
--- /dev/null
@@ -0,0 +1,117 @@
+/*
+ * Copyright (C) 2015 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "PolymorphicCallStubRoutine.h"
+
+#if ENABLE(JIT)
+
+#include "CallLinkInfo.h"
+#include "CodeBlock.h"
+#include "JSCInlines.h"
+#include "LinkBuffer.h"
+
+namespace JSC {
+
+PolymorphicCallNode::~PolymorphicCallNode()
+{
+    if (isOnList())
+        remove();
+}
+
+void PolymorphicCallNode::unlink(RepatchBuffer& repatchBuffer)
+{
+    if (Options::showDisassembly())
+        dataLog("Unlinking polymorphic call at ", m_callLinkInfo->callReturnLocation, ", ", m_callLinkInfo->codeOrigin, "\n");
+    
+    m_callLinkInfo->unlink(repatchBuffer);
+    
+    if (isOnList())
+        remove();
+}
+
+void PolymorphicCallCase::dump(PrintStream& out) const
+{
+    out.print("<variant = ", m_variant, ", codeBlock = ", pointerDump(m_codeBlock), ">");
+}
+
+PolymorphicCallStubRoutine::PolymorphicCallStubRoutine(
+    const MacroAssemblerCodeRef& codeRef, VM& vm, const JSCell* owner, ExecState* callerFrame,
+    CallLinkInfo& info, const Vector<PolymorphicCallCase>& cases,
+    std::unique_ptr<uint32_t[]> fastCounts)
+    : GCAwareJITStubRoutine(codeRef, vm)
+    , m_fastCounts(WTF::move(fastCounts))
+{
+    for (PolymorphicCallCase callCase : cases) {
+        m_variants.append(WriteBarrier<JSCell>(vm, owner, callCase.variant().rawCalleeCell()));
+        if (shouldShowDisassemblyFor(callerFrame->codeBlock()))
+            dataLog("Linking polymorphic call in ", *callerFrame->codeBlock(), " at ", callerFrame->codeOrigin(), " to ", callCase.variant(), ", codeBlock = ", pointerDump(callCase.codeBlock()), "\n");
+        if (CodeBlock* codeBlock = callCase.codeBlock())
+            codeBlock->linkIncomingPolymorphicCall(callerFrame, m_callNodes.add(&info));
+    }
+    m_variants.shrinkToFit();
+    WTF::storeStoreFence();
+}
+
+PolymorphicCallStubRoutine::~PolymorphicCallStubRoutine() { }
+
+CallVariantList PolymorphicCallStubRoutine::variants() const
+{
+    CallVariantList result;
+    for (size_t i = 0; i < m_variants.size(); ++i)
+        result.append(CallVariant(m_variants[i].get()));
+    return result;
+}
+
+CallEdgeList PolymorphicCallStubRoutine::edges() const
+{
+    // We wouldn't have these if this was an FTL stub routine. We shouldn't be asking for profiling
+    // from the FTL.
+    RELEASE_ASSERT(m_fastCounts);
+    
+    CallEdgeList result;
+    for (size_t i = 0; i < m_variants.size(); ++i)
+        result.append(CallEdge(CallVariant(m_variants[i].get()), m_fastCounts[i]));
+    return result;
+}
+
+bool PolymorphicCallStubRoutine::visitWeak(RepatchBuffer&)
+{
+    for (auto& variant : m_variants) {
+        if (!Heap::isMarked(variant.get()))
+            return false;
+    }
+    return true;
+}
+
+void PolymorphicCallStubRoutine::markRequiredObjectsInternal(SlotVisitor& visitor)
+{
+    for (auto& variant : m_variants)
+        visitor.append(&variant);
+}
+
+} // namespace JSC
+
+#endif // ENABLE(JIT)
diff --git a/Source/JavaScriptCore/jit/PolymorphicCallStubRoutine.h b/Source/JavaScriptCore/jit/PolymorphicCallStubRoutine.h
new file mode 100644 (file)
index 0000000..05ab578
--- /dev/null
@@ -0,0 +1,110 @@
+/*
+ * Copyright (C) 2015 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#ifndef PolymorphicCallStubRoutine_h
+#define PolymorphicCallStubRoutine_h
+
+#if ENABLE(JIT)
+
+#include "CallEdge.h"
+#include "CallVariant.h"
+#include "CodeOrigin.h"
+#include "GCAwareJITStubRoutine.h"
+#include <wtf/FastMalloc.h>
+#include <wtf/Noncopyable.h>
+#include <wtf/Vector.h>
+
+namespace JSC {
+
+struct CallLinkInfo;
+
+class PolymorphicCallNode : public BasicRawSentinelNode<PolymorphicCallNode> {
+    WTF_MAKE_NONCOPYABLE(PolymorphicCallNode);
+public:
+    PolymorphicCallNode(CallLinkInfo* info)
+        : m_callLinkInfo(info)
+    {
+    }
+    
+    ~PolymorphicCallNode();
+    
+    void unlink(RepatchBuffer&);
+    
+private:
+    CallLinkInfo* m_callLinkInfo;
+};
+
+class PolymorphicCallCase {
+public:
+    PolymorphicCallCase()
+        : m_codeBlock(nullptr)
+    {
+    }
+    
+    PolymorphicCallCase(CallVariant variant, CodeBlock* codeBlock)
+        : m_variant(variant)
+        , m_codeBlock(codeBlock)
+    {
+    }
+    
+    CallVariant variant() const { return m_variant; }
+    CodeBlock* codeBlock() const { return m_codeBlock; }
+    
+    void dump(PrintStream&) const;
+    
+private:
+    CallVariant m_variant;
+    CodeBlock* m_codeBlock;
+};
+
+class PolymorphicCallStubRoutine : public GCAwareJITStubRoutine {
+public:
+    PolymorphicCallStubRoutine(
+        const MacroAssemblerCodeRef&, VM&, const JSCell* owner,
+        ExecState* callerFrame, CallLinkInfo&, const Vector<PolymorphicCallCase>&,
+        std::unique_ptr<uint32_t[]> fastCounts);
+    
+    virtual ~PolymorphicCallStubRoutine();
+    
+    CallVariantList variants() const;
+    CallEdgeList edges() const;
+    
+    bool visitWeak(RepatchBuffer&) override;
+
+protected:
+    virtual void markRequiredObjectsInternal(SlotVisitor&) override;
+
+private:
+    Vector<WriteBarrier<JSCell>, 2> m_variants;
+    std::unique_ptr<uint32_t[]> m_fastCounts;
+    Bag<PolymorphicCallNode> m_callNodes;
+};
+
+} // namespace JSC
+
+#endif // ENABLE(JIT)
+
+#endif // PolymorphicCallStubRoutine_h
+
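
The fastCounts array is what gives the profiling side per-callee hit counts without a global call edge log. Here is a hedged sketch of how a consumer might read them; the real consumer is the call status computation, which is not reproduced in this excerpt, and the surrounding names are illustrative.

    // Sketch: summarize a polymorphic call site from its stub's fast-path counts.
    if (PolymorphicCallStubRoutine* stub = callLinkInfo.stub.get()) {
        CallEdgeList edges = stub->edges(); // RELEASE_ASSERTs that fastCounts exist, i.e. not an FTL stub
        uint32_t total = 0;
        for (CallEdge& edge : edges)
            total += edge.count();
        // An edge whose count divided by total reaches Options::minimumCallToKnownRate()
        // could be treated as the dominant callee for inlining purposes.
    }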
index d94b9e3..a787bd1 100644 (file)
@@ -29,6 +29,7 @@
 #if ENABLE(JIT)
 
 #include "AccessorCallJITStubRoutine.h"
+#include "BinarySwitch.h"
 #include "CCallHelpers.h"
 #include "DFGOperations.h"
 #include "DFGSpeculativeJIT.h"
@@ -1575,12 +1576,17 @@ void repatchIn(
 }
 
 static void linkSlowFor(
+    RepatchBuffer& repatchBuffer, VM* vm, CallLinkInfo& callLinkInfo, ThunkGenerator generator)
+{
+    repatchBuffer.relink(
+        callLinkInfo.callReturnLocation, vm->getCTIStub(generator).code());
+}
+
+static void linkSlowFor(
     RepatchBuffer& repatchBuffer, VM* vm, CallLinkInfo& callLinkInfo,
     CodeSpecializationKind kind, RegisterPreservationMode registers)
 {
-    repatchBuffer.relink(
-        callLinkInfo.callReturnLocation,
-        vm->getCTIStub(virtualThunkGeneratorFor(kind, registers)).code());
+    linkSlowFor(repatchBuffer, vm, callLinkInfo, virtualThunkGeneratorFor(kind, registers));
 }
 
 void linkFor(
@@ -1592,10 +1598,6 @@ void linkFor(
     
     CodeBlock* callerCodeBlock = exec->callerFrame()->codeBlock();
 
-    // If you're being call-linked from a DFG caller then you obviously didn't get inlined.
-    if (calleeCodeBlock && JITCode::isOptimizingJIT(callerCodeBlock->jitType()))
-        calleeCodeBlock->m_shouldAlwaysBeInlined = false;
-    
     VM* vm = callerCodeBlock->vm();
     
     RepatchBuffer repatchBuffer(callerCodeBlock);
@@ -1611,7 +1613,8 @@ void linkFor(
         calleeCodeBlock->linkIncomingCall(exec->callerFrame(), &callLinkInfo);
     
     if (kind == CodeForCall) {
-        repatchBuffer.relink(callLinkInfo.callReturnLocation, vm->getCTIStub(linkClosureCallThunkGeneratorFor(registers)).code());
+        linkSlowFor(
+            repatchBuffer, vm, callLinkInfo, linkPolymorphicCallThunkGeneratorFor(registers));
         return;
     }
     
@@ -1631,16 +1634,122 @@ void linkSlowFor(
     linkSlowFor(repatchBuffer, vm, callLinkInfo, kind, registers);
 }
 
-void linkClosureCall(
-    ExecState* exec, CallLinkInfo& callLinkInfo, CodeBlock* calleeCodeBlock, 
-    ExecutableBase* executable, MacroAssemblerCodePtr codePtr,
+static void revertCall(
+    RepatchBuffer& repatchBuffer, VM* vm, CallLinkInfo& callLinkInfo, ThunkGenerator generator)
+{
+    repatchBuffer.revertJumpReplacementToBranchPtrWithPatch(
+        RepatchBuffer::startOfBranchPtrWithPatchOnRegister(callLinkInfo.hotPathBegin),
+        static_cast<MacroAssembler::RegisterID>(callLinkInfo.calleeGPR), 0);
+    linkSlowFor(repatchBuffer, vm, callLinkInfo, generator);
+    callLinkInfo.hasSeenShouldRepatch = false;
+    callLinkInfo.callee.clear();
+    callLinkInfo.stub.clear();
+    if (callLinkInfo.isOnList())
+        callLinkInfo.remove();
+}
+
+void unlinkFor(
+    RepatchBuffer& repatchBuffer, CallLinkInfo& callLinkInfo,
+    CodeSpecializationKind kind, RegisterPreservationMode registers)
+{
+    if (Options::showDisassembly())
+        dataLog("Unlinking call from ", callLinkInfo.callReturnLocation, " in request from ", pointerDump(repatchBuffer.codeBlock()), "\n");
+    
+    revertCall(
+        repatchBuffer, repatchBuffer.codeBlock()->vm(), callLinkInfo,
+        linkThunkGeneratorFor(kind, registers));
+}
+
+void linkVirtualFor(
+    ExecState* exec, CallLinkInfo& callLinkInfo,
+    CodeSpecializationKind kind, RegisterPreservationMode registers)
+{
+    // FIXME: We could generate a virtual call stub here. This would lead to faster virtual calls
+    // by eliminating the branch prediction bottleneck inside the shared virtual call thunk.
+    
+    CodeBlock* callerCodeBlock = exec->callerFrame()->codeBlock();
+    VM* vm = callerCodeBlock->vm();
+    
+    if (shouldShowDisassemblyFor(callerCodeBlock))
+        dataLog("Linking virtual call at ", *callerCodeBlock, " ", exec->callerFrame()->codeOrigin(), "\n");
+    
+    RepatchBuffer repatchBuffer(callerCodeBlock);
+    revertCall(repatchBuffer, vm, callLinkInfo, virtualThunkGeneratorFor(kind, registers));
+}
+
+namespace {
+struct CallToCodePtr {
+    CCallHelpers::Call call;
+    MacroAssemblerCodePtr codePtr;
+};
+} // anonymous namespace
+
+void linkPolymorphicCall(
+    ExecState* exec, CallLinkInfo& callLinkInfo, CallVariant newVariant,
     RegisterPreservationMode registers)
 {
-    ASSERT(!callLinkInfo.stub);
+    // Currently we can't do anything for non-function callees.
+    // https://bugs.webkit.org/show_bug.cgi?id=140685
+    if (!newVariant || !newVariant.executable()) {
+        linkVirtualFor(exec, callLinkInfo, CodeForCall, registers);
+        return;
+    }
     
     CodeBlock* callerCodeBlock = exec->callerFrame()->codeBlock();
     VM* vm = callerCodeBlock->vm();
     
+    CallVariantList list;
+    if (PolymorphicCallStubRoutine* stub = callLinkInfo.stub.get())
+        list = stub->variants();
+    else if (JSFunction* oldCallee = callLinkInfo.callee.get())
+        list = CallVariantList{ CallVariant(oldCallee) };
+    
+    list = variantListWithVariant(list, newVariant);
+
+    // If there are any closure calls then it makes sense to treat all of them as closure calls.
+    // This makes switching on callee cheaper. It also produces profiling that's easier on the DFG;
+    // the DFG doesn't really want to deal with a combination of closure and non-closure callees.
+    bool isClosureCall = false;
+    for (CallVariant variant : list) {
+        if (variant.isClosureCall()) {
+            list = despecifiedVariantList(list);
+            isClosureCall = true;
+            break;
+        }
+    }
+    
+    Vector<PolymorphicCallCase> callCases;
+    
+    // Figure out what our cases are.
+    for (CallVariant variant : list) {
+        CodeBlock* codeBlock;
+        if (variant.executable()->isHostFunction())
+            codeBlock = nullptr;
+        else {
+            codeBlock = jsCast<FunctionExecutable*>(variant.executable())->codeBlockForCall();
+            
+            // If we cannot handle a callee, assume that it's better for this whole thing to be a
+            // virtual call.
+            if (exec->argumentCountIncludingThis() < static_cast<size_t>(codeBlock->numParameters()) || callLinkInfo.callType == CallLinkInfo::CallVarargs || callLinkInfo.callType == CallLinkInfo::ConstructVarargs) {
+                linkVirtualFor(exec, callLinkInfo, CodeForCall, registers);
+                return;
+            }
+        }
+        
+        callCases.append(PolymorphicCallCase(variant, codeBlock));
+    }
+    
+    // If we are over the limit, just use a normal virtual call.
+    unsigned maxPolymorphicCallVariantListSize;
+    if (callerCodeBlock->jitType() == JITCode::topTierJIT())
+        maxPolymorphicCallVariantListSize = Options::maxPolymorphicCallVariantListSizeForTopTier();
+    else
+        maxPolymorphicCallVariantListSize = Options::maxPolymorphicCallVariantListSize();
+    if (list.size() > maxPolymorphicCallVariantListSize) {
+        linkVirtualFor(exec, callLinkInfo, CodeForCall, registers);
+        return;
+    }
+    
     GPRReg calleeGPR = static_cast<GPRReg>(callLinkInfo.calleeGPR);
     
     CCallHelpers stubJit(vm, callerCodeBlock);
@@ -1655,33 +1764,82 @@ void linkClosureCall(
         stubJit.abortWithReason(RepatchInsaneArgumentCount);
         okArgumentCount.link(&stubJit);
     }
+    
+    GPRReg scratch = AssemblyHelpers::selectScratchGPR(calleeGPR);
+    GPRReg comparisonValueGPR;
+    
+    if (isClosureCall) {
+        // Verify that we have a function and stash the executable in scratch.
 
 #if USE(JSVALUE64)
-    // We can safely clobber everything except the calleeGPR. We can't rely on tagMaskRegister
-    // being set. So we do this the hard way.
-    GPRReg scratch = AssemblyHelpers::selectScratchGPR(calleeGPR);
-    stubJit.move(MacroAssembler::TrustedImm64(TagMask), scratch);
-    slowPath.append(stubJit.branchTest64(CCallHelpers::NonZero, calleeGPR, scratch));
+        // We can safely clobber everything except the calleeGPR. We can't rely on tagMaskRegister
+        // being set. So we do this the hard way.
+        stubJit.move(MacroAssembler::TrustedImm64(TagMask), scratch);
+        slowPath.append(stubJit.branchTest64(CCallHelpers::NonZero, calleeGPR, scratch));
 #else
-    // We would have already checked that the callee is a cell.
+        // We would have already checked that the callee is a cell.
 #endif
     
-    slowPath.append(
-        stubJit.branch8(
-            CCallHelpers::NotEqual,
-            CCallHelpers::Address(calleeGPR, JSCell::typeInfoTypeOffset()),
-            CCallHelpers::TrustedImm32(JSFunctionType)));
+        slowPath.append(
+            stubJit.branch8(
+                CCallHelpers::NotEqual,
+                CCallHelpers::Address(calleeGPR, JSCell::typeInfoTypeOffset()),
+                CCallHelpers::TrustedImm32(JSFunctionType)));
     
-    slowPath.append(
-        stubJit.branchPtr(
-            CCallHelpers::NotEqual,
+        stubJit.loadPtr(
             CCallHelpers::Address(calleeGPR, JSFunction::offsetOfExecutable()),
-            CCallHelpers::TrustedImmPtr(executable)));
+            scratch);
+        
+        comparisonValueGPR = scratch;
+    } else
+        comparisonValueGPR = calleeGPR;
+    
+    Vector<int64_t> caseValues(callCases.size());
+    Vector<CallToCodePtr> calls(callCases.size());
+    std::unique_ptr<uint32_t[]> fastCounts;
     
-    AssemblyHelpers::Call call = stubJit.nearCall();
-    AssemblyHelpers::Jump done = stubJit.jump();
+    if (callerCodeBlock->jitType() != JITCode::topTierJIT())
+        fastCounts = std::make_unique<uint32_t[]>(callCases.size());
+    
+    for (size_t i = callCases.size(); i--;) {
+        if (fastCounts)
+            fastCounts[i] = 0;
+        
+        CallVariant variant = callCases[i].variant();
+        if (isClosureCall)
+            caseValues[i] = bitwise_cast<intptr_t>(variant.executable());
+        else
+            caseValues[i] = bitwise_cast<intptr_t>(variant.function());
+    }
+    
+    GPRReg fastCountsBaseGPR =
+        AssemblyHelpers::selectScratchGPR(calleeGPR, comparisonValueGPR, GPRInfo::regT3);
+    stubJit.move(CCallHelpers::TrustedImmPtr(fastCounts.get()), fastCountsBaseGPR);
+    
+    BinarySwitch binarySwitch(comparisonValueGPR, caseValues, BinarySwitch::IntPtr);
+    CCallHelpers::JumpList done;
+    while (binarySwitch.advance(stubJit)) {
+        size_t caseIndex = binarySwitch.caseIndex();
+        
+        CallVariant variant = callCases[caseIndex].variant();
+        
+        ASSERT(variant.executable()->hasJITCodeForCall());
+        MacroAssemblerCodePtr codePtr =
+            variant.executable()->generatedJITCodeForCall()->addressForCall(
+                *vm, variant.executable(), ArityCheckNotRequired, registers);
+        
+        if (fastCounts) {
+            stubJit.add32(
+                CCallHelpers::TrustedImm32(1),
+                CCallHelpers::Address(fastCountsBaseGPR, caseIndex * sizeof(uint32_t)));
+        }
+        calls[caseIndex].call = stubJit.nearCall();
+        calls[caseIndex].codePtr = codePtr;
+        done.append(stubJit.jump());
+    }
     
     slowPath.link(&stubJit);
+    binarySwitch.fallThrough().link(&stubJit);
     stubJit.move(calleeGPR, GPRInfo::regT0);
 #if USE(JSVALUE32_64)
     stubJit.move(CCallHelpers::TrustedImm32(JSValue::CellTag), GPRInfo::regT1);
@@ -1691,34 +1849,45 @@ void linkClosureCall(
     
     stubJit.restoreReturnAddressBeforeReturn(GPRInfo::regT4);
     AssemblyHelpers::Jump slow = stubJit.jump();
-    
+        
     LinkBuffer patchBuffer(*vm, stubJit, callerCodeBlock);
     
-    patchBuffer.link(call, FunctionPtr(codePtr.executableAddress()));
+    RELEASE_ASSERT(callCases.size() == calls.size());
+    for (CallToCodePtr callToCodePtr : calls) {
+        patchBuffer.link(
+            callToCodePtr.call, FunctionPtr(callToCodePtr.codePtr.executableAddress()));
+    }
     if (JITCode::isOptimizingJIT(callerCodeBlock->jitType()))
         patchBuffer.link(done, callLinkInfo.callReturnLocation.labelAtOffset(0));
     else
         patchBuffer.link(done, callLinkInfo.hotPathOther.labelAtOffset(0));
-    patchBuffer.link(slow, CodeLocationLabel(vm->getCTIStub(virtualThunkGeneratorFor(CodeForCall, registers)).code()));
+    patchBuffer.link(slow, CodeLocationLabel(vm->getCTIStub(linkPolymorphicCallThunkGeneratorFor(registers)).code()));
     
-    RefPtr<ClosureCallStubRoutine> stubRoutine = adoptRef(new ClosureCallStubRoutine(
+    RefPtr<PolymorphicCallStubRoutine> stubRoutine = adoptRef(new PolymorphicCallStubRoutine(
         FINALIZE_CODE_FOR(
             callerCodeBlock, patchBuffer,
-            ("Closure call stub for %s, return point %p, target %p (%s)",
+            ("Polymorphic call stub for %s, return point %p, targets %s",
                 toCString(*callerCodeBlock).data(), callLinkInfo.callReturnLocation.labelAtOffset(0).executableAddress(),
-                codePtr.executableAddress(), toCString(pointerDump(calleeCodeBlock)).data())),
-        *vm, callerCodeBlock->ownerExecutable(), executable));
+                toCString(listDump(callCases)).data())),
+        *vm, callerCodeBlock->ownerExecutable(), exec->callerFrame(), callLinkInfo, callCases,
+        WTF::move(fastCounts)));
     
     RepatchBuffer repatchBuffer(callerCodeBlock);
     
     repatchBuffer.replaceWithJump(
         RepatchBuffer::startOfBranchPtrWithPatchOnRegister(callLinkInfo.hotPathBegin),
         CodeLocationLabel(stubRoutine->code().code()));
+    // This is weird: now that the hot path jumps to the polymorphic stub, the original slow path should no longer be reachable.
     linkSlowFor(repatchBuffer, vm, callLinkInfo, CodeForCall, registers);
     
+    // If there had been a previous stub routine, it will die as soon as the GC runs and sees that
+    // it is no longer on the stack.
     callLinkInfo.stub = stubRoutine.release();
     
-    ASSERT(!calleeCodeBlock || calleeCodeBlock->isIncomingCallAlreadyLinked(&callLinkInfo));
+    // The call link info no longer has a call cache apart from the jump to the polymorphic call
+    // stub.
+    if (callLinkInfo.isOnList())
+        callLinkInfo.remove();
 }
 
 void resetGetByID(RepatchBuffer& repatchBuffer, StructureStubInfo& stubInfo)
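
To summarize the fallback policy implemented by linkPolymorphicCall above: the variant list grows one callee at a time, closure calls despecify the whole list so the stub can switch on the executable instead of the function cell, and once the list exceeds a per-tier limit the call site reverts to a plain virtual call. A hedged, standalone sketch of just the size decision follows; the thresholds mirror the Options values added later in this patch, and everything else is illustrative.

    // Sketch: choose between growing the polymorphic stub and reverting to a virtual call.
    #include <cstddef>

    enum class LinkDecision { Polymorphic, Virtual };

    LinkDecision decide(std::size_t variantCount, bool callerIsTopTier)
    {
        // Mirrors maxPolymorphicCallVariantListSizeForTopTier (5) and
        // maxPolymorphicCallVariantListSize (15). The real code also reverts to a
        // virtual call for varargs call sites and arity mismatches.
        const std::size_t limit = callerIsTopTier ? 5 : 15;
        return variantCount > limit ? LinkDecision::Virtual : LinkDecision::Polymorphic;
    }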
index f37ac7b..ddd008b 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011 Apple Inc. All rights reserved.
+ * Copyright (C) 2011, 2015 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -29,6 +29,7 @@
 #if ENABLE(JIT)
 
 #include "CCallHelpers.h"
+#include "CallVariant.h"
 #include "JITOperations.h"
 
 namespace JSC {
@@ -41,7 +42,9 @@ void buildPutByIdList(ExecState*, JSValue, Structure*, const Identifier&, const
 void repatchIn(ExecState*, JSCell*, const Identifier&, bool wasFound, const PropertySlot&, StructureStubInfo&);
 void linkFor(ExecState*, CallLinkInfo&, CodeBlock*, JSFunction* callee, MacroAssemblerCodePtr, CodeSpecializationKind, RegisterPreservationMode);
 void linkSlowFor(ExecState*, CallLinkInfo&, CodeSpecializationKind, RegisterPreservationMode);
-void linkClosureCall(ExecState*, CallLinkInfo&, CodeBlock*, ExecutableBase*, MacroAssemblerCodePtr, RegisterPreservationMode);
+void unlinkFor(RepatchBuffer&, CallLinkInfo&, CodeSpecializationKind, RegisterPreservationMode);
+void linkVirtualFor(ExecState*, CallLinkInfo&, CodeSpecializationKind, RegisterPreservationMode);
+void linkPolymorphicCall(ExecState*, CallLinkInfo&, CallVariant, RegisterPreservationMode);
 void resetGetByID(RepatchBuffer&, StructureStubInfo&);
 void resetPutByID(RepatchBuffer&, StructureStubInfo&);
 void resetIn(RepatchBuffer&, StructureStubInfo&);
index be8763b..7b0cda6 100644 (file)
@@ -137,27 +137,27 @@ MacroAssemblerCodeRef linkConstructThatPreservesRegsThunkGenerator(VM* vm)
     return linkForThunkGenerator(vm, CodeForConstruct, MustPreserveRegisters);
 }
 
-static MacroAssemblerCodeRef linkClosureCallForThunkGenerator(
+static MacroAssemblerCodeRef linkPolymorphicCallForThunkGenerator(
     VM* vm, RegisterPreservationMode registers)
 {
     CCallHelpers jit(vm);
     
-    slowPathFor(jit, vm, operationLinkClosureCallFor(registers));
+    slowPathFor(jit, vm, operationLinkPolymorphicCallFor(registers));
     
     LinkBuffer patchBuffer(*vm, jit, GLOBAL_THUNK_ID);
-    return FINALIZE_CODE(patchBuffer, ("Link closure call %s slow path thunk", registers == MustPreserveRegisters ? " that preserves registers" : ""));
+    return FINALIZE_CODE(patchBuffer, ("Link polymorphic call %s slow path thunk", registers == MustPreserveRegisters ? " that preserves registers" : ""));
 }
 
 // For closure optimizations, we only include calls, since if you're using closures for
 // object construction then you're going to lose big time anyway.
-MacroAssemblerCodeRef linkClosureCallThunkGenerator(VM* vm)
+MacroAssemblerCodeRef linkPolymorphicCallThunkGenerator(VM* vm)
 {
-    return linkClosureCallForThunkGenerator(vm, RegisterPreservationNotRequired);
+    return linkPolymorphicCallForThunkGenerator(vm, RegisterPreservationNotRequired);
 }
 
-MacroAssemblerCodeRef linkClosureCallThatPreservesRegsThunkGenerator(VM* vm)
+MacroAssemblerCodeRef linkPolymorphicCallThatPreservesRegsThunkGenerator(VM* vm)
 {
-    return linkClosureCallForThunkGenerator(vm, MustPreserveRegisters);
+    return linkPolymorphicCallForThunkGenerator(vm, MustPreserveRegisters);
 }
 
 static MacroAssemblerCodeRef virtualForThunkGenerator(
index 0abe1af..246da2e 100644 (file)
@@ -65,16 +65,16 @@ inline ThunkGenerator linkThunkGeneratorFor(
     return 0;
 }
 
-MacroAssemblerCodeRef linkClosureCallThunkGenerator(VM*);
-MacroAssemblerCodeRef linkClosureCallThatPreservesRegsThunkGenerator(VM*);
+MacroAssemblerCodeRef linkPolymorphicCallThunkGenerator(VM*);
+MacroAssemblerCodeRef linkPolymorphicCallThatPreservesRegsThunkGenerator(VM*);
 
-inline ThunkGenerator linkClosureCallThunkGeneratorFor(RegisterPreservationMode registers)
+inline ThunkGenerator linkPolymorphicCallThunkGeneratorFor(RegisterPreservationMode registers)
 {
     switch (registers) {
     case RegisterPreservationNotRequired:
-        return linkClosureCallThunkGenerator;
+        return linkPolymorphicCallThunkGenerator;
     case MustPreserveRegisters:
-        return linkClosureCallThatPreservesRegsThunkGenerator;
+        return linkPolymorphicCallThatPreservesRegsThunkGenerator;
     }
     RELEASE_ASSERT_NOT_REACHED();
     return 0;
index a3022cb..56d63e8 100644 (file)
@@ -332,6 +332,7 @@ inline bool jitCompileAndSetHeuristics(CodeBlock* codeBlock, ExecState* exec)
         }
     }
     default:
+        dataLog("Unexpected code block in LLInt: ", *codeBlock, "\n");
         RELEASE_ASSERT_NOT_REACHED();
         return false;
     }
index d7f0e01..381f246 100644 (file)
@@ -170,12 +170,11 @@ typedef const char* optionString;
     v(bool, enablePolyvariantDevirtualization, true) \
     v(bool, enablePolymorphicAccessInlining, true) \
     v(bool, enablePolymorphicCallInlining, true) \
-    v(bool, callStatusShouldUseCallEdgeProfile, true) \
-    v(bool, callEdgeProfileReallyProcessesLog, true) \
-    v(bool, baselineDoesCallEdgeProfiling, false) \
-    v(bool, dfgDoesCallEdgeProfiling, true) \
-    v(bool, enableCallEdgeProfiling, true) \
+    v(unsigned, maxPolymorphicCallVariantListSize, 15) \
+    v(unsigned, maxPolymorphicCallVariantListSizeForTopTier, 5) \
+    v(unsigned, maxPolymorphicCallVariantsForInlining, 5) \
     v(unsigned, frequentCallThreshold, 2) \
+    v(double, minimumCallToKnownRate, 0.51) \
     v(bool, optimizeNativeCalls, false) \
     v(bool, enableObjectAllocationSinking, true) \
     \
index 80ed9ca..d09fd45 100644 (file)
@@ -370,13 +370,6 @@ VM*& VM::sharedInstanceInternal()
     return sharedInstance;
 }
 
-CallEdgeLog& VM::ensureCallEdgeLog()
-{
-    if (!callEdgeLog)
-        callEdgeLog = std::make_unique<CallEdgeLog>();
-    return *callEdgeLog;
-}
-
 #if ENABLE(JIT)
 static ThunkGenerator thunkGeneratorForIntrinsic(Intrinsic intrinsic)
 {
@@ -460,9 +453,6 @@ void VM::stopSampling()
 
 void VM::prepareToDiscardCode()
 {
-    if (callEdgeLog)
-        callEdgeLog->processLog();
-    
 #if ENABLE(DFG_JIT)
     for (unsigned i = DFG::numberOfWorklists(); i--;) {
         if (DFG::Worklist* worklist = DFG::worklistForIndexOrNull(i))
index ce337b8..2243e13 100644 (file)
@@ -73,7 +73,6 @@ namespace JSC {
 
 class ArityCheckFailReturnThunks;
 class BuiltinExecutables;
-class CallEdgeLog;
 class CodeBlock;
 class CodeCache;
 class CommonIdentifiers;
@@ -238,9 +237,6 @@ public:
     std::unique_ptr<DFG::LongLivedState> dfgState;
 #endif // ENABLE(DFG_JIT)
 
-    std::unique_ptr<CallEdgeLog> callEdgeLog;
-    CallEdgeLog& ensureCallEdgeLog();
-
     VMType vmType;
     ClientData* clientData;
     VMEntryFrame* topVMEntryFrame;