Do unified source builds for JSC
author keith_miller@apple.com <keith_miller@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 13 Sep 2017 01:31:07 +0000 (01:31 +0000)
committer keith_miller@apple.com <keith_miller@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 13 Sep 2017 01:31:07 +0000 (01:31 +0000)
https://bugs.webkit.org/show_bug.cgi?id=176076

Reviewed by Geoffrey Garen.

Source/JavaScriptCore:

This patch switches the CMake JavaScriptCore build to use unified sources.
The Xcode build will be upgraded in a follow-up patch.

Most of the source changes in this patch fix static
variable/function name collisions. The most common collisions
came from our use of "static const bool verbose" and "using
namespace ...". I fixed all the verbose cases and the "using
namespace" issues that occurred under the current bundling
strategy. It's likely that more of the "using namespace" issues
will need to be resolved in the future, particularly in the FTL.

I don't expect either of these problems to apply to other parts
of the project nearly as much as to JSC. Using a verbose variable
is a JSC idiom, and JSC tends to use the same canonical class
name in multiple parts of the engine.

* CMakeLists.txt:
* b3/B3CheckSpecial.cpp:
(JSC::B3::CheckSpecial::forEachArg):
(JSC::B3::CheckSpecial::generate):
(JSC::B3::Air::numB3Args): Deleted.
* b3/B3DuplicateTails.cpp:
* b3/B3EliminateCommonSubexpressions.cpp:
* b3/B3FixSSA.cpp:
(JSC::B3::demoteValues):
* b3/B3FoldPathConstants.cpp:
* b3/B3InferSwitches.cpp:
* b3/B3LowerMacrosAfterOptimizations.cpp:
(): Deleted.
* b3/B3LowerToAir.cpp:
(JSC::B3::Air::LowerToAir::LowerToAir): Deleted.
(JSC::B3::Air::LowerToAir::run): Deleted.
(JSC::B3::Air::LowerToAir::shouldCopyPropagate): Deleted.
(JSC::B3::Air::LowerToAir::ArgPromise::ArgPromise): Deleted.
(JSC::B3::Air::LowerToAir::ArgPromise::swap): Deleted.
(JSC::B3::Air::LowerToAir::ArgPromise::operator=): Deleted.
(JSC::B3::Air::LowerToAir::ArgPromise::~ArgPromise): Deleted.
(JSC::B3::Air::LowerToAir::ArgPromise::setTraps): Deleted.
(JSC::B3::Air::LowerToAir::ArgPromise::tmp): Deleted.
(JSC::B3::Air::LowerToAir::ArgPromise::operator bool const): Deleted.
(JSC::B3::Air::LowerToAir::ArgPromise::kind const): Deleted.
(JSC::B3::Air::LowerToAir::ArgPromise::peek const): Deleted.
(JSC::B3::Air::LowerToAir::ArgPromise::consume): Deleted.
(JSC::B3::Air::LowerToAir::ArgPromise::inst): Deleted.
(JSC::B3::Air::LowerToAir::tmp): Deleted.
(JSC::B3::Air::LowerToAir::tmpPromise): Deleted.
(JSC::B3::Air::LowerToAir::canBeInternal): Deleted.
(JSC::B3::Air::LowerToAir::commitInternal): Deleted.
(JSC::B3::Air::LowerToAir::crossesInterference): Deleted.
(JSC::B3::Air::LowerToAir::scaleForShl): Deleted.
(JSC::B3::Air::LowerToAir::effectiveAddr): Deleted.
(JSC::B3::Air::LowerToAir::addr): Deleted.
(JSC::B3::Air::LowerToAir::trappingInst): Deleted.
(JSC::B3::Air::LowerToAir::loadPromiseAnyOpcode): Deleted.
(JSC::B3::Air::LowerToAir::loadPromise): Deleted.
(JSC::B3::Air::LowerToAir::imm): Deleted.
(JSC::B3::Air::LowerToAir::bitImm): Deleted.
(JSC::B3::Air::LowerToAir::bitImm64): Deleted.
(JSC::B3::Air::LowerToAir::immOrTmp): Deleted.
(JSC::B3::Air::LowerToAir::tryOpcodeForType): Deleted.
(JSC::B3::Air::LowerToAir::opcodeForType): Deleted.
(JSC::B3::Air::LowerToAir::appendUnOp): Deleted.
(JSC::B3::Air::LowerToAir::preferRightForResult): Deleted.
(JSC::B3::Air::LowerToAir::appendBinOp): Deleted.
(JSC::B3::Air::LowerToAir::appendShift): Deleted.
(JSC::B3::Air::LowerToAir::tryAppendStoreUnOp): Deleted.
(JSC::B3::Air::LowerToAir::tryAppendStoreBinOp): Deleted.
(JSC::B3::Air::LowerToAir::createStore): Deleted.
(JSC::B3::Air::LowerToAir::storeOpcode): Deleted.
(JSC::B3::Air::LowerToAir::appendStore): Deleted.
(JSC::B3::Air::LowerToAir::moveForType): Deleted.
(JSC::B3::Air::LowerToAir::relaxedMoveForType): Deleted.
(JSC::B3::Air::LowerToAir::print): Deleted.
(JSC::B3::Air::LowerToAir::append): Deleted.
(JSC::B3::Air::LowerToAir::appendTrapping): Deleted.
(JSC::B3::Air::LowerToAir::finishAppendingInstructions): Deleted.
(JSC::B3::Air::LowerToAir::newBlock): Deleted.
(JSC::B3::Air::LowerToAir::splitBlock): Deleted.
(JSC::B3::Air::LowerToAir::ensureSpecial): Deleted.
(JSC::B3::Air::LowerToAir::ensureCheckSpecial): Deleted.
(JSC::B3::Air::LowerToAir::fillStackmap): Deleted.
(JSC::B3::Air::LowerToAir::createGenericCompare): Deleted.
(JSC::B3::Air::LowerToAir::createBranch): Deleted.
(JSC::B3::Air::LowerToAir::createCompare): Deleted.
(JSC::B3::Air::LowerToAir::createSelect): Deleted.
(JSC::B3::Air::LowerToAir::tryAppendLea): Deleted.
(JSC::B3::Air::LowerToAir::appendX86Div): Deleted.
(JSC::B3::Air::LowerToAir::appendX86UDiv): Deleted.
(JSC::B3::Air::LowerToAir::loadLinkOpcode): Deleted.
(JSC::B3::Air::LowerToAir::storeCondOpcode): Deleted.
(JSC::B3::Air::LowerToAir::appendCAS): Deleted.
(JSC::B3::Air::LowerToAir::appendVoidAtomic): Deleted.
(JSC::B3::Air::LowerToAir::appendGeneralAtomic): Deleted.
(JSC::B3::Air::LowerToAir::lower): Deleted.
* b3/B3PatchpointSpecial.cpp:
(JSC::B3::PatchpointSpecial::generate):
* b3/B3ReduceDoubleToFloat.cpp:
(JSC::B3::reduceDoubleToFloat):
* b3/B3ReduceStrength.cpp:
* b3/B3StackmapGenerationParams.cpp:
* b3/B3StackmapSpecial.cpp:
(JSC::B3::StackmapSpecial::repsImpl):
(JSC::B3::StackmapSpecial::repForArg):
* b3/air/AirAllocateStackByGraphColoring.cpp:
(JSC::B3::Air::allocateStackByGraphColoring):
* b3/air/AirEmitShuffle.cpp:
(JSC::B3::Air::emitShuffle):
* b3/air/AirFixObviousSpills.cpp:
* b3/air/AirLowerAfterRegAlloc.cpp:
(JSC::B3::Air::lowerAfterRegAlloc):
* b3/air/AirStackAllocation.cpp:
(JSC::B3::Air::attemptAssignment):
(JSC::B3::Air::assign):
* bytecode/AccessCase.cpp:
(JSC::AccessCase::generateImpl):
* bytecode/CallLinkStatus.cpp:
(JSC::CallLinkStatus::computeDFGStatuses):
* bytecode/GetterSetterAccessCase.cpp:
(JSC::GetterSetterAccessCase::emitDOMJITGetter):
* bytecode/ObjectPropertyConditionSet.cpp:
* bytecode/PolymorphicAccess.cpp:
(JSC::PolymorphicAccess::addCases):
(JSC::PolymorphicAccess::regenerate):
* bytecode/PropertyCondition.cpp:
(JSC::PropertyCondition::isStillValidAssumingImpurePropertyWatchpoint const):
* bytecode/StructureStubInfo.cpp:
(JSC::StructureStubInfo::addAccessCase):
* dfg/DFGArgumentsEliminationPhase.cpp:
* dfg/DFGByteCodeParser.cpp:
(JSC::DFG::ByteCodeParser::DelayedSetLocal::DelayedSetLocal):
(JSC::DFG::ByteCodeParser::inliningCost):
(JSC::DFG::ByteCodeParser::inlineCall):
(JSC::DFG::ByteCodeParser::attemptToInlineCall):
(JSC::DFG::ByteCodeParser::handleInlining):
(JSC::DFG::ByteCodeParser::planLoad):
(JSC::DFG::ByteCodeParser::store):
(JSC::DFG::ByteCodeParser::parseBlock):
(JSC::DFG::ByteCodeParser::linkBlock):
(JSC::DFG::ByteCodeParser::linkBlocks):
* dfg/DFGCSEPhase.cpp:
* dfg/DFGInPlaceAbstractState.cpp:
(JSC::DFG::InPlaceAbstractState::merge):
* dfg/DFGIntegerCheckCombiningPhase.cpp:
(JSC::DFG::IntegerCheckCombiningPhase::handleBlock):
* dfg/DFGIntegerRangeOptimizationPhase.cpp:
* dfg/DFGMovHintRemovalPhase.cpp:
* dfg/DFGObjectAllocationSinkingPhase.cpp:
* dfg/DFGPhantomInsertionPhase.cpp:
* dfg/DFGPutStackSinkingPhase.cpp:
* dfg/DFGStoreBarrierInsertionPhase.cpp:
* dfg/DFGVarargsForwardingPhase.cpp:
* ftl/FTLAbstractHeap.cpp:
(JSC::FTL::AbstractHeap::compute):
* ftl/FTLAbstractHeapRepository.cpp:
(JSC::FTL::AbstractHeapRepository::decorateMemory):
(JSC::FTL::AbstractHeapRepository::decorateCCallRead):
(JSC::FTL::AbstractHeapRepository::decorateCCallWrite):
(JSC::FTL::AbstractHeapRepository::decoratePatchpointRead):
(JSC::FTL::AbstractHeapRepository::decoratePatchpointWrite):
(JSC::FTL::AbstractHeapRepository::decorateFenceRead):
(JSC::FTL::AbstractHeapRepository::decorateFenceWrite):
(JSC::FTL::AbstractHeapRepository::decorateFencedAccess):
(JSC::FTL::AbstractHeapRepository::computeRangesAndDecorateInstructions):
* ftl/FTLLink.cpp:
(JSC::FTL::link):
* heap/MarkingConstraintSet.cpp:
(JSC::MarkingConstraintSet::add):
* interpreter/ShadowChicken.cpp:
(JSC::ShadowChicken::update):
* jit/BinarySwitch.cpp:
(JSC::BinarySwitch::BinarySwitch):
(JSC::BinarySwitch::build):
* llint/LLIntData.cpp:
(JSC::LLInt::Data::loadStats):
(JSC::LLInt::Data::saveStats):
* runtime/ArrayPrototype.cpp:
(JSC::ArrayPrototype::tryInitializeSpeciesWatchpoint):
(JSC::ArrayPrototypeAdaptiveInferredPropertyWatchpoint::handleFire):
* runtime/ErrorInstance.cpp:
(JSC::FindFirstCallerFrameWithCodeblockFunctor::FindFirstCallerFrameWithCodeblockFunctor): Deleted.
(JSC::FindFirstCallerFrameWithCodeblockFunctor::operator()): Deleted.
(JSC::FindFirstCallerFrameWithCodeblockFunctor::foundCallFrame const): Deleted.
(JSC::FindFirstCallerFrameWithCodeblockFunctor::index const): Deleted.
* runtime/IntlDateTimeFormat.cpp:
(JSC::IntlDateTimeFormat::initializeDateTimeFormat):
* runtime/PromiseDeferredTimer.cpp:
(JSC::PromiseDeferredTimer::doWork):
(JSC::PromiseDeferredTimer::addPendingPromise):
(JSC::PromiseDeferredTimer::cancelPendingPromise):
* runtime/TypeProfiler.cpp:
(JSC::TypeProfiler::insertNewLocation):
* runtime/TypeProfilerLog.cpp:
(JSC::TypeProfilerLog::processLogEntries):
* runtime/WeakMapPrototype.cpp:
(JSC::protoFuncWeakMapDelete):
(JSC::protoFuncWeakMapGet):
(JSC::protoFuncWeakMapHas):
(JSC::protoFuncWeakMapSet):
(JSC::getWeakMapData): Deleted.
* runtime/WeakSetPrototype.cpp:
(JSC::protoFuncWeakSetDelete):
(JSC::protoFuncWeakSetHas):
(JSC::protoFuncWeakSetAdd):
(JSC::getWeakMapData): Deleted.
* testRegExp.cpp:
(testOneRegExp):
(runFromFiles):
* wasm/WasmB3IRGenerator.cpp:
(JSC::Wasm::parseAndCompile):
* wasm/WasmBBQPlan.cpp:
(JSC::Wasm::BBQPlan::moveToState):
(JSC::Wasm::BBQPlan::parseAndValidateModule):
(JSC::Wasm::BBQPlan::prepare):
(JSC::Wasm::BBQPlan::compileFunctions):
(JSC::Wasm::BBQPlan::complete):
* wasm/WasmFaultSignalHandler.cpp:
(JSC::Wasm::trapHandler):
* wasm/WasmOMGPlan.cpp:
(JSC::Wasm::OMGPlan::OMGPlan):
(JSC::Wasm::OMGPlan::work):
* wasm/WasmPlan.cpp:
(JSC::Wasm::Plan::fail):
* wasm/WasmSignature.cpp:
(JSC::Wasm::SignatureInformation::adopt):
* wasm/WasmWorklist.cpp:
(JSC::Wasm::Worklist::enqueue):

Source/WTF:

This patch adds a script that automatically bundles source
files; it is currently only used by the CMake build. It's
important that we use the same script to generate the bundles
for the CMake build as for the Xcode build; otherwise, build
errors would likely occur in only one build system. On the same
note, we also need to be careful not to bundle platform-specific
source files with platform-independent ones. There are a couple
of things the script does not currently handle, but they are not
essential for the CMake build. First, it does not handle the
maximum bundle-size restrictions that the Xcode build will
require. It also does not handle C files.

The unified source generator script works by collecting groups
of up to 8 files from the same directory. We don't bundle files
across directories since I didn't see a speedup from doing so.
Additionally, splitting at the directory boundary makes it less
likely that adding a new file will force a "clean" build, which
would otherwise happen because the new file would shift every
subsequent file into the next unified source bundle.

Using unified sources appears to give a roughly 3.5x build-time
speedup for clean builds on my MBP and to have a negligible
effect on incremental builds.

* generate-unified-source-bundles.rb: Added.
* wtf/Assertions.h:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@221954 268f45cc-cd09-0410-ab3c-d52691b4dbfc

72 files changed:
Source/JavaScriptCore/CMakeLists.txt
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/b3/B3CheckSpecial.cpp
Source/JavaScriptCore/b3/B3DuplicateTails.cpp
Source/JavaScriptCore/b3/B3EliminateCommonSubexpressions.cpp
Source/JavaScriptCore/b3/B3FixSSA.cpp
Source/JavaScriptCore/b3/B3FoldPathConstants.cpp
Source/JavaScriptCore/b3/B3InferSwitches.cpp
Source/JavaScriptCore/b3/B3LowerMacrosAfterOptimizations.cpp
Source/JavaScriptCore/b3/B3LowerToAir.cpp
Source/JavaScriptCore/b3/B3PatchpointSpecial.cpp
Source/JavaScriptCore/b3/B3ReduceDoubleToFloat.cpp
Source/JavaScriptCore/b3/B3ReduceStrength.cpp
Source/JavaScriptCore/b3/B3StackmapGenerationParams.cpp
Source/JavaScriptCore/b3/B3StackmapSpecial.cpp
Source/JavaScriptCore/b3/air/AirAllocateStackByGraphColoring.cpp
Source/JavaScriptCore/b3/air/AirEmitShuffle.cpp
Source/JavaScriptCore/b3/air/AirFixObviousSpills.cpp
Source/JavaScriptCore/b3/air/AirLowerAfterRegAlloc.cpp
Source/JavaScriptCore/b3/air/AirStackAllocation.cpp
Source/JavaScriptCore/bytecode/AccessCase.cpp
Source/JavaScriptCore/bytecode/CallLinkStatus.cpp
Source/JavaScriptCore/bytecode/GetterSetterAccessCase.cpp
Source/JavaScriptCore/bytecode/ObjectPropertyConditionSet.cpp
Source/JavaScriptCore/bytecode/PolymorphicAccess.cpp
Source/JavaScriptCore/bytecode/PropertyCondition.cpp
Source/JavaScriptCore/bytecode/StructureStubInfo.cpp
Source/JavaScriptCore/dfg/DFGAbstractHeap.cpp
Source/JavaScriptCore/dfg/DFGArgumentsEliminationPhase.cpp
Source/JavaScriptCore/dfg/DFGByteCodeParser.cpp
Source/JavaScriptCore/dfg/DFGCSEPhase.cpp
Source/JavaScriptCore/dfg/DFGInPlaceAbstractState.cpp
Source/JavaScriptCore/dfg/DFGIntegerCheckCombiningPhase.cpp
Source/JavaScriptCore/dfg/DFGIntegerRangeOptimizationPhase.cpp
Source/JavaScriptCore/dfg/DFGMovHintRemovalPhase.cpp
Source/JavaScriptCore/dfg/DFGObjectAllocationSinkingPhase.cpp
Source/JavaScriptCore/dfg/DFGPhantomInsertionPhase.cpp
Source/JavaScriptCore/dfg/DFGPutStackSinkingPhase.cpp
Source/JavaScriptCore/dfg/DFGStoreBarrierInsertionPhase.cpp
Source/JavaScriptCore/dfg/DFGVarargsForwardingPhase.cpp
Source/JavaScriptCore/ftl/FTLAbstractHeap.cpp
Source/JavaScriptCore/ftl/FTLAbstractHeapRepository.cpp
Source/JavaScriptCore/ftl/FTLJITFinalizer.cpp
Source/JavaScriptCore/ftl/FTLLink.cpp
Source/JavaScriptCore/ftl/FTLOperations.cpp
Source/JavaScriptCore/heap/MarkingConstraintSet.cpp
Source/JavaScriptCore/interpreter/ShadowChicken.cpp
Source/JavaScriptCore/jit/BinarySwitch.cpp
Source/JavaScriptCore/llint/LLIntData.cpp
Source/JavaScriptCore/runtime/ArrayPrototype.cpp
Source/JavaScriptCore/runtime/ErrorInstance.cpp
Source/JavaScriptCore/runtime/IntlDateTimeFormat.cpp
Source/JavaScriptCore/runtime/IntlNumberFormat.cpp
Source/JavaScriptCore/runtime/JSTypedArrayConstructors.cpp
Source/JavaScriptCore/runtime/JSTypedArrayPrototypes.cpp
Source/JavaScriptCore/runtime/JSTypedArrays.cpp
Source/JavaScriptCore/runtime/NullGetterFunction.cpp
Source/JavaScriptCore/runtime/NullSetterFunction.cpp
Source/JavaScriptCore/runtime/NumberPrototype.cpp
Source/JavaScriptCore/runtime/PromiseDeferredTimer.cpp
Source/JavaScriptCore/runtime/TypeProfiler.cpp
Source/JavaScriptCore/runtime/TypeProfilerLog.cpp
Source/JavaScriptCore/wasm/WasmB3IRGenerator.cpp
Source/JavaScriptCore/wasm/WasmBBQPlan.cpp
Source/JavaScriptCore/wasm/WasmFaultSignalHandler.cpp
Source/JavaScriptCore/wasm/WasmOMGPlan.cpp
Source/JavaScriptCore/wasm/WasmPlan.cpp
Source/JavaScriptCore/wasm/WasmSignature.cpp
Source/JavaScriptCore/wasm/WasmWorklist.cpp
Source/WTF/ChangeLog
Source/WTF/generate-unified-source-bundles.rb [new file with mode: 0644]
Source/WTF/wtf/Assertions.h

index da1ba1d..debcac8 100644
@@ -46,7 +46,7 @@ set(JavaScriptCore_SYSTEM_INCLUDE_DIRECTORIES
     "${ICU_INCLUDE_DIRS}"
 )
 
-set(JavaScriptCore_SOURCES
+set(JavaScriptCore_OG_SOURCES
     API/JSBase.cpp
     API/JSCTestRunnerUtils.cpp
     API/JSCallbackConstructor.cpp
@@ -879,7 +879,6 @@ set(JavaScriptCore_SOURCES
     runtime/PropertyTable.cpp
     runtime/PrototypeMap.cpp
     runtime/ProxyConstructor.cpp
-    runtime/ProxyObject.cpp
     runtime/ProxyRevoke.cpp
     runtime/ReflectObject.cpp
     runtime/RegExp.cpp
@@ -1017,6 +1016,30 @@ set(JavaScriptCore_SOURCES
     yarr/YarrSyntaxChecker.cpp
 )
 
+foreach (_sourceFile IN LISTS JavaScriptCore_OG_SOURCES)
+    if (NOT (${_sourceFile} MATCHES "[.]c$"))
+        set_source_files_properties(${_sourceFile} PROPERTIES HEADER_FILE_ONLY ON)
+        list(APPEND JavaScriptCore_HEADERS ${_sourceFile})
+    endif ()
+endforeach ()
+
+execute_process(COMMAND ${RUBY_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/../WTF/generate-unified-source-bundles.rb
+  "--derived-sources-path" ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR} ${JavaScriptCore_OG_SOURCES}
+  RESULT_VARIABLE generateUnifiedSourcesResult
+  OUTPUT_VARIABLE generateUnifiedSourcesOutput
+)
+
+if (${generateUnifiedSourcesResult})
+    message(FATAL_ERROR "unified-source-bundler.rb exited with non-zero status not appending results")
+else ()
+    list(APPEND JavaScriptCore_SOURCES ${generateUnifiedSourcesOutput})
+endif ()
+
+# These are special files that we can't or don't want to unified source compile
+list(APPEND JavaScriptCore_SOURCES
+    runtime/ProxyObject.cpp
+)
+
 # Extra flags for compile sources can go here.
 if (NOT MSVC)
     set_source_files_properties(runtime/ProxyObject.cpp PROPERTIES COMPILE_FLAGS -fno-optimize-sibling-calls)
@@ -1336,6 +1359,7 @@ add_custom_command(
     MAIN_DEPENDENCY ${CMAKE_CURRENT_SOURCE_DIR}/create_regex_tables
     COMMAND ${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/create_regex_tables > ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/RegExpJitTables.h
     VERBATIM)
+list(APPEND JavaScriptCore_HEADERS ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/RegExpJitTables.h)
 WEBKIT_ADD_SOURCE_DEPENDENCIES(${CMAKE_CURRENT_SOURCE_DIR}/yarr/YarrPattern.cpp ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/RegExpJitTables.h)
 
 add_custom_command(
@@ -1356,6 +1380,7 @@ add_custom_command(
     DEPENDS ${CMAKE_CURRENT_SOURCE_DIR}/parser/Keywords.table
     COMMAND ${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/KeywordLookupGenerator.py ${CMAKE_CURRENT_SOURCE_DIR}/parser/Keywords.table > ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/KeywordLookup.h
     VERBATIM)
+list(APPEND JavaScriptCore_HEADERS ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/KeywordLookup.h)
 WEBKIT_ADD_SOURCE_DEPENDENCIES(${CMAKE_CURRENT_SOURCE_DIR}/parser/Lexer.cpp ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/KeywordLookup.h)
 
 
@@ -1503,7 +1528,7 @@ add_custom_command(
     WORKING_DIRECTORY ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}
 )
 
-list(APPEND JavaScriptCore_SOURCES
+list(APPEND JavaScriptCore_HEADERS
     ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/AirOpcode.h
     ${DERIVED_SOURCES_JAVASCRIPTCORE_DIR}/AirOpcodeGenerated.h
 )
index db79ba1..b88d682 100644
@@ -1,3 +1,237 @@
+2017-09-12  Keith Miller  <keith_miller@apple.com>
+
+        Do unified source builds for JSC
+        https://bugs.webkit.org/show_bug.cgi?id=176076
+
+        Reviewed by Geoffrey Garen.
+
+        This patch switches the CMake JavaScriptCore build to use unified sources.
+        The Xcode build will be upgraded in a follow-up patch.
+
+        Most of the source changes in this patch fix static
+        variable/function name collisions. The most common collisions
+        came from our use of "static const bool verbose" and "using
+        namespace ...". I fixed all the verbose cases and the "using
+        namespace" issues that occurred under the current bundling
+        strategy. It's likely that more of the "using namespace" issues
+        will need to be resolved in the future, particularly in the FTL.
+
+        I don't expect either of these problems to apply to other parts
+        of the project nearly as much as to JSC. Using a verbose variable
+        is a JSC idiom, and JSC tends to use the same canonical class
+        name in multiple parts of the engine.
+
+        * CMakeLists.txt:
+        * b3/B3CheckSpecial.cpp:
+        (JSC::B3::CheckSpecial::forEachArg):
+        (JSC::B3::CheckSpecial::generate):
+        (JSC::B3::Air::numB3Args): Deleted.
+        * b3/B3DuplicateTails.cpp:
+        * b3/B3EliminateCommonSubexpressions.cpp:
+        * b3/B3FixSSA.cpp:
+        (JSC::B3::demoteValues):
+        * b3/B3FoldPathConstants.cpp:
+        * b3/B3InferSwitches.cpp:
+        * b3/B3LowerMacrosAfterOptimizations.cpp:
+        (): Deleted.
+        * b3/B3LowerToAir.cpp:
+        (JSC::B3::Air::LowerToAir::LowerToAir): Deleted.
+        (JSC::B3::Air::LowerToAir::run): Deleted.
+        (JSC::B3::Air::LowerToAir::shouldCopyPropagate): Deleted.
+        (JSC::B3::Air::LowerToAir::ArgPromise::ArgPromise): Deleted.
+        (JSC::B3::Air::LowerToAir::ArgPromise::swap): Deleted.
+        (JSC::B3::Air::LowerToAir::ArgPromise::operator=): Deleted.
+        (JSC::B3::Air::LowerToAir::ArgPromise::~ArgPromise): Deleted.
+        (JSC::B3::Air::LowerToAir::ArgPromise::setTraps): Deleted.
+        (JSC::B3::Air::LowerToAir::ArgPromise::tmp): Deleted.
+        (JSC::B3::Air::LowerToAir::ArgPromise::operator bool const): Deleted.
+        (JSC::B3::Air::LowerToAir::ArgPromise::kind const): Deleted.
+        (JSC::B3::Air::LowerToAir::ArgPromise::peek const): Deleted.
+        (JSC::B3::Air::LowerToAir::ArgPromise::consume): Deleted.
+        (JSC::B3::Air::LowerToAir::ArgPromise::inst): Deleted.
+        (JSC::B3::Air::LowerToAir::tmp): Deleted.
+        (JSC::B3::Air::LowerToAir::tmpPromise): Deleted.
+        (JSC::B3::Air::LowerToAir::canBeInternal): Deleted.
+        (JSC::B3::Air::LowerToAir::commitInternal): Deleted.
+        (JSC::B3::Air::LowerToAir::crossesInterference): Deleted.
+        (JSC::B3::Air::LowerToAir::scaleForShl): Deleted.
+        (JSC::B3::Air::LowerToAir::effectiveAddr): Deleted.
+        (JSC::B3::Air::LowerToAir::addr): Deleted.
+        (JSC::B3::Air::LowerToAir::trappingInst): Deleted.
+        (JSC::B3::Air::LowerToAir::loadPromiseAnyOpcode): Deleted.
+        (JSC::B3::Air::LowerToAir::loadPromise): Deleted.
+        (JSC::B3::Air::LowerToAir::imm): Deleted.
+        (JSC::B3::Air::LowerToAir::bitImm): Deleted.
+        (JSC::B3::Air::LowerToAir::bitImm64): Deleted.
+        (JSC::B3::Air::LowerToAir::immOrTmp): Deleted.
+        (JSC::B3::Air::LowerToAir::tryOpcodeForType): Deleted.
+        (JSC::B3::Air::LowerToAir::opcodeForType): Deleted.
+        (JSC::B3::Air::LowerToAir::appendUnOp): Deleted.
+        (JSC::B3::Air::LowerToAir::preferRightForResult): Deleted.
+        (JSC::B3::Air::LowerToAir::appendBinOp): Deleted.
+        (JSC::B3::Air::LowerToAir::appendShift): Deleted.
+        (JSC::B3::Air::LowerToAir::tryAppendStoreUnOp): Deleted.
+        (JSC::B3::Air::LowerToAir::tryAppendStoreBinOp): Deleted.
+        (JSC::B3::Air::LowerToAir::createStore): Deleted.
+        (JSC::B3::Air::LowerToAir::storeOpcode): Deleted.
+        (JSC::B3::Air::LowerToAir::appendStore): Deleted.
+        (JSC::B3::Air::LowerToAir::moveForType): Deleted.
+        (JSC::B3::Air::LowerToAir::relaxedMoveForType): Deleted.
+        (JSC::B3::Air::LowerToAir::print): Deleted.
+        (JSC::B3::Air::LowerToAir::append): Deleted.
+        (JSC::B3::Air::LowerToAir::appendTrapping): Deleted.
+        (JSC::B3::Air::LowerToAir::finishAppendingInstructions): Deleted.
+        (JSC::B3::Air::LowerToAir::newBlock): Deleted.
+        (JSC::B3::Air::LowerToAir::splitBlock): Deleted.
+        (JSC::B3::Air::LowerToAir::ensureSpecial): Deleted.
+        (JSC::B3::Air::LowerToAir::ensureCheckSpecial): Deleted.
+        (JSC::B3::Air::LowerToAir::fillStackmap): Deleted.
+        (JSC::B3::Air::LowerToAir::createGenericCompare): Deleted.
+        (JSC::B3::Air::LowerToAir::createBranch): Deleted.
+        (JSC::B3::Air::LowerToAir::createCompare): Deleted.
+        (JSC::B3::Air::LowerToAir::createSelect): Deleted.
+        (JSC::B3::Air::LowerToAir::tryAppendLea): Deleted.
+        (JSC::B3::Air::LowerToAir::appendX86Div): Deleted.
+        (JSC::B3::Air::LowerToAir::appendX86UDiv): Deleted.
+        (JSC::B3::Air::LowerToAir::loadLinkOpcode): Deleted.
+        (JSC::B3::Air::LowerToAir::storeCondOpcode): Deleted.
+        (JSC::B3::Air::LowerToAir::appendCAS): Deleted.
+        (JSC::B3::Air::LowerToAir::appendVoidAtomic): Deleted.
+        (JSC::B3::Air::LowerToAir::appendGeneralAtomic): Deleted.
+        (JSC::B3::Air::LowerToAir::lower): Deleted.
+        * b3/B3PatchpointSpecial.cpp:
+        (JSC::B3::PatchpointSpecial::generate):
+        * b3/B3ReduceDoubleToFloat.cpp:
+        (JSC::B3::reduceDoubleToFloat):
+        * b3/B3ReduceStrength.cpp:
+        * b3/B3StackmapGenerationParams.cpp:
+        * b3/B3StackmapSpecial.cpp:
+        (JSC::B3::StackmapSpecial::repsImpl):
+        (JSC::B3::StackmapSpecial::repForArg):
+        * b3/air/AirAllocateStackByGraphColoring.cpp:
+        (JSC::B3::Air::allocateStackByGraphColoring):
+        * b3/air/AirEmitShuffle.cpp:
+        (JSC::B3::Air::emitShuffle):
+        * b3/air/AirFixObviousSpills.cpp:
+        * b3/air/AirLowerAfterRegAlloc.cpp:
+        (JSC::B3::Air::lowerAfterRegAlloc):
+        * b3/air/AirStackAllocation.cpp:
+        (JSC::B3::Air::attemptAssignment):
+        (JSC::B3::Air::assign):
+        * bytecode/AccessCase.cpp:
+        (JSC::AccessCase::generateImpl):
+        * bytecode/CallLinkStatus.cpp:
+        (JSC::CallLinkStatus::computeDFGStatuses):
+        * bytecode/GetterSetterAccessCase.cpp:
+        (JSC::GetterSetterAccessCase::emitDOMJITGetter):
+        * bytecode/ObjectPropertyConditionSet.cpp:
+        * bytecode/PolymorphicAccess.cpp:
+        (JSC::PolymorphicAccess::addCases):
+        (JSC::PolymorphicAccess::regenerate):
+        * bytecode/PropertyCondition.cpp:
+        (JSC::PropertyCondition::isStillValidAssumingImpurePropertyWatchpoint const):
+        * bytecode/StructureStubInfo.cpp:
+        (JSC::StructureStubInfo::addAccessCase):
+        * dfg/DFGArgumentsEliminationPhase.cpp:
+        * dfg/DFGByteCodeParser.cpp:
+        (JSC::DFG::ByteCodeParser::DelayedSetLocal::DelayedSetLocal):
+        (JSC::DFG::ByteCodeParser::inliningCost):
+        (JSC::DFG::ByteCodeParser::inlineCall):
+        (JSC::DFG::ByteCodeParser::attemptToInlineCall):
+        (JSC::DFG::ByteCodeParser::handleInlining):
+        (JSC::DFG::ByteCodeParser::planLoad):
+        (JSC::DFG::ByteCodeParser::store):
+        (JSC::DFG::ByteCodeParser::parseBlock):
+        (JSC::DFG::ByteCodeParser::linkBlock):
+        (JSC::DFG::ByteCodeParser::linkBlocks):
+        * dfg/DFGCSEPhase.cpp:
+        * dfg/DFGInPlaceAbstractState.cpp:
+        (JSC::DFG::InPlaceAbstractState::merge):
+        * dfg/DFGIntegerCheckCombiningPhase.cpp:
+        (JSC::DFG::IntegerCheckCombiningPhase::handleBlock):
+        * dfg/DFGIntegerRangeOptimizationPhase.cpp:
+        * dfg/DFGMovHintRemovalPhase.cpp:
+        * dfg/DFGObjectAllocationSinkingPhase.cpp:
+        * dfg/DFGPhantomInsertionPhase.cpp:
+        * dfg/DFGPutStackSinkingPhase.cpp:
+        * dfg/DFGStoreBarrierInsertionPhase.cpp:
+        * dfg/DFGVarargsForwardingPhase.cpp:
+        * ftl/FTLAbstractHeap.cpp:
+        (JSC::FTL::AbstractHeap::compute):
+        * ftl/FTLAbstractHeapRepository.cpp:
+        (JSC::FTL::AbstractHeapRepository::decorateMemory):
+        (JSC::FTL::AbstractHeapRepository::decorateCCallRead):
+        (JSC::FTL::AbstractHeapRepository::decorateCCallWrite):
+        (JSC::FTL::AbstractHeapRepository::decoratePatchpointRead):
+        (JSC::FTL::AbstractHeapRepository::decoratePatchpointWrite):
+        (JSC::FTL::AbstractHeapRepository::decorateFenceRead):
+        (JSC::FTL::AbstractHeapRepository::decorateFenceWrite):
+        (JSC::FTL::AbstractHeapRepository::decorateFencedAccess):
+        (JSC::FTL::AbstractHeapRepository::computeRangesAndDecorateInstructions):
+        * ftl/FTLLink.cpp:
+        (JSC::FTL::link):
+        * heap/MarkingConstraintSet.cpp:
+        (JSC::MarkingConstraintSet::add):
+        * interpreter/ShadowChicken.cpp:
+        (JSC::ShadowChicken::update):
+        * jit/BinarySwitch.cpp:
+        (JSC::BinarySwitch::BinarySwitch):
+        (JSC::BinarySwitch::build):
+        * llint/LLIntData.cpp:
+        (JSC::LLInt::Data::loadStats):
+        (JSC::LLInt::Data::saveStats):
+        * runtime/ArrayPrototype.cpp:
+        (JSC::ArrayPrototype::tryInitializeSpeciesWatchpoint):
+        (JSC::ArrayPrototypeAdaptiveInferredPropertyWatchpoint::handleFire):
+        * runtime/ErrorInstance.cpp:
+        (JSC::FindFirstCallerFrameWithCodeblockFunctor::FindFirstCallerFrameWithCodeblockFunctor): Deleted.
+        (JSC::FindFirstCallerFrameWithCodeblockFunctor::operator()): Deleted.
+        (JSC::FindFirstCallerFrameWithCodeblockFunctor::foundCallFrame const): Deleted.
+        (JSC::FindFirstCallerFrameWithCodeblockFunctor::index const): Deleted.
+        * runtime/IntlDateTimeFormat.cpp:
+        (JSC::IntlDateTimeFormat::initializeDateTimeFormat):
+        * runtime/PromiseDeferredTimer.cpp:
+        (JSC::PromiseDeferredTimer::doWork):
+        (JSC::PromiseDeferredTimer::addPendingPromise):
+        (JSC::PromiseDeferredTimer::cancelPendingPromise):
+        * runtime/TypeProfiler.cpp:
+        (JSC::TypeProfiler::insertNewLocation):
+        * runtime/TypeProfilerLog.cpp:
+        (JSC::TypeProfilerLog::processLogEntries):
+        * runtime/WeakMapPrototype.cpp:
+        (JSC::protoFuncWeakMapDelete):
+        (JSC::protoFuncWeakMapGet):
+        (JSC::protoFuncWeakMapHas):
+        (JSC::protoFuncWeakMapSet):
+        (JSC::getWeakMapData): Deleted.
+        * runtime/WeakSetPrototype.cpp:
+        (JSC::protoFuncWeakSetDelete):
+        (JSC::protoFuncWeakSetHas):
+        (JSC::protoFuncWeakSetAdd):
+        (JSC::getWeakMapData): Deleted.
+        * testRegExp.cpp:
+        (testOneRegExp):
+        (runFromFiles):
+        * wasm/WasmB3IRGenerator.cpp:
+        (JSC::Wasm::parseAndCompile):
+        * wasm/WasmBBQPlan.cpp:
+        (JSC::Wasm::BBQPlan::moveToState):
+        (JSC::Wasm::BBQPlan::parseAndValidateModule):
+        (JSC::Wasm::BBQPlan::prepare):
+        (JSC::Wasm::BBQPlan::compileFunctions):
+        (JSC::Wasm::BBQPlan::complete):
+        * wasm/WasmFaultSignalHandler.cpp:
+        (JSC::Wasm::trapHandler):
+        * wasm/WasmOMGPlan.cpp:
+        (JSC::Wasm::OMGPlan::OMGPlan):
+        (JSC::Wasm::OMGPlan::work):
+        * wasm/WasmPlan.cpp:
+        (JSC::Wasm::Plan::fail):
+        * wasm/WasmSignature.cpp:
+        (JSC::Wasm::SignatureInformation::adopt):
+        * wasm/WasmWorklist.cpp:
+        (JSC::Wasm::Worklist::enqueue):
+
 2017-09-12  Michael Saboff  <msaboff@apple.com>
 
         String.prototype.replace() puts extra '<' in result when a named capture reference is used without named captures in the RegExp
index beb8792..7c5372a 100644
 
 namespace JSC { namespace B3 {
 
-using namespace Air;
+using Inst = Air::Inst;
+using Arg = Air::Arg;
+using GenerationContext = Air::GenerationContext;
 
 namespace {
 
-unsigned numB3Args(B3::Kind kind)
+unsigned numB3Args(Kind kind)
 {
     switch (kind.opcode()) {
     case CheckAdd:
@@ -108,6 +110,7 @@ Inst CheckSpecial::hiddenBranch(const Inst& inst) const
 
 void CheckSpecial::forEachArg(Inst& inst, const ScopedLambda<Inst::EachArgCallback>& callback)
 {
+    using namespace Air;
     std::optional<Width> optionalDefArgWidth;
     Inst hidden = hiddenBranch(inst);
     hidden.forEachArg(
@@ -156,6 +159,7 @@ std::optional<unsigned> CheckSpecial::shouldTryAliasingDef(Inst& inst)
 
 CCallHelpers::Jump CheckSpecial::generate(Inst& inst, CCallHelpers& jit, GenerationContext& context)
 {
+    using namespace Air;
     CCallHelpers::Jump fail = hiddenBranch(inst).generate(jit, context);
     ASSERT(fail.isSet());
 
index fd65790..cff44a4 100644
@@ -44,7 +44,9 @@ namespace JSC { namespace B3 {
 
 namespace {
 
-const bool verbose = false;
+namespace B3DuplicateTailsInternal {
+static const bool verbose = false;
+}
 
 class DuplicateTails {
 public:
@@ -94,7 +96,7 @@ public:
             }
         }
         demoteValues(m_proc, valuesToDemote);
-        if (verbose) {
+        if (B3DuplicateTailsInternal::verbose) {
             dataLog("Procedure after value demotion:\n");
             dataLog(m_proc);
         }
@@ -116,7 +118,7 @@ public:
             // point.
             candidates.remove(block);
 
-            if (verbose)
+            if (B3DuplicateTailsInternal::verbose)
                 dataLog("Duplicating ", *tail, " into ", *block, "\n");
 
             block->removeLast(m_proc);
index 1ae4355..9a2e98a 100644
@@ -53,7 +53,9 @@ namespace JSC { namespace B3 {
 
 namespace {
 
-const bool verbose = false;
+namespace B3EliminateCommonSubexpressionsInternal {
+static const bool verbose = false;
+}
 
 // FIXME: We could treat Patchpoints with a non-empty set of reads as a "memory value" and somehow
 // eliminate redundant ones. We would need some way of determining if two patchpoints are replacable.
@@ -160,7 +162,7 @@ public:
 
     bool run()
     {
-        if (verbose)
+        if (B3EliminateCommonSubexpressionsInternal::verbose)
             dataLog("B3 before CSE:\n", m_proc);
         
         m_proc.resetValueOwners();
@@ -188,7 +190,7 @@ public:
                     data.memoryValuesAtTail.add(memory);
             }
 
-            if (verbose)
+            if (B3EliminateCommonSubexpressionsInternal::verbose)
                 dataLog("Block ", *block, ": ", data, "\n");
         }
 
@@ -196,7 +198,7 @@ public:
         Vector<BasicBlock*> postOrder = m_proc.blocksInPostOrder();
         for (unsigned i = postOrder.size(); i--;) {
             m_block = postOrder[i];
-            if (verbose)
+            if (B3EliminateCommonSubexpressionsInternal::verbose)
                 dataLog("Looking at ", *m_block, ":\n");
 
             m_data = ImpureBlockData();
@@ -222,7 +224,7 @@ public:
             m_insertionSet.execute(block);
         }
 
-        if (verbose)
+        if (B3EliminateCommonSubexpressionsInternal::verbose)
             dataLog("B3 after CSE:\n", m_proc);
 
         return m_changed;
@@ -487,7 +489,7 @@ private:
         // expensive, but in the overwhelming majority of cases it will almost immediately hit an 
         // operation that interferes.
 
-        if (verbose)
+        if (B3EliminateCommonSubexpressionsInternal::verbose)
             dataLog(*m_value, ": looking forward for stores to ", *ptr, "...\n");
 
         // First search forward in this basic block.
@@ -557,7 +559,7 @@ private:
         if (matches.isEmpty())
             return false;
 
-        if (verbose)
+        if (B3EliminateCommonSubexpressionsInternal::verbose)
             dataLog("Eliminating ", *m_value, " due to ", pointerListDump(matches), "\n");
         
         m_changed = true;
@@ -566,7 +568,7 @@ private:
             MemoryValue* dominatingMatch = matches[0];
             RELEASE_ASSERT(m_dominators.dominates(dominatingMatch->owner, m_block));
             
-            if (verbose)
+            if (B3EliminateCommonSubexpressionsInternal::verbose)
                 dataLog("    Eliminating using ", *dominatingMatch, "\n");
             Vector<Value*> extraValues;
             if (Value* value = replace(dominatingMatch, extraValues)) {
@@ -590,7 +592,7 @@ private:
 
         VariableValue* get = m_insertionSet.insert<VariableValue>(
             m_index, Get, m_value->origin(), variable);
-        if (verbose)
+        if (B3EliminateCommonSubexpressionsInternal::verbose)
             dataLog("    Inserting get of value: ", *get, "\n");
         m_value->replaceWithIdentity(get);
             
@@ -615,23 +617,23 @@ private:
     template<typename Filter>
     MemoryMatches findMemoryValue(Value* ptr, HeapRange range, const Filter& filter)
     {
-        if (verbose)
+        if (B3EliminateCommonSubexpressionsInternal::verbose)
             dataLog(*m_value, ": looking backward for ", *ptr, "...\n");
         
         if (m_value->as<MemoryValue>()->hasFence()) {
-            if (verbose)
+            if (B3EliminateCommonSubexpressionsInternal::verbose)
                 dataLog("    Giving up because fences.\n");
             return { };
         }
         
         if (MemoryValue* match = m_data.memoryValuesAtTail.find(ptr, filter)) {
-            if (verbose)
+            if (B3EliminateCommonSubexpressionsInternal::verbose)
                 dataLog("    Found ", *match, " locally.\n");
             return { match };
         }
 
         if (m_data.writes.overlaps(range)) {
-            if (verbose)
+            if (B3EliminateCommonSubexpressionsInternal::verbose)
                 dataLog("    Giving up because of writes.\n");
             return { };
         }
@@ -642,27 +644,27 @@ private:
         MemoryMatches matches;
 
         while (BasicBlock* block = worklist.pop()) {
-            if (verbose)
+            if (B3EliminateCommonSubexpressionsInternal::verbose)
                 dataLog("    Looking at ", *block, "\n");
 
             ImpureBlockData& data = m_impureBlockData[block];
 
             MemoryValue* match = data.memoryValuesAtTail.find(ptr, filter);
             if (match && match != m_value) {
-                if (verbose)
+                if (B3EliminateCommonSubexpressionsInternal::verbose)
                     dataLog("    Found match: ", *match, "\n");
                 matches.append(match);
                 continue;
             }
 
             if (data.writes.overlaps(range)) {
-                if (verbose)
+                if (B3EliminateCommonSubexpressionsInternal::verbose)
                     dataLog("    Giving up because of writes.\n");
                 return { };
             }
 
             if (!block->numPredecessors()) {
-                if (verbose)
+                if (B3EliminateCommonSubexpressionsInternal::verbose)
                     dataLog("    Giving up because it's live at root.\n");
                 // This essentially proves that this is live at the prologue. That means that we
                 // cannot reliably optimize this case.
@@ -672,7 +674,7 @@ private:
             worklist.pushAll(block->predecessors());
         }
 
-        if (verbose)
+        if (B3EliminateCommonSubexpressionsInternal::verbose)
             dataLog("    Got matches: ", pointerListDump(matches), "\n");
         return matches;
     }
index d20381d..b28f21f 100644
@@ -48,7 +48,9 @@ namespace JSC { namespace B3 {
 
 namespace {
 
-const bool verbose = false;
+namespace B3FixSSAInternal {
+static const bool verbose = false;
+}
 
 void killDeadVariables(Procedure& proc)
 {
@@ -157,7 +159,7 @@ void fixSSAGlobally(Procedure& proc)
                     return nullptr;
                 
                 Value* phi = proc.add<Value>(Phi, variable->type(), block->at(0)->origin());
-                if (verbose) {
+                if (B3FixSSAInternal::verbose) {
                     dataLog(
                         "Adding Phi for ", pointerDump(variable), " at ", *block, ": ",
                         deepDump(proc, phi), "\n");
@@ -231,7 +233,7 @@ void fixSSAGlobally(Procedure& proc)
                 Variable* variable = calcVarToVariable[calcVar->index()];
 
                 Value* mappedValue = ensureMapping(variable, upsilonInsertionPoint, upsilonOrigin);
-                if (verbose) {
+                if (B3FixSSAInternal::verbose) {
                     dataLog(
                         "Mapped value for ", *variable, " with successor Phi ", *phi,
                         " at end of ", *block, ": ", pointerDump(mappedValue), "\n");
@@ -245,7 +247,7 @@ void fixSSAGlobally(Procedure& proc)
         insertionSet.execute(block);
     }
 
-    if (verbose) {
+    if (B3FixSSAInternal::verbose) {
         dataLog("B3 after SSA conversion:\n");
         dataLog(proc);
     }
@@ -266,7 +268,7 @@ void demoteValues(Procedure& proc, const IndexSet<Value*>& values)
             phiMap.add(value, proc.addVariable(value->type()));
     }
 
-    if (verbose) {
+    if (B3FixSSAInternal::verbose) {
         dataLog("Demoting values as follows:\n");
         dataLog("   map = ");
         CommaPrinter comma;
index b9bdc8d..22250cb 100644
@@ -41,7 +41,9 @@ namespace JSC { namespace B3 {
 
 namespace {
 
-const bool verbose = false;
+namespace B3FoldPathConstantsInternal {
+static const bool verbose = false;
+}
 
 class FoldPathConstants {
 public:
@@ -55,7 +57,7 @@ public:
     {
         bool changed = false;
 
-        if (verbose)
+        if (B3FoldPathConstantsInternal::verbose)
             dataLog("B3 before folding path constants: \n", m_proc, "\n");
         
         // Find all of the values that are the subject of a branch or switch. For any successor
@@ -80,7 +82,7 @@ public:
                     ASSERT_UNUSED(otherOverride, otherOverride.block != override.block);
             }
 
-            if (verbose)
+            if (B3FoldPathConstantsInternal::verbose)
                 dataLog("Overriding ", *value, " from ", *from, ": ", override, "\n");
             
             forValue.append(override);
@@ -149,7 +151,7 @@ public:
                     result = override;
             }
 
-            if (verbose)
+            if (B3FoldPathConstantsInternal::verbose)
                 dataLog("In block ", *block, " getting override for ", *value, ": ", result, "\n");
 
             return result;
index f29f4f0..ee612fe 100644
@@ -42,7 +42,9 @@ namespace JSC { namespace B3 {
 
 namespace {
 
-const bool verbose = false;
+namespace B3InferSwitchesInternal {
+static const bool verbose = false;
+}
 
 class InferSwitches {
 public:
@@ -55,7 +57,7 @@ public:
     
     bool run()
     {
-        if (verbose)
+        if (B3InferSwitchesInternal::verbose)
             dataLog("B3 before inferSwitches:\n", m_proc);
         
         bool changed = true;
@@ -63,7 +65,7 @@ public:
         while (changed) {
             changed = false;
             
-            if (verbose)
+            if (B3InferSwitchesInternal::verbose)
                 dataLog("Performing fixpoint iteration:\n");
             
             for (BasicBlock* block : m_proc)
@@ -78,7 +80,7 @@ public:
             
             m_proc.deleteOrphans();
             
-            if (verbose)
+            if (B3InferSwitchesInternal::verbose)
                 dataLog("B3 after inferSwitches:\n", m_proc);
             return true;
         }
@@ -96,10 +98,10 @@ private:
             return false;
         
         SwitchDescription description = describe(block);
-        if (verbose)
+        if (B3InferSwitchesInternal::verbose)
             dataLog("Description of primary block ", *block, ": ", description, "\n");
         if (!description) {
-            if (verbose)
+            if (B3InferSwitchesInternal::verbose)
                 dataLog("    Bailing because not switch-like.\n");
             return false;
         }
@@ -115,17 +117,17 @@ private:
                 continue;
             if (value == description.branch)
                 continue;
-            if (verbose)
+            if (B3InferSwitchesInternal::verbose)
                 dataLog("    Bailing because of ", deepDump(m_proc, value), "\n");
             return false;
         }
         
         BasicBlock* predecessor = block->predecessor(0);
         SwitchDescription predecessorDescription = describe(predecessor);
-        if (verbose)
+        if (B3InferSwitchesInternal::verbose)
             dataLog("    Description of predecessor block ", *predecessor, ": ", predecessorDescription, "\n");
         if (!predecessorDescription) {
-            if (verbose)
+            if (B3InferSwitchesInternal::verbose)
                 dataLog("    Bailing because not switch-like.\n");
             return false;
         }
@@ -133,7 +135,7 @@ private:
         // Both us and the predecessor are switch-like, but that doesn't mean that we're compatible.
         // We may be switching on different values!
         if (description.source != predecessorDescription.source) {
-            if (verbose)
+            if (B3InferSwitchesInternal::verbose)
                 dataLog("    Bailing because sources don't match.\n");
             return false;
         }
@@ -143,7 +145,7 @@ private:
         // just totally redundant and we should be getting rid of it. But we don't handle that here,
         // yet.
         if (predecessorDescription.fallThrough.block() != block) {
-            if (verbose)
+            if (B3InferSwitchesInternal::verbose)
                 dataLog("    Bailing because fall-through of predecessor is not the primary block.\n");
             return false;
         }
@@ -151,14 +153,14 @@ private:
         // Make sure that there ain't no loops.
         if (description.fallThrough.block() == block
             || description.fallThrough.block() == predecessor) {
-            if (verbose)
+            if (B3InferSwitchesInternal::verbose)
                 dataLog("    Bailing because of fall-through loop.\n");
             return false;
         }
         for (SwitchCase switchCase : description.cases) {
             if (switchCase.targetBlock() == block
                 || switchCase.targetBlock() == predecessor) {
-                if (verbose)
+                if (B3InferSwitchesInternal::verbose)
                     dataLog("    Bailing because of loop in primary cases.\n");
                 return false;
             }
@@ -166,13 +168,13 @@ private:
         for (SwitchCase switchCase : predecessorDescription.cases) {
             if (switchCase.targetBlock() == block
                 || switchCase.targetBlock() == predecessor) {
-                if (verbose)
+                if (B3InferSwitchesInternal::verbose)
                     dataLog("    Bailing because of loop in predecessor cases.\n");
                 return false;
             }
         }
         
-        if (verbose)
+        if (B3InferSwitchesInternal::verbose)
             dataLog("    Doing it!\n");
         // We're committed to doing the thing.
         
index 963c10e..14567fc 100644
@@ -28,6 +28,7 @@
 
 #if ENABLE(B3_JIT)
 
+#include "AirArg.h"
 #include "B3BasicBlockInlines.h"
 #include "B3BlockInsertionSet.h"
 #include "B3CCallValue.h"
 
 namespace JSC { namespace B3 {
 
+using Arg = Air::Arg;
+using Code = Air::Code;
+using Tmp = Air::Tmp;
+
 namespace {
 
-class LowerMacros {
+class LowerMacrosAfterOptimizations {
 public:
-    LowerMacros(Procedure& proc)
+    LowerMacrosAfterOptimizations(Procedure& proc)
         : m_proc(proc)
         , m_blockInsertionSet(proc)
         , m_insertionSet(proc)
@@ -183,7 +188,7 @@ private:
 
 bool lowerMacrosImpl(Procedure& proc)
 {
-    LowerMacros lowerMacros(proc);
+    LowerMacrosAfterOptimizations lowerMacros(proc);
     return lowerMacros.run();
 }
 
index 1489c31..764db4c 100644
 
 namespace JSC { namespace B3 {
 
-using namespace Air;
-
 namespace {
 
-const bool verbose = false;
+namespace B3LowerToAirInternal {
+static const bool verbose = false;
+}
+
+using Arg = Air::Arg;
+using Inst = Air::Inst;
+using Code = Air::Code;
+using Tmp = Air::Tmp;
 
 // FIXME: We wouldn't need this if Air supported Width modifiers in Air::Kind.
 // https://bugs.webkit.org/show_bug.cgi?id=169247
 #define OPCODE_FOR_WIDTH(opcode, width) ( \
-    (width) == Width8 ? opcode ## 8 : \
-    (width) == Width16 ? opcode ## 16 : \
-    (width) == Width32 ? opcode ## 32 : \
-    opcode ## 64)
+    (width) == Width8 ? Air::opcode ## 8 : \
+    (width) == Width16 ? Air::opcode ## 16 :    \
+    (width) == Width32 ? Air::opcode ## 32 :    \
+    Air::opcode ## 64)
 #define OPCODE_FOR_CANONICAL_WIDTH(opcode, width) ( \
-    (width) == Width64 ? opcode ## 64 : opcode ## 32)
+    (width) == Width64 ? Air::opcode ## 64 : Air::opcode ## 32)
 
 class LowerToAir {
 public:
@@ -107,6 +112,7 @@ public:
 
     void run()
     {
+        using namespace Air;
         for (B3::BasicBlock* block : m_procedure)
             m_blockToBlock[block] = m_code.addBlock(block->frequency());
         
@@ -114,7 +120,7 @@ public:
             switch (value->opcode()) {
             case Phi: {
                 m_phiToTmp[value] = m_code.newTmp(value->resultBank());
-                if (verbose)
+                if (B3LowerToAirInternal::verbose)
                     dataLog("Phi tmp for ", *value, ": ", m_phiToTmp[value], "\n");
                 break;
             }
@@ -146,7 +152,7 @@ public:
 
             m_isRare = !m_fastWorklist.saw(block);
 
-            if (verbose)
+            if (B3LowerToAirInternal::verbose)
                 dataLog("Lowering Block ", *block, ":\n");
             
             // Make sure that the successors are set up correctly.
@@ -163,10 +169,10 @@ public:
                 if (m_locked.contains(m_value))
                     continue;
                 m_insts.append(Vector<Inst>());
-                if (verbose)
+                if (B3LowerToAirInternal::verbose)
                     dataLog("Lowering ", deepDump(m_procedure, m_value), ":\n");
                 lower();
-                if (verbose) {
+                if (B3LowerToAirInternal::verbose) {
                     for (Inst& inst : m_insts.last())
                         dataLog("    ", inst, "\n");
                 }
@@ -379,7 +385,7 @@ private:
                 realTmp = m_code.newTmp(value->resultBank());
                 if (m_procedure.isFastConstant(value->key()))
                     m_code.addFastTmp(realTmp);
-                if (verbose)
+                if (B3LowerToAirInternal::verbose)
                     dataLog("Tmp for ", *value, ": ", realTmp, "\n");
             }
             tmp = realTmp;
@@ -906,6 +912,7 @@ private:
     template<Air::Opcode opcode32, Air::Opcode opcode64>
     void appendShift(Value* value, Value* amount)
     {
+        using namespace Air;
         Air::Opcode opcode = opcodeForType(opcode32, opcode64, value->type());
         
         if (imm(amount)) {
@@ -1019,6 +1026,7 @@ private:
 
     Inst createStore(Air::Kind move, Value* value, const Arg& dest)
     {
+        using namespace Air;
         if (auto imm_value = imm(value)) {
             if (isARM64() && imm_value.value() == 0) {
                 switch (move.opcode) {
@@ -1043,6 +1051,7 @@ private:
     
     Air::Opcode storeOpcode(Width width, Bank bank)
     {
+        using namespace Air;
         switch (width) {
         case Width8:
             RELEASE_ASSERT(bank == GP);
@@ -1073,6 +1082,7 @@ private:
     
     void appendStore(Value* value, const Arg& dest)
     {
+        using namespace Air;
         MemoryValue* memory = value->as<MemoryValue>();
         RELEASE_ASSERT(memory->isStore());
 
@@ -1100,6 +1110,7 @@ private:
 
     Air::Opcode moveForType(Type type)
     {
+        using namespace Air;
         switch (type) {
         case Int32:
             return Move32;
@@ -1119,6 +1130,7 @@ private:
 
     Air::Opcode relaxedMoveForType(Type type)
     {
+        using namespace Air;
         switch (type) {
         case Int32:
         case Int64:
@@ -1162,8 +1174,8 @@ private:
     void print(Value* origin, Arguments&&... arguments)
     {
         auto printList = Printer::makePrintRecordList(arguments...);
-        auto printSpecial = static_cast<PrintSpecial*>(m_code.addSpecial(std::make_unique<PrintSpecial>(printList)));
-        Inst inst(Patch, origin, Arg::special(printSpecial));
+        auto printSpecial = static_cast<Air::PrintSpecial*>(m_code.addSpecial(std::make_unique<Air::PrintSpecial>(printList)));
+        Inst inst(Air::Patch, origin, Arg::special(printSpecial));
         Printer::appendAirArgs(inst, std::forward<Arguments>(arguments)...);
         append(WTFMove(inst));
     }
@@ -1762,6 +1774,7 @@ private:
 
     Inst createBranch(Value* value, bool inverted = false)
     {
+        using namespace Air;
         return createGenericCompare(
             value,
             [this] (
@@ -1845,6 +1858,7 @@ private:
 
     Inst createCompare(Value* value, bool inverted = false)
     {
+        using namespace Air;
         return createGenericCompare(
             value,
             [this] (
@@ -1924,6 +1938,7 @@ private:
     };
     Inst createSelect(const MoveConditionallyConfig& config)
     {
+        using namespace Air;
         auto createSelectInstruction = [&] (Air::Opcode opcode, const Arg& condition, ArgPromise& left, ArgPromise& right) -> Inst {
             if (isValidForm(opcode, condition.kind(), left.kind(), right.kind(), Arg::Tmp, Arg::Tmp, Arg::Tmp)) {
                 Tmp result = tmp(m_value);
@@ -1987,6 +2002,7 @@ private:
     
     bool tryAppendLea()
     {
+        using namespace Air;
         Air::Opcode leaOpcode = tryOpcodeForType(Lea32, Lea64, m_value->type());
         if (!isValidForm(leaOpcode, Arg::Index, Arg::Tmp))
             return false;
@@ -2116,6 +2132,7 @@ private:
 
     void appendX86Div(B3::Opcode op)
     {
+        using namespace Air;
         Air::Opcode convertToDoubleWord;
         Air::Opcode div;
         switch (m_value->type()) {
@@ -2143,6 +2160,7 @@ private:
 
     void appendX86UDiv(B3::Opcode op)
     {
+        using namespace Air;
         Air::Opcode div = m_value->type() == Int32 ? X86UDiv32 : X86UDiv64;
 
         ASSERT(op == UDiv || op == UMod);
@@ -2178,6 +2196,7 @@ private:
     // generated. It assumes that you've consumed everything that needs to be consumed.
     void appendCAS(Value* atomicValue, bool invert)
     {
+        using namespace Air;
         AtomicValue* atomic = atomicValue->as<AtomicValue>();
         RELEASE_ASSERT(atomic);
         
@@ -2340,6 +2359,7 @@ private:
     
     void appendGeneralAtomic(Air::Opcode opcode, Commutativity commutativity = NotCommutative)
     {
+        using namespace Air;
         AtomicValue* atomic = m_value->as<AtomicValue>();
         
         Arg address = addr(m_value);
@@ -2424,6 +2444,7 @@ private:
     
     void lower()
     {
+        using namespace Air;
         switch (m_value->opcode()) {
         case B3::Nop: {
             // Yes, we will totally see Nop's because some phases will replaceWithNop() instead of
index 47f0abd..b375cdd 100644
@@ -34,7 +34,8 @@
 
 namespace JSC { namespace B3 {
 
-using namespace Air;
+using Arg = Air::Arg;
+using Inst = Air::Inst;
 
 PatchpointSpecial::PatchpointSpecial()
 {
@@ -135,8 +136,7 @@ bool PatchpointSpecial::admitsExtendedOffsetAddr(Inst& inst, unsigned argIndex)
     return admitsStack(inst, argIndex);
 }
 
-CCallHelpers::Jump PatchpointSpecial::generate(
-    Inst& inst, CCallHelpers& jit, GenerationContext& context)
+CCallHelpers::Jump PatchpointSpecial::generate(Inst& inst, CCallHelpers& jit, Air::GenerationContext& context)
 {
     PatchpointValue* value = inst.origin->as<PatchpointValue>();
     ASSERT(value);
index 1c7b0d0..6d31f7c 100644
@@ -39,7 +39,9 @@ namespace JSC { namespace B3 {
 
 namespace {
 
-bool verbose = false;
+namespace B3ReduceDoubleToFloatInternal {
+static const bool verbose = false;
+}
 bool printRemainingConversions = false;
 
 class DoubleToFloatReduction {
@@ -128,7 +130,7 @@ private:
             }
         } while (changedPhiState);
 
-        if (verbose) {
+        if (B3ReduceDoubleToFloatInternal::verbose) {
             dataLog("Conversion candidates:\n");
             for (BasicBlock* block : m_procedure) {
                 for (Value* value : *block) {
@@ -192,7 +194,7 @@ private:
             }
         } while (changedPhiState);
 
-        if (verbose) {
+        if (B3ReduceDoubleToFloatInternal::verbose) {
             dataLog("Phis containing float values:\n");
             for (BasicBlock* block : m_procedure) {
                 for (Value* value : *block) {
@@ -489,13 +491,13 @@ void reduceDoubleToFloat(Procedure& procedure)
 {
     PhaseScope phaseScope(procedure, "reduceDoubleToFloat");
 
-    if (verbose)
+    if (B3ReduceDoubleToFloatInternal::verbose)
         dataLog("Before DoubleToFloatReduction:\n", procedure, "\n");
 
     DoubleToFloatReduction doubleToFloatReduction(procedure);
     doubleToFloatReduction.run();
 
-    if (verbose)
+    if (B3ReduceDoubleToFloatInternal::verbose)
         dataLog("After DoubleToFloatReduction:\n", procedure, "\n");
 
     printGraphIfConverting(procedure);
index c22e9b4..5502d5e 100644
@@ -88,7 +88,9 @@ namespace {
 // constants then the canonical form involves the lower-indexed value first. Given Add(x, y), it's
 // canonical if x->index() <= y->index().
 
-bool verbose = false;
+namespace B3ReduceStrengthInternal {
+static const bool verbose = false;
+}
 
 // FIXME: This IntRange stuff should be refactored into a general constant propagator. It's weird
 // that it's just sitting here in this file.
@@ -414,7 +416,7 @@ public:
 
             if (first)
                 first = false;
-            else if (verbose) {
+            else if (B3ReduceStrengthInternal::verbose) {
                 dataLog("B3 after iteration #", index - 1, " of reduceStrength:\n");
                 dataLog(m_proc);
             }
@@ -452,7 +454,7 @@ public:
                 m_block = block;
                 
                 for (m_index = 0; m_index < block->size(); ++m_index) {
-                    if (verbose) {
+                    if (B3ReduceStrengthInternal::verbose) {
                         dataLog(
                             "Looking at ", *block, " #", m_index, ": ",
                             deepDump(m_proc, block->at(m_index)), "\n");
@@ -2035,7 +2037,7 @@ private:
     // early.
     void specializeSelect(Value* source)
     {
-        if (verbose)
+        if (B3ReduceStrengthInternal::verbose)
             dataLog("Specializing select: ", deepDump(m_proc, source), "\n");
 
         // This mutates startIndex to account for the fact that m_block got the front of it
@@ -2276,7 +2278,7 @@ private:
 
     void simplifyCFG()
     {
-        if (verbose) {
+        if (B3ReduceStrengthInternal::verbose) {
             dataLog("Before simplifyCFG:\n");
             dataLog(m_proc);
         }
@@ -2301,7 +2303,7 @@ private:
         // iterations needed to kill a lot of code.
 
         for (BasicBlock* block : m_proc) {
-            if (verbose)
+            if (B3ReduceStrengthInternal::verbose)
                 dataLog("Considering block ", *block, ":\n");
 
             checkPredecessorValidity();
@@ -2317,7 +2319,7 @@ private:
                     && successor->last()->opcode() == Jump) {
                     BasicBlock* newSuccessor = successor->successorBlock(0);
                     if (newSuccessor != successor) {
-                        if (verbose) {
+                        if (B3ReduceStrengthInternal::verbose) {
                             dataLog(
                                 "Replacing ", pointerDump(block), "->", pointerDump(successor),
                                 " with ", pointerDump(block), "->", pointerDump(newSuccessor),
@@ -2348,7 +2350,7 @@ private:
                         }
                     }
                     if (allSame) {
-                        if (verbose) {
+                        if (B3ReduceStrengthInternal::verbose) {
                             dataLog(
                                 "Changing ", pointerDump(block), "'s terminal to a Jump.\n");
                         }
@@ -2387,7 +2389,7 @@ private:
                     for (BasicBlock* newSuccessor : block->successorBlocks())
                         newSuccessor->replacePredecessor(successor, block);
 
-                    if (verbose) {
+                    if (B3ReduceStrengthInternal::verbose) {
                         dataLog(
                             "Merged ", pointerDump(block), "->", pointerDump(successor), "\n");
                     }
@@ -2397,7 +2399,7 @@ private:
             }
         }
 
-        if (m_changedCFG && verbose) {
+        if (m_changedCFG && B3ReduceStrengthInternal::verbose) {
             dataLog("B3 after simplifyCFG:\n");
             dataLog(m_proc);
         }
index bba84ca..389ef8b 100644
@@ -34,8 +34,6 @@
 
 namespace JSC { namespace B3 {
 
-using namespace Air;
-
 const RegisterSet& StackmapGenerationParams::usedRegisters() const
 {
     ASSERT(m_context.code->needsUsedRegisters());
index 60370c5..5eb5b9b 100644
@@ -34,7 +34,9 @@
 
 namespace JSC { namespace B3 {
 
-using namespace Air;
+using Arg = Air::Arg;
+using Inst = Air::Inst;
+using Tmp = Air::Tmp;
 
 StackmapSpecial::StackmapSpecial()
 {
@@ -210,8 +212,7 @@ bool StackmapSpecial::admitsStackImpl(
     return false;
 }
 
-Vector<ValueRep> StackmapSpecial::repsImpl(
-    GenerationContext& context, unsigned numIgnoredB3Args, unsigned numIgnoredAirArgs, Inst& inst)
+Vector<ValueRep> StackmapSpecial::repsImpl(Air::GenerationContext& context, unsigned numIgnoredB3Args, unsigned numIgnoredAirArgs, Inst& inst)
 {
     Vector<ValueRep> result;
     for (unsigned i = 0; i < inst.origin->numChildren() - numIgnoredB3Args; ++i)
@@ -267,7 +268,7 @@ bool StackmapSpecial::isArgValidForRep(Air::Code& code, const Air::Arg& arg, con
     }
 }
 
-ValueRep StackmapSpecial::repForArg(Code& code, const Arg& arg)
+ValueRep StackmapSpecial::repForArg(Air::Code& code, const Arg& arg)
 {
     switch (arg.kind()) {
     case Arg::Tmp:
index e860d9c..ed67d1c 100644
@@ -42,7 +42,9 @@ namespace JSC { namespace B3 { namespace Air {
 
 namespace {
 
-const bool verbose = false;
+namespace AirAllocateStackByGraphColoringInternal {
+static const bool verbose = false;
+}
 
 struct CoalescableMove {
     CoalescableMove()
@@ -153,7 +155,7 @@ void allocateStackByGraphColoring(Code& code)
         StackSlotLiveness::LocalCalc localCalc(liveness, block);
 
         auto interfere = [&] (unsigned instIndex) {
-            if (verbose)
+            if (AirAllocateStackByGraphColoringInternal::verbose)
                 dataLog("Interfering: ", WTF::pointerListDump(localCalc.live()), "\n");
 
             Inst* prevInst = block->get(instIndex);
@@ -185,7 +187,7 @@ void allocateStackByGraphColoring(Code& code)
         };
 
         for (unsigned instIndex = block->size(); instIndex--;) {
-            if (verbose)
+            if (AirAllocateStackByGraphColoringInternal::verbose)
                 dataLog("Analyzing: ", block->at(instIndex), "\n");
 
             // Kill dead stores. For simplicity we say that a store is killable if it has only late
@@ -232,7 +234,7 @@ void allocateStackByGraphColoring(Code& code)
             });
     }
 
-    if (verbose) {
+    if (AirAllocateStackByGraphColoringInternal::verbose) {
         for (StackSlot* slot : code.stackSlots())
             dataLog("Interference of ", pointerDump(slot), ": ", pointerListDump(interference[slot]), "\n");
     }
index 73d710f..7166ed8 100644
@@ -37,7 +37,9 @@ namespace JSC { namespace B3 { namespace Air {
 
 namespace {
 
-bool verbose = false;
+namespace AirEmitShuffleInternal {
+static const bool verbose = false;
+}
 
 template<typename Functor>
 Tmp findPossibleScratch(Code& code, Bank bank, const Functor& functor) {
@@ -125,7 +127,7 @@ Vector<Inst> emitShuffle(
     Code& code, Vector<ShufflePair> pairs, std::array<Arg, 2> scratches, Bank bank,
     Value* origin)
 {
-    if (verbose) {
+    if (AirEmitShuffleInternal::verbose) {
         dataLog(
             "Dealing with pairs: ", listDump(pairs), " and scratches ", scratches[0], ", ",
             scratches[1], "\n");
@@ -185,7 +187,7 @@ Vector<Inst> emitShuffle(
             ASSERT(currentPairs.isEmpty());
             Arg originalSrc = mapping.begin()->key;
             ASSERT(!shifts.contains(originalSrc));
-            if (verbose)
+            if (AirEmitShuffleInternal::verbose)
                 dataLog("Processing from ", originalSrc, "\n");
             
             GraphNodeWorklist<Arg> worklist;
@@ -195,7 +197,7 @@ Vector<Inst> emitShuffle(
                 if (iter == mapping.end()) {
                     // With a shift it's possible that we previously built the tail of this shift.
                     // See if that's the case now.
-                    if (verbose)
+                    if (AirEmitShuffleInternal::verbose)
                         dataLog("Trying to append shift at ", src, "\n");
                     currentPairs.appendVector(shifts.take(src));
                     continue;
@@ -213,7 +215,7 @@ Vector<Inst> emitShuffle(
             ASSERT(currentPairs.size());
             ASSERT(currentPairs[0].src() == originalSrc);
 
-            if (verbose)
+            if (AirEmitShuffleInternal::verbose)
                 dataLog("currentPairs = ", listDump(currentPairs), "\n");
 
             bool isRotate = false;
@@ -225,7 +227,7 @@ Vector<Inst> emitShuffle(
             }
 
             if (isRotate) {
-                if (verbose)
+                if (AirEmitShuffleInternal::verbose)
                     dataLog("It's a rotate.\n");
                 Rotate rotate;
 
@@ -277,14 +279,14 @@ Vector<Inst> emitShuffle(
                 rotates.append(WTFMove(rotate));
                 currentPairs.shrink(0);
             } else {
-                if (verbose)
+                if (AirEmitShuffleInternal::verbose)
                     dataLog("It's a shift.\n");
                 shifts.add(originalSrc, WTFMove(currentPairs));
             }
         }
     }
 
-    if (verbose) {
+    if (AirEmitShuffleInternal::verbose) {
         dataLog("Shifts:\n");
         for (auto& entry : shifts)
             dataLog("    ", entry.key, ": ", listDump(entry.value), "\n");
index 9cb1101..60a8cc0 100644 (file)
@@ -39,7 +39,9 @@ namespace JSC { namespace B3 { namespace Air {
 
 namespace {
 
-bool verbose = false;
+namespace AirFixObviousSpillsInternal {
+static const bool verbose = false;
+}
 
 class FixObviousSpills {
 public:
@@ -51,7 +53,7 @@ public:
 
     void run()
     {
-        if (verbose)
+        if (AirFixObviousSpillsInternal::verbose)
             dataLog("Code before fixObviousSpills:\n", m_code);
         
         computeAliases();
@@ -73,7 +75,7 @@ private:
                 if (!m_state.wasVisited)
                     continue;
 
-                if (verbose)
+                if (AirFixObviousSpillsInternal::verbose)
                     dataLog("Executing block ", *m_block, ": ", m_state, "\n");
                 
                 for (m_instIndex = 0; m_instIndex < block->size(); ++m_instIndex)
@@ -169,13 +171,13 @@ private:
     {
         Inst& inst = m_block->at(m_instIndex);
 
-        if (verbose)
+        if (AirFixObviousSpillsInternal::verbose)
             dataLog("    Executing ", inst, ": ", m_state, "\n");
 
         Inst::forEachDefWithExtraClobberedRegs<Arg>(
             &inst, &inst,
             [&] (const Arg& arg, Arg::Role, Bank, Width) {
-                if (verbose)
+                if (AirFixObviousSpillsInternal::verbose)
                     dataLog("        Clobbering ", arg, "\n");
                 m_state.clobber(arg);
             });
@@ -190,7 +192,7 @@ private:
     {
         Inst& inst = m_block->at(m_instIndex);
 
-        if (verbose)
+        if (AirFixObviousSpillsInternal::verbose)
             dataLog("Fixing inst ", inst, ": ", m_state, "\n");
         
         // Check if alias analysis says that this is unnecessary.
@@ -267,13 +269,13 @@ private:
                 case Width64:
                     if (alias->mode != RegSlot::AllBits)
                         return;
-                    if (verbose)
+                    if (AirFixObviousSpillsInternal::verbose)
                         dataLog("    Replacing ", arg, " with ", alias->reg, "\n");
                     arg = Tmp(alias->reg);
                     didThings = true;
                     return;
                 case Width32:
-                    if (verbose)
+                    if (AirFixObviousSpillsInternal::verbose)
                         dataLog("    Replacing ", arg, " with ", alias->reg, " (subwidth case)\n");
                     arg = Tmp(alias->reg);
                     didThings = true;
@@ -285,7 +287,7 @@ private:
 
             // Revert to immediate if that didn't work.
             if (const SlotConst* alias = m_state.getSlotConst(arg.stackSlot())) {
-                if (verbose)
+                if (AirFixObviousSpillsInternal::verbose)
                     dataLog("    Replacing ", arg, " with constant ", alias->constant, "\n");
                 if (Arg::isValidImmForm(alias->constant))
                     arg = Arg::imm(alias->constant);
index 1fa053d..bfef3b3 100644 (file)
@@ -46,7 +46,9 @@ namespace JSC { namespace B3 { namespace Air {
 
 namespace {
 
-bool verbose = false;
+namespace AirLowerAfterRegAllocInternal {
+static const bool verbose = false;
+}
     
 } // anonymous namespace
 
@@ -54,7 +56,7 @@ void lowerAfterRegAlloc(Code& code)
 {
     PhaseScope phaseScope(code, "lowerAfterRegAlloc");
 
-    if (verbose)
+    if (AirLowerAfterRegAllocInternal::verbose)
         dataLog("Code before lowerAfterRegAlloc:\n", code);
     
     auto isRelevant = [] (Inst& inst) -> bool {
@@ -221,7 +223,7 @@ void lowerAfterRegAlloc(Code& code)
                         stackSlots.append(stackSlot);
                     });
 
-                if (verbose)
+                if (AirLowerAfterRegAllocInternal::verbose)
                     dataLog("Pre-call pairs for ", inst, ": ", listDump(pairs), "\n");
                 
                 insertionSet.insertInsts(
@@ -273,7 +275,7 @@ void lowerAfterRegAlloc(Code& code)
             });
     }
 
-    if (verbose)
+    if (AirLowerAfterRegAllocInternal::verbose)
         dataLog("Code after lowerAfterRegAlloc:\n", code);
 }
 
index 392223c..6b6a26a 100644 (file)
@@ -38,7 +38,9 @@ namespace JSC { namespace B3 { namespace Air {
 
 namespace {
 
-const bool verbose = false;
+namespace AirStackAllocationInternal {
+static const bool verbose = false;
+}
 
 template<typename Collection>
 void updateFrameSizeBasedOnStackSlotsImpl(Code& code, const Collection& collection)
@@ -54,7 +56,7 @@ void updateFrameSizeBasedOnStackSlotsImpl(Code& code, const Collection& collecti
 bool attemptAssignment(
     StackSlot* slot, intptr_t offsetFromFP, const Vector<StackSlot*>& otherSlots)
 {
-    if (verbose)
+    if (AirStackAllocationInternal::verbose)
         dataLog("Attempting to assign ", pointerDump(slot), " to ", offsetFromFP, " with interference ", pointerListDump(otherSlots), "\n");
 
     // Need to align it to the slot's desired alignment.
@@ -72,7 +74,7 @@ bool attemptAssignment(
             return false;
     }
 
-    if (verbose)
+    if (AirStackAllocationInternal::verbose)
         dataLog("Assigned ", pointerDump(slot), " to ", offsetFromFP, "\n");
     slot->setOffsetFromFP(offsetFromFP);
     return true;
@@ -80,7 +82,7 @@ bool attemptAssignment(
 
 void assign(StackSlot* slot, const Vector<StackSlot*>& otherSlots)
 {
-    if (verbose)
+    if (AirStackAllocationInternal::verbose)
         dataLog("Attempting to assign ", pointerDump(slot), " with interference ", pointerListDump(otherSlots), "\n");
     
     if (attemptAssignment(slot, -static_cast<intptr_t>(slot->byteSize()), otherSlots))
index 35ffbf1..4198655 100644 (file)
@@ -51,7 +51,9 @@
 
 namespace JSC {
 
+namespace AccessCaseInternal {
 static const bool verbose = false;
+}
 
 AccessCase::AccessCase(VM& vm, JSCell* owner, AccessType type, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet)
     : m_type(type)
@@ -408,7 +410,7 @@ void AccessCase::generate(AccessGenerationState& state)
 void AccessCase::generateImpl(AccessGenerationState& state)
 {
     SuperSamplerScope superSamplerScope(false);
-    if (verbose)
+    if (AccessCaseInternal::verbose)
         dataLog("\n\nGenerating code for: ", *this, "\n");
 
     ASSERT(m_state == Generated); // We rely on the callers setting this for us.
@@ -790,11 +792,11 @@ void AccessCase::generateImpl(AccessGenerationState& state)
 
     case Replace: {
         if (InferredType* type = structure()->inferredTypeFor(ident.impl())) {
-            if (verbose)
+            if (AccessCaseInternal::verbose)
                 dataLog("Have type: ", type->descriptor(), "\n");
             state.failAndRepatch.append(
                 jit.branchIfNotType(valueRegs, scratchGPR, type->descriptor()));
-        } else if (verbose)
+        } else if (AccessCaseInternal::verbose)
             dataLog("Don't have type.\n");
 
         if (isInlineOffset(m_offset)) {
@@ -820,11 +822,11 @@ void AccessCase::generateImpl(AccessGenerationState& state)
         RELEASE_ASSERT(GPRInfo::numberOfRegisters >= 6 || !structure()->outOfLineCapacity() || structure()->outOfLineCapacity() == newStructure()->outOfLineCapacity());
 
         if (InferredType* type = newStructure()->inferredTypeFor(ident.impl())) {
-            if (verbose)
+            if (AccessCaseInternal::verbose)
                 dataLog("Have type: ", type->descriptor(), "\n");
             state.failAndRepatch.append(
                 jit.branchIfNotType(valueRegs, scratchGPR, type->descriptor()));
-        } else if (verbose)
+        } else if (AccessCaseInternal::verbose)
             dataLog("Don't have type.\n");
 
         // NOTE: This logic is duplicated in AccessCase::doesCalls(). It's important that doesCalls() knows
index f34c274..53b84d9 100644 (file)
@@ -38,7 +38,9 @@
 
 namespace JSC {
 
+namespace CallLinkStatusInternal {
 static const bool verbose = false;
+}
 
 CallLinkStatus::CallLinkStatus(JSValue value)
     : m_couldTakeSlowPath(false)
@@ -289,7 +291,7 @@ void CallLinkStatus::computeDFGStatuses(
     UNUSED_PARAM(dfgCodeBlock);
 #endif // ENABLE(DFG_JIT)
     
-    if (verbose) {
+    if (CallLinkStatusInternal::verbose) {
         dataLog("Context map:\n");
         ContextMap::iterator iter = map.begin();
         ContextMap::iterator end = map.end();
index c4da80e..606691b 100644 (file)
@@ -38,7 +38,9 @@
 
 namespace JSC {
 
+namespace GetterSetterAccessCaseInternal {
 static const bool verbose = false;
+}
 
 GetterSetterAccessCase::GetterSetterAccessCase(VM& vm, JSCell* owner, AccessType accessType, PropertyOffset offset, Structure* structure, const ObjectPropertyConditionSet& conditionSet, bool viaProxy, WatchpointSet* additionalSet, JSObject* customSlotBase)
     : Base(vm, owner, accessType, offset, structure, conditionSet, viaProxy, additionalSet)
@@ -185,7 +187,7 @@ void GetterSetterAccessCase::emitDOMJITGetter(AccessGenerationState& state, cons
     ScratchRegisterAllocator::PreservedState preservedState =
     allocator.preserveReusedRegistersByPushing(jit, ScratchRegisterAllocator::ExtraStackSpace::SpaceForCCall);
 
-    if (verbose) {
+    if (GetterSetterAccessCaseInternal::verbose) {
         dataLog("baseGPR = ", baseGPR, "\n");
         dataLog("valueRegs = ", valueRegs, "\n");
         dataLog("scratchGPR = ", scratchGPR, "\n");
index 619ea5d..9679523 100644 (file)
@@ -182,13 +182,15 @@ bool ObjectPropertyConditionSet::isValidAndWatchable() const
 
 namespace {
 
-bool verbose = false;
+namespace ObjectPropertyConditionSetInternal {
+static const bool verbose = false;
+}
 
 ObjectPropertyCondition generateCondition(
     VM& vm, JSCell* owner, JSObject* object, UniquedStringImpl* uid, PropertyCondition::Kind conditionKind)
 {
     Structure* structure = object->structure();
-    if (verbose)
+    if (ObjectPropertyConditionSetInternal::verbose)
         dataLog("Creating condition ", conditionKind, " for ", pointerDump(structure), "\n");
 
     ObjectPropertyCondition result;
@@ -226,12 +228,12 @@ ObjectPropertyCondition generateCondition(
     }
 
     if (!result.isStillValidAssumingImpurePropertyWatchpoint()) {
-        if (verbose)
+        if (ObjectPropertyConditionSetInternal::verbose)
             dataLog("Failed to create condition: ", result, "\n");
         return ObjectPropertyCondition();
     }
 
-    if (verbose)
+    if (ObjectPropertyConditionSetInternal::verbose)
         dataLog("New condition: ", result, "\n");
     return result;
 }
@@ -248,11 +250,11 @@ ObjectPropertyConditionSet generateConditions(
     Vector<ObjectPropertyCondition> conditions;
     
     for (;;) {
-        if (verbose)
+        if (ObjectPropertyConditionSetInternal::verbose)
             dataLog("Considering structure: ", pointerDump(structure), "\n");
         
         if (structure->isProxy()) {
-            if (verbose)
+            if (ObjectPropertyConditionSetInternal::verbose)
                 dataLog("It's a proxy, so invalid.\n");
             return ObjectPropertyConditionSet::invalid();
         }
@@ -261,11 +263,11 @@ ObjectPropertyConditionSet generateConditions(
         
         if (value.isNull()) {
             if (!prototype) {
-                if (verbose)
+                if (ObjectPropertyConditionSetInternal::verbose)
                     dataLog("Reached end of prototype chain as expected, done.\n");
                 break;
             }
-            if (verbose)
+            if (ObjectPropertyConditionSetInternal::verbose)
                 dataLog("Unexpectedly reached end of prototype chain, so invalid.\n");
             return ObjectPropertyConditionSet::invalid();
         }
@@ -276,35 +278,35 @@ ObjectPropertyConditionSet generateConditions(
         if (structure->isDictionary()) {
             if (concurrency == MainThread) {
                 if (structure->hasBeenFlattenedBefore()) {
-                    if (verbose)
+                    if (ObjectPropertyConditionSetInternal::verbose)
                         dataLog("Dictionary has been flattened before, so invalid.\n");
                     return ObjectPropertyConditionSet::invalid();
                 }
 
-                if (verbose)
+                if (ObjectPropertyConditionSetInternal::verbose)
                     dataLog("Flattening ", pointerDump(structure));
                 structure->flattenDictionaryStructure(vm, object);
             } else {
-                if (verbose)
+                if (ObjectPropertyConditionSetInternal::verbose)
                     dataLog("Cannot flatten dictionary when not on main thread, so invalid.\n");
                 return ObjectPropertyConditionSet::invalid();
             }
         }
 
         if (!functor(conditions, object)) {
-            if (verbose)
+            if (ObjectPropertyConditionSetInternal::verbose)
                 dataLog("Functor failed, invalid.\n");
             return ObjectPropertyConditionSet::invalid();
         }
         
         if (object == prototype) {
-            if (verbose)
+            if (ObjectPropertyConditionSetInternal::verbose)
                 dataLog("Reached desired prototype, done.\n");
             break;
         }
     }
 
-    if (verbose)
+    if (ObjectPropertyConditionSetInternal::verbose)
         dataLog("Returning conditions: ", listDump(conditions), "\n");
     return ObjectPropertyConditionSet::create(conditions);
 }
index fb0b6a9..42280c5 100644 (file)
@@ -44,7 +44,9 @@
 
 namespace JSC {
 
+namespace PolymorphicAccessInternal {
 static const bool verbose = false;
+}
 
 void AccessGenerationResult::dump(PrintStream& out) const
 {
@@ -247,7 +249,7 @@ AccessGenerationResult PolymorphicAccess::addCases(
         casesToAdd.append(WTFMove(myCase));
     }
 
-    if (verbose)
+    if (PolymorphicAccessInternal::verbose)
         dataLog("casesToAdd: ", listDump(casesToAdd), "\n");
 
     // If there aren't any cases to add, then fail on the grounds that there's no point to generating a
@@ -263,7 +265,7 @@ AccessGenerationResult PolymorphicAccess::addCases(
         m_list.append(WTFMove(caseToAdd));
     }
     
-    if (verbose)
+    if (PolymorphicAccessInternal::verbose)
         dataLog("After addCases: m_list: ", listDump(m_list), "\n");
 
     return AccessGenerationResult::Buffered;
@@ -334,7 +336,7 @@ AccessGenerationResult PolymorphicAccess::regenerate(
 {
     SuperSamplerScope superSamplerScope(false);
     
-    if (verbose)
+    if (PolymorphicAccessInternal::verbose)
         dataLog("Regenerate with m_list: ", listDump(m_list), "\n");
     
     AccessGenerationState state(vm);
@@ -402,7 +404,7 @@ AccessGenerationResult PolymorphicAccess::regenerate(
     }
     m_list.resize(dstIndex);
     
-    if (verbose)
+    if (PolymorphicAccessInternal::verbose)
         dataLog("Optimized cases: ", listDump(cases), "\n");
     
     // At this point we're convinced that 'cases' contains the cases that we want to JIT now and we
@@ -517,7 +519,7 @@ AccessGenerationResult PolymorphicAccess::regenerate(
 
     LinkBuffer linkBuffer(jit, codeBlock, JITCompilationCanFail);
     if (linkBuffer.didFailToAllocate()) {
-        if (verbose)
+        if (PolymorphicAccessInternal::verbose)
             dataLog("Did fail to allocate.\n");
         return AccessGenerationResult::GaveUp;
     }
@@ -528,7 +530,7 @@ AccessGenerationResult PolymorphicAccess::regenerate(
 
     linkBuffer.link(failure, stubInfo.slowPathStartLocation());
     
-    if (verbose)
+    if (PolymorphicAccessInternal::verbose)
         dataLog(FullCodeOrigin(codeBlock, stubInfo.codeOrigin), ": Generating polymorphic access stub for ", listDump(cases), "\n");
 
     MacroAssemblerCodeRef code = FINALIZE_CODE_FOR(
@@ -544,7 +546,7 @@ AccessGenerationResult PolymorphicAccess::regenerate(
     m_watchpoints = WTFMove(state.watchpoints);
     if (!state.weakReferences.isEmpty())
         m_weakReferences = std::make_unique<Vector<WriteBarrier<JSCell>>>(WTFMove(state.weakReferences));
-    if (verbose)
+    if (PolymorphicAccessInternal::verbose)
         dataLog("Returning: ", code.code(), "\n");
     
     m_list = WTFMove(cases);
index 423bc21..07644a3 100644 (file)
@@ -32,7 +32,9 @@
 
 namespace JSC {
 
+namespace PropertyConditionInternal {
 static bool verbose = false;
+}
 
 void PropertyCondition::dumpInContext(PrintStream& out, DumpContext* context) const
 {
@@ -65,20 +67,20 @@ void PropertyCondition::dump(PrintStream& out) const
 bool PropertyCondition::isStillValidAssumingImpurePropertyWatchpoint(
     Structure* structure, JSObject* base) const
 {
-    if (verbose) {
+    if (PropertyConditionInternal::verbose) {
         dataLog(
             "Determining validity of ", *this, " with structure ", pointerDump(structure), " and base ",
             JSValue(base), " assuming impure property watchpoints are set.\n");
     }
     
     if (!*this) {
-        if (verbose)
+        if (PropertyConditionInternal::verbose)
             dataLog("Invalid because unset.\n");
         return false;
     }
     
     if (!structure->propertyAccessesAreCacheable()) {
-        if (verbose)
+        if (PropertyConditionInternal::verbose)
             dataLog("Invalid because accesses are not cacheable.\n");
         return false;
     }
@@ -88,7 +90,7 @@ bool PropertyCondition::isStillValidAssumingImpurePropertyWatchpoint(
         unsigned currentAttributes;
         PropertyOffset currentOffset = structure->getConcurrently(uid(), currentAttributes);
         if (currentOffset != offset() || currentAttributes != attributes()) {
-            if (verbose) {
+            if (PropertyConditionInternal::verbose) {
                 dataLog(
                     "Invalid because we need offset, attributes to be ", offset(), ", ", attributes(),
                     " but they are ", currentOffset, ", ", currentAttributes, "\n");
@@ -100,20 +102,20 @@ bool PropertyCondition::isStillValidAssumingImpurePropertyWatchpoint(
         
     case Absence: {
         if (structure->isDictionary()) {
-            if (verbose)
+            if (PropertyConditionInternal::verbose)
                 dataLog("Invalid because it's a dictionary.\n");
             return false;
         }
 
         PropertyOffset currentOffset = structure->getConcurrently(uid());
         if (currentOffset != invalidOffset) {
-            if (verbose)
+            if (PropertyConditionInternal::verbose)
                 dataLog("Invalid because the property exists at offset: ", currentOffset, "\n");
             return false;
         }
         
         if (structure->storedPrototypeObject() != prototype()) {
-            if (verbose) {
+            if (PropertyConditionInternal::verbose) {
                 dataLog(
                     "Invalid because the prototype is ", structure->storedPrototype(), " even though "
                     "it should have been ", JSValue(prototype()), "\n");
@@ -126,7 +128,7 @@ bool PropertyCondition::isStillValidAssumingImpurePropertyWatchpoint(
     
     case AbsenceOfSetEffect: {
         if (structure->isDictionary()) {
-            if (verbose)
+            if (PropertyConditionInternal::verbose)
                 dataLog("Invalid because it's a dictionary.\n");
             return false;
         }
@@ -135,7 +137,7 @@ bool PropertyCondition::isStillValidAssumingImpurePropertyWatchpoint(
         PropertyOffset currentOffset = structure->getConcurrently(uid(), currentAttributes);
         if (currentOffset != invalidOffset) {
             if (currentAttributes & (ReadOnly | Accessor | CustomAccessor)) {
-                if (verbose) {
+                if (PropertyConditionInternal::verbose) {
                     dataLog(
                         "Invalid because we expected not to have a setter, but we have one at offset ",
                         currentOffset, " with attributes ", currentAttributes, "\n");
@@ -145,7 +147,7 @@ bool PropertyCondition::isStillValidAssumingImpurePropertyWatchpoint(
         }
         
         if (structure->storedPrototypeObject() != prototype()) {
-            if (verbose) {
+            if (PropertyConditionInternal::verbose) {
                 dataLog(
                     "Invalid because the prototype is ", structure->storedPrototype(), " even though "
                     "it should have been ", JSValue(prototype()), "\n");
@@ -160,7 +162,7 @@ bool PropertyCondition::isStillValidAssumingImpurePropertyWatchpoint(
         if (!base || base->structure() != structure) {
             // Conservatively return false, since we cannot verify this one without having the
             // object.
-            if (verbose) {
+            if (PropertyConditionInternal::verbose) {
                 dataLog(
                     "Invalid because we don't have a base or the base has the wrong structure: ",
                     RawPointer(base), "\n");
@@ -173,7 +175,7 @@ bool PropertyCondition::isStillValidAssumingImpurePropertyWatchpoint(
         
         PropertyOffset currentOffset = structure->getConcurrently(uid());
         if (currentOffset == invalidOffset) {
-            if (verbose) {
+            if (PropertyConditionInternal::verbose) {
                 dataLog(
                     "Invalid because the base no long appears to have ", uid(), " on its structure: ",
                         RawPointer(base), "\n");
@@ -183,7 +185,7 @@ bool PropertyCondition::isStillValidAssumingImpurePropertyWatchpoint(
 
         JSValue currentValue = base->getDirect(currentOffset);
         if (currentValue != requiredValue()) {
-            if (verbose) {
+            if (PropertyConditionInternal::verbose) {
                 dataLog(
                     "Invalid because the value is ", currentValue, " but we require ", requiredValue(),
                     "\n");
index 2187751..3e3bf24 100644 (file)
@@ -35,7 +35,9 @@ namespace JSC {
 
 #if ENABLE(JIT)
 
+namespace StructureStubInfoInternal {
 static const bool verbose = false;
+}
 
 StructureStubInfo::StructureStubInfo(AccessType accessType)
     : callSiteIndex(UINT_MAX)
@@ -115,7 +117,7 @@ AccessGenerationResult StructureStubInfo::addAccessCase(
 {
     VM& vm = *codeBlock->vm();
     
-    if (verbose)
+    if (StructureStubInfoInternal::verbose)
         dataLog("Adding access case: ", accessCase, "\n");
     
     if (!accessCase)
@@ -126,7 +128,7 @@ AccessGenerationResult StructureStubInfo::addAccessCase(
     if (cacheType == CacheType::Stub) {
         result = u.stub->addCase(locker, vm, codeBlock, *this, ident, WTFMove(accessCase));
         
-        if (verbose)
+        if (StructureStubInfoInternal::verbose)
             dataLog("Had stub, result: ", result, "\n");
 
         if (!result.buffered()) {
@@ -147,7 +149,7 @@ AccessGenerationResult StructureStubInfo::addAccessCase(
         
         result = access->addCases(locker, vm, codeBlock, *this, ident, WTFMove(accessCases));
         
-        if (verbose)
+        if (StructureStubInfoInternal::verbose)
             dataLog("Created stub, result: ", result, "\n");
 
         if (!result.buffered()) {
@@ -164,7 +166,7 @@ AccessGenerationResult StructureStubInfo::addAccessCase(
     // If we didn't buffer any cases then bail. If this made no changes then we'll just try again
     // subject to cool-down.
     if (!result.buffered()) {
-        if (verbose)
+        if (StructureStubInfoInternal::verbose)
             dataLog("Didn't buffer anything, bailing.\n");
         bufferedStructures.clear();
         return result;
@@ -172,7 +174,7 @@ AccessGenerationResult StructureStubInfo::addAccessCase(
     
     // The buffering countdown tells us if we should be repatching now.
     if (bufferingCountdown) {
-        if (verbose)
+        if (StructureStubInfoInternal::verbose)
             dataLog("Countdown is too high: ", bufferingCountdown, ".\n");
         return result;
     }
@@ -183,7 +185,7 @@ AccessGenerationResult StructureStubInfo::addAccessCase(
     
     result = u.stub->regenerate(locker, vm, codeBlock, *this, ident);
     
-    if (verbose)
+    if (StructureStubInfoInternal::verbose)
         dataLog("Regeneration result: ", result, "\n");
     
     RELEASE_ASSERT(!result.buffered());
index cd694bb..b83fee0 100644 (file)
@@ -62,7 +62,7 @@ void printInternal(PrintStream& out, AbstractHeapKind kind)
 {
     switch (kind) {
 #define ABSTRACT_HEAP_DUMP(name)            \
-    case name:                              \
+    case AbstractHeapKind::name:            \
         out.print(#name);                   \
         return;
     FOR_EACH_ABSTRACT_HEAP_KIND(ABSTRACT_HEAP_DUMP)
index 51650f5..a457f8f 100644 (file)
@@ -50,7 +50,9 @@ namespace JSC { namespace DFG {
 
 namespace {
 
-bool verbose = false;
+namespace DFGArgumentsEliminationPhaseInternal {
+static const bool verbose = false;
+}
 
 class ArgumentsEliminationPhase : public Phase {
 public:
@@ -65,7 +67,7 @@ public:
         // version over LoadStore.
         DFG_ASSERT(m_graph, nullptr, m_graph.m_form == SSA);
         
-        if (verbose) {
+        if (DFGArgumentsEliminationPhaseInternal::verbose) {
             dataLog("Graph before arguments elimination:\n");
             m_graph.dump();
         }
@@ -157,7 +159,7 @@ private:
             }
         }
         
-        if (verbose)
+        if (DFGArgumentsEliminationPhaseInternal::verbose)
             dataLog("Candidates: ", listDump(m_candidates), "\n");
     }
 
@@ -210,7 +212,7 @@ private:
     void transitivelyRemoveCandidate(Node* node, Node* source = nullptr)
     {
         bool removed = m_candidates.remove(node);
-        if (removed && verbose && source)
+        if (removed && DFGArgumentsEliminationPhaseInternal::verbose && source)
             dataLog("eliminating candidate: ", node, " because it escapes from: ", source, "\n");
 
         if (removed)
@@ -418,7 +420,7 @@ private:
             }
         }
 
-        if (verbose)
+        if (DFGArgumentsEliminationPhaseInternal::verbose)
             dataLog("After escape analysis: ", listDump(m_candidates), "\n");
     }
 
@@ -519,7 +521,7 @@ private:
                     // for this arguments allocation, and we'd have to examine every node in the block,
                     // then we can just eliminate the candidate.
                     if (nodeIndex == block->size() && candidate->owner != block) {
-                        if (verbose)
+                        if (DFGArgumentsEliminationPhaseInternal::verbose)
                             dataLog("eliminating candidate: ", candidate, " because it is clobbered by: ", block->at(nodeIndex), "\n");
                         transitivelyRemoveCandidate(candidate);
                         return;
@@ -546,7 +548,7 @@ private:
                             NoOpClobberize());
                         
                         if (found) {
-                            if (verbose)
+                            if (DFGArgumentsEliminationPhaseInternal::verbose)
                                 dataLog("eliminating candidate: ", candidate, " because it is clobbered by ", block->at(nodeIndex), "\n");
                             transitivelyRemoveCandidate(candidate);
                             return;
@@ -568,7 +570,7 @@ private:
         // since those availabilities speak of the stack before the optimizing compiler stack frame is
         // torn down.
 
-        if (verbose)
+        if (DFGArgumentsEliminationPhaseInternal::verbose)
             dataLog("After interference analysis: ", listDump(m_candidates), "\n");
     }
     
index b49f2e9..c8e096e 100644 (file)
@@ -70,22 +70,18 @@ namespace JSC { namespace DFG {
 
 namespace {
 
-NO_RETURN_DUE_TO_CRASH NEVER_INLINE void crash()
-{
-    CRASH();
-}
-
-#undef RELEASE_ASSERT
-#define RELEASE_ASSERT(assertion) do { \
+#define PARSER_ASSERT(assertion, ...) do {          \
     if (UNLIKELY(!(assertion))) { \
         WTFReportAssertionFailure(__FILE__, __LINE__, WTF_PRETTY_FUNCTION, #assertion); \
-        crash(); \
+        CRASH_WITH_INFO(__VA_ARGS__); \
     } \
 } while (0)
 
 } // anonymous namespace
 
+namespace DFGByteCodeParserInternal {
 static const bool verbose = false;
+}
 
 class ConstantBufferKey {
 public:
@@ -1207,7 +1203,7 @@ private:
             , m_value(value)
             , m_setMode(setMode)
         {
-            RELEASE_ASSERT(operand.isValid());
+            PARSER_ASSERT(operand.isValid());
         }
         
         Node* execute(ByteCodeParser* parser)
@@ -1430,18 +1426,18 @@ unsigned ByteCodeParser::inliningCost(CallVariant callee, int, InlineCallFrame::
 {
     CallMode callMode = InlineCallFrame::callModeFor(kind);
     CodeSpecializationKind specializationKind = specializationKindFor(callMode);
-    if (verbose)
+    if (DFGByteCodeParserInternal::verbose)
         dataLog("Considering inlining ", callee, " into ", currentCodeOrigin(), "\n");
     
     if (m_hasDebuggerEnabled) {
-        if (verbose)
+        if (DFGByteCodeParserInternal::verbose)
             dataLog("    Failing because the debugger is in use.\n");
         return UINT_MAX;
     }
 
     FunctionExecutable* executable = callee.functionExecutable();
     if (!executable) {
-        if (verbose)
+        if (DFGByteCodeParserInternal::verbose)
             dataLog("    Failing because there is no function executable.\n");
         return UINT_MAX;
     }
@@ -1455,14 +1451,14 @@ unsigned ByteCodeParser::inliningCost(CallVariant callee, int, InlineCallFrame::
     // it's a rare case because we expect that any hot callees would have already been compiled.
     CodeBlock* codeBlock = executable->baselineCodeBlockFor(specializationKind);
     if (!codeBlock) {
-        if (verbose)
+        if (DFGByteCodeParserInternal::verbose)
             dataLog("    Failing because no code block available.\n");
         return UINT_MAX;
     }
 
     CapabilityLevel capabilityLevel = inlineFunctionForCapabilityLevel(
         codeBlock, specializationKind, callee.isClosureCall());
-    if (verbose) {
+    if (DFGByteCodeParserInternal::verbose) {
         dataLog("    Call mode: ", callMode, "\n");
         dataLog("    Is closure call: ", callee.isClosureCall(), "\n");
         dataLog("    Capability level: ", capabilityLevel, "\n");
@@ -1472,7 +1468,7 @@ unsigned ByteCodeParser::inliningCost(CallVariant callee, int, InlineCallFrame::
         dataLog("    Is inlining candidate: ", codeBlock->ownerScriptExecutable()->isInliningCandidate(), "\n");
     }
     if (!canInline(capabilityLevel)) {
-        if (verbose)
+        if (DFGByteCodeParserInternal::verbose)
             dataLog("    Failing because the function is not inlineable.\n");
         return UINT_MAX;
     }
@@ -1482,7 +1478,7 @@ unsigned ByteCodeParser::inliningCost(CallVariant callee, int, InlineCallFrame::
     // purpose of unsetting SABI.
     if (!isSmallEnoughToInlineCodeInto(m_codeBlock)) {
         codeBlock->m_shouldAlwaysBeInlined = false;
-        if (verbose)
+        if (DFGByteCodeParserInternal::verbose)
             dataLog("    Failing because the caller is too large.\n");
         return UINT_MAX;
     }
@@ -1506,7 +1502,7 @@ unsigned ByteCodeParser::inliningCost(CallVariant callee, int, InlineCallFrame::
     for (InlineStackEntry* entry = m_inlineStackTop; entry; entry = entry->m_caller) {
         ++depth;
         if (depth >= Options::maximumInliningDepth()) {
-            if (verbose)
+            if (DFGByteCodeParserInternal::verbose)
                 dataLog("    Failing because depth exceeded.\n");
             return UINT_MAX;
         }
@@ -1514,14 +1510,14 @@ unsigned ByteCodeParser::inliningCost(CallVariant callee, int, InlineCallFrame::
         if (entry->executable() == executable) {
             ++recursion;
             if (recursion >= Options::maximumInliningRecursion()) {
-                if (verbose)
+                if (DFGByteCodeParserInternal::verbose)
                     dataLog("    Failing because recursion detected.\n");
                 return UINT_MAX;
             }
         }
     }
     
-    if (verbose)
+    if (DFGByteCodeParserInternal::verbose)
         dataLog("    Inlining should be possible.\n");
     
     // It might be possible to inline.
@@ -1618,11 +1614,11 @@ void ByteCodeParser::inlineCall(Node* callTargetNode, int resultOperand, CallVar
     inlineVariableData.argumentPositionStart = argumentPositionStart;
     inlineVariableData.calleeVariable = 0;
     
-    RELEASE_ASSERT(
+    PARSER_ASSERT(
         m_inlineStackTop->m_inlineCallFrame->isClosureCall
         == callee.isClosureCall());
     if (callee.isClosureCall()) {
-        RELEASE_ASSERT(calleeVariable);
+        PARSER_ASSERT(calleeVariable);
         inlineVariableData.calleeVariable = calleeVariable;
     }
     
@@ -1669,7 +1665,7 @@ void ByteCodeParser::inlineCall(Node* callTargetNode, int resultOperand, CallVar
                 dataLog("        Repurposing last block from ", lastBlock->bytecodeBegin, " to ", m_currentIndex, "\n");
             lastBlock->bytecodeBegin = m_currentIndex;
             if (callerLinkability == CallerDoesNormalLinking) {
-                if (verbose)
+                if (DFGByteCodeParserInternal::verbose)
                     dataLog("Adding unlinked block ", RawPointer(m_graph.lastBlock()), " (one return)\n");
                 m_inlineStackTop->m_caller->m_unlinkedBlocks.append(UnlinkedBlock(m_graph.lastBlock()));
             }
@@ -1699,14 +1695,14 @@ void ByteCodeParser::inlineCall(Node* callTargetNode, int resultOperand, CallVar
         ASSERT(!node->targetBlock());
         node->targetBlock() = block.ptr();
         inlineStackEntry.m_unlinkedBlocks[i].m_needsEarlyReturnLinking = false;
-        if (verbose)
+        if (DFGByteCodeParserInternal::verbose)
             dataLog("Marking ", RawPointer(blockToLink), " as linked (jumps to return)\n");
         blockToLink->didLink();
     }
     
     m_currentBlock = block.ptr();
     ASSERT(m_inlineStackTop->m_caller->m_blockLinkingTargets.isEmpty() || m_inlineStackTop->m_caller->m_blockLinkingTargets.last()->bytecodeBegin < nextOffset);
-    if (verbose)
+    if (DFGByteCodeParserInternal::verbose)
         dataLog("Adding unlinked block ", RawPointer(block.ptr()), " (many returns)\n");
     if (callerLinkability == CallerDoesNormalLinking) {
         m_inlineStackTop->m_caller->m_unlinkedBlocks.append(UnlinkedBlock(block.ptr()));
@@ -1742,7 +1738,7 @@ bool ByteCodeParser::attemptToInlineCall(Node* callTargetNode, int resultOperand
     if (!inliningBalance)
         return false;
     
-    if (verbose)
+    if (DFGByteCodeParserInternal::verbose)
         dataLog("    Considering callee ", callee, "\n");
     
     // Intrinsics and internal functions can only be inlined if we're not doing varargs. This is because
@@ -1762,40 +1758,40 @@ bool ByteCodeParser::attemptToInlineCall(Node* callTargetNode, int resultOperand
     
         if (InternalFunction* function = callee.internalFunction()) {
             if (handleConstantInternalFunction(callTargetNode, resultOperand, function, registerOffset, argumentCountIncludingThis, specializationKind, prediction, insertChecksWithAccounting)) {
-                RELEASE_ASSERT(didInsertChecks);
+                PARSER_ASSERT(didInsertChecks);
                 addToGraph(Phantom, callTargetNode);
                 emitArgumentPhantoms(registerOffset, argumentCountIncludingThis);
                 inliningBalance--;
                 return true;
             }
-            RELEASE_ASSERT(!didInsertChecks);
+            PARSER_ASSERT(!didInsertChecks);
             return false;
         }
     
         Intrinsic intrinsic = callee.intrinsicFor(specializationKind);
         if (intrinsic != NoIntrinsic) {
             if (handleIntrinsicCall(callTargetNode, resultOperand, intrinsic, registerOffset, argumentCountIncludingThis, prediction, insertChecksWithAccounting)) {
-                RELEASE_ASSERT(didInsertChecks);
+                PARSER_ASSERT(didInsertChecks);
                 addToGraph(Phantom, callTargetNode);
                 emitArgumentPhantoms(registerOffset, argumentCountIncludingThis);
                 inliningBalance--;
                 return true;
             }
 
-            RELEASE_ASSERT(!didInsertChecks);
+            PARSER_ASSERT(!didInsertChecks);
             // We might still try to inline the Intrinsic because it might be a builtin JS function.
         }
 
         if (Options::useDOMJIT()) {
             if (const DOMJIT::Signature* signature = callee.signatureFor(specializationKind)) {
                 if (handleDOMJITCall(callTargetNode, resultOperand, signature, registerOffset, argumentCountIncludingThis, prediction, insertChecksWithAccounting)) {
-                    RELEASE_ASSERT(didInsertChecks);
+                    PARSER_ASSERT(didInsertChecks);
                     addToGraph(Phantom, callTargetNode);
                     emitArgumentPhantoms(registerOffset, argumentCountIncludingThis);
                     inliningBalance--;
                     return true;
                 }
-                RELEASE_ASSERT(!didInsertChecks);
+                PARSER_ASSERT(!didInsertChecks);
             }
         }
     }
@@ -1817,21 +1813,21 @@ bool ByteCodeParser::handleInlining(
     VirtualRegister argumentsArgument, unsigned argumentsOffset, int argumentCountIncludingThis,
     unsigned nextOffset, NodeType callOp, InlineCallFrame::Kind kind, SpeculatedType prediction)
 {
-    if (verbose) {
+    if (DFGByteCodeParserInternal::verbose) {
         dataLog("Handling inlining...\n");
         dataLog("Stack: ", currentCodeOrigin(), "\n");
     }
     CodeSpecializationKind specializationKind = InlineCallFrame::specializationKindFor(kind);
     
     if (!callLinkStatus.size()) {
-        if (verbose)
+        if (DFGByteCodeParserInternal::verbose)
             dataLog("Bailing inlining.\n");
         return false;
     }
     
     if (InlineCallFrame::isVarargs(kind)
         && callLinkStatus.maxNumArguments() > Options::maximumVarargsForInlining()) {
-        if (verbose)
+        if (DFGByteCodeParserInternal::verbose)
             dataLog("Bailing inlining because of varargs.\n");
         return false;
     }
@@ -1950,7 +1946,7 @@ bool ByteCodeParser::handleInlining(
                     }
                 }
             });
-        if (verbose) {
+        if (DFGByteCodeParserInternal::verbose) {
             dataLog("Done inlining (simple).\n");
             dataLog("Stack: ", currentCodeOrigin(), "\n");
             dataLog("Result: ", result, "\n");
@@ -1966,7 +1962,7 @@ bool ByteCodeParser::handleInlining(
     // also.
     if (!isFTL(m_graph.m_plan.mode) || !Options::usePolymorphicCallInlining()
         || InlineCallFrame::isVarargs(kind)) {
-        if (verbose) {
+        if (DFGByteCodeParserInternal::verbose) {
             dataLog("Bailing inlining (hard).\n");
             dataLog("Stack: ", currentCodeOrigin(), "\n");
         }
@@ -1978,7 +1974,7 @@ bool ByteCodeParser::handleInlining(
     // it has no idea.
     if (!Options::usePolymorphicCallInliningForNonStubStatus()
         && !callLinkStatus.isBasedOnStub()) {
-        if (verbose) {
+        if (DFGByteCodeParserInternal::verbose) {
             dataLog("Bailing inlining (non-stub polymorphism).\n");
             dataLog("Stack: ", currentCodeOrigin(), "\n");
         }
@@ -2006,14 +2002,14 @@ bool ByteCodeParser::handleInlining(
         // where it would be beneficial. It might be best to handle these cases as if all calls were
         // closure calls.
         // https://bugs.webkit.org/show_bug.cgi?id=136020
-        if (verbose) {
+        if (DFGByteCodeParserInternal::verbose) {
             dataLog("Bailing inlining (mix).\n");
             dataLog("Stack: ", currentCodeOrigin(), "\n");
         }
         return false;
     }
     
-    if (verbose) {
+    if (DFGByteCodeParserInternal::verbose) {
         dataLog("Doing hard inlining...\n");
         dataLog("Stack: ", currentCodeOrigin(), "\n");
     }
@@ -2024,11 +2020,11 @@ bool ByteCodeParser::handleInlining(
     // store the callee so that it will be accessible to all of the blocks we're about to create. We
     // get away with doing an immediate-set here because we wouldn't have performed any side effects
     // yet.
-    if (verbose)
+    if (DFGByteCodeParserInternal::verbose)
         dataLog("Register offset: ", registerOffset);
     VirtualRegister calleeReg(registerOffset + CallFrameSlot::callee);
     calleeReg = m_inlineStackTop->remapOperand(calleeReg);
-    if (verbose)
+    if (DFGByteCodeParserInternal::verbose)
         dataLog("Callee is going to be ", calleeReg, "\n");
     setDirect(calleeReg, callTargetNode, ImmediateSetWithFlush);
 
@@ -2042,7 +2038,7 @@ bool ByteCodeParser::handleInlining(
     addToGraph(Switch, OpInfo(&data), thingToSwitchOn);
     
     BasicBlock* originBlock = m_currentBlock;
-    if (verbose)
+    if (DFGByteCodeParserInternal::verbose)
         dataLog("Marking ", RawPointer(originBlock), " as linked (origin of poly inline)\n");
     originBlock->didLink();
     cancelLinkingForBlock(m_inlineStackTop, originBlock);
@@ -2054,7 +2050,7 @@ bool ByteCodeParser::handleInlining(
     // We may force this true if we give up on inlining any of the edges.
     bool couldTakeSlowPath = callLinkStatus.couldTakeSlowPath();
     
-    if (verbose)
+    if (DFGByteCodeParserInternal::verbose)
         dataLog("About to loop over functions at ", currentCodeOrigin(), ".\n");
     
     for (unsigned i = 0; i < callLinkStatus.size(); ++i) {
@@ -2101,11 +2097,11 @@ bool ByteCodeParser::handleInlining(
             addToGraph(Jump);
             landingBlocks.append(m_currentBlock);
         }
-        if (verbose)
+        if (DFGByteCodeParserInternal::verbose)
             dataLog("Marking ", RawPointer(m_currentBlock), " as linked (tail of poly inlinee)\n");
         m_currentBlock->didLink();
 
-        if (verbose)
+        if (DFGByteCodeParserInternal::verbose)
             dataLog("Finished inlining ", callLinkStatus[i], " at ", currentCodeOrigin(), ".\n");
     }
     
@@ -2115,7 +2111,7 @@ bool ByteCodeParser::handleInlining(
     m_exitOK = true;
     data.fallThrough = BranchTarget(slowPathBlock.ptr());
     m_graph.appendBlock(slowPathBlock.copyRef());
-    if (verbose)
+    if (DFGByteCodeParserInternal::verbose)
         dataLog("Marking ", RawPointer(slowPathBlock.ptr()), " as linked (slow path block)\n");
     slowPathBlock->didLink();
     prepareToParseBlock();
@@ -2146,7 +2142,7 @@ bool ByteCodeParser::handleInlining(
     Ref<BasicBlock> continuationBlock = adoptRef(
         *new BasicBlock(UINT_MAX, m_numArguments, m_numLocals, 1));
     m_graph.appendBlock(continuationBlock.copyRef());
-    if (verbose)
+    if (DFGByteCodeParserInternal::verbose)
         dataLog("Adding unlinked block ", RawPointer(continuationBlock.ptr()), " (continuation)\n");
     m_inlineStackTop->m_unlinkedBlocks.append(UnlinkedBlock(continuationBlock.ptr()));
     prepareToParseBlock();
@@ -2158,7 +2154,7 @@ bool ByteCodeParser::handleInlining(
     m_currentIndex = oldOffset;
     m_exitOK = true;
     
-    if (verbose) {
+    if (DFGByteCodeParserInternal::verbose) {
         dataLog("Done inlining (hard).\n");
         dataLog("Stack: ", currentCodeOrigin(), "\n");
     }
@@ -3483,14 +3479,14 @@ bool ByteCodeParser::needsDynamicLookup(ResolveType type, OpcodeID opcode)
 
 GetByOffsetMethod ByteCodeParser::planLoad(const ObjectPropertyCondition& condition)
 {
-    if (verbose)
+    if (DFGByteCodeParserInternal::verbose)
         dataLog("Planning a load: ", condition, "\n");
     
     // We might promote this to Equivalence, and a later DFG pass might also do such promotion
     // even if we fail, but for simplicity this cannot be asked to load an equivalence condition.
     // None of the clients of this method will request a load of an Equivalence condition anyway,
     // and supporting it would complicate the heuristics below.
-    RELEASE_ASSERT(condition.kind() == PropertyCondition::Presence);
+    PARSER_ASSERT(condition.kind() == PropertyCondition::Presence);
     
     // Here's the ranking of how to handle this, from most preferred to least preferred:
     //
@@ -3593,14 +3589,14 @@ bool ByteCodeParser::check(const ObjectPropertyConditionSet& conditionSet)
 
 GetByOffsetMethod ByteCodeParser::planLoad(const ObjectPropertyConditionSet& conditionSet)
 {
-    if (verbose)
+    if (DFGByteCodeParserInternal::verbose)
         dataLog("conditionSet = ", conditionSet, "\n");
     
     GetByOffsetMethod result;
     for (const ObjectPropertyCondition& condition : conditionSet) {
         switch (condition.kind()) {
         case PropertyCondition::Presence:
-            RELEASE_ASSERT(!result); // Should only see exactly one of these.
+            PARSER_ASSERT(!result); // Should only see exactly one of these.
             result = planLoad(condition);
             if (!result)
                 return GetByOffsetMethod();
@@ -3770,7 +3766,7 @@ Node* ByteCodeParser::load(
 
 Node* ByteCodeParser::store(Node* base, unsigned identifier, const PutByIdVariant& variant, Node* value)
 {
-    RELEASE_ASSERT(variant.kind() == PutByIdVariant::Replace);
+    PARSER_ASSERT(variant.kind() == PutByIdVariant::Replace);
 
     checkPresenceLike(base, m_graph.identifiers()[identifier], variant.offset(), variant.structure());
     return handlePutByOffset(base, identifier, variant.offset(), variant.requiredType(), value);
@@ -4169,7 +4165,7 @@ bool ByteCodeParser::parseBlock(unsigned limit)
     // opposed to using a value we set explicitly.
     if (m_currentBlock == m_graph.block(0) && !inlineCallFrame()) {
         auto addResult = m_graph.m_rootToArguments.add(m_currentBlock, ArgumentsVector());
-        RELEASE_ASSERT(addResult.isNewEntry);
+        PARSER_ASSERT(addResult.isNewEntry);
         ArgumentsVector& entrypointArguments = addResult.iterator->value;
         entrypointArguments.resize(m_numArguments);
 
@@ -5266,7 +5262,7 @@ bool ByteCodeParser::parseBlock(unsigned limit)
                 NEXT_OPCODE(op_catch);
             }
 
-            RELEASE_ASSERT(!m_currentBlock->size());
+            PARSER_ASSERT(!m_currentBlock->size());
 
             ValueProfileAndOperandBuffer* buffer = static_cast<ValueProfileAndOperandBuffer*>(currentInstruction[3].u.pointer);
 
@@ -5291,8 +5287,8 @@ bool ByteCodeParser::parseBlock(unsigned limit)
                     if (operand.isLocal())
                         localPredictions.append(prediction);
                     else {
-                        RELEASE_ASSERT(operand.isArgument());
-                        RELEASE_ASSERT(static_cast<uint32_t>(operand.toArgument()) < argumentPredictions.size());
+                        PARSER_ASSERT(operand.isArgument());
+                        PARSER_ASSERT(static_cast<uint32_t>(operand.toArgument()) < argumentPredictions.size());
                         if (validationEnabled())
                             seenArguments.add(operand.toArgument());
                         argumentPredictions[operand.toArgument()] = prediction;
@@ -5301,7 +5297,7 @@ bool ByteCodeParser::parseBlock(unsigned limit)
 
                 if (validationEnabled()) {
                     for (unsigned argument = 0; argument < m_numArguments; ++argument)
-                        RELEASE_ASSERT(seenArguments.contains(argument));
+                        PARSER_ASSERT(seenArguments.contains(argument));
                 }
             }
 
@@ -5342,7 +5338,7 @@ bool ByteCodeParser::parseBlock(unsigned limit)
 
             {
                 auto addResult = m_graph.m_rootToArguments.add(m_currentBlock, ArgumentsVector());
-                RELEASE_ASSERT(addResult.isNewEntry);
+                PARSER_ASSERT(addResult.isNewEntry);
                 ArgumentsVector& entrypointArguments = addResult.iterator->value;
                 entrypointArguments.resize(m_numArguments);
 
@@ -5487,8 +5483,8 @@ bool ByteCodeParser::parseBlock(unsigned limit)
             case GlobalLexicalVar:
             case GlobalLexicalVarWithVarInjectionChecks: {
                 JSScope* constantScope = JSScope::constantScopeForCodeBlock(resolveType, m_inlineStackTop->m_codeBlock);
-                RELEASE_ASSERT(constantScope);
-                RELEASE_ASSERT(static_cast<JSScope*>(currentInstruction[6].u.pointer) == constantScope);
+                PARSER_ASSERT(constantScope);
+                PARSER_ASSERT(static_cast<JSScope*>(currentInstruction[6].u.pointer) == constantScope);
                 set(VirtualRegister(dst), weakJSConstant(constantScope));
                 addToGraph(Phantom, get(VirtualRegister(scope)));
                 break;
@@ -5829,7 +5825,7 @@ bool ByteCodeParser::parseBlock(unsigned limit)
             // Baseline->DFG OSR jumps between loop hints. The DFG assumes that Baseline->DFG
             // OSR can only happen at basic block boundaries. Assert that these two statements
             // are compatible.
-            RELEASE_ASSERT(m_currentIndex == blockBegin);
+            PARSER_ASSERT(m_currentIndex == blockBegin);
             
             // We never do OSR into an inlined code block. That could not happen, since OSR
             // looks up the code block that is the replacement for the baseline JIT code
@@ -6179,7 +6175,7 @@ void ByteCodeParser::linkBlock(BasicBlock* block, Vector<BasicBlock*>& possibleT
         break;
     }
     
-    if (verbose)
+    if (DFGByteCodeParserInternal::verbose)
         dataLog("Marking ", RawPointer(block), " as linked (actually did linking)\n");
     block->didLink();
 }
@@ -6187,10 +6183,10 @@ void ByteCodeParser::linkBlock(BasicBlock* block, Vector<BasicBlock*>& possibleT
 void ByteCodeParser::linkBlocks(Vector<UnlinkedBlock>& unlinkedBlocks, Vector<BasicBlock*>& possibleTargets)
 {
     for (size_t i = 0; i < unlinkedBlocks.size(); ++i) {
-        if (verbose)
+        if (DFGByteCodeParserInternal::verbose)
             dataLog("Attempting to link ", RawPointer(unlinkedBlocks[i].m_block), "\n");
         if (unlinkedBlocks[i].m_needsNormalLinking) {
-            if (verbose)
+            if (DFGByteCodeParserInternal::verbose)
                 dataLog("    Does need normal linking.\n");
             linkBlock(unlinkedBlocks[i].m_block, possibleTargets);
             unlinkedBlocks[i].m_needsNormalLinking = false;
index 57ec4a8..ebd1b8e 100644
@@ -48,7 +48,9 @@ namespace JSC { namespace DFG {
 
 namespace {
 
-const bool verbose = false;
+namespace DFGCSEPhaseInternal {
+static const bool verbose = false;
+}
 
 class ImpureDataSlot {
     WTF_MAKE_NONCOPYABLE(ImpureDataSlot);
@@ -627,7 +629,7 @@ public:
     
     bool iterate()
     {
-        if (verbose)
+        if (DFGCSEPhaseInternal::verbose)
             dataLog("Performing iteration.\n");
         
         m_changed = false;
@@ -638,13 +640,13 @@ public:
             m_impureData = &m_impureDataMap[m_block];
             m_writesSoFar.clear();
             
-            if (verbose)
+            if (DFGCSEPhaseInternal::verbose)
                 dataLog("Processing block ", *m_block, ":\n");
 
             for (unsigned nodeIndex = 0; nodeIndex < m_block->size(); ++nodeIndex) {
                 m_nodeIndex = nodeIndex;
                 m_node = m_block->at(nodeIndex);
-                if (verbose)
+                if (DFGCSEPhaseInternal::verbose)
                     dataLog("  Looking at node ", m_node, ":\n");
                 
                 m_graph.performSubstitution(m_node);
@@ -707,7 +709,7 @@ public:
         // a global search.
         LazyNode match = m_impureData->availableAtTail.get(location);
         if (!!match) {
-            if (verbose)
+            if (DFGCSEPhaseInternal::verbose)
                 dataLog("      Found local match: ", match, "\n");
             return match;
         }
@@ -715,7 +717,7 @@ public:
         // If it's not available at this point in the block, and at some prior point in the block
         // we have clobbered this heap location, then there is no point in doing a global search.
         if (m_writesSoFar.overlaps(location.heap())) {
-            if (verbose)
+            if (DFGCSEPhaseInternal::verbose)
                 dataLog("      Not looking globally because of local clobber: ", m_writesSoFar, "\n");
             return nullptr;
         }
@@ -772,7 +774,7 @@ public:
             BasicBlock* block = worklist.takeLast();
             seenList.append(block);
             
-            if (verbose)
+            if (DFGCSEPhaseInternal::verbose)
                 dataLog("      Searching in block ", *block, "\n");
             ImpureBlockData& data = m_impureDataMap[block];
             
@@ -780,12 +782,12 @@ public:
             // they came *after* our position in the block. Clearly, while our block dominates
             // itself, the things in the block after us don't dominate us.
             if (m_graph.m_ssaDominators->strictlyDominates(block, m_block)) {
-                if (verbose)
+                if (DFGCSEPhaseInternal::verbose)
                     dataLog("        It strictly dominates.\n");
                 DFG_ASSERT(m_graph, m_node, data.didVisit);
                 DFG_ASSERT(m_graph, m_node, !match);
                 match = data.availableAtTail.get(location);
-                if (verbose)
+                if (DFGCSEPhaseInternal::verbose)
                     dataLog("        Availability: ", match, "\n");
                 if (!!match) {
                     // Don't examine the predecessors of a match. At this point we just want to
@@ -795,10 +797,10 @@ public:
                 }
             }
             
-            if (verbose)
+            if (DFGCSEPhaseInternal::verbose)
                 dataLog("        Dealing with write set ", data.writes, "\n");
             if (data.writes.overlaps(location.heap())) {
-                if (verbose)
+                if (DFGCSEPhaseInternal::verbose)
                     dataLog("        Clobbered.\n");
                 return nullptr;
             }
@@ -830,16 +832,16 @@ public:
     
     void def(HeapLocation location, LazyNode value)
     {
-        if (verbose)
+        if (DFGCSEPhaseInternal::verbose)
             dataLog("    Got heap location def: ", location, " -> ", value, "\n");
         
         LazyNode match = findReplacement(location);
         
-        if (verbose)
+        if (DFGCSEPhaseInternal::verbose)
             dataLog("      Got match: ", match, "\n");
         
         if (!match) {
-            if (verbose)
+            if (DFGCSEPhaseInternal::verbose)
                 dataLog("      Adding at-tail mapping: ", location, " -> ", value, "\n");
             auto result = m_impureData->availableAtTail.add(location, value);
             ASSERT_UNUSED(result, !result);
index b0c55be..b49cb4c 100644
@@ -37,7 +37,9 @@
 
 namespace JSC { namespace DFG {
 
+namespace DFGInPlaceAbstractStateInternal {
 static const bool verbose = false;
+}
 
 InPlaceAbstractState::InPlaceAbstractState(Graph& graph)
     : m_graph(graph)
@@ -276,7 +278,7 @@ void InPlaceAbstractState::mergeStateAtTail(AbstractValue& destination, Abstract
 
 bool InPlaceAbstractState::merge(BasicBlock* from, BasicBlock* to)
 {
-    if (verbose)
+    if (DFGInPlaceAbstractStateInternal::verbose)
         dataLog("   Merging from ", pointerDump(from), " to ", pointerDump(to), "\n");
     ASSERT(from->variablesAtTail.numberOfArguments() == to->variablesAtHead.numberOfArguments());
     ASSERT(from->variablesAtTail.numberOfLocals() == to->variablesAtHead.numberOfLocals());
@@ -307,7 +309,7 @@ bool InPlaceAbstractState::merge(BasicBlock* from, BasicBlock* to)
 
         for (NodeAbstractValuePair& entry : to->ssa->valuesAtHead) {
             NodeFlowProjection node = entry.node;
-            if (verbose)
+            if (DFGInPlaceAbstractStateInternal::verbose)
                 dataLog("      Merging for ", node, ": from ", forNode(node), " to ", entry.value, "\n");
 #ifndef NDEBUG
             unsigned valueCountInFromBlock = 0;
@@ -322,7 +324,7 @@ bool InPlaceAbstractState::merge(BasicBlock* from, BasicBlock* to)
 
             changed |= entry.value.merge(forNode(node));
 
-            if (verbose)
+            if (DFGInPlaceAbstractStateInternal::verbose)
                 dataLog("         Result: ", entry.value, "\n");
         }
         break;
@@ -336,7 +338,7 @@ bool InPlaceAbstractState::merge(BasicBlock* from, BasicBlock* to)
     if (!to->cfaHasVisited)
         changed = true;
     
-    if (verbose)
+    if (DFGInPlaceAbstractStateInternal::verbose)
         dataLog("      Will revisit: ", changed, "\n");
     to->cfaShouldRevisit |= changed;
     
index 5d1cbc7..98e0f6f 100644
@@ -41,7 +41,9 @@ namespace JSC { namespace DFG {
 
 namespace {
 
+namespace DFGIntegerCheckCombiningPhaseInternal {
 static const bool verbose = false;
+}
 
 enum RangeKind {
     InvalidRangeKind,
@@ -203,13 +205,13 @@ private:
         
         for (auto* node : *block) {
             RangeKeyAndAddend data = rangeKeyAndAddend(node);
-            if (verbose)
+            if (DFGIntegerCheckCombiningPhaseInternal::verbose)
                 dataLog("For ", node, ": ", data, "\n");
             if (!data)
                 continue;
             
             Range& range = m_map[data.m_key];
-            if (verbose)
+            if (DFGIntegerCheckCombiningPhaseInternal::verbose)
                 dataLog("    Range: ", range, "\n");
             if (range.m_count) {
                 if (data.m_addend > range.m_maxBound) {
@@ -226,7 +228,7 @@ private:
                 range.m_maxOrigin = node->origin.semantic;
             }
             range.m_count++;
-            if (verbose)
+            if (DFGIntegerCheckCombiningPhaseInternal::verbose)
                 dataLog("    New range: ", range, "\n");
         }
         
index 07b6ac3..4a3831e 100644
@@ -42,7 +42,9 @@ namespace JSC { namespace DFG {
 
 namespace {
 
-const bool verbose = false;
+namespace DFGIntegerRangeOptimizationPhaseInternal {
+static const bool verbose = false;
+}
 const unsigned giveUpThreshold = 50;
 
 int64_t clampedSumImpl() { return 0; }
@@ -1019,7 +1021,7 @@ public:
             m_insertionSet.execute(m_graph.block(0));
         }
         
-        if (verbose) {
+        if (DFGIntegerRangeOptimizationPhaseInternal::verbose) {
             dataLog("Graph before integer range optimization:\n");
             m_graph.dump();
         }
@@ -1110,7 +1112,7 @@ public:
                 m_relationships = m_relationshipsAtHead[block];
             
                 for (auto* node : *block) {
-                    if (verbose)
+                    if (DFGIntegerRangeOptimizationPhaseInternal::verbose)
                         dataLog("Analysis: at ", node, ": ", listDump(sortedRelationships()), "\n");
                     executeNode(node);
                 }
@@ -1195,11 +1197,11 @@ public:
                         RelationshipMap forTrue = m_relationships;
                         RelationshipMap forFalse = m_relationships;
                         
-                        if (verbose)
+                        if (DFGIntegerRangeOptimizationPhaseInternal::verbose)
                             dataLog("Dealing with true:\n");
                         setRelationship(forTrue, relationshipForTrue);
                         if (Relationship relationshipForFalse = relationshipForTrue.inverse()) {
-                            if (verbose)
+                            if (DFGIntegerRangeOptimizationPhaseInternal::verbose)
                                 dataLog("Dealing with false:\n");
                             setRelationship(forFalse, relationshipForFalse);
                         }
@@ -1223,7 +1225,7 @@ public:
             m_relationships = m_relationshipsAtHead[block];
             for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
                 Node* node = block->at(nodeIndex);
-                if (verbose)
+                if (DFGIntegerRangeOptimizationPhaseInternal::verbose)
                     dataLog("Transformation: at ", node, ": ", listDump(sortedRelationships()), "\n");
                 
                 // This ends up being pretty awkward to write because we need to decide if we
@@ -1287,14 +1289,14 @@ public:
                         maxValue = std::min(maxValue, relationship.maxValueOfLeft());
                     }
 
-                    if (verbose)
+                    if (DFGIntegerRangeOptimizationPhaseInternal::verbose)
                         dataLog("    minValue = ", minValue, ", maxValue = ", maxValue, "\n");
                     
                     if (sumOverflows<int>(minValue, node->child2()->asInt32()) ||
                         sumOverflows<int>(maxValue, node->child2()->asInt32()))
                         break;
 
-                    if (verbose)
+                    if (DFGIntegerRangeOptimizationPhaseInternal::verbose)
                         dataLog("    It's in bounds.\n");
                     
                     executeNode(block->at(nodeIndex));
@@ -1548,7 +1550,7 @@ private:
         if (!relationship)
             return;
         
-        if (verbose)
+        if (DFGIntegerRangeOptimizationPhaseInternal::verbose)
             dataLog("    Setting: ", relationship, " (ttl = ", timeToLive, ")\n");
 
         auto result = relationshipMap.add(
@@ -1602,7 +1604,7 @@ private:
                     if (otherRelationship.vagueness() < relationship.vagueness()
                         && otherRelationship.right()->isInt32Constant()) {
                         Relationship newRelationship = relationship.filterConstant(otherRelationship);
-                        if (verbose && newRelationship != relationship)
+                        if (DFGIntegerRangeOptimizationPhaseInternal::verbose && newRelationship != relationship)
                             dataLog("      Refined to: ", newRelationship, " based on ", otherRelationship, "\n");
                         relationship = newRelationship;
                     }
@@ -1616,7 +1618,7 @@ private:
                     if (otherRelationship.vagueness() > relationship.vagueness()
                         && otherRelationship.right()->isInt32Constant()) {
                         Relationship newRelationship = otherRelationship.filterConstant(relationship);
-                        if (verbose && newRelationship != otherRelationship)
+                        if (DFGIntegerRangeOptimizationPhaseInternal::verbose && newRelationship != otherRelationship)
                             dataLog("      Refined ", otherRelationship, " to: ", newRelationship, "\n");
                         otherRelationship = newRelationship;
                     }
@@ -1639,7 +1641,7 @@ private:
             // @x == @c and @x != @d, where @d > @c, then we want to turn @x != @d into @x < @d.
             
             if (timeToLive && otherRelationship.kind() == Relationship::Equal) {
-                if (verbose)
+                if (DFGIntegerRangeOptimizationPhaseInternal::verbose)
                     dataLog("      Considering: ", otherRelationship, "\n");
                 
                 // We have:
@@ -1675,7 +1677,7 @@ private:
     
     bool mergeTo(RelationshipMap& relationshipMap, BasicBlock* target)
     {
-        if (verbose) {
+        if (DFGIntegerRangeOptimizationPhaseInternal::verbose) {
             dataLog("Merging to ", pointerDump(target), ":\n");
             dataLog("    Incoming: ", listDump(sortedRelationships(relationshipMap)), "\n");
             dataLog("    At head: ", listDump(sortedRelationships(m_relationshipsAtHead[target])), "\n");
@@ -1697,7 +1699,7 @@ private:
                 for (Relationship relationship : entry.value) {
                     ASSERT(relationship.left() == entry.key);
                     if (isLive(relationship.right())) {
-                        if (verbose)
+                        if (DFGIntegerRangeOptimizationPhaseInternal::verbose)
                             dataLog("  Propagating ", relationship, "\n");
                         values.append(relationship);
                     }
@@ -1734,12 +1736,12 @@ private:
             Vector<Relationship> mergedRelationships;
             for (Relationship targetRelationship : entry.value) {
                 for (Relationship sourceRelationship : iter->value) {
-                    if (verbose)
+                    if (DFGIntegerRangeOptimizationPhaseInternal::verbose)
                         dataLog("  Merging ", targetRelationship, " and ", sourceRelationship, ":\n");
                     targetRelationship.merge(
                         sourceRelationship,
                         [&] (Relationship newRelationship) {
-                            if (verbose)
+                            if (DFGIntegerRangeOptimizationPhaseInternal::verbose)
                                 dataLog("    Got ", newRelationship, "\n");
 
                             if (newRelationship.right()->isInt32Constant()) {
index 9f344b3..1aed2c1 100644
@@ -42,7 +42,9 @@ namespace JSC { namespace DFG {
 
 namespace {
 
-bool verbose = false;
+namespace DFGMovHintRemovalPhaseInternal {
+static const bool verbose = false;
+}
 
 class MovHintRemovalPhase : public Phase {
 public:
@@ -55,7 +57,7 @@ public:
     
     bool run()
     {
-        if (verbose) {
+        if (DFGMovHintRemovalPhaseInternal::verbose) {
             dataLog("Graph before MovHint removal:\n");
             m_graph.dump();
         }
@@ -69,7 +71,7 @@ public:
 private:
     void handleBlock(BasicBlock* block)
     {
-        if (verbose)
+        if (DFGMovHintRemovalPhaseInternal::verbose)
             dataLog("Handing block ", pointerDump(block), "\n");
         
         // A MovHint is unnecessary if the local dies before it is used. We answer this question by
@@ -87,7 +89,7 @@ private:
                 m_state.operand(reg) = currentEpoch;
             });
         
-        if (verbose)
+        if (DFGMovHintRemovalPhaseInternal::verbose)
             dataLog("    Locals: ", m_state, "\n");
         
         // Assume that blocks after us exit.
@@ -98,7 +100,7 @@ private:
             
             if (node->op() == MovHint) {
                 Epoch localEpoch = m_state.operand(node->unlinkedLocal());
-                if (verbose)
+                if (DFGMovHintRemovalPhaseInternal::verbose)
                     dataLog("    At ", node, ": current = ", currentEpoch, ", local = ", localEpoch, "\n");
                 if (!localEpoch || localEpoch == currentEpoch) {
                     node->setOpAndDefaultFlags(ZombieHint);
@@ -120,7 +122,7 @@ private:
                         if (!!m_state.operand(reg))
                             return;
                         
-                        if (verbose)
+                        if (DFGMovHintRemovalPhaseInternal::verbose)
                             dataLog("    Killed operand at ", node, ": ", reg, "\n");
                         m_state.operand(reg) = currentEpoch;
                     });
index 7a9c515..3fda7d4 100644
@@ -48,7 +48,9 @@ namespace JSC { namespace DFG {
 
 namespace {
 
-bool verbose = false;
+namespace DFGObjectAllocationSinkingPhaseInternal {
+static const bool verbose = false;
+}
 
 // In order to sink object cycles, we use a points-to analysis coupled
 // with an escape analysis. This analysis is actually similar to an
@@ -717,7 +719,7 @@ public:
         if (!performSinking())
             return false;
 
-        if (verbose) {
+        if (DFGObjectAllocationSinkingPhaseInternal::verbose) {
             dataLog("Graph after elimination:\n");
             m_graph.dump();
         }
@@ -742,7 +744,7 @@ private:
             graphBeforeSinking = out.toCString();
         }
 
-        if (verbose) {
+        if (DFGObjectAllocationSinkingPhaseInternal::verbose) {
             dataLog("Graph before elimination:\n");
             m_graph.dump();
         }
@@ -752,7 +754,7 @@ private:
         if (!determineSinkCandidates())
             return false;
 
-        if (verbose) {
+        if (DFGObjectAllocationSinkingPhaseInternal::verbose) {
             for (BasicBlock* block : m_graph.blocksInNaturalOrder()) {
                 dataLog("Heap at head of ", *block, ": \n", m_heapAtHead[block]);
                 dataLog("Heap at tail of ", *block, ": \n", m_heapAtTail[block]);
@@ -773,7 +775,7 @@ private:
 
         bool changed;
         do {
-            if (verbose)
+            if (DFGObjectAllocationSinkingPhaseInternal::verbose)
                 dataLog("Doing iteration of escape analysis.\n");
             changed = false;
 
@@ -1198,7 +1200,7 @@ private:
         if (m_sinkCandidates.isEmpty())
             return hasUnescapedReads;
 
-        if (verbose)
+        if (DFGObjectAllocationSinkingPhaseInternal::verbose)
             dataLog("Candidates: ", listDump(m_sinkCandidates), "\n");
 
         // Create the materialization nodes
@@ -1765,7 +1767,7 @@ private:
                 }
             }
 
-            if (verbose) {
+            if (DFGObjectAllocationSinkingPhaseInternal::verbose) {
                 dataLog("Local mapping at ", pointerDump(block), ": ", mapDump(m_localMapping), "\n");
                 dataLog("Local materializations at ", pointerDump(block), ": ", mapDump(m_escapeeToMaterialization), "\n");
             }
@@ -1781,7 +1783,7 @@ private:
                     m_localMapping.set(location, m_bottom);
 
                     if (m_sinkCandidates.contains(node)) {
-                        if (verbose)
+                        if (DFGObjectAllocationSinkingPhaseInternal::verbose)
                             dataLog("For sink candidate ", node, " found location ", location, "\n");
                         m_insertionSet.insert(
                             nodeIndex + 1,
@@ -1796,7 +1798,7 @@ private:
                     populateMaterialization(block, materialization, escapee);
                     m_escapeeToMaterialization.set(escapee, materialization);
                     m_insertionSet.insert(nodeIndex, materialization);
-                    if (verbose)
+                    if (DFGObjectAllocationSinkingPhaseInternal::verbose)
                         dataLog("Materializing ", escapee, " => ", materialization, " at ", node, "\n");
                 }
 
@@ -1851,7 +1853,7 @@ private:
 
                         doLower = true;
 
-                        if (verbose)
+                        if (DFGObjectAllocationSinkingPhaseInternal::verbose)
                             dataLog("Creating hint with value ", nodeValue, " before ", node, "\n");
                         m_insertionSet.insert(
                             nodeIndex + 1,
@@ -1994,7 +1996,7 @@ private:
 
     void insertOSRHintsForUpdate(unsigned nodeIndex, NodeOrigin origin, bool& canExit, AvailabilityMap& availability, Node* escapee, Node* materialization)
     {
-        if (verbose) {
+        if (DFGObjectAllocationSinkingPhaseInternal::verbose) {
             dataLog("Inserting OSR hints at ", origin, ":\n");
             dataLog("    Escapee: ", escapee, "\n");
             dataLog("    Materialization: ", materialization, "\n");
@@ -2167,7 +2169,7 @@ private:
 
     Node* createRecovery(BasicBlock* block, PromotedHeapLocation location, Node* where, bool& canExit)
     {
-        if (verbose)
+        if (DFGObjectAllocationSinkingPhaseInternal::verbose)
             dataLog("Recovering ", location, " at ", where, "\n");
         ASSERT(location.base()->isPhantomAllocation());
         Node* base = getMaterialization(block, location.base());
@@ -2175,7 +2177,7 @@ private:
 
         NodeOrigin origin = where->origin.withSemantic(base->origin.semantic);
 
-        if (verbose)
+        if (DFGObjectAllocationSinkingPhaseInternal::verbose)
             dataLog("Base is ", base, " and value is ", value, "\n");
 
         if (base->isPhantomAllocation()) {
index dcc1c97..3c088a6 100644
@@ -43,7 +43,9 @@ namespace JSC { namespace DFG {
 
 namespace {
 
-bool verbose = false;
+namespace DFGPhantomInsertionPhaseInternal {
+static const bool verbose = false;
+}
 
 class PhantomInsertionPhase : public Phase {
 public:
@@ -60,7 +62,7 @@ public:
         // SetLocals execute, which is inaccurate. That causes us to insert too few Phantoms.
         DFG_ASSERT(m_graph, nullptr, m_graph.m_refCountState == ExactRefCount);
         
-        if (verbose) {
+        if (DFGPhantomInsertionPhaseInternal::verbose) {
             dataLog("Graph before Phantom insertion:\n");
             m_graph.dump();
         }
@@ -70,7 +72,7 @@ public:
         for (BasicBlock* block : m_graph.blocksInNaturalOrder())
             handleBlock(block);
         
-        if (verbose) {
+        if (DFGPhantomInsertionPhaseInternal::verbose) {
             dataLog("Graph after Phantom insertion:\n");
             m_graph.dump();
         }
@@ -101,7 +103,7 @@ private:
         unsigned lastExitingIndex = 0;
         for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
             Node* node = block->at(nodeIndex);
-            if (verbose)
+            if (DFGPhantomInsertionPhaseInternal::verbose)
                 dataLog("Considering ", node, "\n");
             
             switch (node->op()) {
@@ -139,7 +141,7 @@ private:
             VirtualRegister alreadyKilled;
 
             auto processKilledOperand = [&] (VirtualRegister reg) {
-                if (verbose)
+                if (DFGPhantomInsertionPhaseInternal::verbose)
                     dataLog("    Killed operand: ", reg, "\n");
 
                 // Already handled from SetLocal.
@@ -155,7 +157,7 @@ private:
                 if (killedNode->epoch() == currentEpoch)
                     return;
                 
-                if (verbose) {
+                if (DFGPhantomInsertionPhaseInternal::verbose) {
                     dataLog(
                         "    Inserting Phantom on ", killedNode, " after ",
                         block->at(lastExitingIndex), "\n");
index d3da552..14ced2a 100644
@@ -42,7 +42,9 @@ namespace JSC { namespace DFG {
 
 namespace {
 
-bool verbose = false;
+namespace DFGPutStackSinkingPhaseInternal {
+static const bool verbose = false;
+}
 
 class PutStackSinkingPhase : public Phase {
 public:
@@ -71,7 +73,7 @@ public:
         // the stack. It's not clear to me if this is important or not.
         // https://bugs.webkit.org/show_bug.cgi?id=145296
         
-        if (verbose) {
+        if (DFGPutStackSinkingPhaseInternal::verbose) {
             dataLog("Graph before PutStack sinking:\n");
             m_graph.dump();
         }
@@ -105,7 +107,7 @@ public:
                 Operands<bool> live = liveAtTail[block];
                 for (unsigned nodeIndex = block->size(); nodeIndex--;) {
                     Node* node = block->at(nodeIndex);
-                    if (verbose)
+                    if (DFGPutStackSinkingPhaseInternal::verbose)
                         dataLog("Live at ", node, ": ", live, "\n");
                     
                     Vector<VirtualRegister, 4> reads;
@@ -113,7 +115,7 @@ public:
                     auto escapeHandler = [&] (VirtualRegister operand) {
                         if (operand.isHeader())
                             return;
-                        if (verbose)
+                        if (DFGPutStackSinkingPhaseInternal::verbose)
                             dataLog("    ", operand, " is live at ", node, "\n");
                         reads.append(operand);
                     };
@@ -230,7 +232,7 @@ public:
                 Operands<FlushFormat> deferred = deferredAtHead[block];
                 
                 for (Node* node : *block) {
-                    if (verbose)
+                    if (DFGPutStackSinkingPhaseInternal::verbose)
                         dataLog("Deferred at ", node, ":", deferred, "\n");
                     
                     if (node->op() == GetStack) {
@@ -277,7 +279,7 @@ public:
                     }
                     
                     auto escapeHandler = [&] (VirtualRegister operand) {
-                        if (verbose)
+                        if (DFGPutStackSinkingPhaseInternal::verbose)
                             dataLog("For ", node, " escaping ", operand, "\n");
                         if (operand.isHeader())
                             return;
@@ -303,13 +305,13 @@ public:
                 
                 for (BasicBlock* successor : block->successors()) {
                     for (size_t i = deferred.size(); i--;) {
-                        if (verbose)
+                        if (DFGPutStackSinkingPhaseInternal::verbose)
                             dataLog("Considering ", VirtualRegister(deferred.operandForIndex(i)), " at ", pointerDump(block), "->", pointerDump(successor), ": ", deferred[i], " and ", deferredAtHead[successor][i], " merges to ");
 
                         deferredAtHead[successor][i] =
                             merge(deferredAtHead[successor][i], deferred[i]);
                         
-                        if (verbose)
+                        if (DFGPutStackSinkingPhaseInternal::verbose)
                             dataLog(deferredAtHead[successor][i], "\n");
                     }
                 }
@@ -387,7 +389,7 @@ public:
                 if (!isConcrete(format))
                     return nullptr;
 
-                if (verbose)
+                if (DFGPutStackSinkingPhaseInternal::verbose)
                     dataLog("Adding Phi for ", operand, " at ", pointerDump(block), "\n");
                 
                 Node* phiNode = m_graph.addNode(SpecHeapTop, Phi, block->at(0)->origin.withInvalidExit());
@@ -411,7 +413,7 @@ public:
                 mapping.operand(operand) = def->value();
             }
             
-            if (verbose)
+            if (DFGPutStackSinkingPhaseInternal::verbose)
                 dataLog("Mapping at top of ", pointerDump(block), ": ", mapping, "\n");
             
             for (SSACalculator::Def* phiDef : ssaCalculator.phisForBlock(block)) {
@@ -419,7 +421,7 @@ public:
                 
                 insertionSet.insert(0, phiDef->value());
                 
-                if (verbose)
+                if (DFGPutStackSinkingPhaseInternal::verbose)
                     dataLog("   Mapping ", operand, " to ", phiDef->value(), "\n");
                 mapping.operand(operand) = phiDef->value();
             }
@@ -427,7 +429,7 @@ public:
             deferred = deferredAtHead[block];
             for (unsigned nodeIndex = 0; nodeIndex < block->size(); ++nodeIndex) {
                 Node* node = block->at(nodeIndex);
-                if (verbose)
+                if (DFGPutStackSinkingPhaseInternal::verbose)
                     dataLog("Deferred at ", node, ":", deferred, "\n");
                 
                 switch (node->op()) {
@@ -435,7 +437,7 @@ public:
                     StackAccessData* data = node->stackAccessData();
                     VirtualRegister operand = data->local;
                     deferred.operand(operand) = data->format;
-                    if (verbose)
+                    if (DFGPutStackSinkingPhaseInternal::verbose)
                         dataLog("   Mapping ", operand, " to ", node->child1().node(), " at ", node, "\n");
                     mapping.operand(operand) = node->child1().node();
                     break;
@@ -470,7 +472,7 @@ public:
                 
                 default: {
                     auto escapeHandler = [&] (VirtualRegister operand) {
-                        if (verbose)
+                        if (DFGPutStackSinkingPhaseInternal::verbose)
                             dataLog("For ", node, " escaping ", operand, "\n");
 
                         if (operand.isHeader())
@@ -484,7 +486,7 @@ public:
                         }
                     
                         // Gotta insert a PutStack.
-                        if (verbose)
+                        if (DFGPutStackSinkingPhaseInternal::verbose)
                             dataLog("Inserting a PutStack for ", operand, " at ", node, "\n");
 
                         Node* incoming = mapping.operand(operand);
@@ -523,7 +525,7 @@ public:
                     Node* phiNode = phiDef->value();
                     SSACalculator::Variable* variable = phiDef->variable();
                     VirtualRegister operand = indexToOperand[variable->index()];
-                    if (verbose)
+                    if (DFGPutStackSinkingPhaseInternal::verbose)
                         dataLog("Creating Upsilon for ", operand, " at ", pointerDump(block), "->", pointerDump(successorBlock), "\n");
                     FlushFormat format = deferredAtHead[successorBlock].operand(operand);
                     DFG_ASSERT(m_graph, nullptr, isConcrete(format));
@@ -568,7 +570,7 @@ public:
             }
         }
         
-        if (verbose) {
+        if (DFGPutStackSinkingPhaseInternal::verbose) {
             dataLog("Graph after PutStack sinking:\n");
             m_graph.dump();
         }
index 3a2a717..b5e2c35 100644
@@ -43,7 +43,9 @@ namespace JSC { namespace DFG {
 
 namespace {
 
-bool verbose = false;
+namespace DFGStoreBarrierInsertionPhaseInternal {
+static const bool verbose = false;
+}
 
 enum class PhaseMode {
     // Does only a local analysis for store barrier insertion and assumes that pointers live
@@ -75,7 +77,7 @@ public:
     
     bool run()
     {
-        if (verbose) {
+        if (DFGStoreBarrierInsertionPhaseInternal::verbose) {
             dataLog("Starting store barrier insertion:\n");
             m_graph.dump();
         }
@@ -167,7 +169,7 @@ public:
 private:
     bool handleBlock(BasicBlock* block)
     {
-        if (verbose) {
+        if (DFGStoreBarrierInsertionPhaseInternal::verbose) {
             dataLog("Dealing with block ", pointerDump(block), "\n");
             if (reallyInsertBarriers())
                 dataLog("    Really inserting barriers.\n");
@@ -206,7 +208,7 @@ private:
         for (m_nodeIndex = 0; m_nodeIndex < block->size(); ++m_nodeIndex) {
             m_node = block->at(m_nodeIndex);
             
-            if (verbose) {
+            if (DFGStoreBarrierInsertionPhaseInternal::verbose) {
                 dataLog(
                     "    ", m_currentEpoch, ": Looking at node ", m_node, " with children: ");
                 CommaPrinter comma;
@@ -348,7 +350,7 @@ private:
                 break;
             }
             
-            if (verbose) {
+            if (DFGStoreBarrierInsertionPhaseInternal::verbose) {
                 dataLog(
                     "    ", m_currentEpoch, ": Done with node ", m_node, " (", m_node->epoch(),
                     ") with children: ");
@@ -380,7 +382,7 @@ private:
     
     void considerBarrier(Edge base, Edge child)
     {
-        if (verbose)
+        if (DFGStoreBarrierInsertionPhaseInternal::verbose)
             dataLog("        Considering adding barrier ", base, " => ", child, "\n");
         
         // We don't need a store barrier if the child is guaranteed to not be a cell.
@@ -389,7 +391,7 @@ private:
             // Don't try too hard because it's too expensive to run AI.
             if (child->hasConstant()) {
                 if (!child->asJSValue().isCell()) {
-                    if (verbose)
+                    if (DFGStoreBarrierInsertionPhaseInternal::verbose)
                         dataLog("            Rejecting because of constant type.\n");
                     return;
                 }
@@ -400,7 +402,7 @@ private:
                 case NodeResultInt32:
                 case NodeResultInt52:
                 case NodeResultBoolean:
-                    if (verbose)
+                    if (DFGStoreBarrierInsertionPhaseInternal::verbose)
                         dataLog("            Rejecting because of result type.\n");
                     return;
                 default:
@@ -414,7 +416,7 @@ private:
             // Go into rage mode to eliminate any chance of a barrier with a non-cell child. We
             // can afford to keep around AI in Global mode.
             if (!m_interpreter->needsTypeCheck(child, ~SpecCell)) {
-                if (verbose)
+                if (DFGStoreBarrierInsertionPhaseInternal::verbose)
                     dataLog("            Rejecting because of AI type.\n");
                 return;
             }
@@ -426,7 +428,7 @@ private:
     
     void considerBarrier(Edge base)
     {
-        if (verbose)
+        if (DFGStoreBarrierInsertionPhaseInternal::verbose)
             dataLog("        Considering adding barrier on ", base, "\n");
         
         // We don't need a store barrier if the epoch of the base is identical to the current
@@ -434,12 +436,12 @@ private:
         // be in newgen, or we just ran a barrier on it so it's guaranteed to be remembered
         // already.
         if (base->epoch() == m_currentEpoch) {
-            if (verbose)
+            if (DFGStoreBarrierInsertionPhaseInternal::verbose)
                 dataLog("            Rejecting because it's in the current epoch.\n");
             return;
         }
         
-        if (verbose)
+        if (DFGStoreBarrierInsertionPhaseInternal::verbose)
             dataLog("            Inserting barrier.\n");
         insertBarrier(m_nodeIndex + 1, base);
     }
index e3c52ac..e973957 100644
@@ -40,7 +40,9 @@ namespace JSC { namespace DFG {
 
 namespace {
 
-bool verbose = false;
+namespace DFGVarargsForwardingPhaseInternal {
+static const bool verbose = false;
+}
 
 class VarargsForwardingPhase : public Phase {
 public:
@@ -53,7 +55,7 @@ public:
     {
         DFG_ASSERT(m_graph, nullptr, m_graph.m_form != SSA);
         
-        if (verbose) {
+        if (DFGVarargsForwardingPhaseInternal::verbose) {
             dataLog("Graph before varargs forwarding:\n");
             m_graph.dump();
         }
@@ -85,7 +87,7 @@ private:
         // We expect calls into this function to be rare. So, this is written in a simple O(n) manner.
         
         Node* candidate = block->at(candidateNodeIndex);
-        if (verbose)
+        if (DFGVarargsForwardingPhaseInternal::verbose)
             dataLog("Handling candidate ", candidate, "\n");
         
         // Find the index of the last node in this block to use the candidate, and look for escaping
@@ -121,7 +123,7 @@ private:
                         sawEscape = true;
                     });
                 if (sawEscape) {
-                    if (verbose)
+                    if (DFGVarargsForwardingPhaseInternal::verbose)
                         dataLog("    Escape at ", node, "\n");
                     return;
                 }
@@ -138,7 +140,7 @@ private:
             case TailCallVarargs:
             case TailCallVarargsInlinedCaller:
                 if (node->child1() == candidate || node->child2() == candidate) {
-                    if (verbose)
+                    if (DFGVarargsForwardingPhaseInternal::verbose)
                         dataLog("    Escape at ", node, "\n");
                     return;
                 }
@@ -148,7 +150,7 @@ private:
                 
             case SetLocal:
                 if (node->child1() == candidate && node->variableAccessData()->isLoadedFrom()) {
-                    if (verbose)
+                    if (DFGVarargsForwardingPhaseInternal::verbose)
                         dataLog("    Escape at ", node, "\n");
                     return;
                 }
@@ -156,7 +158,7 @@ private:
                 
             default:
                 if (m_graph.uses(node, candidate)) {
-                    if (verbose)
+                    if (DFGVarargsForwardingPhaseInternal::verbose)
                         dataLog("    Escape at ", node, "\n");
                     return;
                 }
@@ -165,7 +167,7 @@ private:
             forAllKilledOperands(
                 m_graph, node, block->tryAt(nodeIndex + 1),
                 [&] (VirtualRegister reg) {
-                    if (verbose)
+                    if (DFGVarargsForwardingPhaseInternal::verbose)
                         dataLog("    Killing ", reg, " while we are interested in ", listDump(relevantLocals), "\n");
                     for (unsigned i = 0; i < relevantLocals.size(); ++i) {
                         if (relevantLocals[i] == reg) {
@@ -176,7 +178,7 @@ private:
                     }
                 });
         }
-        if (verbose)
+        if (DFGVarargsForwardingPhaseInternal::verbose)
             dataLog("Selected lastUserIndex = ", lastUserIndex, ", ", block->at(lastUserIndex), "\n");
         
         // We're still in business. Determine if between the candidate and the last user there is any
@@ -193,7 +195,7 @@ private:
             case ZombieHint:
             case KillStack:
                 if (argumentsInvolveStackSlot(candidate, node->unlinkedLocal())) {
-                    if (verbose)
+                    if (DFGVarargsForwardingPhaseInternal::verbose)
                         dataLog("    Interference at ", node, "\n");
                     return;
                 }
@@ -201,7 +203,7 @@ private:
                 
             case PutStack:
                 if (argumentsInvolveStackSlot(candidate, node->stackAccessData()->local)) {
-                    if (verbose)
+                    if (DFGVarargsForwardingPhaseInternal::verbose)
                         dataLog("    Interference at ", node, "\n");
                     return;
                 }
@@ -210,7 +212,7 @@ private:
             case SetLocal:
             case Flush:
                 if (argumentsInvolveStackSlot(candidate, node->local())) {
-                    if (verbose)
+                    if (DFGVarargsForwardingPhaseInternal::verbose)
                         dataLog("    Interference at ", node, "\n");
                     return;
                 }
@@ -232,7 +234,7 @@ private:
                     },
                     NoOpClobberize());
                 if (doesInterfere) {
-                    if (verbose)
+                    if (DFGVarargsForwardingPhaseInternal::verbose)
                         dataLog("    Interference at ", node, "\n");
                     return;
                 }
@@ -240,7 +242,7 @@ private:
         }
         
         // We can make this work.
-        if (verbose)
+        if (DFGVarargsForwardingPhaseInternal::verbose)
             dataLog("    Will do forwarding!\n");
         m_changed = true;
         
index 90211fd..9b9914a 100644
@@ -38,8 +38,6 @@
 
 namespace JSC { namespace FTL {
 
-using namespace B3;
-
 AbstractHeap::AbstractHeap(AbstractHeap* parent, const char* heapName, ptrdiff_t offset)
     : m_offset(offset)
     , m_heapName(heapName)
@@ -77,7 +75,7 @@ void AbstractHeap::compute(unsigned begin)
 
     if (m_children.isEmpty()) {
         // Must special-case leaves so that they use just one slot on the number line.
-        m_range = HeapRange(begin);
+        m_range = B3::HeapRange(begin);
         return;
     }
 
@@ -87,7 +85,7 @@ void AbstractHeap::compute(unsigned begin)
         current = child->range().end();
     }
 
-    m_range = HeapRange(begin, current);
+    m_range = B3::HeapRange(begin, current);
 }
 
 void AbstractHeap::shallowDump(PrintStream& out) const
index 5903a8e..40cc8b8 100644
@@ -47,8 +47,6 @@
 
 namespace JSC { namespace FTL {
 
-using namespace B3;
-
 AbstractHeapRepository::AbstractHeapRepository()
     : root(nullptr, "jscRoot")
 
@@ -92,48 +90,49 @@ AbstractHeapRepository::~AbstractHeapRepository()
 {
 }
 
-void AbstractHeapRepository::decorateMemory(const AbstractHeap* heap, Value* value)
+void AbstractHeapRepository::decorateMemory(const AbstractHeap* heap, B3::Value* value)
 {
     m_heapForMemory.append(HeapForValue(heap, value));
 }
 
-void AbstractHeapRepository::decorateCCallRead(const AbstractHeap* heap, Value* value)
+void AbstractHeapRepository::decorateCCallRead(const AbstractHeap* heap, B3::Value* value)
 {
     m_heapForCCallRead.append(HeapForValue(heap, value));
 }
 
-void AbstractHeapRepository::decorateCCallWrite(const AbstractHeap* heap, Value* value)
+void AbstractHeapRepository::decorateCCallWrite(const AbstractHeap* heap, B3::Value* value)
 {
     m_heapForCCallWrite.append(HeapForValue(heap, value));
 }
 
-void AbstractHeapRepository::decoratePatchpointRead(const AbstractHeap* heap, Value* value)
+void AbstractHeapRepository::decoratePatchpointRead(const AbstractHeap* heap, B3::Value* value)
 {
     m_heapForPatchpointRead.append(HeapForValue(heap, value));
 }
 
-void AbstractHeapRepository::decoratePatchpointWrite(const AbstractHeap* heap, Value* value)
+void AbstractHeapRepository::decoratePatchpointWrite(const AbstractHeap* heap, B3::Value* value)
 {
     m_heapForPatchpointWrite.append(HeapForValue(heap, value));
 }
 
-void AbstractHeapRepository::decorateFenceRead(const AbstractHeap* heap, Value* value)
+void AbstractHeapRepository::decorateFenceRead(const AbstractHeap* heap, B3::Value* value)
 {
     m_heapForFenceRead.append(HeapForValue(heap, value));
 }
 
-void AbstractHeapRepository::decorateFenceWrite(const AbstractHeap* heap, Value* value)
+void AbstractHeapRepository::decorateFenceWrite(const AbstractHeap* heap, B3::Value* value)
 {
     m_heapForFenceWrite.append(HeapForValue(heap, value));
 }
 
-void AbstractHeapRepository::decorateFencedAccess(const AbstractHeap* heap, Value* value)
+void AbstractHeapRepository::decorateFencedAccess(const AbstractHeap* heap, B3::Value* value)
 {
     m_heapForFencedAccess.append(HeapForValue(heap, value));
 }
 
 void AbstractHeapRepository::computeRangesAndDecorateInstructions()
 {
+    using namespace B3;
     root.compute();
 
     if (verboseCompilationEnabled()) {
index 36dc49c..b28dd9a 100644
@@ -37,9 +37,7 @@
 
 namespace JSC { namespace FTL {
 
-using namespace DFG;
-
-JITFinalizer::JITFinalizer(Plan& plan)
+JITFinalizer::JITFinalizer(DFG::Plan& plan)
     : Finalizer(plan)
 {
 }
index 676b390..7c0181d 100644
 
 namespace JSC { namespace FTL {
 
-using namespace DFG;
-
 void link(State& state)
 {
+    using namespace DFG;
     Graph& graph = state.graph;
     CodeBlock* codeBlock = graph.m_codeBlock;
     VM& vm = graph.m_vm;
index 417178d..8bca56c 100644
 
 namespace JSC { namespace FTL {
 
-using namespace JSC::DFG;
-
 extern "C" void JIT_OPERATION operationPopulateObjectInOSR(
     ExecState* exec, ExitTimeObjectMaterialization* materialization,
     EncodedJSValue* encodedValue, EncodedJSValue* values)
 {
+    using namespace DFG;
     VM& vm = exec->vm();
     CodeBlock* codeBlock = exec->codeBlock();
 
@@ -120,6 +119,7 @@ extern "C" void JIT_OPERATION operationPopulateObjectInOSR(
 extern "C" JSCell* JIT_OPERATION operationMaterializeObjectInOSR(
     ExecState* exec, ExitTimeObjectMaterialization* materialization, EncodedJSValue* values)
 {
+    using namespace DFG;
     VM& vm = exec->vm();
 
     // We cannot GC. We've got pointers in evil places.
index 01c06e7..c5a6c66 100644
@@ -27,6 +27,7 @@
 #include "MarkingConstraintSet.h"
 
 #include "Options.h"
+#include <wtf/Function.h>
 #include <wtf/TimeWithDynamicClockType.h>
 
 namespace JSC {
@@ -106,15 +107,15 @@ void MarkingConstraintSet::didStartMarking()
     m_iteration = 1;
 }
 
-void MarkingConstraintSet::add(CString abbreviatedName, CString name, Function<void(SlotVisitor&, const VisitingTimeout&)> function, ConstraintVolatility volatility)
+void MarkingConstraintSet::add(CString abbreviatedName, CString name, ::Function<void(SlotVisitor&, const VisitingTimeout&)> function, ConstraintVolatility volatility)
 {
     add(std::make_unique<MarkingConstraint>(WTFMove(abbreviatedName), WTFMove(name), WTFMove(function), volatility));
 }
 
 void MarkingConstraintSet::add(
     CString abbreviatedName, CString name,
-    Function<void(SlotVisitor&, const VisitingTimeout&)> executeFunction,
-    Function<double(SlotVisitor&)> quickWorkEstimateFunction,
+    ::Function<void(SlotVisitor&, const VisitingTimeout&)> executeFunction,
+    ::Function<double(SlotVisitor&)> quickWorkEstimateFunction,
     ConstraintVolatility volatility)
 {
     add(std::make_unique<MarkingConstraint>(WTFMove(abbreviatedName), WTFMove(name), WTFMove(executeFunction), WTFMove(quickWorkEstimateFunction), volatility));
index ae6e183..f70666d 100644
@@ -33,7 +33,9 @@
 
 namespace JSC {
 
+namespace ShadowChickenInternal {
 static const bool verbose = false;
+}
 
 void ShadowChicken::Packet::dump(PrintStream& out) const
 {
@@ -86,7 +88,7 @@ void ShadowChicken::log(VM& vm, ExecState* exec, const Packet& packet)
 
 void ShadowChicken::update(VM& vm, ExecState* exec)
 {
-    if (verbose) {
+    if (ShadowChickenInternal::verbose) {
         dataLog("Running update on: ", *this, "\n");
         WTFReportBacktrace();
     }
@@ -112,13 +114,13 @@ void ShadowChicken::update(VM& vm, ExecState* exec)
         }
     }
     
-    if (verbose)
+    if (ShadowChickenInternal::verbose)
         dataLog("Highest point since last time: ", RawPointer(highestPointSinceLastTime), "\n");
     
     while (!m_stack.isEmpty() && (m_stack.last().frame < highestPointSinceLastTime || m_stack.last().isTailDeleted))
         m_stack.removeLast();
     
-    if (verbose)
+    if (ShadowChickenInternal::verbose)
         dataLog("    Revised stack: ", listDump(m_stack), "\n");
     
     // It's possible that the top of stack is now tail-deleted. The stack no longer contains any
@@ -141,7 +143,7 @@ void ShadowChicken::update(VM& vm, ExecState* exec)
     }
 
     
-    if (verbose)
+    if (ShadowChickenInternal::verbose)
         dataLog("    Revised stack: ", listDump(m_stack), "\n");
     
     // The log-based and exec-based rules require that ShadowChicken was enabled. The point of
@@ -169,7 +171,7 @@ void ShadowChicken::update(VM& vm, ExecState* exec)
             });
         stackRightNow.reverse();
         
-        if (verbose)
+        if (ShadowChickenInternal::verbose)
             dataLog("    Stack right now: ", listDump(stackRightNow), "\n");
         
         unsigned shadowIndex = 0;
@@ -194,7 +196,7 @@ void ShadowChicken::update(VM& vm, ExecState* exec)
         }
         m_stack.resize(shadowIndex);
         
-        if (verbose)
+        if (ShadowChickenInternal::verbose)
             dataLog("    Revised stack: ", listDump(m_stack), "\n");
     }
     
@@ -208,17 +210,17 @@ void ShadowChicken::update(VM& vm, ExecState* exec)
         }
     }
     
-    if (verbose)
+    if (ShadowChickenInternal::verbose)
         dataLog("    Highest point since last time: ", RawPointer(highestPointSinceLastTime), "\n");
     
     // Set everything up so that we know where the top frame is in the log.
     unsigned indexInLog = logCursorIndex;
     
     auto advanceIndexInLogTo = [&] (CallFrame* frame, JSObject* callee, CallFrame* callerFrame) -> bool {
-        if (verbose)
+        if (ShadowChickenInternal::verbose)
             dataLog("    Advancing to frame = ", RawPointer(frame), " from indexInLog = ", indexInLog, "\n");
         if (indexInLog > logCursorIndex) {
-            if (verbose)
+            if (ShadowChickenInternal::verbose)
                 dataLog("    Bailing.\n");
             return false;
         }
@@ -244,7 +246,7 @@ void ShadowChicken::update(VM& vm, ExecState* exec)
             if (packet.isPrologue() && packet.frame == frame
                 && (!callee || packet.callee == callee)
                 && (!callerFrame || packet.callerFrame == callerFrame)) {
-                if (verbose)
+                if (ShadowChickenInternal::verbose)
                     dataLog("    Found at indexInLog = ", indexInLog, "\n");
                 return true;
             }
@@ -266,7 +268,7 @@ void ShadowChicken::update(VM& vm, ExecState* exec)
         // It seems like the latter option is less harmful, so that's what we do.
         indexInLog = oldIndexInLog;
         
-        if (verbose)
+        if (ShadowChickenInternal::verbose)
             dataLog("    Didn't find it.\n");
         return false;
     };
@@ -286,10 +288,10 @@ void ShadowChicken::update(VM& vm, ExecState* exec)
             }
 
             CallFrame* callFrame = visitor->callFrame();
-            if (verbose)
+            if (ShadowChickenInternal::verbose)
                 dataLog("    Examining ", RawPointer(callFrame), "\n");
             if (callFrame == highestPointSinceLastTime) {
-                if (verbose)
+                if (ShadowChickenInternal::verbose)
                     dataLog("    Bailing at ", RawPointer(callFrame), " because it's the highest point since last time.\n");
                 return StackVisitor::Done;
             }
@@ -312,7 +314,7 @@ void ShadowChicken::update(VM& vm, ExecState* exec)
                 // This condition protects us from the case where advanceIndexInLogTo didn't find
                 // anything.
                 && m_log[indexInLog].frame == toPush.last().frame) {
-                if (verbose)
+                if (ShadowChickenInternal::verbose)
                     dataLog("    Going to loop through to find tail deleted frames with indexInLog = ", indexInLog, " and push-stack top = ", toPush.last(), "\n");
                 for (;;) {
                     ASSERT(m_log[indexInLog].frame == toPush.last().frame);
@@ -337,7 +339,7 @@ void ShadowChicken::update(VM& vm, ExecState* exec)
                     indexInLog--; // Skip over the tail packet.
                     
                     if (!advanceIndexInLogTo(tailPacket.frame, nullptr, nullptr)) {
-                        if (verbose)
+                        if (ShadowChickenInternal::verbose)
                             dataLog("Can't find prologue packet for tail: ", RawPointer(tailPacket.frame), "\n");
                         // We were unable to locate the prologue packet for this tail packet.
                         // This is rare but can happen in a situation like:
@@ -357,7 +359,7 @@ void ShadowChicken::update(VM& vm, ExecState* exec)
             return StackVisitor::Continue;
         });
 
-    if (verbose)
+    if (ShadowChickenInternal::verbose)
         dataLog("    Pushing: ", listDump(toPush), "\n");
     
     for (unsigned i = toPush.size(); i--;)
@@ -373,7 +375,7 @@ void ShadowChicken::update(VM& vm, ExecState* exec)
     } else
         m_logCursor = m_log;
 
-    if (verbose)
+    if (ShadowChickenInternal::verbose)
         dataLog("    After pushing: ", *this, "\n");
 
     // Remove tail frames until the number of tail deleted frames is small enough.
@@ -399,7 +401,7 @@ void ShadowChicken::update(VM& vm, ExecState* exec)
         }
     }
 
-    if (verbose)
+    if (ShadowChickenInternal::verbose)
         dataLog("    After clean-up: ", *this, "\n");
 }
 
index f3ddcfc..0eab565 100644
@@ -33,7 +33,9 @@
 
 namespace JSC {
 
+namespace BinarySwitchInternal {
 static const bool verbose = false;
+}
 
 static unsigned globalCounter; // We use a different seed every time we are invoked.
 
@@ -47,7 +49,7 @@ BinarySwitch::BinarySwitch(GPRReg value, const Vector<int64_t>& cases, Type type
     if (cases.isEmpty())
         return;
 
-    if (verbose)
+    if (BinarySwitchInternal::verbose)
         dataLog("Original cases: ", listDump(cases), "\n");
     
     for (unsigned i = 0; i < cases.size(); ++i)
@@ -55,7 +57,7 @@ BinarySwitch::BinarySwitch(GPRReg value, const Vector<int64_t>& cases, Type type
     
     std::sort(m_cases.begin(), m_cases.end());
 
-    if (verbose)
+    if (BinarySwitchInternal::verbose)
         dataLog("Sorted cases: ", listDump(m_cases), "\n");
     
     for (unsigned i = 1; i < m_cases.size(); ++i)
@@ -137,11 +139,11 @@ bool BinarySwitch::advance(MacroAssembler& jit)
 
 void BinarySwitch::build(unsigned start, bool hardStart, unsigned end)
 {
-    if (verbose)
+    if (BinarySwitchInternal::verbose)
         dataLog("Building with start = ", start, ", hardStart = ", hardStart, ", end = ", end, "\n");
 
     auto append = [&] (const BranchCode& code) {
-        if (verbose)
+        if (BinarySwitchInternal::verbose)
             dataLog("==> ", code, "\n");
         m_branches.append(code);
     };
@@ -159,7 +161,7 @@ void BinarySwitch::build(unsigned start, bool hardStart, unsigned end)
     const unsigned leafThreshold = 3;
     
     if (size <= leafThreshold) {
-        if (verbose)
+        if (BinarySwitchInternal::verbose)
             dataLog("It's a leaf.\n");
         
         // It turns out that for exactly three cases or less, it's better to just compare each
@@ -186,7 +188,7 @@ void BinarySwitch::build(unsigned start, bool hardStart, unsigned end)
             }
         }
 
-        if (verbose)
+        if (BinarySwitchInternal::verbose)
             dataLog("allConsecutive = ", allConsecutive, "\n");
         
         Vector<unsigned, 3> localCaseIndices;
@@ -214,7 +216,7 @@ void BinarySwitch::build(unsigned start, bool hardStart, unsigned end)
         return;
     }
 
-    if (verbose)
+    if (BinarySwitchInternal::verbose)
         dataLog("It's not a leaf.\n");
         
     // There are two different strategies we could consider here:
@@ -314,7 +316,7 @@ void BinarySwitch::build(unsigned start, bool hardStart, unsigned end)
         
     unsigned medianIndex = (start + end) / 2;
 
-    if (verbose)
+    if (BinarySwitchInternal::verbose)
         dataLog("medianIndex = ", medianIndex, "\n");
 
     // We want medianIndex to point to the thing we will do a less-than compare against. We want
@@ -347,7 +349,7 @@ void BinarySwitch::build(unsigned start, bool hardStart, unsigned end)
     RELEASE_ASSERT(medianIndex > start);
     RELEASE_ASSERT(medianIndex + 1 < end);
         
-    if (verbose)
+    if (BinarySwitchInternal::verbose)
         dataLog("fixed medianIndex = ", medianIndex, "\n");
 
     append(BranchCode(LessThanToPush, medianIndex));
index 1e49f2f..ad869f2 100644
@@ -225,7 +225,9 @@ void Data::finalizeStats()
 }
 
 #if ENABLE(LLINT_STATS)
+namespace LLIntDataInternal {
 static const bool verboseStats = false;
+}
 
 static bool compareStats(const OpcodeStats& a, const OpcodeStats& b)
 {
@@ -283,7 +285,7 @@ void Data::loadStats()
     unsigned index;
     char opcodeName[100];
     while (fscanf(file, "[%u]: fast:%zu slow:%zu id:%u %s\n", &index, &loaded.count, &loaded.slowPathCount, &loaded.id, opcodeName) != EOF) {
-        if (verboseStats)
+        if (LLIntDataInternal::verboseStats)
             dataLogF("loaded [%u]: fast %zu slow %zu id:%u %s\n", index, loaded.count, loaded.slowPathCount, loaded.id, opcodeName);
 
         OpcodeStats& stats = opcodeStats(loaded.id);
@@ -291,7 +293,7 @@ void Data::loadStats()
         stats.slowPathCount = loaded.slowPathCount;
     }
 
-    if (verboseStats) {
+    if (LLIntDataInternal::verboseStats) {
         dataLogF("After loading from %s, ", filename);
         dumpStats();
     }
@@ -330,7 +332,7 @@ void Data::saveStats()
         if (!stats.count && !stats.slowPathCount)
             break; // stats are sorted. If we encountered 0 counts, then there are no more non-zero counts.
 
-        if (verboseStats)
+        if (LLIntDataInternal::verboseStats)
             dataLogF("saved [%u]: fast:%zu slow:%zu id:%u %s\n", index, stats.count, stats.slowPathCount, stats.id, opcodeNames[stats.id]);
 
         fprintf(file, "[%u]: fast:%zu slow:%zu id:%u %s\n", index, stats.count, stats.slowPathCount, stats.id, opcodeNames[stats.id]);
index 8f7117b..3b71a30 100644
@@ -1366,7 +1366,9 @@ EncodedJSValue JSC_HOST_CALL arrayProtoPrivateFuncAppendMemcpy(ExecState* exec)
 
 // -------------------- ArrayPrototype.constructor Watchpoint ------------------
 
+namespace ArrayPrototypeInternal {
 static bool verbose = false;
+}
 
 class ArrayPrototypeAdaptiveInferredPropertyWatchpoint : public AdaptiveInferredPropertyValueWatchpointBase {
 public:
@@ -1388,7 +1390,7 @@ void ArrayPrototype::tryInitializeSpeciesWatchpoint(ExecState* exec)
 
     auto scope = DECLARE_THROW_SCOPE(vm);
 
-    if (verbose)
+    if (ArrayPrototypeInternal::verbose)
         dataLog("Initializing Array species watchpoints for Array.prototype: ", pointerDump(this), " with structure: ", pointerDump(this->structure()), "\nand Array: ", pointerDump(this->globalObject()->arrayConstructor()), " with structure: ", pointerDump(this->globalObject()->arrayConstructor()->structure()), "\n");
     // First we need to make sure that the Array.prototype.constructor property points to Array
     // and that Array[Symbol.species] is the primordial GetterSetter.
@@ -1466,7 +1468,7 @@ void ArrayPrototypeAdaptiveInferredPropertyWatchpoint::handleFire(const FireDeta
 
     StringFireDetail stringDetail(out.toCString().data());
 
-    if (verbose)
+    if (ArrayPrototypeInternal::verbose)
         WTF::dataLog(stringDetail, "\n");
 
     JSGlobalObject* globalObject = m_arrayPrototype->globalObject();
index 5493f8f..fc8b848 100644
@@ -108,41 +108,6 @@ static void appendSourceToError(CallFrame* callFrame, ErrorInstance* exception,
 
 }
 
-class FindFirstCallerFrameWithCodeblockFunctor {
-public:
-    FindFirstCallerFrameWithCodeblockFunctor(CallFrame* startCallFrame)
-        : m_startCallFrame(startCallFrame)
-        , m_foundCallFrame(nullptr)
-        , m_foundStartCallFrame(false)
-        , m_index(0)
-    { }
-
-    StackVisitor::Status operator()(StackVisitor& visitor)
-    {
-        if (!m_foundStartCallFrame && (visitor->callFrame() == m_startCallFrame))
-            m_foundStartCallFrame = true;
-
-        if (m_foundStartCallFrame) {
-            if (visitor->callFrame()->codeBlock()) {
-                m_foundCallFrame = visitor->callFrame();
-                return StackVisitor::Done;
-            }
-            m_index++;
-        }
-
-        return StackVisitor::Continue;
-    }
-
-    CallFrame* foundCallFrame() const { return m_foundCallFrame; }
-    unsigned index() const { return m_index; }
-
-private:
-    CallFrame* m_startCallFrame;
-    CallFrame* m_foundCallFrame;
-    bool m_foundStartCallFrame;
-    unsigned m_index;
-};
-
 void ErrorInstance::finishCreation(ExecState* exec, VM& vm, const String& message, bool useCurrentFrame)
 {
     Base::finishCreation(vm);
index 7ef0008..c920acf 100644
@@ -49,7 +49,10 @@ namespace JSC {
 
 const ClassInfo IntlDateTimeFormat::s_info = { "Object", &Base::s_info, nullptr, nullptr, CREATE_METHOD_TABLE(IntlDateTimeFormat) };
 
+namespace IntlDTFInternal {
 static const char* const relevantExtensionKeys[2] = { "ca", "nu" };
+}
+
 static const size_t indexOfExtensionKeyCa = 0;
 static const size_t indexOfExtensionKeyNu = 1;
 
@@ -189,6 +192,7 @@ static String canonicalizeTimeZoneName(const String& timeZoneName)
     return canonical;
 }
 
+namespace IntlDTFInternal {
 static Vector<String> localeData(const String& locale, size_t keyIndex)
 {
     Vector<String> keyLocaleData;
@@ -319,6 +323,7 @@ static JSObject* toDateTimeOptionsAnyDate(ExecState& exec, JSValue originalOptio
     // 9. Return options.
     return options;
 }
+}
 
 void IntlDateTimeFormat::setFormatsFromPattern(const StringView& pattern)
 {
@@ -433,7 +438,7 @@ void IntlDateTimeFormat::initializeDateTimeFormat(ExecState& exec, JSValue local
     RETURN_IF_EXCEPTION(scope, void());
 
     // 5. Let options be ToDateTimeOptions(options, "any", "date").
-    JSObject* options = toDateTimeOptionsAnyDate(exec, originalOptions);
+    JSObject* options = IntlDTFInternal::toDateTimeOptionsAnyDate(exec, originalOptions);
     // 6. ReturnIfAbrupt(options).
     RETURN_IF_EXCEPTION(scope, void());
 
@@ -450,7 +455,7 @@ void IntlDateTimeFormat::initializeDateTimeFormat(ExecState& exec, JSValue local
     // 11. Let localeData be the value of %DateTimeFormat%.[[localeData]].
     // 12. Let r be ResolveLocale( %DateTimeFormat%.[[availableLocales]], requestedLocales, opt, %DateTimeFormat%.[[relevantExtensionKeys]], localeData).
     const HashSet<String> availableLocales = exec.jsCallee()->globalObject()->intlDateTimeFormatAvailableLocales();
-    HashMap<String, String> resolved = resolveLocale(exec, availableLocales, requestedLocales, localeOpt, relevantExtensionKeys, WTF_ARRAY_LENGTH(relevantExtensionKeys), localeData);
+    HashMap<String, String> resolved = resolveLocale(exec, availableLocales, requestedLocales, localeOpt, IntlDTFInternal::relevantExtensionKeys, WTF_ARRAY_LENGTH(IntlDTFInternal::relevantExtensionKeys), IntlDTFInternal::localeData);
 
     // 13. Set dateTimeFormat.[[locale]] to the value of r.[[locale]].
     m_locale = resolved.get(vm.propertyNames->locale.string());
index 2247479..c2fb834 100644
@@ -88,12 +88,14 @@ void IntlNumberFormat::visitChildren(JSCell* cell, SlotVisitor& visitor)
     visitor.append(thisObject->m_boundFormat);
 }
 
+namespace IntlNFInternal {
 static Vector<String> localeData(const String& locale, size_t keyIndex)
 {
     // 9.1 Internal slots of Service Constructors & 11.2.3 Internal slots (ECMA-402 2.0)
     ASSERT_UNUSED(keyIndex, !keyIndex); // The index of the extension key "nu" in relevantExtensionKeys is 0.
     return numberingSystemsForLocale(locale);
 }
+}
 
 static inline unsigned computeCurrencySortKey(const String& currency)
 {
@@ -192,7 +194,7 @@ void IntlNumberFormat::initializeNumberFormat(ExecState& state, JSValue locales,
     // 11. Let localeData be %NumberFormat%.[[localeData]].
     // 12. Let r be ResolveLocale(%NumberFormat%.[[availableLocales]], requestedLocales, opt, %NumberFormat%.[[relevantExtensionKeys]], localeData).
     auto& availableLocales = state.jsCallee()->globalObject()->intlNumberFormatAvailableLocales();
-    auto result = resolveLocale(state, availableLocales, requestedLocales, opt, relevantExtensionKeys, WTF_ARRAY_LENGTH(relevantExtensionKeys), localeData);
+    auto result = resolveLocale(state, availableLocales, requestedLocales, opt, relevantExtensionKeys, WTF_ARRAY_LENGTH(relevantExtensionKeys), IntlNFInternal::localeData);
 
     // 13. Set numberFormat.[[locale]] to the value of r.[[locale]].
     m_locale = result.get(ASCIILiteral("locale"));
index 0abc530..78c9e64 100644
@@ -32,6 +32,7 @@
 
 namespace JSC {
 
+#undef MAKE_S_INFO
 #define MAKE_S_INFO(type) \
     template<> const ClassInfo JS##type##Constructor::s_info = {"Function", &JS##type##Constructor::Base::s_info, nullptr, nullptr, CREATE_METHOD_TABLE(JS##type##Constructor)}
 
index 66d74e6..b143455 100644
@@ -35,6 +35,7 @@ namespace JSC {
 const ClassInfo JSTypedArrayViewPrototype::s_info = {"Prototype", &JSTypedArrayViewPrototype::Base::s_info, nullptr, nullptr,
     CREATE_METHOD_TABLE(JSTypedArrayViewPrototype)};
 
+#undef MAKE_S_INFO
 #define MAKE_S_INFO(type) \
     template<> const ClassInfo JS##type##Prototype::s_info = {#type "Prototype", &JS##type##Prototype::Base::s_info, nullptr, nullptr, CREATE_METHOD_TABLE(JS##type##Prototype)}
 
index d0a93c4..09189d2 100644
@@ -32,6 +32,7 @@
 
 namespace JSC {
 
+#undef MAKE_S_INFO
 #define MAKE_S_INFO(type) \
     template<> const ClassInfo JS##type##Array::s_info = { \
         #type "Array", &JS##type##Array::Base::s_info, nullptr, nullptr, \
index 3f6f282..ba46b30 100644
@@ -33,14 +33,16 @@ namespace JSC {
 
 const ClassInfo NullGetterFunction::s_info = { "Function", &Base::s_info, nullptr, nullptr, CREATE_METHOD_TABLE(NullGetterFunction) };
 
+namespace NullGetterFunctionInternal {
 static EncodedJSValue JSC_HOST_CALL callReturnUndefined(ExecState*)
 {
     return JSValue::encode(jsUndefined());
 }
+}
 
 CallType NullGetterFunction::getCallData(JSCell*, CallData& callData)
 {
-    callData.native.function = callReturnUndefined;
+    callData.native.function = NullGetterFunctionInternal::callReturnUndefined;
     return CallType::Host;
 }
 
index 50c5b80..6af0d1a 100644
@@ -70,6 +70,7 @@ static bool callerIsStrict(ExecState* exec)
     return iter.callerIsStrict();
 }
 
+namespace NullSetterFunctionInternal {
 static EncodedJSValue JSC_HOST_CALL callReturnUndefined(ExecState* exec)
 {
     VM& vm = exec->vm();
@@ -79,10 +80,11 @@ static EncodedJSValue JSC_HOST_CALL callReturnUndefined(ExecState* exec)
         return JSValue::encode(throwTypeError(exec, scope, ASCIILiteral("Setting a property that has only a getter")));
     return JSValue::encode(jsUndefined());
 }
+}
 
 CallType NullSetterFunction::getCallData(JSCell*, CallData& callData)
 {
-    callData.native.function = callReturnUndefined;
+    callData.native.function = NullSetterFunctionInternal::callReturnUndefined;
     return CallType::Host;
 }
 
index c17b5eb..613cd09 100644
@@ -35,7 +35,7 @@
 #include <wtf/MathExtras.h>
 #include <wtf/dtoa/double-conversion.h>
 
-using namespace WTF::double_conversion;
+using DoubleToStringConverter = WTF::double_conversion::DoubleToStringConverter;
 
 // To avoid conflict with WTF::StringBuilder.
 typedef WTF::double_conversion::StringBuilder DoubleConversionStringBuilder;
index 3dcaeaa..0878046 100644
@@ -34,7 +34,9 @@
 
 namespace JSC {
 
+namespace PromiseDeferredTimerInternal {
 static const bool verbose = false;
+}
 
 PromiseDeferredTimer::PromiseDeferredTimer(VM& vm)
     : Base(&vm)
@@ -55,7 +57,7 @@ void PromiseDeferredTimer::doWork()
         JSPromiseDeferred* ticket;
         Task task;
         std::tie(ticket, task) = m_tasks.takeLast();
-        dataLogLnIf(verbose, "Doing work on promise: ", RawPointer(ticket));
+        dataLogLnIf(PromiseDeferredTimerInternal::verbose, "Doing work on promise: ", RawPointer(ticket));
 
         // We may have already canceled these promises.
         if (m_pendingPromises.contains(ticket)) {
@@ -105,11 +107,11 @@ void PromiseDeferredTimer::addPendingPromise(JSPromiseDeferred* ticket, Vector<S
 
     auto result = m_pendingPromises.add(ticket, Vector<Strong<JSCell>>());
     if (result.isNewEntry) {
-        dataLogLnIf(verbose, "Adding new pending promise: ", RawPointer(ticket));
+        dataLogLnIf(PromiseDeferredTimerInternal::verbose, "Adding new pending promise: ", RawPointer(ticket));
         dependencies.append(Strong<JSCell>(*m_vm, ticket));
         result.iterator->value = WTFMove(dependencies);
     } else {
-        dataLogLnIf(verbose, "Adding new dependencies for promise: ", RawPointer(ticket));
+        dataLogLnIf(PromiseDeferredTimerInternal::verbose, "Adding new dependencies for promise: ", RawPointer(ticket));
         result.iterator->value.appendVector(dependencies);
     }
 
@@ -124,7 +126,7 @@ bool PromiseDeferredTimer::cancelPendingPromise(JSPromiseDeferred* ticket)
     bool result = m_pendingPromises.remove(ticket);
 
     if (result)
-        dataLogLnIf(verbose, "Canceling promise: ", RawPointer(ticket));
+        dataLogLnIf(PromiseDeferredTimerInternal::verbose, "Canceling promise: ", RawPointer(ticket));
 
     return result;
 }
index e315c41..0f85ef8 100644
@@ -32,7 +32,9 @@
 
 namespace JSC {
 
+namespace TypeProfilerInternal {
 static const bool verbose = false;
+}
 
 TypeProfiler::TypeProfiler()
     : m_nextUniqueVariableID(1)
@@ -59,7 +61,7 @@ void TypeProfiler::logTypesForTypeLocation(TypeLocation* location, VM& vm)
 
 void TypeProfiler::insertNewLocation(TypeLocation* location)
 {
-    if (verbose)
+    if (TypeProfilerInternal::verbose)
         dataLogF("Registering location:: divotStart:%u, divotEnd:%u\n", location->m_divotStart, location->m_divotEnd);
 
     if (!m_bucketMap.contains(location->m_sourceID)) {
index 85c49d0..53d73e8 100644
@@ -37,7 +37,9 @@
 
 namespace JSC {
 
+namespace TypeProfilerLogInternal {
 static const bool verbose = false;
+}
 
 void TypeProfilerLog::initializeLog()
 {
@@ -56,7 +58,7 @@ TypeProfilerLog::~TypeProfilerLog()
 void TypeProfilerLog::processLogEntries(const String& reason)
 {
     double before = 0;
-    if (verbose) {
+    if (TypeProfilerLogInternal::verbose) {
         dataLog("Process caller:'", reason, "'");
         before = currentTimeMS();
     }
@@ -95,7 +97,7 @@ void TypeProfilerLog::processLogEntries(const String& reason)
     // pauses and causes the collector to mark the log.
     m_currentLogEntryPtr = m_logStartPtr;
 
-    if (verbose) {
+    if (TypeProfilerLogInternal::verbose) {
         double after = currentTimeMS();
         dataLogF(" Processing the log took: '%f' ms\n", after - before);
     }
index 00ac3c1..ad7d621 100644
@@ -76,7 +76,9 @@ namespace JSC { namespace Wasm {
 using namespace B3;
 
 namespace {
-const bool verbose = false;
+namespace WasmB3IRGeneratorInternal {
+static const bool verbose = false;
+}
 }
 
 class B3IRGenerator {
@@ -1566,9 +1568,9 @@ Expected<std::unique_ptr<InternalFunction>, String> parseAndCompile(CompilationC
     if (!ASSERT_DISABLED)
         validate(procedure, "After parsing:\n");
 
-    dataLogIf(verbose, "Pre SSA: ", procedure);
+    dataLogIf(WasmB3IRGeneratorInternal::verbose, "Pre SSA: ", procedure);
     fixSSA(procedure);
-    dataLogIf(verbose, "Post SSA: ", procedure);
+    dataLogIf(WasmB3IRGeneratorInternal::verbose, "Post SSA: ", procedure);
     
     {
         B3::prepareForGeneration(procedure);
index 50feb73..22edd16 100644
@@ -49,7 +49,9 @@
 
 namespace JSC { namespace Wasm {
 
+namespace WasmBBQPlanInternal {
 static const bool verbose = false;
+}
 
 BBQPlan::BBQPlan(VM* vm, Ref<ModuleInformation> info, AsyncWork work, CompletionTask&& task)
     : Base(vm, WTFMove(info), WTFMove(task))
@@ -86,7 +88,7 @@ const char* BBQPlan::stateString(State state)
 void BBQPlan::moveToState(State state)
 {
     ASSERT(state >= m_state);
-    dataLogLnIf(verbose && state != m_state, "moving to state: ", stateString(state), " from state: ", stateString(m_state));
+    dataLogLnIf(WasmBBQPlanInternal::verbose && state != m_state, "moving to state: ", stateString(state), " from state: ", stateString(m_state));
     m_state = state;
 }
 
@@ -95,9 +97,9 @@ bool BBQPlan::parseAndValidateModule()
     if (m_state != State::Initial)
         return true;
 
-    dataLogLnIf(verbose, "starting validation");
+    dataLogLnIf(WasmBBQPlanInternal::verbose, "starting validation");
     MonotonicTime startTime;
-    if (verbose || Options::reportCompileTimes())
+    if (WasmBBQPlanInternal::verbose || Options::reportCompileTimes())
         startTime = MonotonicTime::now();
 
     {
@@ -111,7 +113,7 @@ bool BBQPlan::parseAndValidateModule()
 
     const auto& functionLocations = m_moduleInformation->functionLocationInBinary;
     for (unsigned functionIndex = 0; functionIndex < functionLocations.size(); ++functionIndex) {
-        dataLogLnIf(verbose, "Processing function starting at: ", functionLocations[functionIndex].start, " and ending at: ", functionLocations[functionIndex].end);
+        dataLogLnIf(WasmBBQPlanInternal::verbose, "Processing function starting at: ", functionLocations[functionIndex].start, " and ending at: ", functionLocations[functionIndex].end);
         const uint8_t* functionStart = m_source + functionLocations[functionIndex].start;
         size_t functionLength = functionLocations[functionIndex].end - functionLocations[functionIndex].start;
         ASSERT(functionLength <= m_sourceLength);
@@ -120,7 +122,7 @@ bool BBQPlan::parseAndValidateModule()
 
         auto validationResult = validateFunction(functionStart, functionLength, signature, m_moduleInformation.get());
         if (!validationResult) {
-            if (verbose) {
+            if (WasmBBQPlanInternal::verbose) {
                 for (unsigned i = 0; i < functionLength; ++i)
                     dataLog(RawPointer(reinterpret_cast<void*>(functionStart[i])), ", ");
                 dataLogLn();
@@ -130,7 +132,7 @@ bool BBQPlan::parseAndValidateModule()
         }
     }
 
-    if (verbose || Options::reportCompileTimes())
+    if (WasmBBQPlanInternal::verbose || Options::reportCompileTimes())
         dataLogLn("Took ", (MonotonicTime::now() - startTime).microseconds(), " us to validate module");
 
     moveToState(State::Validated);
@@ -142,7 +144,7 @@ bool BBQPlan::parseAndValidateModule()
 void BBQPlan::prepare()
 {
     ASSERT(m_state == State::Validated);
-    dataLogLnIf(verbose, "Starting preparation");
+    dataLogLnIf(WasmBBQPlanInternal::verbose, "Starting preparation");
 
     auto tryReserveCapacity = [this] (auto& vector, size_t size, const char* what) {
         if (UNLIKELY(!vector.tryReserveCapacity(size))) {
@@ -174,7 +176,7 @@ void BBQPlan::prepare()
         if (import->kind != ExternalKind::Function)
             continue;
         unsigned importFunctionIndex = m_wasmToWasmExitStubs.size();
-        dataLogLnIf(verbose, "Processing import function number ", importFunctionIndex, ": ", makeString(import->module), ": ", makeString(import->field));
+        dataLogLnIf(WasmBBQPlanInternal::verbose, "Processing import function number ", importFunctionIndex, ": ", makeString(import->module), ": ", makeString(import->field));
         auto binding = wasmToWasm(importFunctionIndex);
         if (UNLIKELY(!binding)) {
             switch (binding.error()) {
@@ -230,7 +232,7 @@ public:
 void BBQPlan::compileFunctions(CompilationEffort effort)
 {
     ASSERT(m_state >= State::Prepared);
-    dataLogLnIf(verbose, "Starting compilation");
+    dataLogLnIf(WasmBBQPlanInternal::verbose, "Starting compilation");
 
     if (!hasWork())
         return;
@@ -294,7 +296,7 @@ void BBQPlan::compileFunctions(CompilationEffort effort)
 void BBQPlan::complete(const AbstractLocker& locker)
 {
     ASSERT(m_state != State::Compiled || m_currentIndex >= m_moduleInformation->functionLocationInBinary.size());
-    dataLogLnIf(verbose, "Starting Completion");
+    dataLogLnIf(WasmBBQPlanInternal::verbose, "Starting Completion");
 
     if (!failed() && m_state == State::Compiled) {
         for (uint32_t functionIndex = 0; functionIndex < m_moduleInformation->functionLocationInBinary.size(); functionIndex++) {
index e86fd47..2b1d828 100644
 namespace JSC { namespace Wasm {
 
 namespace {
+namespace WasmFaultSignalHandlerInternal {
 static const bool verbose = false;
 }
+}
 
 static StaticLock codeLocationsLock;
 static LazyNeverDestroyed<HashSet<std::tuple<void*, void*>>> codeLocations; // (start, end)
@@ -55,33 +57,33 @@ static bool fastHandlerInstalled { false };
 static SignalAction trapHandler(Signal, SigInfo& sigInfo, PlatformRegisters& context)
 {
     void* faultingInstruction = MachineContext::instructionPointer(context);
-    dataLogLnIf(verbose, "starting handler for fault at: ", RawPointer(faultingInstruction));
+    dataLogLnIf(WasmFaultSignalHandlerInternal::verbose, "starting handler for fault at: ", RawPointer(faultingInstruction));
 
-    dataLogLnIf(verbose, "JIT memory start: ", RawPointer(reinterpret_cast<void*>(startOfFixedExecutableMemoryPool)), " end: ", RawPointer(reinterpret_cast<void*>(endOfFixedExecutableMemoryPool)));
+    dataLogLnIf(WasmFaultSignalHandlerInternal::verbose, "JIT memory start: ", RawPointer(reinterpret_cast<void*>(startOfFixedExecutableMemoryPool)), " end: ", RawPointer(reinterpret_cast<void*>(endOfFixedExecutableMemoryPool)));
     // First we need to make sure we are in JIT code before we can acquire any locks. Otherwise,
     // we might have crashed in code that is already holding one of the locks we want to acquire.
     if (isJITPC(faultingInstruction)) {
         bool faultedInActiveFastMemory = false;
         {
             void* faultingAddress = sigInfo.faultingAddress;
-            dataLogLnIf(verbose, "checking faulting address: ", RawPointer(faultingAddress), " is in an active fast memory");
+            dataLogLnIf(WasmFaultSignalHandlerInternal::verbose, "checking faulting address: ", RawPointer(faultingAddress), " is in an active fast memory");
             faultedInActiveFastMemory = Wasm::Memory::addressIsInActiveFastMemory(faultingAddress);
         }
         if (faultedInActiveFastMemory) {
-            dataLogLnIf(verbose, "found active fast memory for faulting address");
+            dataLogLnIf(WasmFaultSignalHandlerInternal::verbose, "found active fast memory for faulting address");
             LockHolder locker(codeLocationsLock);
             for (auto range : codeLocations.get()) {
                 void* start;
                 void* end;
                 std::tie(start, end) = range;
-                dataLogLnIf(verbose, "function start: ", RawPointer(start), " end: ", RawPointer(end));
+                dataLogLnIf(WasmFaultSignalHandlerInternal::verbose, "function start: ", RawPointer(start), " end: ", RawPointer(end));
                 if (start <= faultingInstruction && faultingInstruction < end) {
-                    dataLogLnIf(verbose, "found match");
+                    dataLogLnIf(WasmFaultSignalHandlerInternal::verbose, "found match");
                     MacroAssemblerCodeRef exceptionStub = Thunks::singleton().existingStub(throwExceptionFromWasmThunkGenerator);
                     // If for whatever reason we don't have a stub then we should just treat this like a regular crash.
                     if (!exceptionStub)
                         break;
-                    dataLogLnIf(verbose, "found stub: ", RawPointer(exceptionStub.code().executableAddress()));
+                    dataLogLnIf(WasmFaultSignalHandlerInternal::verbose, "found stub: ", RawPointer(exceptionStub.code().executableAddress()));
                     MachineContext::argumentPointer<1>(context) = reinterpret_cast<void*>(ExceptionType::OutOfBoundsMemoryAccess);
                     MachineContext::instructionPointer(context) = exceptionStub.code().executableAddress();
                     return SignalAction::Handled;
index 5918f92..74cf653 100644 (file)
@@ -47,7 +47,9 @@
 
 namespace JSC { namespace Wasm {
 
+namespace WasmOMGPlanInternal {
 static const bool verbose = false;
+}
 
 OMGPlan::OMGPlan(Ref<Module> module, uint32_t functionIndex, MemoryMode mode, CompletionTask&& task)
     : Base(nullptr, makeRef(const_cast<ModuleInformation&>(module->moduleInformation())), WTFMove(task))
@@ -58,7 +60,7 @@ OMGPlan::OMGPlan(Ref<Module> module, uint32_t functionIndex, MemoryMode mode, Co
     setMode(mode);
     ASSERT(m_codeBlock->runnable());
     ASSERT(m_codeBlock.ptr() == m_module->codeBlockFor(m_mode));
-    dataLogLnIf(verbose, "Starting OMG plan for ", functionIndex, " of module: ", RawPointer(&m_module.get()));
+    dataLogLnIf(WasmOMGPlanInternal::verbose, "Starting OMG plan for ", functionIndex, " of module: ", RawPointer(&m_module.get()));
 }
 
 void OMGPlan::work(CompilationEffort)
@@ -138,9 +140,9 @@ void OMGPlan::work(CompilationEffort)
 
         auto repatchCalls = [&] (const Vector<UnlinkedWasmToWasmCall>&  callsites) {
             for (auto& call : callsites) {
-                dataLogLnIf(verbose, "Considering repatching call at: ", RawPointer(call.callLocation.dataLocation()), " that targets ", call.functionIndexSpace);
+                dataLogLnIf(WasmOMGPlanInternal::verbose, "Considering repatching call at: ", RawPointer(call.callLocation.dataLocation()), " that targets ", call.functionIndexSpace);
                 if (call.functionIndexSpace == functionIndexSpace) {
-                    dataLogLnIf(verbose, "Repatching call at: ", RawPointer(call.callLocation.dataLocation()), " to ", RawPointer(entrypoint));
+                    dataLogLnIf(WasmOMGPlanInternal::verbose, "Repatching call at: ", RawPointer(call.callLocation.dataLocation()), " to ", RawPointer(entrypoint));
                     MacroAssembler::repatchNearCall(call.callLocation, CodeLocationLabel(entrypoint));
                 }
             }
@@ -156,7 +158,7 @@ void OMGPlan::work(CompilationEffort)
         repatchCalls(unlinkedCalls);
     }
 
-    dataLogLnIf(verbose, "Finished with tier up count at: ", m_codeBlock->tierUpCount(m_functionIndex).count());
+    dataLogLnIf(WasmOMGPlanInternal::verbose, "Finished with tier up count at: ", m_codeBlock->tierUpCount(m_functionIndex).count());
     complete(holdLock(m_lock));
 }
 
index 1483ac7..ba27cf9 100644 (file)
@@ -47,7 +47,9 @@
 
 namespace JSC { namespace Wasm {
 
+namespace WasmPlanInternal {
 static const bool verbose = false;
+}
 
 Plan::Plan(VM* vm, Ref<ModuleInformation> info, CompletionTask&& task)
     : m_moduleInformation(WTFMove(info))
@@ -128,7 +130,7 @@ bool Plan::tryRemoveVMAndCancelIfLast(VM& vm)
 
 void Plan::fail(const AbstractLocker& locker, String&& errorMessage)
 {
-    dataLogLnIf(verbose, "failing with message: ", errorMessage);
+    dataLogLnIf(WasmPlanInternal::verbose, "failing with message: ", errorMessage);
     m_errorMessage = WTFMove(errorMessage);
     complete(locker);
 }
index b2298d6..8b600ed 100644 (file)
@@ -36,7 +36,9 @@
 namespace JSC { namespace Wasm {
 
 namespace {
-const bool verbose = false;
+namespace WasmSignatureInternal {
+static const bool verbose = false;
+}
 }
 
 String Signature::toString() const
@@ -103,7 +105,7 @@ std::pair<SignatureIndex, Ref<Signature>> SignatureInformation::adopt(Ref<Signat
         ++info.m_nextIndex;
         RELEASE_ASSERT(info.m_nextIndex > nextValue); // crash on overflow.
         ASSERT(nextValue == addResult.iterator->value);
-        if (verbose)
+        if (WasmSignatureInternal::verbose)
             dataLogLn("Adopt new signature ", signature.get(), " with index ", addResult.iterator->value, " hash: ", signature->hash());
 
         auto addResult = info.m_indexMap.add(nextValue, signature.copyRef());
@@ -111,7 +113,7 @@ std::pair<SignatureIndex, Ref<Signature>> SignatureInformation::adopt(Ref<Signat
         ASSERT(info.m_indexMap.size() == info.m_signatureMap.size());
         return std::make_pair(nextValue, WTFMove(signature));
     }
-    if (verbose)
+    if (WasmSignatureInternal::verbose)
         dataLogLn("Existing signature ", signature.get(), " with index ", addResult.iterator->value, " hash: ", signature->hash());
     ASSERT(addResult.iterator->value != Signature::invalidIndex);
     ASSERT(info.m_indexMap.contains(addResult.iterator->value));
index 76a87ac..9c5b684 100644 (file)
@@ -33,7 +33,9 @@
 
 namespace JSC { namespace Wasm {
 
+namespace WasmWorklistInternal {
 static const bool verbose = false;
+}
 
 const char* Worklist::priorityString(Priority priority)
 {
@@ -144,7 +146,7 @@ void Worklist::enqueue(Ref<Plan> plan)
             ASSERT_UNUSED(element, element.plan.get() != &plan.get());
     }
 
-    dataLogLnIf(verbose, "Enqueuing plan");
+    dataLogLnIf(WasmWorklistInternal::verbose, "Enqueuing plan");
     m_queue.enqueue({ Priority::Preparation, nextTicket(),  WTFMove(plan) });
     m_planEnqueued->notifyOne(locker);
 }
index 6677936..e5af620 100644 (file)
@@ -1,3 +1,37 @@
+2017-09-12  Keith Miller  <keith_miller@apple.com>
+
+        Do unified source builds for JSC
+        https://bugs.webkit.org/show_bug.cgi?id=176076
+
+        Reviewed by Geoffrey Garen.
+
+        This patch adds a script that automatically bundles source
+        files; it is currently used only by the CMake build. It is
+        important that we use the same script to generate the bundles
+        for the CMake build as for the Xcode build. If we didn't, it's
+        likely that build errors would occur in only one build system.
+        On the same note, we also need to be careful not to bundle
+        platform-specific source files with platform-independent ones.
+        There are a couple of things the script does not currently
+        handle, but they are not essential for the CMake build. First,
+        it does not handle the max bundle size restrictions that the
+        Xcode build will require. It also does not handle C files.
+
+        The unified source generator script works by collecting groups
+        of up to 8 files from the same directory. We don't bundle files
+        from across directories since I didn't see a speedup from doing
+        so. Additionally, splitting at the directory boundary makes it
+        less likely that adding a new file will force a "clean" build,
+        which would otherwise happen because the new file would shift
+        every subsequent file into the next unified source bundle.
+
+        Using unified sources appears to be a roughly 3.5x build time
+        speedup for clean builds on my MBP and appears to have a
+        negligible effect on incremental builds.
+
+        * generate-unified-source-bundles.rb: Added.
+        * wtf/Assertions.h:
+
 2017-09-12  Joseph Pecoraro  <pecoraro@apple.com>
 
         QualifiedName::init should assume AtomicStrings::init was already called
diff --git a/Source/WTF/generate-unified-source-bundles.rb b/Source/WTF/generate-unified-source-bundles.rb
new file mode 100644 (file)
index 0000000..4ffedea
--- /dev/null
@@ -0,0 +1,141 @@
+# Copyright (C) 2017 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1. Redistributions of source code must retain the above copyright
+#    notice, this list of conditions and the following disclaimer.
+# 2. Redistributions in binary form must reproduce the above copyright
+#    notice, this list of conditions and the following disclaimer in the
+#    documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+# THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+# THE POSSIBILITY OF SUCH DAMAGE.
+
+require 'fileutils'
+require 'pathname'
+require 'getoptlong'
+
+SCRIPT_NAME = File.basename($0)
+
+def usage
+    puts "usage: #{SCRIPT_NAME} [options] -p <destination-path> <source-file> [<source-file>...]"
+    puts "--help                          (-h) Print this message"
+    puts "--derived-sources-path          (-p) Path to the directory where the unified source files should be placed. This argument is required."
+    puts "--verbose                       (-v) Adds extra logging to stderr."
+    exit 1
+end
+
+MAX_BUNDLE_SIZE = 8
+$derivedSourcesPath = nil
+$verbose = false
+# FIXME: Use these when Xcode uses unified sources.
+$maxCppBundleCount = 100000
+$maxObjCBundleCount = 100000
+
+GetoptLong.new(['--help', '-h', GetoptLong::NO_ARGUMENT],
+               ['--derived-sources-path', '-p', GetoptLong::REQUIRED_ARGUMENT],
+               ['--verbose', '-v', GetoptLong::NO_ARGUMENT],
+               ['--max-cpp-bundle-count', GetoptLong::REQUIRED_ARGUMENT],
+               ['--max-obj-c-bundle-count', GetoptLong::REQUIRED_ARGUMENT]).each {
+    | opt, arg |
+    case opt
+    when '--help'
+        usage
+    when '--derived-sources-path'
+        $derivedSourcesPath = Pathname.new(arg)
+    when '--verbose'
+        $verbose = true
+    when '--max-cpp-bundle-count'
+        $maxCppBundleCount = arg
+    when '--max-obj-c-bundle-count'
+        $maxObjCBundleCount = arg
+    end
+}
+
+usage if !$derivedSourcesPath || ARGV.empty?
+
+def log(text)
+    $stderr.puts text if $verbose
+end
+
+$generatedSources = []
+
+class BundleManager
+    attr_reader :bundleCount, :extension, :fileCount, :currentBundleText
+
+    def initialize(extension)
+        @extension = extension
+        @fileCount = 0
+        @bundleCount = 0
+        @currentBundleText = ""
+    end
+
+    def flush
+        # No point in writing an empty bundle file
+        return if @currentBundleText == ""
+
+        @bundleCount += 1
+        bundleFile = "UnifiedSource#{@bundleCount}#{extension}"
+        bundleFile = $derivedSourcesPath + bundleFile
+        log("writing bundle #{bundleFile} with: \n#{@currentBundleText}")
+        IO::write(bundleFile, @currentBundleText)
+        $generatedSources << bundleFile
+
+        @currentBundleText = ""
+        @fileCount = 0
+    end
+
+    def addFile(file)
+        raise "wrong extension: #{file.extname} expected #{@extension}" unless file.extname == @extension
+        if @fileCount == MAX_BUNDLE_SIZE
+            log("flushing because new bundle is full #{@fileCount}")
+            flush
+        end
+        @currentBundleText += "#include \"#{file}\"\n"
+        @fileCount += 1
+    end
+end
+
+$bundleManagers = {
+    ".cpp" => BundleManager.new(".cpp"),
+    ".mm" => BundleManager.new(".mm")
+}
+
+$currentDirectory = nil
+
+ARGV.sort.each {
+    | file |
+
+    path = Pathname.new(file)
+    if ($currentDirectory != path.dirname)
+        log("flushing because new dirname old: #{$currentDirectory}, new: #{path.dirname}")
+        $bundleManagers.each_value { | x | x.flush }
+        $currentDirectory = path.dirname
+    end
+
+    bundle = $bundleManagers[path.extname]
+    if !bundle
+        log("No bundle for #{path.extname} files building #{path} standalone")
+        $generatedSources << path
+    else
+        bundle.addFile(path)
+    end
+}
+
+$bundleManagers.each_value { | x | x.flush }
+
+# We use stdout to report our unified source list to CMake.
+# Add a trailing semicolon since CMake seems to dislike not having it.
+# Also, make sure we use print instead of puts because CMake will think the \n is a source file and fail to build.
+print($generatedSources.join(";") + ";")
index 3b04379..179f518 100644 (file)
@@ -535,18 +535,22 @@ inline void compilerFenceForCrash()
 #endif
 }
 
-#ifndef CRASH_WITH_SECURITY_IMPLICATION_AND_INFO
+#ifndef CRASH_WITH_INFO
 // This is useful if you are going to stuff data into registers before crashing. Like the crashWithInfo functions below...
 // GCC doesn't like the ##__VA_ARGS__ here since this macro is called from another macro so we just CRASH instead there.
 #if COMPILER(CLANG) || COMPILER(MSVC)
-#define CRASH_WITH_SECURITY_IMPLICATION_AND_INFO(...) do { \
+#define CRASH_WITH_INFO(...) do { \
         WTF::isIntegralType(__VA_ARGS__); \
         compilerFenceForCrash(); \
         WTFCrashWithInfo(__LINE__, __FILE__, WTF_PRETTY_FUNCTION, __COUNTER__, ##__VA_ARGS__); \
     } while (false)
 #else
-#define CRASH_WITH_SECURITY_IMPLICATION_AND_INFO(...) CRASH()
+#define CRASH_WITH_INFO(...) CRASH()
 #endif
+#endif // CRASH_WITH_INFO
+
+#ifndef CRASH_WITH_SECURITY_IMPLICATION_AND_INFO
+#define CRASH_WITH_SECURITY_IMPLICATION_AND_INFO CRASH_WITH_INFO
 #endif // CRASH_WITH_SECURITY_IMPLICATION_AND_INFO
 
 #endif // __cplusplus