The GC should be in a thread
author: fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 2 Nov 2016 22:01:04 +0000 (22:01 +0000)
committer: fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 2 Nov 2016 22:01:04 +0000 (22:01 +0000)
https://bugs.webkit.org/show_bug.cgi?id=163562

Reviewed by Geoffrey Garen and Andreas Kling.
Source/JavaScriptCore:

In a concurrent GC, the work of collecting happens on a separate thread. This patch
implements this, and schedules the thread the way that a concurrent GC thread would be
scheduled. But, the GC isn't actually concurrent yet because it calls stopTheWorld() before
doing anything and calls resumeTheWorld() after it's done with everything. The next step will
be to make it really concurrent by basically calling stopTheWorld()/resumeTheWorld() around
bounded snippets of work while making most of the work happen with the world running. Our GC
will probably always have stop-the-world phases because the semantics of JSC weak references
call for it.

This implements concurrent GC scheduling. This means that there is no longer a
Heap::collect() API. Instead, you can call collectAsync() which makes sure that a GC is
scheduled (it will do nothing if one is scheduled or ongoing) or you can call collectSync()
to schedule a GC and wait for it to happen. I made our debugging stuff call collectSync().
It should be a goal to never call collectSync() except for debugging or benchmark harness
hacks.

The collector thread is an AutomaticThread, so it won't linger when not in use. It works on
a ticket-based system, like you would see at the DMV. A ticket is a 64-bit integer. There are
two ticket counters: last granted and last served. When you request a collection, last
granted is incremented and its new value given to you. When a collection completes, last
served is incremented. collectSync() waits until last served catches up to what last granted
had been at the time you requested a GC. This means that if you request a sync GC in the
middle of an async GC, you will wait for that async GC to finish and then you will request
and wait for your sync GC.
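The ticket scheme above can be sketched in a few lines. This is an illustrative model only: the names (TicketedCollector, requestCollection, and so on) and the mutex/condition-variable implementation are assumptions for the sketch, not the actual Heap code, which uses AutomaticThread and lock-free fast paths.

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>

// Sketch of the last-granted/last-served ticket scheme.
class TicketedCollector {
public:
    // Requesting a collection grants a new ticket and returns it.
    uint64_t requestCollection()
    {
        std::lock_guard<std::mutex> lock(m_lock);
        // A real implementation would also wake the collector thread here.
        return ++m_lastGranted;
    }

    // collectSync() waits until the granted ticket has been served.
    void waitForCollection(uint64_t ticket)
    {
        std::unique_lock<std::mutex> lock(m_lock);
        m_condition.wait(lock, [&] { return m_lastServed >= ticket; });
    }

    // The collector thread calls this when a collection completes.
    void didFinishCollection()
    {
        std::lock_guard<std::mutex> lock(m_lock);
        ++m_lastServed;
        m_condition.notify_all();
    }

private:
    std::mutex m_lock;
    std::condition_variable m_condition;
    uint64_t m_lastGranted { 0 };
    uint64_t m_lastServed { 0 };
};
```

Note how a sync request made in the middle of an async collection naturally waits for that collection first: the sync requester's ticket is newer, so it can only be served after the ongoing collection bumps last served.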

The synchronization between the collector thread and the main threads is complex. The
collector thread needs to be able to ask the main thread to stop. It needs to be able to do
some post-GC clean-up, like the synchronous CodeBlock and LargeAllocation sweeps, on the main
thread. The collector needs to be able to ask the main thread to execute a cross-modifying
code fence before running any JIT code, since the GC might aid the JIT worklist and run JIT
finalization. It's possible for the GC to want the main thread to run something at the same
time that the main thread wants to wait for the GC. The main thread needs to be able to run
non-JSC stuff without causing the GC to completely stall. The main thread needs to be able
to query its own state (is there a request to stop?) and change it (running JSC versus not)
quickly, since this may happen on hot paths. This kind of intertwined system of requests,
notifications, and state changes requires a combination of lock-free algorithms and waiting.
So, this is all implemented using an Atomic<unsigned>, Heap::m_worldState, which has bits to
represent things being requested by the collector and the heap access state of the mutator. I
am borrowing a lot of terms that I've seen in other VMs that I've worked on. Here's what they
mean:

- Stop the world: make sure that either the mutator is not running, or that it's not running
  code that could mess with the heap.

- Heap access: the mutator is said to have heap access if it could mess with the heap.

If you stop the world and the mutator doesn't have heap access, all you're doing is making
sure that it will block when it tries to acquire heap access. This means that our GC is
already fully concurrent in cases where the GC is requested while the mutator has no heap
access. This probably won't happen, but if it did then it should just work. Usually, stopping
the world means that we state our shouldStop request with m_worldState, and a future call
to Heap::stopIfNecessary() will go to the slow path and stop. The act of stopping or waiting to
acquire heap access is managed by using ParkingLot API directly on m_worldState. This works
out great because it would be very awkward to get the same functionality using locks and
condition variables, since we want stopIfNecessary/acquireAccess/requestAccess fast paths
that are single atomic instructions (load/CAS/CAS, respectively). The mutator will call these
things frequently. Currently we have Heap::stopIfNecessary() polling on every allocator slow
path, but we may want to make it even more frequent than that.
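The single-atomic-instruction fast paths described above can be sketched roughly as follows. The bit names and layout here are assumptions for illustration, not JSC's actual m_worldState encoding, and the slow paths (which would park on the state word via ParkingLot) are elided.

```cpp
#include <atomic>

// Illustrative bit layout for an m_worldState-style atomic word.
constexpr unsigned hasAccessBit = 1u << 0;  // mutator currently has heap access
constexpr unsigned shouldStopBit = 1u << 1; // collector asks the mutator to stop

struct WorldState {
    std::atomic<unsigned> state { 0 };

    // Fast path: a single load. If the collector has requested a stop,
    // the caller would fall through to a slow path that parks.
    bool stopIfNecessaryFastPath()
    {
        return !(state.load(std::memory_order_acquire) & shouldStopBit);
    }

    // Fast path: a single CAS that flips the access bit on. Returns false
    // to send the caller to a slow path (e.g. waiting out a stop request).
    bool tryAcquireAccess()
    {
        unsigned oldState = state.load(std::memory_order_relaxed);
        if (oldState & (hasAccessBit | shouldStopBit))
            return false;
        return state.compare_exchange_strong(
            oldState, oldState | hasAccessBit, std::memory_order_acquire);
    }

    // Fast path: a single CAS that flips the access bit off. If a stop is
    // requested, the slow path would notify the collector before releasing.
    bool tryReleaseAccess()
    {
        unsigned oldState = state.load(std::memory_order_relaxed);
        if (!(oldState & hasAccessBit) || (oldState & shouldStopBit))
            return false;
        return state.compare_exchange_strong(
            oldState, oldState & ~hasAccessBit, std::memory_order_release);
    }
};
```

The point of packing everything into one word is that the mutator's hot-path checks stay a load or a CAS, while the collector can still publish requests and park/unpark waiters on the same address.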

Currently only JSC API clients benefit from the heap access optimization. The DOM forces us
to assume that heap access is permanently on, since DOM manipulation doesn't always hold the
JSLock. We could still allow the GC to proceed when the runloop is idle by having the GC put
a task on the runloop that just calls stopIfNecessary().

This is perf neutral. The only behavior change that clients ought to observe is that marking
and the weak fixpoint happen on a separate thread. Marking was already parallel so it already
handled multiple threads, but now it _never_ runs on the main thread. The weak fixpoint
needed some help to run on another thread, mostly because some code in IndexedDB was
using thread specifics in the weak fixpoint.

* API/JSBase.cpp:
(JSSynchronousEdenCollectForDebugging):
* API/JSManagedValue.mm:
(-[JSManagedValue initWithValue:]):
* heap/EdenGCActivityCallback.cpp:
(JSC::EdenGCActivityCallback::doCollection):
* heap/FullGCActivityCallback.cpp:
(JSC::FullGCActivityCallback::doCollection):
* heap/Heap.cpp:
(JSC::Heap::Thread::Thread):
(JSC::Heap::Heap):
(JSC::Heap::lastChanceToFinalize):
(JSC::Heap::markRoots):
(JSC::Heap::gatherStackRoots):
(JSC::Heap::deleteUnmarkedCompiledCode):
(JSC::Heap::collectAllGarbage):
(JSC::Heap::collectAsync):
(JSC::Heap::collectSync):
(JSC::Heap::shouldCollectInThread):
(JSC::Heap::collectInThread):
(JSC::Heap::stopTheWorld):
(JSC::Heap::resumeTheWorld):
(JSC::Heap::stopIfNecessarySlow):
(JSC::Heap::acquireAccessSlow):
(JSC::Heap::releaseAccessSlow):
(JSC::Heap::handleDidJIT):
(JSC::Heap::handleNeedFinalize):
(JSC::Heap::setDidJIT):
(JSC::Heap::setNeedFinalize):
(JSC::Heap::waitWhileNeedFinalize):
(JSC::Heap::finalize):
(JSC::Heap::requestCollection):
(JSC::Heap::waitForCollection):
(JSC::Heap::didFinishCollection):
(JSC::Heap::canCollect):
(JSC::Heap::shouldCollectHeuristic):
(JSC::Heap::shouldCollect):
(JSC::Heap::collectIfNecessaryOrDefer):
(JSC::Heap::collectAccordingToDeferGCProbability):
(JSC::Heap::collect): Deleted.
(JSC::Heap::collectWithoutAnySweep): Deleted.
(JSC::Heap::collectImpl): Deleted.
* heap/Heap.h:
(JSC::Heap::ReleaseAccessScope::ReleaseAccessScope):
(JSC::Heap::ReleaseAccessScope::~ReleaseAccessScope):
* heap/HeapInlines.h:
(JSC::Heap::acquireAccess):
(JSC::Heap::releaseAccess):
(JSC::Heap::stopIfNecessary):
* heap/MachineStackMarker.cpp:
(JSC::MachineThreads::gatherConservativeRoots):
(JSC::MachineThreads::gatherFromCurrentThread): Deleted.
* heap/MachineStackMarker.h:
* jit/JITWorklist.cpp:
(JSC::JITWorklist::completeAllForVM):
* jit/JITWorklist.h:
* jsc.cpp:
(functionFullGC):
(functionEdenGC):
* runtime/InitializeThreading.cpp:
(JSC::initializeThreading):
* runtime/JSLock.cpp:
(JSC::JSLock::didAcquireLock):
(JSC::JSLock::unlock):
(JSC::JSLock::willReleaseLock):
* tools/JSDollarVMPrototype.cpp:
(JSC::JSDollarVMPrototype::edenGC):

Source/WebCore:

No new tests because existing tests cover this.

We now need to be more careful about using JSLock. This fixes some places that were not
holding it. New assertions in the GC are more likely to catch this than before.

* bindings/js/WorkerScriptController.cpp:
(WebCore::WorkerScriptController::WorkerScriptController):

Source/WTF:

This fixes some bugs and adds a few features.

* wtf/Atomics.h: The GC may do work on behalf of the JIT. If it does, the main thread needs to execute a cross-modifying code fence. This is cpuid on x86 and isb on ARM (it would have been isync on PPC; isb is the ARM equivalent).
(WTF::arm_isb):
(WTF::crossModifyingCodeFence):
(WTF::x86_ortop):
(WTF::x86_cpuid):
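A cross-modifying code fence of this kind could be sketched as below. This is a hedged illustration, not WTF's actual implementation: the function name is hypothetical, the inline assembly assumes GCC/Clang syntax, and the fallback fence is a conservative stand-in for other architectures.

```cpp
#include <atomic>

// Hypothetical sketch of a cross-modifying code fence: cpuid is a
// serializing instruction on x86; isb flushes the pipeline on ARM.
inline void crossModifyingCodeFenceSketch()
{
#if defined(__i386__) || defined(__x86_64__)
    unsigned eax = 0, ebx, ecx, edx;
    asm volatile("cpuid"
        : "+a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
        :
        : "memory"); // cpuid serializes instruction execution
#elif defined(__arm__) || defined(__aarch64__)
    asm volatile("isb" ::: "memory"); // instruction synchronization barrier
#else
    std::atomic_thread_fence(std::memory_order_seq_cst); // conservative fallback
#endif
}
```

The mutator would run something like this before executing JIT code after the collector has done JIT finalization on its behalf, so that the instruction stream it fetches reflects the newly written code.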
* wtf/AutomaticThread.cpp: I accidentally had AutomaticThreadCondition inherit from ThreadSafeRefCounted<AutomaticThread> [sic]. This never crashed before because all of our prior AutomaticThreadConditions were immortal.
(WTF::AutomaticThread::AutomaticThread):
(WTF::AutomaticThread::~AutomaticThread):
(WTF::AutomaticThread::start):
* wtf/AutomaticThread.h:
* wtf/MainThread.cpp: Need to allow initializeGCThreads() to be called separately because it's now more than just a debugging thing.
(WTF::initializeGCThreads):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@208306 268f45cc-cd09-0410-ab3c-d52691b4dbfc

62 files changed:
Source/JavaScriptCore/API/JSBase.cpp
Source/JavaScriptCore/API/JSManagedValue.mm
Source/JavaScriptCore/CMakeLists.txt
Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/JavaScriptCore.xcodeproj/project.pbxproj
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/dfg/DFGDriver.cpp
Source/JavaScriptCore/dfg/DFGWorklist.cpp
Source/JavaScriptCore/dfg/DFGWorklist.h
Source/JavaScriptCore/ftl/FTLCompile.cpp
Source/JavaScriptCore/heap/EdenGCActivityCallback.cpp
Source/JavaScriptCore/heap/FullGCActivityCallback.cpp
Source/JavaScriptCore/heap/GCActivityCallback.h
Source/JavaScriptCore/heap/Heap.cpp
Source/JavaScriptCore/heap/Heap.h
Source/JavaScriptCore/heap/HeapInlines.h
Source/JavaScriptCore/heap/HeapTimer.cpp
Source/JavaScriptCore/heap/HeapTimer.h
Source/JavaScriptCore/heap/IncrementalSweeper.cpp
Source/JavaScriptCore/heap/IncrementalSweeper.h
Source/JavaScriptCore/heap/MachineStackMarker.cpp
Source/JavaScriptCore/heap/MachineStackMarker.h
Source/JavaScriptCore/heap/ReleaseHeapAccessScope.h [moved from Source/WebCore/platform/ios/WebSafeIncrementalSweeperIOS.h with 55% similarity]
Source/JavaScriptCore/heap/StopIfNecessaryTimer.cpp [new file with mode: 0644]
Source/JavaScriptCore/heap/StopIfNecessaryTimer.h [new file with mode: 0644]
Source/JavaScriptCore/inspector/agents/InspectorDebuggerAgent.cpp
Source/JavaScriptCore/jit/JITWorklist.cpp
Source/JavaScriptCore/jit/JITWorklist.h
Source/JavaScriptCore/jsc.cpp
Source/JavaScriptCore/runtime/AtomicsObject.cpp
Source/JavaScriptCore/runtime/InitializeThreading.cpp
Source/JavaScriptCore/runtime/JSLock.cpp
Source/JavaScriptCore/runtime/JSLock.h
Source/JavaScriptCore/runtime/VM.cpp
Source/JavaScriptCore/tools/JSDollarVMPrototype.cpp
Source/WTF/ChangeLog
Source/WTF/wtf/Atomics.h
Source/WTF/wtf/AutomaticThread.cpp
Source/WTF/wtf/AutomaticThread.h
Source/WTF/wtf/CompilationThread.cpp
Source/WTF/wtf/MainThread.cpp
Source/WTF/wtf/MainThread.h
Source/WTF/wtf/Optional.h
Source/WTF/wtf/ParkingLot.cpp
Source/WTF/wtf/ThreadSpecific.h
Source/WTF/wtf/WordLock.cpp
Source/WTF/wtf/text/AtomicStringImpl.cpp
Source/WebCore/ChangeLog
Source/WebCore/Modules/indexeddb/IDBDatabase.cpp
Source/WebCore/Modules/indexeddb/IDBDatabase.h
Source/WebCore/Modules/indexeddb/IDBRequest.cpp
Source/WebCore/Modules/indexeddb/IDBTransaction.cpp
Source/WebCore/WebCore.xcodeproj/project.pbxproj
Source/WebCore/bindings/js/JSDOMWindowBase.cpp
Source/WebCore/bindings/js/WorkerScriptController.cpp
Source/WebCore/bindings/js/WorkerScriptController.h
Source/WebCore/dom/EventTarget.cpp
Source/WebCore/platform/ios/WebSafeGCActivityCallbackIOS.h [deleted file]
Source/WebCore/testing/Internals.cpp
Source/WebCore/testing/Internals.h
Source/WebCore/testing/Internals.idl
Source/WebCore/workers/WorkerRunLoop.cpp

index cbad507..b0e74b3 100644 (file)
@@ -165,7 +165,7 @@ void JSSynchronousEdenCollectForDebugging(JSContextRef ctx)
 
     ExecState* exec = toJS(ctx);
     JSLockHolder locker(exec);
-    exec->vm().heap.collect(CollectionScope::Eden);
+    exec->vm().heap.collectSync(CollectionScope::Eden);
 }
 
 void JSDisableGCTimer(void)
index e788b5c..038a682 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
index b452596..dcd5459 100644 (file)
@@ -486,6 +486,7 @@ set(JavaScriptCore_SOURCES
     heap/MarkedSpace.cpp
     heap/MutatorState.cpp
     heap/SlotVisitor.cpp
+    heap/StopIfNecessaryTimer.cpp
     heap/Weak.cpp
     heap/WeakBlock.cpp
     heap/WeakHandleOwner.cpp
index abc6ae9..4db6deb 100644 (file)
@@ -1,3 +1,148 @@
+2016-11-02  Filip Pizlo  <fpizlo@apple.com>
+
+        The GC should be in a thread
+        https://bugs.webkit.org/show_bug.cgi?id=163562
+
+        Reviewed by Geoffrey Garen and Andreas Kling.
+        
+        In a concurrent GC, the work of collecting happens on a separate thread. This patch
+        implements this, and schedules the thread the way that a concurrent GC thread would be
+        scheduled. But, the GC isn't actually concurrent yet because it calls stopTheWorld() before
+        doing anything and calls resumeTheWorld() after it's done with everything. The next step will
+        be to make it really concurrent by basically calling stopTheWorld()/resumeTheWorld() around
+        bounded snippets of work while making most of the work happen with the world running. Our GC
+        will probably always have stop-the-world phases because the semantics of JSC weak references
+        call for it.
+        
+        This implements concurrent GC scheduling. This means that there is no longer a
+        Heap::collect() API. Instead, you can call collectAsync() which makes sure that a GC is
+        scheduled (it will do nothing if one is scheduled or ongoing) or you can call collectSync()
+        to schedule a GC and wait for it to happen. I made our debugging stuff call collectSync().
+        It should be a goal to never call collectSync() except for debugging or benchmark harness
+        hacks.
+        
+        The collector thread is an AutomaticThread, so it won't linger when not in use. It works on
+        a ticket-based system, like you would see at the DMV. A ticket is a 64-bit integer. There are
+        two ticket counters: last granted and last served. When you request a collection, last
+        granted is incremented and its new value given to you. When a collection completes, last
+        served is incremented. collectSync() waits until last served catches up to what last granted
+        had been at the time you requested a GC. This means that if you request a sync GC in the
+        middle of an async GC, you will wait for that async GC to finish and then you will request
+        and wait for your sync GC.
+        
+        The synchronization between the collector thread and the main threads is complex. The
+        collector thread needs to be able to ask the main thread to stop. It needs to be able to do
+        some post-GC clean-up, like the synchronous CodeBlock and LargeAllocation sweeps, on the main
+        thread. The collector needs to be able to ask the main thread to execute a cross-modifying
+        code fence before running any JIT code, since the GC might aid the JIT worklist and run JIT
+        finalization. It's possible for the GC to want the main thread to run something at the same
+        time that the main thread wants to wait for the GC. The main thread needs to be able to run
+        non-JSC stuff without causing the GC to completely stall. The main thread needs to be able
+        to query its own state (is there a request to stop?) and change it (running JSC versus not)
+        quickly, since this may happen on hot paths. This kind of intertwined system of requests,
+        notifications, and state changes requires a combination of lock-free algorithms and waiting.
+        So, this is all implemented using an Atomic<unsigned>, Heap::m_worldState, which has bits to
+        represent things being requested by the collector and the heap access state of the mutator. I
+        am borrowing a lot of terms that I've seen in other VMs that I've worked on. Here's what they
+        mean:
+        
+        - Stop the world: make sure that either the mutator is not running, or that it's not running
+          code that could mess with the heap.
+        
+        - Heap access: the mutator is said to have heap access if it could mess with the heap.
+        
+        If you stop the world and the mutator doesn't have heap access, all you're doing is making
+        sure that it will block when it tries to acquire heap access. This means that our GC is
+        already fully concurrent in cases where the GC is requested while the mutator has no heap
+        access. This probably won't happen, but if it did then it should just work. Usually, stopping
+        the world means that we state our shouldStop request with m_worldState, and a future call
+        to Heap::stopIfNecessary() will go to the slow path and stop. The act of stopping or waiting to
+        acquire heap access is managed by using ParkingLot API directly on m_worldState. This works
+        out great because it would be very awkward to get the same functionality using locks and
+        condition variables, since we want stopIfNecessary/acquireAccess/requestAccess fast paths
+        that are single atomic instructions (load/CAS/CAS, respectively). The mutator will call these
+        things frequently. Currently we have Heap::stopIfNecessary() polling on every allocator slow
+        path, but we may want to make it even more frequent than that.
+        
+        Currently only JSC API clients benefit from the heap access optimization. The DOM forces us
+        to assume that heap access is permanently on, since DOM manipulation doesn't always hold the
+        JSLock. We could still allow the GC to proceed when the runloop is idle by having the GC put
+        a task on the runloop that just calls stopIfNecessary().
+        
+        This is perf neutral. The only behavior change that clients ought to observe is that marking
+        and the weak fixpoint happen on a separate thread. Marking was already parallel so it already
+        handled multiple threads, but now it _never_ runs on the main thread. The weak fixpoint
+        needed some help to be able to run on another thread - mostly because there was some code in
+        IndexedDB that was using thread specifics in the weak fixpoint.
+
+        * API/JSBase.cpp:
+        (JSSynchronousEdenCollectForDebugging):
+        * API/JSManagedValue.mm:
+        (-[JSManagedValue initWithValue:]):
+        * heap/EdenGCActivityCallback.cpp:
+        (JSC::EdenGCActivityCallback::doCollection):
+        * heap/FullGCActivityCallback.cpp:
+        (JSC::FullGCActivityCallback::doCollection):
+        * heap/Heap.cpp:
+        (JSC::Heap::Thread::Thread):
+        (JSC::Heap::Heap):
+        (JSC::Heap::lastChanceToFinalize):
+        (JSC::Heap::markRoots):
+        (JSC::Heap::gatherStackRoots):
+        (JSC::Heap::deleteUnmarkedCompiledCode):
+        (JSC::Heap::collectAllGarbage):
+        (JSC::Heap::collectAsync):
+        (JSC::Heap::collectSync):
+        (JSC::Heap::shouldCollectInThread):
+        (JSC::Heap::collectInThread):
+        (JSC::Heap::stopTheWorld):
+        (JSC::Heap::resumeTheWorld):
+        (JSC::Heap::stopIfNecessarySlow):
+        (JSC::Heap::acquireAccessSlow):
+        (JSC::Heap::releaseAccessSlow):
+        (JSC::Heap::handleDidJIT):
+        (JSC::Heap::handleNeedFinalize):
+        (JSC::Heap::setDidJIT):
+        (JSC::Heap::setNeedFinalize):
+        (JSC::Heap::waitWhileNeedFinalize):
+        (JSC::Heap::finalize):
+        (JSC::Heap::requestCollection):
+        (JSC::Heap::waitForCollection):
+        (JSC::Heap::didFinishCollection):
+        (JSC::Heap::canCollect):
+        (JSC::Heap::shouldCollectHeuristic):
+        (JSC::Heap::shouldCollect):
+        (JSC::Heap::collectIfNecessaryOrDefer):
+        (JSC::Heap::collectAccordingToDeferGCProbability):
+        (JSC::Heap::collect): Deleted.
+        (JSC::Heap::collectWithoutAnySweep): Deleted.
+        (JSC::Heap::collectImpl): Deleted.
+        * heap/Heap.h:
+        (JSC::Heap::ReleaseAccessScope::ReleaseAccessScope):
+        (JSC::Heap::ReleaseAccessScope::~ReleaseAccessScope):
+        * heap/HeapInlines.h:
+        (JSC::Heap::acquireAccess):
+        (JSC::Heap::releaseAccess):
+        (JSC::Heap::stopIfNecessary):
+        * heap/MachineStackMarker.cpp:
+        (JSC::MachineThreads::gatherConservativeRoots):
+        (JSC::MachineThreads::gatherFromCurrentThread): Deleted.
+        * heap/MachineStackMarker.h:
+        * jit/JITWorklist.cpp:
+        (JSC::JITWorklist::completeAllForVM):
+        * jit/JITWorklist.h:
+        * jsc.cpp:
+        (functionFullGC):
+        (functionEdenGC):
+        * runtime/InitializeThreading.cpp:
+        (JSC::initializeThreading):
+        * runtime/JSLock.cpp:
+        (JSC::JSLock::didAcquireLock):
+        (JSC::JSLock::unlock):
+        (JSC::JSLock::willReleaseLock):
+        * tools/JSDollarVMPrototype.cpp:
+        (JSC::JSDollarVMPrototype::edenGC):
+
 2016-11-02  Michael Saboff  <msaboff@apple.com>
 
         Crash beneath SlotVisitor::drain @ cooksillustrated.com
index 2fbd2e4..5436d0b 100644 (file)
                0F7C39FF1C90C55B00480151 /* DFGOpInfo.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C39FE1C90C55B00480151 /* DFGOpInfo.h */; };
                0F7C5FB81D888A0C0044F5E2 /* MarkedBlockInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C5FB71D888A010044F5E2 /* MarkedBlockInlines.h */; };
                0F7C5FBA1D8895070044F5E2 /* MarkedSpaceInlines.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7C5FB91D8895050044F5E2 /* MarkedSpaceInlines.h */; };
+               0F7CF94F1DBEEE880098CC12 /* ReleaseHeapAccessScope.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7CF94E1DBEEE860098CC12 /* ReleaseHeapAccessScope.h */; };
+               0F7CF9521DC027D90098CC12 /* StopIfNecessaryTimer.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7CF9511DC027D70098CC12 /* StopIfNecessaryTimer.h */; };
+               0F7CF9531DC027DB0098CC12 /* StopIfNecessaryTimer.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7CF9501DC027D70098CC12 /* StopIfNecessaryTimer.cpp */; };
                0F7CF9561DC1258D0098CC12 /* AtomicsObject.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7CF9541DC1258B0098CC12 /* AtomicsObject.cpp */; };
                0F7CF9571DC125900098CC12 /* AtomicsObject.h in Headers */ = {isa = PBXBuildFile; fileRef = 0F7CF9551DC1258B0098CC12 /* AtomicsObject.h */; };
                0F7F988B1D9596C500F4F12E /* DFGStoreBarrierClusteringPhase.cpp in Sources */ = {isa = PBXBuildFile; fileRef = 0F7F98891D9596C300F4F12E /* DFGStoreBarrierClusteringPhase.cpp */; };
                0F7C39FE1C90C55B00480151 /* DFGOpInfo.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; name = DFGOpInfo.h; path = dfg/DFGOpInfo.h; sourceTree = "<group>"; };
                0F7C5FB71D888A010044F5E2 /* MarkedBlockInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MarkedBlockInlines.h; sourceTree = "<group>"; };
                0F7C5FB91D8895050044F5E2 /* MarkedSpaceInlines.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = MarkedSpaceInlines.h; sourceTree = "<group>"; };
+               0F7CF94E1DBEEE860098CC12 /* ReleaseHeapAccessScope.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ReleaseHeapAccessScope.h; sourceTree = "<group>"; };
+               0F7CF9501DC027D70098CC12 /* StopIfNecessaryTimer.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = StopIfNecessaryTimer.cpp; sourceTree = "<group>"; };
+               0F7CF9511DC027D70098CC12 /* StopIfNecessaryTimer.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = StopIfNecessaryTimer.h; sourceTree = "<group>"; };
                0F7CF9541DC1258B0098CC12 /* AtomicsObject.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = AtomicsObject.cpp; sourceTree = "<group>"; };
                0F7CF9551DC1258B0098CC12 /* AtomicsObject.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = AtomicsObject.h; sourceTree = "<group>"; };
                0F7F98891D9596C300F4F12E /* DFGStoreBarrierClusteringPhase.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; name = DFGStoreBarrierClusteringPhase.cpp; path = dfg/DFGStoreBarrierClusteringPhase.cpp; sourceTree = "<group>"; };
                                0FA762021DB9242300B7A2FD /* MutatorState.cpp */,
                                0FA762031DB9242300B7A2FD /* MutatorState.h */,
                                ADDB1F6218D77DB7009B58A8 /* OpaqueRootSet.h */,
+                               0F7CF94E1DBEEE860098CC12 /* ReleaseHeapAccessScope.h */,
                                C225494215F7DBAA0065E898 /* SlotVisitor.cpp */,
                                14BA78F013AAB88F005B7C2C /* SlotVisitor.h */,
                                0FCB408515C0A3C30048932B /* SlotVisitorInlines.h */,
+                               0F7CF9501DC027D70098CC12 /* StopIfNecessaryTimer.cpp */,
+                               0F7CF9511DC027D70098CC12 /* StopIfNecessaryTimer.h */,
                                142E3132134FF0A600AFADB5 /* Strong.h */,
                                145722851437E140005FDE26 /* StrongInlines.h */,
                                141448CC13A1783700F5BA1A /* TinyBloomFilter.h */,
                                0F3B3A281544C997003ED0FF /* DFGCFGSimplificationPhase.h in Headers */,
                                E3FFC8531DAD7D1500DEA53E /* DOMJITValue.h in Headers */,
                                0F9D36951AE9CC33000D4DFB /* DFGCleanUpPhase.h in Headers */,
+                               0F7CF94F1DBEEE880098CC12 /* ReleaseHeapAccessScope.h in Headers */,
                                A77A424017A0BBFD00A8DB81 /* DFGClobberize.h in Headers */,
                                0F37308D1C0BD29100052BFA /* B3PhiChildren.h in Headers */,
                                A77A424217A0BBFD00A8DB81 /* DFGClobberSet.h in Headers */,
                                933040040E6A749400786E6A /* SmallStrings.h in Headers */,
                                BC18C4640E16F5CD00B34460 /* SourceCode.h in Headers */,
                                0F7C39FD1C8F659500480151 /* RegExpObjectInlines.h in Headers */,
+                               0F7CF9521DC027D90098CC12 /* StopIfNecessaryTimer.h in Headers */,
                                BC18C4630E16F5CD00B34460 /* SourceProvider.h in Headers */,
                                E49DC16C12EF294E00184A1F /* SourceProviderCache.h in Headers */,
                                E49DC16D12EF295300184A1F /* SourceProviderCacheItem.h in Headers */,
                                7C184E1A17BEDBD3007CB63A /* JSPromise.cpp in Sources */,
                                7C184E2217BEE240007CB63A /* JSPromiseConstructor.cpp in Sources */,
                                7C008CDA187124BB00955C24 /* JSPromiseDeferred.cpp in Sources */,
+                               0F7CF9531DC027DB0098CC12 /* StopIfNecessaryTimer.cpp in Sources */,
                                7C184E1E17BEE22E007CB63A /* JSPromisePrototype.cpp in Sources */,
                                2A05ABD51961DF2400341750 /* JSPropertyNameEnumerator.cpp in Sources */,
                                E3EF88741B66DF23003F26CB /* JSPropertyNameIterator.cpp in Sources */,
index 1067b22..2f9120e 100644 (file)
@@ -2607,8 +2607,11 @@ void CodeBlock::visitChildren(SlotVisitor& visitor)
 
     if (m_jitCode)
         visitor.reportExtraMemoryVisited(m_jitCode->size());
-    if (m_instructions.size())
-        visitor.reportExtraMemoryVisited(m_instructions.size() * sizeof(Instruction) / m_instructions.refCount());
+    if (m_instructions.size()) {
+        unsigned refCount = m_instructions.refCount();
+        RELEASE_ASSERT(refCount);
+        visitor.reportExtraMemoryVisited(m_instructions.size() * sizeof(Instruction) / refCount);
+    }
 
     stronglyVisitStrongReferences(visitor);
     stronglyVisitWeakReferences(visitor);
index ff3257a..14cd0d0 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2011-2014, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -101,10 +101,10 @@ static CompilationResult compileImpl(
     
     plan->callback = callback;
     if (Options::useConcurrentJIT()) {
-        Worklist* worklist = ensureGlobalWorklistFor(mode);
+        Worklist& worklist = ensureGlobalWorklistFor(mode);
         if (logCompilationChanges(mode))
-            dataLog("Deferring DFG compilation of ", *codeBlock, " with queue length ", worklist->queueLength(), ".\n");
-        worklist->enqueue(plan);
+            dataLog("Deferring DFG compilation of ", *codeBlock, " with queue length ", worklist.queueLength(), ".\n");
+        worklist.enqueue(plan);
         return CompilationDeferred;
     }
     
index 0ce6979..ea89467 100644 (file)
@@ -33,6 +33,7 @@
 #include "DFGSafepoint.h"
 #include "DeferGC.h"
 #include "JSCInlines.h"
+#include "ReleaseHeapAccessScope.h"
 #include <mutex>
 
 namespace JSC { namespace DFG {
@@ -104,9 +105,17 @@ protected:
             dataLog(m_worklist, ": Compiling ", m_plan->key(), " asynchronously\n");
         
         // There's no way for the GC to be safepointing since we own rightToRun.
-        RELEASE_ASSERT(m_plan->vm->heap.mutatorState() != MutatorState::HelpingGC);
+        if (m_plan->vm->heap.collectorBelievesThatTheWorldIsStopped()) {
            dataLog("Heap is stopped but here we are! (1)\n");
+            RELEASE_ASSERT_NOT_REACHED();
+        }
         m_plan->compileInThread(*m_longLivedState, &m_data);
-        RELEASE_ASSERT(m_plan->stage == Plan::Cancelled || m_plan->vm->heap.mutatorState() != MutatorState::HelpingGC);
+        if (m_plan->stage != Plan::Cancelled) {
+            if (m_plan->vm->heap.collectorBelievesThatTheWorldIsStopped()) {
+                dataLog("Heap is stopped but here we are! (2)\n");
+                RELEASE_ASSERT_NOT_REACHED();
+            }
+        }
         
         {
             LockHolder locker(*m_worklist.m_lock);
@@ -124,7 +133,7 @@ protected:
             
             m_worklist.m_planCompiled.notifyAll();
         }
-        RELEASE_ASSERT(m_plan->vm->heap.mutatorState() != MutatorState::HelpingGC);
+        RELEASE_ASSERT(!m_plan->vm->heap.collectorBelievesThatTheWorldIsStopped());
         
         return WorkResult::Continue;
     }
@@ -238,6 +247,13 @@ Worklist::State Worklist::compilationState(CompilationKey key)
 void Worklist::waitUntilAllPlansForVMAreReady(VM& vm)
 {
     DeferGC deferGC(vm.heap);
+    
+    // While we are waiting for the compiler to finish, the collector might have already suspended
+    // the compiler and then it will be waiting for us to stop. That's a deadlock. We avoid that
+    // deadlock by relinquishing our heap access, so that the collector pretends that we are stopped
+    // even if we aren't.
+    ReleaseHeapAccessScope releaseHeapAccessScope(vm.heap);
+    
     // Wait for all of the plans for the given VM to complete. The idea here
     // is that we want all of the caller VM's plans to be done. We don't care
     // about any other VM's plans, and we won't attempt to wait on those.
@@ -483,13 +499,13 @@ void Worklist::dump(const LockHolder&, PrintStream& out) const
 
 static Worklist* theGlobalDFGWorklist;
 
-Worklist* ensureGlobalDFGWorklist()
+Worklist& ensureGlobalDFGWorklist()
 {
     static std::once_flag initializeGlobalWorklistOnceFlag;
     std::call_once(initializeGlobalWorklistOnceFlag, [] {
         theGlobalDFGWorklist = &Worklist::create("DFG Worklist", Options::numberOfDFGCompilerThreads(), Options::priorityDeltaOfDFGCompilerThreads()).leakRef();
     });
-    return theGlobalDFGWorklist;
+    return *theGlobalDFGWorklist;
 }
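The renamed accessors split lazy creation from lookup: `ensure…()` creates the worklist exactly once and can therefore return a reference, while `existing…OrNull()` never creates and stays nullable. A minimal sketch of that pattern, with illustrative names rather than the real WebKit types:

```cpp
#include <cassert>
#include <mutex>

// Hypothetical stand-in for DFG::Worklist; the real class is far richer.
struct Worklist {
    const char* name;
};

static Worklist* theWorklist; // starts out null

// ensure(): lazily create the singleton once, so callers get a non-null reference.
Worklist& ensureWorklist()
{
    static std::once_flag onceFlag;
    std::call_once(onceFlag, [] {
        // Deliberately leaked, mirroring the leakRef() in the patch.
        theWorklist = new Worklist { "DFG Worklist" };
    });
    return *theWorklist;
}

// existingOrNull(): never creates, so it must remain a nullable pointer.
Worklist* existingWorklistOrNull()
{
    return theWorklist;
}
```

The GC uses the nullable form when it only wants to visit worklists that already exist, and the reference form when it must guarantee a worklist is present (e.g. before suspending compiler threads).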
 
 Worklist* existingGlobalDFGWorklistOrNull()
@@ -499,13 +515,13 @@ Worklist* existingGlobalDFGWorklistOrNull()
 
 static Worklist* theGlobalFTLWorklist;
 
-Worklist* ensureGlobalFTLWorklist()
+Worklist& ensureGlobalFTLWorklist()
 {
     static std::once_flag initializeGlobalWorklistOnceFlag;
     std::call_once(initializeGlobalWorklistOnceFlag, [] {
         theGlobalFTLWorklist = &Worklist::create("FTL Worklist", Options::numberOfFTLCompilerThreads(), Options::priorityDeltaOfFTLCompilerThreads()).leakRef();
     });
-    return theGlobalFTLWorklist;
+    return *theGlobalFTLWorklist;
 }
 
 Worklist* existingGlobalFTLWorklistOrNull()
@@ -513,12 +529,12 @@ Worklist* existingGlobalFTLWorklistOrNull()
     return theGlobalFTLWorklist;
 }
 
-Worklist* ensureGlobalWorklistFor(CompilationMode mode)
+Worklist& ensureGlobalWorklistFor(CompilationMode mode)
 {
     switch (mode) {
     case InvalidCompilationMode:
         RELEASE_ASSERT_NOT_REACHED();
-        return 0;
+        return ensureGlobalDFGWorklist();
     case DFGMode:
         return ensureGlobalDFGWorklist();
     case FTLMode:
@@ -526,13 +542,13 @@ Worklist* ensureGlobalWorklistFor(CompilationMode mode)
         return ensureGlobalFTLWorklist();
     }
     RELEASE_ASSERT_NOT_REACHED();
-    return 0;
+    return ensureGlobalDFGWorklist();
 }
 
 void completeAllPlansForVM(VM& vm)
 {
     for (unsigned i = DFG::numberOfWorklists(); i--;) {
-        if (DFG::Worklist* worklist = DFG::worklistForIndexOrNull(i))
+        if (DFG::Worklist* worklist = DFG::existingWorklistForIndexOrNull(i))
             worklist->completeAllPlansForVM(vm);
     }
 }
@@ -540,7 +556,7 @@ void completeAllPlansForVM(VM& vm)
 void rememberCodeBlocks(VM& vm)
 {
     for (unsigned i = DFG::numberOfWorklists(); i--;) {
-        if (DFG::Worklist* worklist = DFG::worklistForIndexOrNull(i))
+        if (DFG::Worklist* worklist = DFG::existingWorklistForIndexOrNull(i))
             worklist->rememberCodeBlocks(vm);
     }
 }
index c9f0919..955dab5 100644
@@ -121,18 +121,30 @@ private:
 };
 
 // For DFGMode compilations.
-Worklist* ensureGlobalDFGWorklist();
+Worklist& ensureGlobalDFGWorklist();
 Worklist* existingGlobalDFGWorklistOrNull();
 
 // For FTLMode and FTLForOSREntryMode compilations.
-Worklist* ensureGlobalFTLWorklist();
+Worklist& ensureGlobalFTLWorklist();
 Worklist* existingGlobalFTLWorklistOrNull();
 
-Worklist* ensureGlobalWorklistFor(CompilationMode);
+Worklist& ensureGlobalWorklistFor(CompilationMode);
 
 // Simplify doing things for all worklists.
 inline unsigned numberOfWorklists() { return 2; }
-inline Worklist* worklistForIndexOrNull(unsigned index)
+inline Worklist& ensureWorklistForIndex(unsigned index)
+{
+    switch (index) {
+    case 0:
+        return ensureGlobalDFGWorklist();
+    case 1:
+        return ensureGlobalFTLWorklist();
+    default:
+        RELEASE_ASSERT_NOT_REACHED();
+        return ensureGlobalDFGWorklist();
+    }
+}
+inline Worklist* existingWorklistForIndexOrNull(unsigned index)
 {
     switch (index) {
     case 0:
@@ -144,6 +156,12 @@ inline Worklist* worklistForIndexOrNull(unsigned index)
         return 0;
     }
 }
+inline Worklist& existingWorklistForIndex(unsigned index)
+{
+    Worklist* result = existingWorklistForIndexOrNull(index);
+    RELEASE_ASSERT(result);
+    return *result;
+}
 
 void completeAllPlansForVM(VM&);
 void rememberCodeBlocks(VM&);
index 68659a8..e85e287 100644
@@ -65,7 +65,7 @@ void compile(State& state, Safepoint::Result& safepointResult)
 
     if (safepointResult.didGetCancelled())
         return;
-    RELEASE_ASSERT(state.graph.m_vm.heap.mutatorState() != MutatorState::HelpingGC);
+    RELEASE_ASSERT(!state.graph.m_vm.heap.collectorBelievesThatTheWorldIsStopped());
     
     if (state.allocationFailed)
         return;
index ad597e5..41fb2e9 100644
@@ -39,7 +39,7 @@ EdenGCActivityCallback::EdenGCActivityCallback(Heap* heap)
 
 void EdenGCActivityCallback::doCollection()
 {
-    m_vm->heap.collect(CollectionScope::Eden);
+    m_vm->heap.collectAsync(CollectionScope::Eden);
 }
 
 double EdenGCActivityCallback::lastGCLength()
index 5506c08..3d1545f 100644
@@ -55,7 +55,7 @@ void FullGCActivityCallback::doCollection()
     }
 #endif
 
-    heap.collect(CollectionScope::Full);
+    heap.collectAsync(CollectionScope::Full);
 }
 
 double FullGCActivityCallback::lastGCLength()
index 733e597..ec988c8 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2010 Apple Inc. All rights reserved.
+ * Copyright (C) 2010, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -40,8 +40,7 @@ namespace JSC {
 class FullGCActivityCallback;
 class Heap;
 
-class JS_EXPORT_PRIVATE GCActivityCallback : public HeapTimer, public ThreadSafeRefCounted<GCActivityCallback> {
-    WTF_MAKE_FAST_ALLOCATED;
+class JS_EXPORT_PRIVATE GCActivityCallback : public HeapTimer {
 public:
     static RefPtr<FullGCActivityCallback> createFullTimer(Heap*);
     static RefPtr<GCActivityCallback> createEdenTimer(Heap*);
index 46fc411..f48cf59 100644
@@ -52,6 +52,7 @@
 #include "SamplingProfiler.h"
 #include "ShadowChicken.h"
 #include "SuperSampler.h"
+#include "StopIfNecessaryTimer.h"
 #include "TypeProfilerLog.h"
 #include "UnlinkedCodeBlock.h"
 #include "VM.h"
@@ -189,6 +190,41 @@ private:
 
 } // anonymous namespace
 
+class Heap::Thread : public AutomaticThread {
+public:
+    Thread(const LockHolder& locker, Heap& heap)
+        : AutomaticThread(locker, heap.m_threadLock, heap.m_threadCondition)
+        , m_heap(heap)
+    {
+    }
+    
+protected:
+    PollResult poll(const LockHolder& locker) override
+    {
+        if (m_heap.m_threadShouldStop) {
+            m_heap.notifyThreadStopping(locker);
+            return PollResult::Stop;
+        }
+        if (m_heap.shouldCollectInThread(locker))
+            return PollResult::Work;
+        return PollResult::Wait;
+    }
+    
+    WorkResult work() override
+    {
+        m_heap.collectInThread();
+        return WorkResult::Continue;
+    }
+    
+    void threadDidStart() override
+    {
+        WTF::registerGCThread(GCThreadType::Main);
+    }
+
+private:
+    Heap& m_heap;
+};
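`Heap::Thread` delegates its lifecycle to `AutomaticThread`: `poll()` decides whether to stop, work, or wait; `work()` runs one collection per request. A self-contained sketch of that poll/work loop using the standard library (the names `requests`, `shouldStop`, and `collectionsDone` are illustrative, not WebKit API):

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>

struct CollectorThread {
    std::mutex lock;
    std::condition_variable condition;
    std::deque<int> requests;       // stands in for m_requests
    bool shouldStop { false };      // stands in for m_threadShouldStop
    int collectionsDone { 0 };
    std::thread thread;             // declared last so members above exist before loop() runs

    CollectorThread() : thread([this] { loop(); }) {}

    void loop()
    {
        for (;;) {
            {
                std::unique_lock<std::mutex> locker(lock);
                // poll(): Stop if asked (after draining requests), Work if a
                // request is pending, otherwise Wait.
                condition.wait(locker, [&] { return shouldStop || !requests.empty(); });
                if (shouldStop && requests.empty())
                    return;
                requests.pop_front();
            }
            // work(): one "collection" per granted request.
            {
                std::lock_guard<std::mutex> locker(lock);
                collectionsDone++;
                condition.notify_all();
            }
        }
    }

    void requestCollection(int scope)
    {
        std::lock_guard<std::mutex> locker(lock);
        requests.push_back(scope);
        condition.notify_all();
    }

    void stopAndJoin()
    {
        {
            std::lock_guard<std::mutex> locker(lock);
            shouldStop = true;
            condition.notify_all();
        }
        thread.join();
    }
};
```

Unlike this sketch, `AutomaticThread` also tears the OS thread down when idle, which is why the patch notes the collector thread "won't linger when not in use".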
+
 Heap::Heap(VM* vm, HeapType heapType)
     : m_heapType(heapType)
     , m_ramSize(Options::forceRAMSize() ? Options::forceRAMSize() : ramSize())
@@ -224,15 +260,23 @@ Heap::Heap(VM* vm, HeapType heapType)
 #endif // USE(CF)
     , m_fullActivityCallback(GCActivityCallback::createFullTimer(this))
     , m_edenActivityCallback(GCActivityCallback::createEdenTimer(this))
-    , m_sweeper(std::make_unique<IncrementalSweeper>(this))
+    , m_sweeper(adoptRef(new IncrementalSweeper(this)))
+    , m_stopIfNecessaryTimer(adoptRef(new StopIfNecessaryTimer(vm)))
     , m_deferralDepth(0)
 #if USE(FOUNDATION)
     , m_delayedReleaseRecursionCount(0)
 #endif
     , m_helperClient(&heapHelperPool())
+    , m_threadLock(Box<Lock>::create())
+    , m_threadCondition(AutomaticThreadCondition::create())
 {
+    m_worldState.store(0);
+    
     if (Options::verifyHeap())
         m_verifier = std::make_unique<HeapVerifier>(this, Options::numberOfGCCyclesToRecordForVerification());
+    
+    LockHolder locker(*m_threadLock);
+    m_thread = adoptRef(new Thread(locker, *this));
 }
 
 Heap::~Heap()
@@ -251,9 +295,28 @@ bool Heap::isPagedOut(double deadline)
 void Heap::lastChanceToFinalize()
 {
     RELEASE_ASSERT(!m_vm->entryScope);
-    RELEASE_ASSERT(!m_collectionScope);
     RELEASE_ASSERT(m_mutatorState == MutatorState::Running);
-
+    
+    // Carefully bring the thread down. We need to use waitForCollector() until we know that there
+    // won't be any other collections.
+    bool stopped = false;
+    {
+        LockHolder locker(*m_threadLock);
+        stopped = m_thread->tryStop(locker);
+        if (!stopped) {
+            m_threadShouldStop = true;
+            m_threadCondition->notifyOne(locker);
+        }
+    }
+    if (!stopped) {
+        waitForCollector(
+            [&] (const LockHolder&) -> bool {
+                return m_threadIsStopping;
+            });
+        // It's now safe to join the thread, since we know that there will not be any more collections.
+        m_thread->join();
+    }
+    
     m_arrayBuffers.lastChanceToFinalize();
     m_codeBlocks->lastChanceToFinalize();
     m_objectSpace.lastChanceToFinalize();
@@ -381,12 +444,10 @@ void Heap::completeAllJITPlans()
 #endif
 }
 
-void Heap::markRoots(double gcStartTime, void* stackOrigin, void* stackTop, MachineThreads::RegisterState& calleeSavedRegisters)
+void Heap::markRoots(double gcStartTime)
 {
     TimingScope markRootsTimingScope(*this, "Heap::markRoots");
     
-    ASSERT(isValidThreadState(m_vm));
-
     HeapRootVisitor heapRootVisitor(m_slotVisitor);
     
     {
@@ -459,7 +520,7 @@ void Heap::markRoots(double gcStartTime, void* stackOrigin, void* stackTop, Mach
             TimingScope preConvergenceTimingScope(*this, "Heap::markRoots conservative scan");
             ConservativeRoots conservativeRoots(*this);
             SuperSamplerScope superSamplerScope(false);
-            gatherStackRoots(conservativeRoots, stackOrigin, stackTop, calleeSavedRegisters);
+            gatherStackRoots(conservativeRoots);
             gatherJSStackRoots(conservativeRoots);
             gatherScratchBufferRoots(conservativeRoots);
             visitConservativeRoots(conservativeRoots);
@@ -499,10 +560,10 @@ void Heap::markRoots(double gcStartTime, void* stackOrigin, void* stackTop, Mach
     endMarking();
 }
 
-void Heap::gatherStackRoots(ConservativeRoots& roots, void* stackOrigin, void* stackTop, MachineThreads::RegisterState& calleeSavedRegisters)
+void Heap::gatherStackRoots(ConservativeRoots& roots)
 {
     m_jitStubRoutines->clearMarks();
-    m_machineThreads.gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks, stackOrigin, stackTop, calleeSavedRegisters);
+    m_machineThreads.gatherConservativeRoots(roots, *m_jitStubRoutines, *m_codeBlocks);
 }
 
 void Heap::gatherJSStackRoots(ConservativeRoots& roots)
@@ -566,8 +627,8 @@ void Heap::visitConservativeRoots(ConservativeRoots& roots)
 void Heap::visitCompilerWorklistWeakReferences()
 {
 #if ENABLE(DFG_JIT)
-    for (auto worklist : m_suspendedCompilerWorklists)
-        worklist->visitWeakReferences(m_slotVisitor);
+    for (unsigned i = DFG::numberOfWorklists(); i--;)
+        DFG::existingWorklistForIndex(i).visitWeakReferences(m_slotVisitor);
 
     if (Options::logGC() == GCLogging::Verbose)
         dataLog("DFG Worklists:\n", m_slotVisitor);
@@ -577,8 +638,8 @@ void Heap::visitCompilerWorklistWeakReferences()
 void Heap::removeDeadCompilerWorklistEntries()
 {
 #if ENABLE(DFG_JIT)
-    for (auto worklist : m_suspendedCompilerWorklists)
-        worklist->removeDeadPlans(*m_vm);
+    for (unsigned i = DFG::numberOfWorklists(); i--;)
+        DFG::existingWorklistForIndex(i).removeDeadPlans(*m_vm);
 #endif
 }
 
@@ -908,7 +969,7 @@ void Heap::clearUnmarkedExecutables()
 void Heap::deleteUnmarkedCompiledCode()
 {
     clearUnmarkedExecutables();
-    m_codeBlocks->deleteUnmarkedAndUnreferenced(*m_collectionScope);
+    m_codeBlocks->deleteUnmarkedAndUnreferenced(*m_lastCollectionScope);
     m_jitStubRoutines->deleteUnmarkedJettisonedStubRoutines();
 }
 
@@ -928,11 +989,10 @@ void Heap::addToRememberedSet(const JSCell* cell)
 
 void Heap::collectAllGarbage()
 {
-    SuperSamplerScope superSamplerScope(false);
     if (!m_isSafeToCollect)
         return;
-
-    collectWithoutAnySweep(CollectionScope::Full);
+    
+    collectSync(CollectionScope::Full);
 
     DeferGCForAWhile deferGC(*this);
     if (UNLIKELY(Options::useImmortalObjects()))
@@ -955,34 +1015,74 @@ void Heap::collectAllGarbage()
     sweepAllLogicallyEmptyWeakBlocks();
 }
 
-void Heap::collect(Optional<CollectionScope> scope)
+void Heap::collectAsync(Optional<CollectionScope> scope)
 {
-    SuperSamplerScope superSamplerScope(false);
     if (!m_isSafeToCollect)
         return;
-    
-    collectWithoutAnySweep(scope);
+
+    bool alreadyRequested = false;
+    {
+        LockHolder locker(*m_threadLock);
+        for (Optional<CollectionScope> request : m_requests) {
+            if (scope) {
+                if (scope == CollectionScope::Eden) {
+                    alreadyRequested = true;
+                    break;
+                } else {
+                    RELEASE_ASSERT(scope == CollectionScope::Full);
+                    if (request == CollectionScope::Full) {
+                        alreadyRequested = true;
+                        break;
+                    }
+                }
+            } else {
+                if (!request || request == CollectionScope::Full) {
+                    alreadyRequested = true;
+                    break;
+                }
+            }
+        }
+    }
+    if (alreadyRequested)
+        return;
+
+    requestCollection(scope);
 }
 
-NEVER_INLINE void Heap::collectWithoutAnySweep(Optional<CollectionScope> scope)
+void Heap::collectSync(Optional<CollectionScope> scope)
 {
-    void* stackTop;
-    ALLOCATE_AND_GET_REGISTER_STATE(registers);
-
-    collectImpl(scope, wtfThreadData().stack().origin(), &stackTop, registers);
+    if (!m_isSafeToCollect)
+        return;
+    
+    waitForCollection(requestCollection(scope));
+}
 
-    sanitizeStackForVM(m_vm);
+bool Heap::shouldCollectInThread(const LockHolder&)
+{
+    RELEASE_ASSERT(m_requests.isEmpty() == (m_lastServedTicket == m_lastGrantedTicket));
+    RELEASE_ASSERT(m_lastServedTicket <= m_lastGrantedTicket);
+    
+    return !m_requests.isEmpty();
 }
 
-NEVER_INLINE void Heap::collectImpl(Optional<CollectionScope> scope, void* stackOrigin, void* stackTop, MachineThreads::RegisterState& calleeSavedRegisters)
+void Heap::collectInThread()
 {
+    Optional<CollectionScope> scope;
+    {
+        LockHolder locker(*m_threadLock);
+        RELEASE_ASSERT(!m_requests.isEmpty());
+        scope = m_requests.first();
+    }
+    
     SuperSamplerScope superSamplerScope(false);
-    TimingScope collectImplTimingScope(scope, "Heap::collectImpl");
+    TimingScope collectImplTimingScope(scope, "Heap::collectInThread");
     
 #if ENABLE(ALLOCATION_LOGGING)
     dataLogF("JSC GC starting collection.\n");
 #endif
     
+    stopTheWorld();
+
     double before = 0;
     if (Options::logGC()) {
         dataLog("[GC: ", capacity() / 1024, " kb ");
@@ -999,89 +1099,403 @@ NEVER_INLINE void Heap::collectImpl(Optional<CollectionScope> scope, void* stack
 #if ENABLE(JIT)
     {
         DeferGCForAWhile awhile(*this);
-        JITWorklist::instance()->completeAllForVM(*m_vm);
+        if (JITWorklist::instance()->completeAllForVM(*m_vm))
+            setGCDidJIT();
     }
 #endif // ENABLE(JIT)
     
     vm()->shadowChicken().update(*vm(), vm()->topCallFrame);
     
-    RELEASE_ASSERT(!m_deferralDepth);
-    ASSERT(vm()->currentThreadIsHoldingAPILock());
-    RELEASE_ASSERT(vm()->atomicStringTable() == wtfThreadData().atomicStringTable());
     ASSERT(m_isSafeToCollect);
-    RELEASE_ASSERT(!m_collectionScope);
+    if (m_collectionScope) {
+        dataLog("Collection scope already set during GC: ", m_collectionScope, "\n");
+        RELEASE_ASSERT_NOT_REACHED();
+    }
     
-    suspendCompilerThreads();
     willStartCollection(scope);
-    {
-        HelpingGCScope helpingHeapScope(*this);
+    collectImplTimingScope.setScope(*this);
         
-        collectImplTimingScope.setScope(*this);
-        
-        gcStartTime = WTF::monotonicallyIncreasingTime();
-        if (m_verifier) {
-            // Verify that live objects from the last GC cycle haven't been corrupted by
-            // mutators before we begin this new GC cycle.
-            m_verifier->verify(HeapVerifier::Phase::BeforeGC);
+    gcStartTime = WTF::monotonicallyIncreasingTime();
+    if (m_verifier) {
+        // Verify that live objects from the last GC cycle haven't been corrupted by
+        // mutators before we begin this new GC cycle.
+        m_verifier->verify(HeapVerifier::Phase::BeforeGC);
             
-            m_verifier->initializeGCCycle();
-            m_verifier->gatherLiveObjects(HeapVerifier::Phase::BeforeMarking);
-        }
+        m_verifier->initializeGCCycle();
+        m_verifier->gatherLiveObjects(HeapVerifier::Phase::BeforeMarking);
+    }
         
-        flushOldStructureIDTables();
-        stopAllocation();
-        prepareForMarking();
-        flushWriteBarrierBuffer();
+    flushOldStructureIDTables();
+    stopAllocation();
+    prepareForMarking();
+    flushWriteBarrierBuffer();
         
-        if (HasOwnPropertyCache* cache = vm()->hasOwnPropertyCache())
-            cache->clear();
+    if (HasOwnPropertyCache* cache = vm()->hasOwnPropertyCache())
+        cache->clear();
         
-        markRoots(gcStartTime, stackOrigin, stackTop, calleeSavedRegisters);
+    markRoots(gcStartTime);
         
-        if (m_verifier) {
-            m_verifier->gatherLiveObjects(HeapVerifier::Phase::AfterMarking);
-            m_verifier->verify(HeapVerifier::Phase::AfterMarking);
-        }
+    if (m_verifier) {
+        m_verifier->gatherLiveObjects(HeapVerifier::Phase::AfterMarking);
+        m_verifier->verify(HeapVerifier::Phase::AfterMarking);
+    }
         
-        if (vm()->typeProfiler())
-            vm()->typeProfiler()->invalidateTypeSetCache();
+    if (vm()->typeProfiler())
+        vm()->typeProfiler()->invalidateTypeSetCache();
         
-        reapWeakHandles();
-        pruneStaleEntriesFromWeakGCMaps();
-        sweepArrayBuffers();
-        snapshotUnswept();
-        finalizeUnconditionalFinalizers();
-        removeDeadCompilerWorklistEntries();
-        deleteUnmarkedCompiledCode();
-        deleteSourceProviderCaches();
+    reapWeakHandles();
+    pruneStaleEntriesFromWeakGCMaps();
+    sweepArrayBuffers();
+    snapshotUnswept();
+    finalizeUnconditionalFinalizers();
+    removeDeadCompilerWorklistEntries();
+    notifyIncrementalSweeper();
         
-        notifyIncrementalSweeper();
-        m_codeBlocks->writeBarrierCurrentlyExecuting(this);
-        m_codeBlocks->clearCurrentlyExecuting();
+    m_codeBlocks->writeBarrierCurrentlyExecuting(this);
+    m_codeBlocks->clearCurrentlyExecuting();
         
-        prepareForAllocation();
-        updateAllocationLimits();
-    }
+    prepareForAllocation();
+    updateAllocationLimits();
+
     didFinishCollection(gcStartTime);
-    resumeCompilerThreads();
-    sweepLargeAllocations();
     
     if (m_verifier) {
         m_verifier->trimDeadObjects();
         m_verifier->verify(HeapVerifier::Phase::AfterGC);
     }
 
+    if (false) {
+        dataLog("Heap state after GC:\n");
+        m_objectSpace.dumpBits();
+    }
+    
     if (Options::logGC()) {
         double after = currentTimeMS();
         dataLog(after - before, " ms]\n");
     }
     
-    if (false) {
-        dataLog("Heap state after GC:\n");
-        m_objectSpace.dumpBits();
+    {
+        LockHolder locker(*m_threadLock);
+        m_requests.removeFirst();
+        m_lastServedTicket++;
+        clearMutatorWaiting();
+    }
+    ParkingLot::unparkAll(&m_worldState);
+
+    setNeedFinalize();
+    resumeTheWorld();
+}
+
+void Heap::stopTheWorld()
+{
+    RELEASE_ASSERT(!m_collectorBelievesThatTheWorldIsStopped);
+    waitWhileNeedFinalize();
+    stopTheMutator();
+    suspendCompilerThreads();
+    m_collectorBelievesThatTheWorldIsStopped = true;
+}
+
+void Heap::resumeTheWorld()
+{
+    RELEASE_ASSERT(m_collectorBelievesThatTheWorldIsStopped);
+    m_collectorBelievesThatTheWorldIsStopped = false;
+    resumeCompilerThreads();
+    resumeTheMutator();
+}
+
+void Heap::stopTheMutator()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        if ((oldState & stoppedBit)
+            && (oldState & shouldStopBit))
+            return;
+        
+        // Note: We could just have the mutator stop in-place like we do when !hasAccessBit. We could
+        // switch to that if it turned out to be less confusing, but then it would not give the
+        // mutator the opportunity to react to the world being stopped.
+        if (oldState & mutatorWaitingBit) {
+            if (m_worldState.compareExchangeWeak(oldState, oldState & ~mutatorWaitingBit))
+                ParkingLot::unparkAll(&m_worldState);
+            continue;
+        }
+        
+        if (!(oldState & hasAccessBit)
+            || (oldState & stoppedBit)) {
+            // We can stop the world instantly.
+            if (m_worldState.compareExchangeWeak(oldState, oldState | stoppedBit | shouldStopBit))
+                return;
+            continue;
+        }
+        
+        RELEASE_ASSERT(oldState & hasAccessBit);
+        RELEASE_ASSERT(!(oldState & stoppedBit));
+        m_worldState.compareExchangeStrong(oldState, oldState | shouldStopBit);
+        m_stopIfNecessaryTimer->scheduleSoon();
+        ParkingLot::compareAndPark(&m_worldState, oldState | shouldStopBit);
     }
 }
 
+void Heap::resumeTheMutator()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        RELEASE_ASSERT(oldState & shouldStopBit);
+        
+        if (!(oldState & hasAccessBit)) {
+            // We can resume the world instantly.
+            if (m_worldState.compareExchangeWeak(oldState, oldState & ~(stoppedBit | shouldStopBit))) {
+                ParkingLot::unparkAll(&m_worldState);
+                return;
+            }
+            continue;
+        }
+        
+        // We can tell the world to resume.
+        if (m_worldState.compareExchangeWeak(oldState, oldState & ~shouldStopBit)) {
+            ParkingLot::unparkAll(&m_worldState);
+            return;
+        }
+    }
+}
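The stop/resume handshake above is driven by CAS loops over `m_worldState`. A minimal sketch of those lock-free transitions, with assumed bit values (the real layout lives in Heap.h) and without the ParkingLot blocking, the timer, or the `mutatorWaitingBit` path:

```cpp
#include <atomic>

// Assumed bit assignments for illustration only.
constexpr unsigned hasAccessBit  = 1u << 0;
constexpr unsigned shouldStopBit = 1u << 1;
constexpr unsigned stoppedBit    = 1u << 2;

struct WorldState {
    std::atomic<unsigned> state { 0 };

    // Collector side of stopTheMutator(), minus parking.
    void requestStop()
    {
        unsigned oldState = state.load();
        for (;;) {
            if (!(oldState & hasAccessBit)) {
                // No mutator access: we can stop the world instantly.
                if (state.compare_exchange_weak(oldState, oldState | shouldStopBit | stoppedBit))
                    return;
                continue;
            }
            // Mutator has access: ask it to stop at its next safepoint.
            if (state.compare_exchange_weak(oldState, oldState | shouldStopBit))
                return;
        }
    }

    // What the mutator does at a safepoint once it observes shouldStopBit.
    void mutatorAcknowledgeStop()
    {
        unsigned oldState = state.load();
        while (!state.compare_exchange_weak(oldState, oldState | stoppedBit)) { }
    }

    // Collector side of resumeTheMutator(), minus unparking.
    void resume()
    {
        unsigned oldState = state.load();
        while (!state.compare_exchange_weak(oldState, oldState & ~(shouldStopBit | stoppedBit))) { }
    }
};
```

The key property the real code preserves is the same: `stoppedBit` can be set instantly only when the mutator holds no heap access; otherwise the collector can only request a stop and must wait for the mutator to acknowledge it.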
+
+void Heap::stopIfNecessarySlow()
+{
+    while (stopIfNecessarySlow(m_worldState.load())) { }
+    handleGCDidJIT();
+}
+
+bool Heap::stopIfNecessarySlow(unsigned oldState)
+{
+    RELEASE_ASSERT(oldState & hasAccessBit);
+    
+    if (handleNeedFinalize(oldState))
+        return true;
+    
+    if (!(oldState & shouldStopBit)) {
+        if (!(oldState & stoppedBit))
+            return false;
+        m_worldState.compareExchangeStrong(oldState, oldState & ~stoppedBit);
+        return true;
+    }
+    
+    m_worldState.compareExchangeStrong(oldState, oldState | stoppedBit);
+    ParkingLot::unparkAll(&m_worldState);
+    ParkingLot::compareAndPark(&m_worldState, oldState | stoppedBit);
+    return true;
+}
+
+template<typename Func>
+void Heap::waitForCollector(const Func& func)
+{
+    for (;;) {
+        bool done;
+        {
+            LockHolder locker(*m_threadLock);
+            done = func(locker);
+            if (!done) {
+                setMutatorWaiting();
+                // At this point, the collector knows that we intend to wait, and it will clear the
+                // waiting bit and then unparkAll when the GC cycle finishes. Clearing the bit
+                // prevents us from parking unless there is also a stop-the-world request. Unparking
+                // after clearing means that if the clearing happens after we park, we will unpark.
+            }
+        }
+
+        // If we're in a stop-the-world scenario, we need to wait for that even if done is true.
+        unsigned oldState = m_worldState.load();
+        if (stopIfNecessarySlow(oldState))
+            continue;
+        
+        if (done) {
+            clearMutatorWaiting(); // Clean up just in case.
+            return;
+        }
+        
+        // If mutatorWaitingBit is still set then we want to wait.
+        ParkingLot::compareAndPark(&m_worldState, oldState | mutatorWaitingBit);
+    }
+}
+
+void Heap::acquireAccessSlow()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        RELEASE_ASSERT(!(oldState & hasAccessBit));
+        
+        if (oldState & shouldStopBit) {
+            RELEASE_ASSERT(oldState & stoppedBit);
+            // Wait until we're not stopped anymore.
+            ParkingLot::compareAndPark(&m_worldState, oldState);
+            continue;
+        }
+        
+        RELEASE_ASSERT(!(oldState & stoppedBit));
+        unsigned newState = oldState | hasAccessBit;
+        if (m_worldState.compareExchangeWeak(oldState, newState)) {
+            handleGCDidJIT();
+            handleNeedFinalize();
+            return;
+        }
+    }
+}
+
+void Heap::releaseAccessSlow()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        RELEASE_ASSERT(oldState & hasAccessBit);
+        RELEASE_ASSERT(!(oldState & stoppedBit));
+        
+        if (handleNeedFinalize(oldState))
+            continue;
+        
+        if (oldState & shouldStopBit) {
+            unsigned newState = (oldState & ~hasAccessBit) | stoppedBit;
+            if (m_worldState.compareExchangeWeak(oldState, newState)) {
+                ParkingLot::unparkAll(&m_worldState);
+                return;
+            }
+            continue;
+        }
+        
+        RELEASE_ASSERT(!(oldState & shouldStopBit));
+        
+        if (m_worldState.compareExchangeWeak(oldState, oldState & ~hasAccessBit))
+            return;
+    }
+}
+
+bool Heap::handleGCDidJIT(unsigned oldState)
+{
+    RELEASE_ASSERT(oldState & hasAccessBit);
+    if (!(oldState & gcDidJITBit))
+        return false;
+    if (m_worldState.compareExchangeWeak(oldState, oldState & ~gcDidJITBit)) {
+        WTF::crossModifyingCodeFence();
+        return true;
+    }
+    return true;
+}
+
+bool Heap::handleNeedFinalize(unsigned oldState)
+{
+    RELEASE_ASSERT(oldState & hasAccessBit);
+    if (!(oldState & needFinalizeBit))
+        return false;
+    if (m_worldState.compareExchangeWeak(oldState, oldState & ~needFinalizeBit)) {
+        finalize();
+        // Wake up anyone waiting for us to finalize. Note that they may have woken up already, in
+        // which case they would be waiting for us to release heap access.
+        ParkingLot::unparkAll(&m_worldState);
+        return true;
+    }
+    return true;
+}
+
+void Heap::handleGCDidJIT()
+{
+    while (handleGCDidJIT(m_worldState.load())) { }
+}
+
+void Heap::handleNeedFinalize()
+{
+    while (handleNeedFinalize(m_worldState.load())) { }
+}
+
+void Heap::setGCDidJIT()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        RELEASE_ASSERT(oldState & stoppedBit);
+        if (m_worldState.compareExchangeWeak(oldState, oldState | gcDidJITBit))
+            return;
+    }
+}
+
+void Heap::setNeedFinalize()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        if (m_worldState.compareExchangeWeak(oldState, oldState | needFinalizeBit)) {
+            m_stopIfNecessaryTimer->scheduleSoon();
+            return;
+        }
+    }
+}
+
+void Heap::waitWhileNeedFinalize()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        if (!(oldState & needFinalizeBit)) {
+            // This means that either there was no finalize request or the main thread will finalize
+            // with heap access, so a subsequent call to stopTheWorld() will return only when
+            // finalize finishes.
+            return;
+        }
+        ParkingLot::compareAndPark(&m_worldState, oldState);
+    }
+}
+
+unsigned Heap::setMutatorWaiting()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        unsigned newState = oldState | mutatorWaitingBit;
+        if (m_worldState.compareExchangeWeak(oldState, newState))
+            return newState;
+    }
+}
+
+void Heap::clearMutatorWaiting()
+{
+    for (;;) {
+        unsigned oldState = m_worldState.load();
+        if (m_worldState.compareExchangeWeak(oldState, oldState & ~mutatorWaitingBit))
+            return;
+    }
+}
+
+void Heap::notifyThreadStopping(const LockHolder&)
+{
+    m_threadIsStopping = true;
+    clearMutatorWaiting();
+    ParkingLot::unparkAll(&m_worldState);
+}
+
+void Heap::finalize()
+{
+    HelpingGCScope helpingGCScope(*this);
+    deleteUnmarkedCompiledCode();
+    deleteSourceProviderCaches();
+    sweepLargeAllocations();
+}
+
+Heap::Ticket Heap::requestCollection(Optional<CollectionScope> scope)
+{
+    stopIfNecessary();
+    
+    ASSERT(vm()->currentThreadIsHoldingAPILock());
+    RELEASE_ASSERT(vm()->atomicStringTable() == wtfThreadData().atomicStringTable());
+    
+    sanitizeStackForVM(m_vm);
+
+    LockHolder locker(*m_threadLock);
+    m_requests.append(scope);
+    m_lastGrantedTicket++;
+    m_threadCondition->notifyOne(locker);
+    return m_lastGrantedTicket;
+}
+
+void Heap::waitForCollection(Ticket ticket)
+{
+    waitForCollector(
+        [&] (const LockHolder&) -> bool {
+            return m_lastServedTicket >= ticket;
+        });
+}
+
 void Heap::sweepLargeAllocations()
 {
     m_objectSpace.sweepLargeAllocations();
@@ -1090,13 +1504,11 @@ void Heap::sweepLargeAllocations()
 void Heap::suspendCompilerThreads()
 {
 #if ENABLE(DFG_JIT)
-    ASSERT(m_suspendedCompilerWorklists.isEmpty());
-    for (unsigned i = DFG::numberOfWorklists(); i--;) {
-        if (DFG::Worklist* worklist = DFG::worklistForIndexOrNull(i)) {
-            m_suspendedCompilerWorklists.append(worklist);
-            worklist->suspendAllThreads();
-        }
-    }
+    // We ensure the worklists so that it's not possible for the mutator to start a new worklist
+    // after we have suspended the ones it had started before. That's not very expensive since
+    // the worklists use AutomaticThreads anyway.
+    for (unsigned i = DFG::numberOfWorklists(); i--;)
+        DFG::ensureWorklistForIndex(i).suspendAllThreads();
 #endif
 }
 
@@ -1320,6 +1732,7 @@ void Heap::didFinishCollection(double gcStartTime)
     }
 
     RELEASE_ASSERT(m_collectionScope);
+    m_lastCollectionScope = m_collectionScope;
     m_collectionScope = Nullopt;
 
     for (auto* observer : m_observers)
@@ -1329,22 +1742,11 @@ void Heap::didFinishCollection(double gcStartTime)
 void Heap::resumeCompilerThreads()
 {
 #if ENABLE(DFG_JIT)
-    for (auto worklist : m_suspendedCompilerWorklists)
-        worklist->resumeAllThreads();
-    m_suspendedCompilerWorklists.clear();
+    for (unsigned i = DFG::numberOfWorklists(); i--;)
+        DFG::existingWorklistForIndex(i).resumeAllThreads();
 #endif
 }
 
-void Heap::setFullActivityCallback(PassRefPtr<FullGCActivityCallback> activityCallback)
-{
-    m_fullActivityCallback = activityCallback;
-}
-
-void Heap::setEdenActivityCallback(PassRefPtr<EdenGCActivityCallback> activityCallback)
-{
-    m_edenActivityCallback = activityCallback;
-}
-
 GCActivityCallback* Heap::fullActivityCallback()
 {
     return m_fullActivityCallback.get();
@@ -1355,11 +1757,6 @@ GCActivityCallback* Heap::edenActivityCallback()
     return m_edenActivityCallback.get();
 }
 
-void Heap::setIncrementalSweeper(std::unique_ptr<IncrementalSweeper> sweeper)
-{
-    m_sweeper = WTFMove(sweeper);
-}
-
 IncrementalSweeper* Heap::sweeper()
 {
     return m_sweeper.get();
@@ -1546,7 +1943,7 @@ void Heap::writeBarrierSlowPath(const JSCell* from)
     addToRememberedSet(from);
 }
 
-bool Heap::shouldCollect()
+bool Heap::canCollect()
 {
     if (isDeferred())
         return false;
@@ -1554,11 +1951,21 @@ bool Heap::shouldCollect()
         return false;
     if (collectionScope() || mutatorState() == MutatorState::HelpingGC)
         return false;
+    return true;
+}
+
+bool Heap::shouldCollectHeuristic()
+{
     if (Options::gcMaxHeapSize())
         return m_bytesAllocatedThisCycle > Options::gcMaxHeapSize();
     return m_bytesAllocatedThisCycle > m_maxEdenSize;
 }
 
+bool Heap::shouldCollect()
+{
+    return canCollect() && shouldCollectHeuristic();
+}
+
 bool Heap::isCurrentThreadBusy()
 {
     return mayBeGCThread() || mutatorState() != MutatorState::Running;
@@ -1598,13 +2005,22 @@ void Heap::reportExternalMemoryVisited(CellState oldState, size_t size)
 
 bool Heap::collectIfNecessaryOrDefer(GCDeferralContext* deferralContext)
 {
-    if (!shouldCollect())
+    if (!canCollect())
+        return false;
+    
+    if (deferralContext) {
+        deferralContext->m_shouldGC |=
+            !!(m_worldState.load() & (shouldStopBit | needFinalizeBit | gcDidJITBit));
+    } else
+        stopIfNecessary();
+    
+    if (!shouldCollectHeuristic())
         return false;
 
     if (deferralContext)
         deferralContext->m_shouldGC = true;
     else
-        collect();
+        collectAsync();
     return true;
 }
 
@@ -1614,7 +2030,7 @@ void Heap::collectAccordingToDeferGCProbability()
         return;
 
     if (randomNumber() < Options::deferGCProbability()) {
-        collect();
+        collectAsync();
         return;
     }
 
@@ -1660,4 +2076,14 @@ void Heap::didFreeBlock(size_t capacity)
 #endif
 }
 
+#if USE(CF)
+void Heap::setRunLoop(CFRunLoopRef runLoop)
+{
+    m_runLoop = runLoop;
+    m_fullActivityCallback->setRunLoop(runLoop);
+    m_edenActivityCallback->setRunLoop(runLoop);
+    m_sweeper->setRunLoop(runLoop);
+}
+#endif // USE(CF)
+
 } // namespace JSC
index c8bef13..eb5a3e2 100644
@@ -43,6 +43,8 @@
 #include "WeakReferenceHarvester.h"
 #include "WriteBarrierBuffer.h"
 #include "WriteBarrierSupport.h"
+#include <wtf/AutomaticThread.h>
+#include <wtf/Deque.h>
 #include <wtf/HashCountedSet.h>
 #include <wtf/HashSet.h>
 #include <wtf/ParallelHelperPool.h>
@@ -69,6 +71,7 @@ class JSCell;
 class JSValue;
 class LLIntOffsetsExtractor;
 class MarkedArgumentBuffer;
+class StopIfNecessaryTimer;
 class VM;
 
 namespace DFG {
@@ -130,18 +133,18 @@ public:
 
     JS_EXPORT_PRIVATE GCActivityCallback* fullActivityCallback();
     JS_EXPORT_PRIVATE GCActivityCallback* edenActivityCallback();
-    JS_EXPORT_PRIVATE void setFullActivityCallback(PassRefPtr<FullGCActivityCallback>);
-    JS_EXPORT_PRIVATE void setEdenActivityCallback(PassRefPtr<EdenGCActivityCallback>);
     JS_EXPORT_PRIVATE void setGarbageCollectionTimerEnabled(bool);
 
     JS_EXPORT_PRIVATE IncrementalSweeper* sweeper();
-    JS_EXPORT_PRIVATE void setIncrementalSweeper(std::unique_ptr<IncrementalSweeper>);
 
     void addObserver(HeapObserver* observer) { m_observers.append(observer); }
     void removeObserver(HeapObserver* observer) { m_observers.removeFirst(observer); }
 
     MutatorState mutatorState() const { return m_mutatorState; }
     Optional<CollectionScope> collectionScope() const { return m_collectionScope; }
+    bool hasHeapAccess() const;
+    bool mutatorIsStopped() const;
+    bool collectorBelievesThatTheWorldIsStopped() const;
 
     // We're always busy on the collection threads. On the main thread, this returns true if we're
     // helping heap.
@@ -173,8 +176,24 @@ public:
     JS_EXPORT_PRIVATE void collectAllGarbageIfNotDoneRecently();
     JS_EXPORT_PRIVATE void collectAllGarbage();
 
+    bool canCollect();
+    bool shouldCollectHeuristic();
     bool shouldCollect();
-    JS_EXPORT_PRIVATE void collect(Optional<CollectionScope> = Nullopt);
+    
+    // Queue up a collection. Returns immediately. This will not queue a collection if a collection
+    // of equal or greater strength exists. Full collections are stronger than Nullopt collections
+    // and Nullopt collections are stronger than Eden collections. Nullopt means that the GC can
+    // choose Eden or Full. This implies that if you request a GC while a GC of equal or greater
+    // strength is ongoing, nothing will happen.
+    JS_EXPORT_PRIVATE void collectAsync(Optional<CollectionScope> = Nullopt);
+    
+    // Queue up a collection and wait for it to complete. This won't return until you get your own
+    // complete collection. For example, if there was an ongoing asynchronous collection at the time
+    // you called this, then this would wait for that one to complete and then trigger your
+    // collection and then return. In weird cases, there could be multiple GC requests in the backlog
+    // and this will wait for that backlog to drain before running its GC and returning.
+    JS_EXPORT_PRIVATE void collectSync(Optional<CollectionScope> = Nullopt);
+    
     bool collectIfNecessaryOrDefer(GCDeferralContext* = nullptr); // Returns true if it did collect.
     void collectAccordingToDeferGCProbability();
 
@@ -270,8 +289,52 @@ public:
     unsigned barrierThreshold() const { return m_barrierThreshold; }
     const unsigned* addressOfBarrierThreshold() const { return &m_barrierThreshold; }
 
+    // If true, the GC believes that the mutator is currently messing with the heap. We call this
+    // "having heap access". The GC may block if the mutator is in this state. If false, the GC may
+    // currently be doing things to the heap that make the heap unsafe to access for the mutator.
+    bool hasAccess() const;
+    
+    // If the mutator does not currently have heap access, this function will acquire it. If the GC
+    // is currently using the lack of heap access to do dangerous things to the heap then this
+    // function will block, waiting for the GC to finish. It's not valid to call this if the mutator
+    // already has heap access. The mutator is required to precisely track whether or not it has
+    // heap access.
+    //
+    // It's totally fine to acquireAccess() upon VM instantiation and keep it that way. This is how
+    // WebCore uses us. For most other clients, JSLock does acquireAccess()/releaseAccess() for you.
+    void acquireAccess();
+    
+    // Releases heap access. If the GC is blocking waiting to do bad things to the heap, it will be
+    // allowed to run now.
+    //
+    // Ordinarily, you should use the ReleaseHeapAccessScope to release and then reacquire heap
+    // access. You should do this anytime you're about to perform a blocking operation, like waiting
+    // on the ParkingLot.
+    void releaseAccess();
+    
+    // This is like a super optimized way of saying:
+    //
+    //     releaseAccess()
+    //     acquireAccess()
+    //
+    // The fast path is an inlined relaxed load and branch. The slow path will block the mutator if
+    // the GC wants to do bad things to the heap.
+    //
+    // All allocations logically call this. As an optimization to improve GC progress, you can call
+    // this anywhere that you can afford a load-branch and where an object allocation would have been
+    // safe.
+    //
+    // The GC will also push a stopIfNecessary() event onto the runloop of the thread that
+    // instantiated the VM whenever it wants the mutator to stop. This means that if you never block
+    // but instead use the runloop to wait for events, then you could safely run in a mode where the
+    // mutator has permanent heap access (like the DOM does). If you have good event handling
+    // discipline (i.e. you don't block the runloop) then you can be sure that stopIfNecessary() will
+    // already be called for you at the right times.
+    void stopIfNecessary();
+    
 #if USE(CF)
     CFRunLoopRef runLoop() const { return m_runLoop.get(); }
+    JS_EXPORT_PRIVATE void setRunLoop(CFRunLoopRef);
 #endif // USE(CF)
 
 private:
@@ -296,13 +359,15 @@ private:
     friend class HeapStatistics;
     friend class VM;
     friend class WeakSet;
+
+    class Thread;
+    friend class Thread;
+
     template<typename T> friend void* allocateCell(Heap&);
     template<typename T> friend void* allocateCell(Heap&, size_t);
     template<typename T> friend void* allocateCell(Heap&, GCDeferralContext*);
     template<typename T> friend void* allocateCell(Heap&, GCDeferralContext*, size_t);
 
-    void collectWithoutAnySweep(Optional<CollectionScope> = Nullopt);
-
     void* allocateWithDestructor(size_t); // For use with objects with destructors.
     void* allocateWithoutDestructor(size_t); // For use with objects without destructors.
     void* allocateWithDestructor(GCDeferralContext*, size_t);
@@ -319,9 +384,42 @@ private:
     JS_EXPORT_PRIVATE bool isValidAllocation(size_t);
     JS_EXPORT_PRIVATE void reportExtraMemoryAllocatedSlowCase(size_t);
     JS_EXPORT_PRIVATE void deprecatedReportExtraMemorySlowCase(size_t);
-
-    void collectImpl(Optional<CollectionScope>, void* stackOrigin, void* stackTop, MachineThreads::RegisterState&);
-
+    
+    bool shouldCollectInThread(const LockHolder&);
+    void collectInThread();
+    
+    void stopTheWorld();
+    void resumeTheWorld();
+    
+    void stopTheMutator();
+    void resumeTheMutator();
+    
+    void stopIfNecessarySlow();
+    bool stopIfNecessarySlow(unsigned extraStateBits);
+    
+    template<typename Func>
+    void waitForCollector(const Func&);
+    
+    JS_EXPORT_PRIVATE void acquireAccessSlow();
+    JS_EXPORT_PRIVATE void releaseAccessSlow();
+    
+    bool handleGCDidJIT(unsigned);
+    bool handleNeedFinalize(unsigned);
+    void handleGCDidJIT();
+    void handleNeedFinalize();
+    
+    void setGCDidJIT();
+    void setNeedFinalize();
+    void waitWhileNeedFinalize();
+    
+    unsigned setMutatorWaiting();
+    void clearMutatorWaiting();
+    void notifyThreadStopping(const LockHolder&);
+    
+    typedef uint64_t Ticket;
+    Ticket requestCollection(Optional<CollectionScope>);
+    void waitForCollection(Ticket);
+    
     void suspendCompilerThreads();
     void willStartCollection(Optional<CollectionScope>);
     void flushOldStructureIDTables();
@@ -329,8 +427,8 @@ private:
     void stopAllocation();
     void prepareForMarking();
     
-    void markRoots(double gcStartTime, void* stackOrigin, void* stackTop, MachineThreads::RegisterState&);
-    void gatherStackRoots(ConservativeRoots&, void* stackOrigin, void* stackTop, MachineThreads::RegisterState&);
+    void markRoots(double gcStartTime);
+    void gatherStackRoots(ConservativeRoots&);
     void gatherJSStackRoots(ConservativeRoots&);
     void gatherScratchBufferRoots(ConservativeRoots&);
     void beginMarking();
@@ -369,6 +467,7 @@ private:
     void zombifyDeadObjects();
     void gatherExtraHeapSnapshotData(HeapProfiler&);
     void removeDeadHeapSnapshotNodes(HeapProfiler&);
+    void finalize();
     void sweepLargeAllocations();
     
     void sweepAllLogicallyEmptyWeakBlocks();
@@ -403,6 +502,7 @@ private:
     size_t m_totalBytesVisitedThisCycle;
     
     Optional<CollectionScope> m_collectionScope;
+    Optional<CollectionScope> m_lastCollectionScope;
     MutatorState m_mutatorState { MutatorState::Running };
     StructureIDTable m_structureIDTable;
     MarkedSpace m_objectSpace;
@@ -453,12 +553,12 @@ private:
 #endif // USE(CF)
     RefPtr<FullGCActivityCallback> m_fullActivityCallback;
     RefPtr<GCActivityCallback> m_edenActivityCallback;
-    std::unique_ptr<IncrementalSweeper> m_sweeper;
+    RefPtr<IncrementalSweeper> m_sweeper;
+    RefPtr<StopIfNecessaryTimer> m_stopIfNecessaryTimer;
 
     Vector<HeapObserver*> m_observers;
 
     unsigned m_deferralDepth;
-    Vector<DFG::Worklist*> m_suspendedCompilerWorklists;
 
     std::unique_ptr<HeapVerifier> m_verifier;
 
@@ -490,6 +590,24 @@ private:
     size_t m_blockBytesAllocated { 0 };
     size_t m_externalMemorySize { 0 };
 #endif
+    
+    static const unsigned shouldStopBit = 1u << 0u;
+    static const unsigned stoppedBit = 1u << 1u;
+    static const unsigned hasAccessBit = 1u << 2u;
+    static const unsigned gcDidJITBit = 1u << 3u; // Set when the GC did some JITing, so on resume we need to cpuid.
+    static const unsigned needFinalizeBit = 1u << 4u;
+    static const unsigned mutatorWaitingBit = 1u << 5u; // Allows the mutator to use this as a condition variable.
+    Atomic<unsigned> m_worldState;
+    bool m_collectorBelievesThatTheWorldIsStopped { false };
+    
+    Deque<Optional<CollectionScope>> m_requests;
+    Ticket m_lastServedTicket { 0 };
+    Ticket m_lastGrantedTicket { 0 };
+    bool m_threadShouldStop { false };
+    bool m_threadIsStopping { false };
+    Box<Lock> m_threadLock;
+    RefPtr<AutomaticThreadCondition> m_threadCondition; // The mutator must not wait on this. It would cause a deadlock.
+    RefPtr<AutomaticThread> m_thread;
 };
 
 } // namespace JSC
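The ticket fields added above (m_requests, m_lastServedTicket, m_lastGrantedTicket) implement the DMV-style handshake described in the changelog. The following standalone sketch models that protocol; requestCollection() and waitForCollection() mirror the names in the patch, but serveOne() and the locking details are illustrative assumptions, not the JSC code:

```cpp
#include <cassert>
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <thread>

// Simplified model of the ticket-based GC scheduling: each request gets a
// ticket from a "last granted" counter, and the collector bumps a
// "last served" counter as it completes collections.
class TicketedCollector {
public:
    using Ticket = uint64_t;

    // Mutator side: record a collection request and get a ticket for it.
    Ticket requestCollection()
    {
        std::lock_guard<std::mutex> lock(m_lock);
        return ++m_lastGranted;
    }

    // Mutator side: block until the collector has served this ticket.
    void waitForCollection(Ticket ticket)
    {
        std::unique_lock<std::mutex> lock(m_lock);
        m_condition.wait(lock, [&] { return m_lastServed >= ticket; });
    }

    // Collector side: mark one requested collection as complete.
    void serveOne()
    {
        {
            std::lock_guard<std::mutex> lock(m_lock);
            ++m_lastServed;
        }
        m_condition.notify_all();
    }

    bool hasPendingRequests()
    {
        std::lock_guard<std::mutex> lock(m_lock);
        return m_lastServed < m_lastGranted;
    }

private:
    std::mutex m_lock;
    std::condition_variable m_condition;
    Ticket m_lastGranted { 0 };
    Ticket m_lastServed { 0 };
};
```

A synchronous collection in this model is simply `waitForCollection(requestCollection())`, which is why collectSync() waits out any backlog of earlier tickets before returning.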
index 16aa1ef..1fa9d85 100644
@@ -51,6 +51,34 @@ inline Heap* Heap::heap(const JSValue v)
     return heap(v.asCell());
 }
 
+inline bool Heap::hasHeapAccess() const
+{
+    return m_worldState.load() & hasAccessBit;
+}
+
+inline bool Heap::mutatorIsStopped() const
+{
+    unsigned state = m_worldState.load();
+    bool shouldStop = state & shouldStopBit;
+    bool stopped = state & stoppedBit;
+    // I only got it right when I considered all four configurations of shouldStop/stopped:
+    // !shouldStop, !stopped: The GC has not requested that we stop and we aren't stopped, so we
+    //     should return false.
+    // !shouldStop, stopped: The mutator is still stopped but the GC is done and the GC has requested
+    //     that we resume, so we should return false.
+    // shouldStop, !stopped: The GC called stopTheWorld() but the mutator hasn't hit a safepoint yet.
+    //     The mutator should be able to do whatever it wants in this state, as if we were not
+    //     stopped. So return false.
+    // shouldStop, stopped: The GC requested stop the world and the mutator obliged. The world is
+    //     stopped, so return true.
+    return shouldStop & stopped;
+}
+
+inline bool Heap::collectorBelievesThatTheWorldIsStopped() const
+{
+    return m_collectorBelievesThatTheWorldIsStopped;
+}
+
 ALWAYS_INLINE bool Heap::isMarked(const void* rawCell)
 {
     ASSERT(mayBeGCThread() != GCThreadType::Helper);
@@ -310,4 +338,30 @@ inline void Heap::deprecatedReportExtraMemory(size_t size)
         deprecatedReportExtraMemorySlowCase(size);
 }
 
+inline void Heap::acquireAccess()
+{
+    if (m_worldState.compareExchangeWeak(0, hasAccessBit))
+        return;
+    acquireAccessSlow();
+}
+
+inline bool Heap::hasAccess() const
+{
+    return m_worldState.loadRelaxed() & hasAccessBit;
+}
+
+inline void Heap::releaseAccess()
+{
+    if (m_worldState.compareExchangeWeak(hasAccessBit, 0))
+        return;
+    releaseAccessSlow();
+}
+
+inline void Heap::stopIfNecessary()
+{
+    if (m_worldState.loadRelaxed() == hasAccessBit)
+        return;
+    stopIfNecessarySlow();
+}
+
 } // namespace JSC
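The inline fast paths above hinge on the m_worldState bit protocol. This sketch reproduces that protocol with the same bit names; it substitutes a strong CAS for the weak CAS plus out-of-line slow path used by the real code, so it is an illustration of the state machine, not the actual implementation:

```cpp
#include <atomic>
#include <cassert>

// Bit layout matching the patch's m_worldState flags.
constexpr unsigned shouldStopBit = 1u << 0;
constexpr unsigned stoppedBit = 1u << 1;
constexpr unsigned hasAccessBit = 1u << 2;

std::atomic<unsigned> worldState { 0 };

// Succeeds only when no bit is set: the mutator has no access yet and the
// GC is not asking it to stop. Any other state takes the (omitted) slow path.
bool acquireAccessFastPath()
{
    unsigned expected = 0;
    return worldState.compare_exchange_strong(expected, hasAccessBit);
}

// Succeeds only when hasAccessBit is the sole bit set.
bool releaseAccessFastPath()
{
    unsigned expected = hasAccessBit;
    return worldState.compare_exchange_strong(expected, 0);
}

// Mirrors Heap::mutatorIsStopped(): of the four shouldStop/stopped
// configurations, only "GC asked us to stop AND we obliged" returns true.
bool mutatorIsStopped()
{
    unsigned state = worldState.load();
    return (state & shouldStopBit) && (state & stoppedBit);
}
```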
index c44ccf6..87069b7 100644

@@ -99,11 +99,13 @@ void HeapTimer::timerDidFire(CFRunLoopTimerRef, void* contextPtr)
 void HeapTimer::scheduleTimer(double intervalInSeconds)
 {
     CFRunLoopTimerSetNextFireDate(m_timer.get(), CFAbsoluteTimeGetCurrent() + intervalInSeconds);
+    m_isScheduled = true;
 }
 
 void HeapTimer::cancelTimer()
 {
     CFRunLoopTimerSetNextFireDate(m_timer.get(), CFAbsoluteTimeGetCurrent() + s_decade);
+    m_isScheduled = false;
 }
 
 #elif PLATFORM(EFL)
@@ -152,11 +154,13 @@ void HeapTimer::scheduleTimer(double intervalInSeconds)
 
     double targetTime = currentTime() + intervalInSeconds;
     ecore_timer_interval_set(m_timer, targetTime);
+    m_isScheduled = true;
 }
 
 void HeapTimer::cancelTimer()
 {
     ecore_timer_freeze(m_timer);
+    m_isScheduled = false;
 }
 #elif USE(GLIB)
 
@@ -219,11 +223,13 @@ void HeapTimer::scheduleTimer(double intervalInSeconds)
     gint64 targetTime = currentTime + std::min<gint64>(G_MAXINT64 - currentTime, delayDuration.count());
     ASSERT(targetTime >= currentTime);
     g_source_set_ready_time(m_timer.get(), targetTime);
+    m_isScheduled = true;
 }
 
 void HeapTimer::cancelTimer()
 {
     g_source_set_ready_time(m_timer.get(), -1);
+    m_isScheduled = false;
 }
 #else
 HeapTimer::HeapTimer(VM* vm)
index 9fe66a7..b470f3a 100644
@@ -44,7 +44,7 @@ namespace JSC {
 class JSLock;
 class VM;
 
-class HeapTimer {
+class HeapTimer : public ThreadSafeRefCounted<HeapTimer> {
 public:
     HeapTimer(VM*);
 #if USE(CF)
@@ -56,6 +56,7 @@ public:
 
     void scheduleTimer(double intervalInSeconds);
     void cancelTimer();
+    bool isScheduled() const { return m_isScheduled; }
 
 #if USE(CF)
     JS_EXPORT_PRIVATE void setRunLoop(CFRunLoopRef);
@@ -65,6 +66,7 @@ protected:
     VM* m_vm;
 
     RefPtr<JSLock> m_apiLock;
+    bool m_isScheduled { false };
 #if USE(CF)
     static const CFTimeInterval s_decade;
 
index 9faef88..6df607f 100644
@@ -71,6 +71,8 @@ void IncrementalSweeper::doSweep(double sweepBeginTime)
 
 bool IncrementalSweeper::sweepNextBlock()
 {
+    m_vm->heap.stopIfNecessary();
+
     MarkedBlock::Handle* block = nullptr;
     
     for (; m_currentAllocator; m_currentAllocator = m_currentAllocator->nextAllocator()) {
index dfede68..a84830b 100644
@@ -34,7 +34,6 @@ class Heap;
 class MarkedAllocator;
 
 class IncrementalSweeper : public HeapTimer {
-    WTF_MAKE_FAST_ALLOCATED;
 public:
     JS_EXPORT_PRIVATE explicit IncrementalSweeper(Heap*);
 
index a1fc306..15c85ad 100644
@@ -1,5 +1,5 @@
 /*
- *  Copyright (C) 2003-2009, 2015 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003-2009, 2015-2016 Apple Inc. All rights reserved.
  *  Copyright (C) 2007 Eric Seidel <eric@webkit.org>
  *  Copyright (C) 2009 Acision BV. All rights reserved.
  *
@@ -301,16 +301,6 @@ void MachineThreads::removeThreadIfFound(PlatformThread platformThread)
     }
 }
 
-SUPPRESS_ASAN
-void MachineThreads::gatherFromCurrentThread(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, void* stackOrigin, void* stackTop, RegisterState& calleeSavedRegisters)
-{
-    void* registersBegin = &calleeSavedRegisters;
-    void* registersEnd = reinterpret_cast<void*>(roundUpToMultipleOf<sizeof(void*)>(reinterpret_cast<uintptr_t>(&calleeSavedRegisters + 1)));
-    conservativeRoots.add(registersBegin, registersEnd, jitStubRoutines, codeBlocks);
-
-    conservativeRoots.add(stackTop, stackOrigin, jitStubRoutines, codeBlocks);
-}
-
 MachineThreads::Thread::Thread(const PlatformThread& platThread, void* base, void* end)
     : platformThread(platThread)
     , stackBase(base)
@@ -1018,10 +1008,8 @@ static void growBuffer(size_t size, void** buffer, size_t* capacity)
     *buffer = fastMalloc(*capacity);
 }
 
-void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks, void* stackOrigin, void* stackTop, RegisterState& calleeSavedRegisters)
+void MachineThreads::gatherConservativeRoots(ConservativeRoots& conservativeRoots, JITStubRoutineSet& jitStubRoutines, CodeBlockSet& codeBlocks)
 {
-    gatherFromCurrentThread(conservativeRoots, jitStubRoutines, codeBlocks, stackOrigin, stackTop, calleeSavedRegisters);
-
     size_t size;
     size_t capacity = 0;
     void* buffer = nullptr;
index a16851a..07b40dc 100644
@@ -1,7 +1,7 @@
 /*
  *  Copyright (C) 1999-2000 Harri Porten (porten@kde.org)
  *  Copyright (C) 2001 Peter Kelly (pmk@post.com)
- *  Copyright (C) 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2015 Apple Inc. All rights reserved.
+ *  Copyright (C) 2003-2009, 2015-2016 Apple Inc. All rights reserved.
  *
  *  This library is free software; you can redistribute it and/or
  *  modify it under the terms of the GNU Lesser General Public
@@ -65,7 +65,7 @@ public:
     MachineThreads(Heap*);
     ~MachineThreads();
 
-    void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, void* stackOrigin, void* stackTop, RegisterState& calleeSavedRegisters);
+    void gatherConservativeRoots(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&);
 
     JS_EXPORT_PRIVATE void addCurrentThread(); // Only needs to be called by clients that can use the same heap from multiple threads.
 
@@ -145,8 +145,6 @@ public:
     Thread* machineThreadForCurrentThread();
 
 private:
-    void gatherFromCurrentThread(ConservativeRoots&, JITStubRoutineSet&, CodeBlockSet&, void* stackOrigin, void* stackTop, RegisterState& calleeSavedRegisters);
-
     void tryCopyOtherThreadStack(Thread*, void*, size_t capacity, size_t*);
     bool tryCopyOtherThreadStacks(LockHolder&, void*, size_t capacity, size_t*);
 
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2012 Apple Inc. All rights reserved.
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
  * THE POSSIBILITY OF SUCH DAMAGE.
  */
 
-#ifndef WebSafeIncrementalSweeperIOS_h
-#define WebSafeIncrementalSweeperIOS_h
+#pragma once
 
-#include "WebCoreThread.h"
-#include <JavaScriptCore/IncrementalSweeper.h>
+#include "Heap.h"
 
-namespace WebCore {
+namespace JSC {
 
-class WebSafeIncrementalSweeper final : public JSC::IncrementalSweeper {
+// Almost all of the VM's code runs with "heap access". This means that the GC thread believes that
+// the VM is messing with the heap in a way that would be unsafe for certain phases of the collector,
+// like the weak reference fixpoint, stack scanning, and changing barrier modes. However, many long
+// running operations inside the VM don't require heap access. For example, memcpying a typed array
+// if a reference to it is on the stack is totally fine without heap access. Blocking on a futex is
+// also fine without heap access. Releasing heap access for long-running code (in the case of futex
+// wait, possibly infinitely long-running) ensures that the GC can finish a collection cycle while
+// you are waiting.
+class ReleaseHeapAccessScope {
 public:
-    explicit WebSafeIncrementalSweeper(JSC::Heap* heap)
-        : JSC::IncrementalSweeper(heap)
+    ReleaseHeapAccessScope(Heap& heap)
+        : m_heap(heap)
     {
-        setRunLoop(WebThreadRunLoop());
+        m_heap.releaseAccess();
+    }
+    
+    ~ReleaseHeapAccessScope()
+    {
+        m_heap.acquireAccess();
     }
 
-    ~WebSafeIncrementalSweeper() override { }
-
+private:
+    Heap& m_heap;
 };
 
-} // namespace WebCore
+} // namespace JSC
 
-#endif // WebSafeIncrementalSweeperIOS_h
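ReleaseHeapAccessScope is a plain RAII pairing of releaseAccess()/acquireAccess(). The model below shows the same shape against a FakeHeap stand-in (an assumption for illustration, not a JSC type), so the invariant is easy to see: access is dropped for exactly the dynamic extent of the scope and restored on exit, even on early return:

```cpp
#include <cassert>

// Stand-in for JSC::Heap, tracking only the access bit.
struct FakeHeap {
    bool hasAccess { true };
    void releaseAccess() { hasAccess = false; }
    void acquireAccess() { hasAccess = true; }
};

// Releases heap access on construction and re-acquires it on destruction,
// mirroring ReleaseHeapAccessScope.
class ReleaseAccessScope {
public:
    explicit ReleaseAccessScope(FakeHeap& heap)
        : m_heap(heap)
    {
        m_heap.releaseAccess();
    }

    ~ReleaseAccessScope()
    {
        m_heap.acquireAccess();
    }

private:
    FakeHeap& m_heap;
};
```

This is the pattern the atomics changes below use around ParkingLot::parkConditionally(), so a potentially unbounded futex wait cannot block the collector.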
diff --git a/Source/JavaScriptCore/heap/StopIfNecessaryTimer.cpp b/Source/JavaScriptCore/heap/StopIfNecessaryTimer.cpp
new file mode 100644 (file)
index 0000000..6e3176c
--- /dev/null
@@ -0,0 +1,55 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#include "config.h"
+#include "StopIfNecessaryTimer.h"
+
+#include "JSCInlines.h"
+
+namespace JSC {
+
+StopIfNecessaryTimer::StopIfNecessaryTimer(VM* vm)
+    : HeapTimer(vm)
+{
+}
+
+void StopIfNecessaryTimer::doWork()
+{
+    cancelTimer();
+    WTF::storeStoreFence();
+    m_vm->heap.stopIfNecessary();
+}
+
+void StopIfNecessaryTimer::scheduleSoon()
+{
+    if (isScheduled()) {
+        WTF::loadLoadFence();
+        return;
+    }
+    scheduleTimer(0);
+}
+
+} // namespace JSC
+
diff --git a/Source/JavaScriptCore/heap/StopIfNecessaryTimer.h b/Source/JavaScriptCore/heap/StopIfNecessaryTimer.h
new file mode 100644 (file)
index 0000000..a683184
--- /dev/null
@@ -0,0 +1,44 @@
+/*
+ * Copyright (C) 2016 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. ``AS IS'' AND ANY
+ * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL APPLE INC. OR
+ * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+ * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+ * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+ * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+ * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
+ */
+
+#pragma once
+
+#include "HeapTimer.h"
+
+namespace JSC {
+
+class Heap;
+
+class StopIfNecessaryTimer : public HeapTimer {
+public:
+    explicit StopIfNecessaryTimer(VM*);
+    
+    void doWork() override;
+    
+    void scheduleSoon();
+};
+
+} // namespace JSC
+
index 723d70c..f688249 100644
@@ -455,6 +455,7 @@ void InspectorDebuggerAgent::resolveBreakpoint(const Script& script, JSC::Breakp
 
 void InspectorDebuggerAgent::setBreakpoint(JSC::Breakpoint& breakpoint, bool& existing)
 {
+    JSC::JSLockHolder locker(m_scriptDebugServer.vm());
     m_scriptDebugServer.setBreakpoint(breakpoint, existing);
 }
 
@@ -469,6 +470,7 @@ void InspectorDebuggerAgent::removeBreakpoint(ErrorString&, const String& breakp
         for (auto& action : breakpointActions)
             m_injectedScriptManager.releaseObjectGroup(objectGroupForBreakpointAction(action));
 
+        JSC::JSLockHolder locker(m_scriptDebugServer.vm());
         m_scriptDebugServer.removeBreakpointActions(breakpointID);
         m_scriptDebugServer.removeBreakpoint(breakpointID);
     }
@@ -560,6 +562,7 @@ void InspectorDebuggerAgent::schedulePauseOnNextStatement(DebuggerFrontendDispat
 
     m_breakReason = breakReason;
     m_breakAuxData = WTFMove(data);
+    JSC::JSLockHolder locker(m_scriptDebugServer.vm());
     m_scriptDebugServer.setPauseOnNextStatement(true);
 }
 
@@ -881,9 +884,12 @@ void InspectorDebuggerAgent::clearInspectorBreakpointState()
 
 void InspectorDebuggerAgent::clearDebuggerBreakpointState()
 {
-    m_scriptDebugServer.clearBreakpointActions();
-    m_scriptDebugServer.clearBreakpoints();
-    m_scriptDebugServer.clearBlacklist();
+    {
+        JSC::JSLockHolder holder(m_scriptDebugServer.vm());
+        m_scriptDebugServer.clearBreakpointActions();
+        m_scriptDebugServer.clearBreakpoints();
+        m_scriptDebugServer.clearBlacklist();
+    }
 
     m_pausedScriptState = nullptr;
     m_currentCallStack = { };
index 1140d6a..2c22da3 100644
@@ -158,8 +158,9 @@ JITWorklist::~JITWorklist()
     UNREACHABLE_FOR_PLATFORM();
 }
 
-void JITWorklist::completeAllForVM(VM& vm)
+bool JITWorklist::completeAllForVM(VM& vm)
 {
+    bool result = false;
     DeferGC deferGC(vm.heap);
     for (;;) {
         Vector<RefPtr<Plan>, 32> myPlans;
@@ -186,12 +187,14 @@ void JITWorklist::completeAllForVM(VM& vm)
                 // If we don't find plans, then we're either done or we need to wait, depending on
                 // whether we found some unfinished plans.
                 if (!didFindUnfinishedPlan)
-                    return;
+                    return result;
                 
                 m_condition->wait(*m_lock);
             }
         }
         
+        RELEASE_ASSERT(!myPlans.isEmpty());
+        result = true;
         finalizePlans(myPlans);
     }
 }
index a9e9d9d..0d39f8c 100644
@@ -50,7 +50,7 @@ class JITWorklist {
 public:
     ~JITWorklist();
     
-    void completeAllForVM(VM&);
+    bool completeAllForVM(VM&); // Return true if any JIT work happened.
     void poll(VM&);
     
     void compileLater(CodeBlock*);
index 2b53ca1..e16b44f 100644
@@ -1677,14 +1677,14 @@ EncodedJSValue JSC_HOST_CALL functionGCAndSweep(ExecState* exec)
 EncodedJSValue JSC_HOST_CALL functionFullGC(ExecState* exec)
 {
     JSLockHolder lock(exec);
-    exec->heap()->collect(CollectionScope::Full);
+    exec->heap()->collectSync(CollectionScope::Full);
     return JSValue::encode(jsNumber(exec->heap()->sizeAfterLastFullCollection()));
 }
 
 EncodedJSValue JSC_HOST_CALL functionEdenGC(ExecState* exec)
 {
     JSLockHolder lock(exec);
-    exec->heap()->collect(CollectionScope::Eden);
+    exec->heap()->collectSync(CollectionScope::Eden);
     return JSValue::encode(jsNumber(exec->heap()->sizeAfterLastEdenCollection()));
 }
 
index ca7ed73..a389522 100644
@@ -29,6 +29,7 @@
 #include "JSCInlines.h"
 #include "JSTypedArrays.h"
 #include "ObjectPrototype.h"
+#include "ReleaseHeapAccessScope.h"
 #include "TypedArrayController.h"
 
 namespace JSC {
@@ -340,14 +341,18 @@ EncodedJSValue JSC_HOST_CALL atomicsFuncWait(ExecState* exec)
     }
     
     bool didPassValidation = false;
-    ParkingLot::ParkResult result = ParkingLot::parkConditionally(
-        ptr,
-        [&] () -> bool {
-            didPassValidation = WTF::atomicLoad(ptr) == expectedValue;
-            return didPassValidation;
-        },
-        [] () { },
-        timeout);
+    ParkingLot::ParkResult result;
+    {
+        ReleaseHeapAccessScope releaseHeapAccessScope(vm.heap);
+        result = ParkingLot::parkConditionally(
+            ptr,
+            [&] () -> bool {
+                didPassValidation = WTF::atomicLoad(ptr) == expectedValue;
+                return didPassValidation;
+            },
+            [] () { },
+            timeout);
+    }
     const char* resultString;
     if (!didPassValidation)
         resultString = "not-equal";
index bff881d..ae088c4 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2008, 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2008, 2015-2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -42,8 +42,9 @@
 #include "SuperSampler.h"
 #include "WriteBarrier.h"
 #include <mutex>
-#include <wtf/dtoa.h>
+#include <wtf/MainThread.h>
 #include <wtf/Threading.h>
+#include <wtf/dtoa.h>
 #include <wtf/dtoa/cached-powers.h>
 
 using namespace WTF;
@@ -57,6 +58,7 @@ void initializeThreading()
     std::call_once(initializeThreadingOnceFlag, []{
         WTF::double_conversion::initialize();
         WTF::initializeThreading();
+        WTF::initializeGCThreads();
         Options::initialize();
         if (Options::recordGCPauseTimes())
             HeapStatistics::initialize();
index 82a2171..d90386c 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2005, 2008, 2012, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2005, 2008, 2012, 2014, 2016 Apple Inc. All rights reserved.
  *
  * This library is free software; you can redistribute it and/or
  * modify it under the terms of the GNU Library General Public
@@ -128,18 +128,25 @@ void JSLock::didAcquireLock()
     // FIXME: What should happen to the per-thread identifier table if we don't have a VM?
     if (!m_vm)
         return;
+    
+    WTFThreadData& threadData = wtfThreadData();
+    ASSERT(!m_entryAtomicStringTable);
+    m_entryAtomicStringTable = threadData.setCurrentAtomicStringTable(m_vm->atomicStringTable());
+    ASSERT(m_entryAtomicStringTable);
+
+    if (m_vm->heap.hasAccess())
+        m_shouldReleaseHeapAccess = false;
+    else {
+        m_vm->heap.acquireAccess();
+        m_shouldReleaseHeapAccess = true;
+    }
 
     RELEASE_ASSERT(!m_vm->stackPointerAtVMEntry());
     void* p = &p; // A proxy for the current stack pointer.
     m_vm->setStackPointerAtVMEntry(p);
 
-    WTFThreadData& threadData = wtfThreadData();
     m_vm->setLastStackTop(threadData.savedLastStackTop());
 
-    ASSERT(!m_entryAtomicStringTable);
-    m_entryAtomicStringTable = threadData.setCurrentAtomicStringTable(m_vm->atomicStringTable());
-    ASSERT(m_entryAtomicStringTable);
-
     m_vm->heap.machineThreads().addCurrentThread();
 
 #if ENABLE(SAMPLING_PROFILER)
@@ -167,7 +174,7 @@ void JSLock::unlock(intptr_t unlockCount)
     m_lockCount -= unlockCount;
 
     if (!m_lockCount) {
-
+        
         if (!m_hasExclusiveThread) {
             m_ownerThreadID = std::thread::id();
             m_lock.unlock();
@@ -183,6 +190,9 @@ void JSLock::willReleaseLock()
 
         vm->heap.releaseDelayedReleasedObjects();
         vm->setStackPointerAtVMEntry(nullptr);
+        
+        if (m_shouldReleaseHeapAccess)
+            vm->heap.releaseAccess();
     }
 
     if (m_entryAtomicStringTable) {
index 1d6c736..75ee783 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2005, 2008, 2009, 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2005, 2008, 2009, 2014, 2016 Apple Inc. All rights reserved.
  *
  * This library is free software; you can redistribute it and/or
  * modify it under the terms of the GNU Library General Public
@@ -136,6 +136,7 @@ private:
     intptr_t m_lockCount;
     unsigned m_lockDropDepth;
     bool m_hasExclusiveThread;
+    bool m_shouldReleaseHeapAccess;
     VM* m_vm;
     AtomicStringTable* m_entryAtomicStringTable; 
 };
index 81239c5..c31c179 100644 (file)
@@ -354,7 +354,7 @@ VM::~VM()
     // Make sure concurrent compilations are done, but don't install them, since there is
     // no point to doing so.
     for (unsigned i = DFG::numberOfWorklists(); i--;) {
-        if (DFG::Worklist* worklist = DFG::worklistForIndexOrNull(i)) {
+        if (DFG::Worklist* worklist = DFG::existingWorklistForIndexOrNull(i)) {
             worklist->removeNonCompilingPlansForVM(*this);
             worklist->waitUntilAllPlansForVMAreReady(*this);
             worklist->removeAllReadyPlansForVM(*this);
index 87f571f..ee4e07b 100644 (file)
@@ -130,7 +130,7 @@ void JSDollarVMPrototype::edenGC(ExecState* exec)
 {
     if (!ensureCurrentThreadOwnsJSLock(exec))
         return;
-    exec->heap()->collect(CollectionScope::Eden);
+    exec->heap()->collectSync(CollectionScope::Eden);
 }
 
 static EncodedJSValue JSC_HOST_CALL functionEdenGC(ExecState* exec)
index 8043bd2..2fabce8 100644 (file)
@@ -1,3 +1,25 @@
+2016-11-02  Filip Pizlo  <fpizlo@apple.com>
+
+        The GC should be in a thread
+        https://bugs.webkit.org/show_bug.cgi?id=163562
+
+        Reviewed by Geoffrey Garen and Andreas Kling.
+        
+        In support of the concurrent collector, this fixes some bugs and adds a few features in WTF.
+
+        * wtf/Atomics.h: The GC may do work on behalf of the JIT. If it does, the main thread needs to execute a cross-modifying code fence. This is cpuid on x86 and I believe it's isb on ARM (on PPC it would have been isync).
+        (WTF::arm_isb):
+        (WTF::crossModifyingCodeFence):
+        (WTF::x86_ortop):
+        (WTF::x86_cpuid):
+        * wtf/AutomaticThread.cpp: I accidentally had AutomaticThreadCondition inherit from ThreadSafeRefCounted<AutomaticThread> [sic]. This never crashed before because all of our prior AutomaticThreadConditions were immortal.
+        (WTF::AutomaticThread::AutomaticThread):
+        (WTF::AutomaticThread::~AutomaticThread):
+        (WTF::AutomaticThread::start):
+        * wtf/AutomaticThread.h:
+        * wtf/MainThread.cpp: Need to allow initializeGCThreads() to be called separately because it's now more than just a debugging thing.
+        (WTF::initializeGCThreads):
+
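The AutomaticThread.h bullet above describes a classic CRTP ref-counting bug: `ThreadSafeRefCounted<T>` destroys the object through `static_cast<T*>(this)`, so `T` must name the deriving class. A simplified sketch (not WTF's actual implementation) of why the wrong `T` is dangerous and what the corrected inheritance looks like:

```cpp
#include <cassert>

// Minimal CRTP ref-count in the style of WTF::ThreadSafeRefCounted. deref()
// casts `this` to T* before deleting, so T must be the class that derives
// from us; passing the wrong T (as the bug above did) deletes through the
// wrong type.
template<typename T>
class RefCountedBase {
public:
    void ref() { ++m_count; }
    void deref()
    {
        if (!--m_count)
            delete static_cast<T*>(this); // only correct if T is our dynamic type
    }
    unsigned refCount() const { return m_count; }

protected:
    ~RefCountedBase() = default;

private:
    unsigned m_count { 1 };
};

// Correct: the template argument names the deriving class itself.
class Condition : public RefCountedBase<Condition> {
public:
    static bool& destroyed() { static bool d = false; return d; }
    ~Condition() { destroyed() = true; }
};
```

The bug never crashed before only because every prior `AutomaticThreadCondition` was immortal, so the mistyped delete path never ran.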
 2016-11-02  Carlos Alberto Lopez Perez  <clopez@igalia.com>
 
         Clean wrong comment about compositing on the UI process.
index 4a88ea4..9b9a2f7 100644 (file)
@@ -35,6 +35,7 @@ extern "C" void _ReadWriteBarrier(void);
 #pragma intrinsic(_ReadWriteBarrier)
 #endif
 #include <windows.h>
+#include <intrin.h>
 #endif
 
 namespace WTF {
@@ -53,6 +54,8 @@ struct Atomic {
     // is usually not high enough to justify the risk.
 
     ALWAYS_INLINE T load(std::memory_order order = std::memory_order_seq_cst) const { return value.load(order); }
+    
+    ALWAYS_INLINE T loadRelaxed() const { return load(std::memory_order_relaxed); }
 
     ALWAYS_INLINE void store(T desired, std::memory_order order = std::memory_order_seq_cst) { value.store(desired, order); }
 
@@ -200,22 +203,24 @@ inline void arm_dmb_st()
     asm volatile("dmb ishst" ::: "memory");
 }
 
+inline void arm_isb()
+{
+    asm volatile("isb" ::: "memory");
+}
+
 inline void loadLoadFence() { arm_dmb(); }
 inline void loadStoreFence() { arm_dmb(); }
 inline void storeLoadFence() { arm_dmb(); }
 inline void storeStoreFence() { arm_dmb_st(); }
 inline void memoryBarrierAfterLock() { arm_dmb(); }
 inline void memoryBarrierBeforeUnlock() { arm_dmb(); }
+inline void crossModifyingCodeFence() { arm_isb(); }
 
 #elif CPU(X86) || CPU(X86_64)
 
 inline void x86_ortop()
 {
 #if OS(WINDOWS)
-    // I think that this does the equivalent of a dummy interlocked instruction,
-    // instead of using the 'mfence' instruction, at least according to MSDN. I
-    // know that it is equivalent for our purposes, but it would be good to
-    // investigate if that is actually better.
     MemoryBarrier();
 #elif CPU(X86_64)
     // This has acqrel semantics and is much cheaper than mfence. For example, in the JSC GC, using
@@ -226,12 +231,28 @@ inline void x86_ortop()
 #endif
 }
 
+inline void x86_cpuid()
+{
+#if OS(WINDOWS)
+    int info[4];
+    __cpuid(info, 0);
+#else
+    intptr_t a = 0, b, c, d;
+    asm volatile(
+        "cpuid"
+        : "+a"(a), "=b"(b), "=c"(c), "=d"(d)
+        :
+        : "memory");
+#endif
+}
+
 inline void loadLoadFence() { compilerFence(); }
 inline void loadStoreFence() { compilerFence(); }
 inline void storeLoadFence() { x86_ortop(); }
 inline void storeStoreFence() { compilerFence(); }
 inline void memoryBarrierAfterLock() { compilerFence(); }
 inline void memoryBarrierBeforeUnlock() { compilerFence(); }
+inline void crossModifyingCodeFence() { x86_cpuid(); }
 
 #else
 
@@ -241,6 +262,7 @@ inline void storeLoadFence() { std::atomic_thread_fence(std::memory_order_seq_cs
 inline void storeStoreFence() { std::atomic_thread_fence(std::memory_order_seq_cst); }
 inline void memoryBarrierAfterLock() { std::atomic_thread_fence(std::memory_order_seq_cst); }
 inline void memoryBarrierBeforeUnlock() { std::atomic_thread_fence(std::memory_order_seq_cst); }
+inline void crossModifyingCodeFence() { std::atomic_thread_fence(std::memory_order_seq_cst); } // Probably not strong enough.
 
 #endif
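The fallback hunk above uses a seq_cst fence when neither isb nor cpuid is available, and the patch itself flags that as probably not strong enough. A portable sketch of how such a fence would be used at a call site, with a trivial function pointer standing in for freshly patched JIT code:

```cpp
#include <atomic>
#include <cassert>

// Portable stand-in for the crossModifyingCodeFence() added above: on x86 the
// real fence is cpuid and on ARM it is isb; std::atomic_thread_fence(seq_cst)
// is the generic fallback the patch labels "probably not strong enough".
inline void crossModifyingCodeFenceFallback()
{
    std::atomic_thread_fence(std::memory_order_seq_cst);
}

// After another thread (e.g. the GC doing JIT work on our behalf) has modified
// code we are about to run, we execute the fence before jumping into it.
inline int runAfterPatch(int (*patchedFunction)())
{
    crossModifyingCodeFenceFallback();
    return patchedFunction();
}
```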
 
index 34bf18e..c7584ca 100644 (file)
@@ -78,7 +78,6 @@ void AutomaticThreadCondition::add(const LockHolder&, AutomaticThread* thread)
 
 void AutomaticThreadCondition::remove(const LockHolder&, AutomaticThread* thread)
 {
-    ASSERT(m_threads.contains(thread));
     m_threads.removeFirst(thread);
     ASSERT(!m_threads.contains(thread));
 }
@@ -92,11 +91,15 @@ AutomaticThread::AutomaticThread(const LockHolder& locker, Box<Lock> lock, RefPt
     : m_lock(lock)
     , m_condition(condition)
 {
+    if (verbose)
+        dataLog(RawPointer(this), ": Allocated AutomaticThread.\n");
     m_condition->add(locker, this);
 }
 
 AutomaticThread::~AutomaticThread()
 {
+    if (verbose)
+        dataLog(RawPointer(this), ": Deleting AutomaticThread.\n");
     LockHolder locker(*m_lock);
     
     // It's possible that we're in a waiting state with the thread shut down. This is a goofy way to
@@ -104,6 +107,16 @@ AutomaticThread::~AutomaticThread()
     m_condition->remove(locker, this);
 }
 
+bool AutomaticThread::tryStop(const LockHolder&)
+{
+    if (!m_isRunning)
+        return true;
+    if (m_hasUnderlyingThread)
+        return false;
+    m_isRunning = false;
+    return true;
+}
+
 void AutomaticThread::join()
 {
     LockHolder locker(*m_lock);
@@ -113,39 +126,44 @@ void AutomaticThread::join()
 
 class AutomaticThread::ThreadScope {
 public:
-    ThreadScope(AutomaticThread& thread)
+    ThreadScope(RefPtr<AutomaticThread> thread)
         : m_thread(thread)
     {
-        m_thread.threadDidStart();
+        m_thread->threadDidStart();
     }
     
     ~ThreadScope()
     {
-        m_thread.threadWillStop();
+        m_thread->threadWillStop();
+        
+        LockHolder locker(*m_thread->m_lock);
+        m_thread->m_hasUnderlyingThread = false;
     }
 
 private:
-    AutomaticThread& m_thread;
+    RefPtr<AutomaticThread> m_thread;
 };
 
 void AutomaticThread::start(const LockHolder&)
 {
+    RELEASE_ASSERT(m_isRunning);
+    
     RefPtr<AutomaticThread> preserveThisForThread = this;
     
+    m_hasUnderlyingThread = true;
+    
     ThreadIdentifier thread = createThread(
         "WTF::AutomaticThread",
         [=] () {
             if (verbose)
-                dataLog("Running automatic thread!\n");
-            RefPtr<AutomaticThread> preserveThisInThread = preserveThisForThread;
+                dataLog(RawPointer(this), ": Running automatic thread!\n");
+            ThreadScope threadScope(preserveThisForThread);
             
-            {
+            if (!ASSERT_DISABLED) {
                 LockHolder locker(*m_lock);
                 ASSERT(!m_condition->contains(locker, this));
             }
             
-            ThreadScope threadScope(*this);
-            
             auto stop = [&] (const LockHolder&) {
                 m_isRunning = false;
                 m_isRunningCondition.notifyAll();
@@ -167,7 +185,7 @@ void AutomaticThread::start(const LockHolder&)
                             m_condition->m_condition.waitUntilMonotonicClockSeconds(*m_lock, timeout);
                         if (!awokenByNotify) {
                             if (verbose)
-                                dataLog("Going to sleep!\n");
+                                dataLog(RawPointer(this), ": Going to sleep!\n");
                             m_condition->add(locker, this);
                             return;
                         }
index 1a15334..7680616 100644 (file)
@@ -69,7 +69,7 @@ namespace WTF {
 
 class AutomaticThread;
 
-class AutomaticThreadCondition : public ThreadSafeRefCounted<AutomaticThread> {
+class AutomaticThreadCondition : public ThreadSafeRefCounted<AutomaticThreadCondition> {
 public:
     static WTF_EXPORT_PRIVATE RefPtr<AutomaticThreadCondition> create();
     
@@ -112,6 +112,15 @@ public:
     // AutomaticThread).
     virtual ~AutomaticThread();
     
+    // Sometimes it's possible to optimize for the case that there is no underlying thread.
+    bool hasUnderlyingThread(const LockHolder&) const { return m_hasUnderlyingThread; }
+    
+    // This attempts to quickly stop the thread. This will succeed if the thread happens to not be
+    // running. Returns true if the thread has been stopped. A good idiom for stopping your automatic
+    // thread is to first try this, and if that doesn't work, to tell the thread using your own
+    // mechanism (set some flag and then notify the condition).
+    bool tryStop(const LockHolder&);
+    
     void join();
     
 protected:
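The `tryStop()` comment above recommends an idiom: first try the fast path under the lock, and only if the thread is actually running fall back to your own flag-and-notify mechanism. A sketch of that idiom with a plain struct standing in for `AutomaticThread`'s internal state:

```cpp
#include <cassert>
#include <mutex>

// Sketch of the stopping idiom described above. tryStop() succeeds only when
// no underlying thread exists; otherwise the caller falls back to its own
// mechanism (set a flag, then notify the thread's condition).
struct AutomaticThreadLike {
    std::mutex lock;
    bool isRunning { true };
    bool hasUnderlyingThread { false };
    bool shouldStop { false }; // the caller's own mechanism

    bool tryStop() // caller must hold `lock`
    {
        if (!isRunning)
            return true;
        if (hasUnderlyingThread)
            return false;
        isRunning = false;
        return true;
    }
};

inline void requestStop(AutomaticThreadLike& thread)
{
    std::lock_guard<std::mutex> locker(thread.lock);
    if (thread.tryStop())
        return;
    // Fast path failed: tell the worker ourselves. The real code would also
    // notify the thread's condition here so it wakes up and sees the flag.
    thread.shouldStop = true;
}
```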
@@ -151,9 +160,6 @@ protected:
     enum class WorkResult { Continue, Stop };
     virtual WorkResult work() = 0;
     
-    class ThreadScope;
-    friend class ThreadScope;
-    
     // It's sometimes useful to allocate resources while the thread is running, and to destroy them
     // when the thread dies. These methods let you do this. You can override these methods, and you
     // can be sure that the default ones don't do anything (so you don't need a super call).
@@ -163,11 +169,15 @@ protected:
 private:
     friend class AutomaticThreadCondition;
     
+    class ThreadScope;
+    friend class ThreadScope;
+    
     void start(const LockHolder&);
     
     Box<Lock> m_lock;
     RefPtr<AutomaticThreadCondition> m_condition;
     bool m_isRunning { true };
+    bool m_hasUnderlyingThread { false };
     Condition m_isRunningCondition;
 };
 
index 4b026ac..9761a0f 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2013 Apple Inc. All rights reserved.
+ * Copyright (C) 2013, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
 
 namespace WTF {
 
-static ThreadSpecific<bool>* s_isCompilationThread;
+static ThreadSpecific<bool, CanBeGCThread::True>* s_isCompilationThread;
 
 static void initializeCompilationThreads()
 {
     static std::once_flag initializeCompilationThreadsOnceFlag;
     std::call_once(initializeCompilationThreadsOnceFlag, []{
-        s_isCompilationThread = new ThreadSpecific<bool>();
+        s_isCompilationThread = new ThreadSpecific<bool, CanBeGCThread::True>();
     });
 }
 
index 946f02e..d548fb7 100644 (file)
@@ -190,11 +190,16 @@ bool canAccessThreadLocalDataForThread(ThreadIdentifier threadId)
 }
 #endif
 
-static ThreadSpecific<Optional<GCThreadType>>* isGCThread;
+static ThreadSpecific<Optional<GCThreadType>, CanBeGCThread::True>* isGCThread;
 
 void initializeGCThreads()
 {
-    isGCThread = new ThreadSpecific<Optional<GCThreadType>>();
+    static std::once_flag flag;
+    std::call_once(
+        flag,
+        [] {
+            isGCThread = new ThreadSpecific<Optional<GCThreadType>, CanBeGCThread::True>();
+        });
 }
 
 void registerGCThread(GCThreadType type)
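The hunk above makes `initializeGCThreads()` idempotent via `std::call_once`, since it is now called from regular initialization paths rather than only from debugging code. The same shape in isolation, with a counter standing in for allocating the `ThreadSpecific`:

```cpp
#include <cassert>
#include <mutex>

// The same idempotent-initialization shape as initializeGCThreads() above:
// std::call_once guarantees the body runs exactly once no matter how many
// callers (or threads) race to it.
inline int& initializationCount()
{
    static int count = 0;
    return count;
}

inline void initializeOnce()
{
    static std::once_flag flag;
    std::call_once(flag, [] {
        ++initializationCount(); // stands in for `isGCThread = new ThreadSpecific<...>()`
    });
}
```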
index 402a18c..24e5f31 100644 (file)
@@ -68,7 +68,7 @@ inline bool isWebThread() { return isMainThread(); }
 inline bool isUIThread() { return isMainThread(); }
 #endif // USE(WEB_THREAD)
 
-void initializeGCThreads();
+WTF_EXPORT_PRIVATE void initializeGCThreads();
 
 enum class GCThreadType {
     Main,
index 4cb53bf..9286e6b 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2014 Apple Inc. All rights reserved.
+ * Copyright (C) 2014, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -28,6 +28,7 @@
 
 #include <type_traits>
 #include <wtf/Assertions.h>
+#include <wtf/PrintStream.h>
 #include <wtf/StdLibExtras.h>
 
 // WTF::Optional is a class based on std::optional, described here:
@@ -263,6 +264,15 @@ makeOptional(T&& value)
     return Optional<typename std::decay<T>::type>(std::forward<T>(value));
 }
 
+template<typename T>
+void printInternal(PrintStream& out, const Optional<T>& optional)
+{
+    if (optional)
+        out.print(*optional);
+    else
+        out.print("Nullopt");
+}
+
 } // namespace WTF
 
 using WTF::InPlace;
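The `printInternal()` overload added above prints an engaged optional's value and the token `Nullopt` otherwise. The same shape with `std::optional` and `std::ostream` standing in for `WTF::Optional` and `PrintStream`:

```cpp
#include <optional>
#include <sstream>
#include <string>
#include <cassert>

// Engaged optionals print their value; disengaged ones print "Nullopt",
// mirroring the printInternal() overload in the hunk above.
template<typename T>
void printOptional(std::ostream& out, const std::optional<T>& optional)
{
    if (optional)
        out << *optional;
    else
        out << "Nullopt";
}

template<typename T>
std::string optionalToString(const std::optional<T>& optional)
{
    std::ostringstream out;
    printOptional(out, optional);
    return out.str();
}
```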
index ef626e7..24d3c07 100644 (file)
@@ -447,12 +447,12 @@ ThreadData::~ThreadData()
 
 ThreadData* myThreadData()
 {
-    static ThreadSpecific<RefPtr<ThreadData>>* threadData;
+    static ThreadSpecific<RefPtr<ThreadData>, CanBeGCThread::True>* threadData;
     static std::once_flag initializeOnce;
     std::call_once(
         initializeOnce,
         [] {
-            threadData = new ThreadSpecific<RefPtr<ThreadData>>();
+            threadData = new ThreadSpecific<RefPtr<ThreadData>, CanBeGCThread::True>();
         });
     
     RefPtr<ThreadData>& result = **threadData;
index 4526bfd..f0904fc 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2008 Apple Inc. All rights reserved.
+ * Copyright (C) 2008, 2016 Apple Inc. All rights reserved.
  * Copyright (C) 2009 Jian Li <jianli@chromium.org>
  * Copyright (C) 2012 Patrick Gansterer <paroga@paroga.com>
  *
@@ -42,6 +42,7 @@
 #ifndef WTF_ThreadSpecific_h
 #define WTF_ThreadSpecific_h
 
+#include <wtf/MainThread.h>
 #include <wtf/Noncopyable.h>
 #include <wtf/StdLibExtras.h>
 
@@ -59,7 +60,12 @@ namespace WTF {
 #define THREAD_SPECIFIC_CALL
 #endif
 
-template<typename T> class ThreadSpecific {
+enum class CanBeGCThread {
+    False,
+    True
+};
+
+template<typename T, CanBeGCThread canBeGCThread = CanBeGCThread::False> class ThreadSpecific {
     WTF_MAKE_NONCOPYABLE(ThreadSpecific);
 public:
     ThreadSpecific();
@@ -86,10 +92,10 @@ private:
     struct Data {
         WTF_MAKE_NONCOPYABLE(Data);
     public:
-        Data(T* value, ThreadSpecific<T>* owner) : value(value), owner(owner) {}
+        Data(T* value, ThreadSpecific<T, canBeGCThread>* owner) : value(value), owner(owner) {}
 
         T* value;
-        ThreadSpecific<T>* owner;
+        ThreadSpecific<T, canBeGCThread>* owner;
     };
 
 #if USE(PTHREADS)
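The `CanBeGCThread` template parameter above bakes a policy into each `ThreadSpecific` instantiation: slots not marked `CanBeGCThread::True` must never be touched lazily from a GC thread, enforced by `RELEASE_ASSERT`. A simplified sketch of that pattern, where `mayBeGCThread()` and the slot's storage are stand-ins for the real thread-local machinery:

```cpp
#include <cassert>

enum class CanBeGCThread { False, True };

// Hypothetical stand-ins: in the real patch, mayBeGCThread() consults
// per-thread state set up by registerGCThread().
inline bool& gcThreadFlag() { static thread_local bool flag = false; return flag; }
inline bool mayBeGCThread() { return gcThreadFlag(); }

// The template parameter makes the policy part of the type, so an
// instantiation that is not GC-safe refuses access from a GC thread instead
// of silently creating per-thread state there.
template<typename T, CanBeGCThread canBeGCThread = CanBeGCThread::False>
class ThreadSpecificSlot {
public:
    bool trySet(T value)
    {
        if (canBeGCThread == CanBeGCThread::False && mayBeGCThread())
            return false; // the real code does RELEASE_ASSERT here
        m_value = value;
        m_isSet = true;
        return true;
    }
    bool isSet() const { return m_isSet; }

private:
    T m_value {};       // stands in for the per-thread Data slot
    bool m_isSet { false };
};
```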
@@ -127,24 +133,28 @@ inline void* threadSpecificGet(ThreadSpecificKey key)
     return pthread_getspecific(key);
 }
 
-template<typename T>
-inline ThreadSpecific<T>::ThreadSpecific()
+template<typename T, CanBeGCThread canBeGCThread>
+inline ThreadSpecific<T, canBeGCThread>::ThreadSpecific()
 {
     int error = pthread_key_create(&m_key, destroy);
     if (error)
         CRASH();
 }
 
-template<typename T>
-inline T* ThreadSpecific<T>::get()
+template<typename T, CanBeGCThread canBeGCThread>
+inline T* ThreadSpecific<T, canBeGCThread>::get()
 {
     Data* data = static_cast<Data*>(pthread_getspecific(m_key));
-    return data ? data->value : 0;
+    if (data)
+        return data->value;
+    RELEASE_ASSERT(canBeGCThread == CanBeGCThread::True || !mayBeGCThread());
+    return nullptr;
 }
 
-template<typename T>
-inline void ThreadSpecific<T>::set(T* ptr)
+template<typename T, CanBeGCThread canBeGCThread>
+inline void ThreadSpecific<T, canBeGCThread>::set(T* ptr)
 {
+    RELEASE_ASSERT(canBeGCThread == CanBeGCThread::True || !mayBeGCThread());
     ASSERT(!get());
     pthread_setspecific(m_key, new Data(ptr, this));
 }
@@ -185,8 +195,8 @@ inline void* threadSpecificGet(ThreadSpecificKey key)
     return FlsGetValue(key);
 }
 
-template<typename T>
-inline ThreadSpecific<T>::ThreadSpecific()
+template<typename T, CanBeGCThread canBeGCThread>
+inline ThreadSpecific<T, canBeGCThread>::ThreadSpecific()
     : m_index(-1)
 {
     DWORD flsKey = FlsAlloc(destroy);
@@ -199,22 +209,26 @@ inline ThreadSpecific<T>::ThreadSpecific()
     flsKeys()[m_index] = flsKey;
 }
 
-template<typename T>
-inline ThreadSpecific<T>::~ThreadSpecific()
+template<typename T, CanBeGCThread canBeGCThread>
+inline ThreadSpecific<T, canBeGCThread>::~ThreadSpecific()
 {
     FlsFree(flsKeys()[m_index]);
 }
 
-template<typename T>
-inline T* ThreadSpecific<T>::get()
+template<typename T, CanBeGCThread canBeGCThread>
+inline T* ThreadSpecific<T, canBeGCThread>::get()
 {
     Data* data = static_cast<Data*>(FlsGetValue(flsKeys()[m_index]));
-    return data ? data->value : 0;
+    if (data)
+        return data->value;
+    RELEASE_ASSERT(canBeGCThread == CanBeGCThread::True || !mayBeGCThread());
+    return nullptr;
 }
 
-template<typename T>
-inline void ThreadSpecific<T>::set(T* ptr)
+template<typename T, CanBeGCThread canBeGCThread>
+inline void ThreadSpecific<T, canBeGCThread>::set(T* ptr)
 {
+    RELEASE_ASSERT(canBeGCThread == CanBeGCThread::True || !mayBeGCThread());
     ASSERT(!get());
     Data* data = new Data(ptr, this);
     FlsSetValue(flsKeys()[m_index], data);
@@ -224,8 +238,8 @@ inline void ThreadSpecific<T>::set(T* ptr)
 #error ThreadSpecific is not implemented for this platform.
 #endif
 
-template<typename T>
-inline void THREAD_SPECIFIC_CALL ThreadSpecific<T>::destroy(void* ptr)
+template<typename T, CanBeGCThread canBeGCThread>
+inline void THREAD_SPECIFIC_CALL ThreadSpecific<T, canBeGCThread>::destroy(void* ptr)
 {
     Data* data = static_cast<Data*>(ptr);
 
@@ -249,14 +263,14 @@ inline void THREAD_SPECIFIC_CALL ThreadSpecific<T>::destroy(void* ptr)
     delete data;
 }
 
-template<typename T>
-inline bool ThreadSpecific<T>::isSet()
+template<typename T, CanBeGCThread canBeGCThread>
+inline bool ThreadSpecific<T, canBeGCThread>::isSet()
 {
     return !!get();
 }
 
-template<typename T>
-inline ThreadSpecific<T>::operator T*()
+template<typename T, CanBeGCThread canBeGCThread>
+inline ThreadSpecific<T, canBeGCThread>::operator T*()
 {
     T* ptr = static_cast<T*>(get());
     if (!ptr) {
@@ -269,21 +283,21 @@ inline ThreadSpecific<T>::operator T*()
     return ptr;
 }
 
-template<typename T>
-inline T* ThreadSpecific<T>::operator->()
+template<typename T, CanBeGCThread canBeGCThread>
+inline T* ThreadSpecific<T, canBeGCThread>::operator->()
 {
     return operator T*();
 }
 
-template<typename T>
-inline T& ThreadSpecific<T>::operator*()
+template<typename T, CanBeGCThread canBeGCThread>
+inline T& ThreadSpecific<T, canBeGCThread>::operator*()
 {
     return *operator T*();
 }
 
 #if USE(WEB_THREAD)
-template<typename T>
-inline void ThreadSpecific<T>::replace(T* newPtr)
+template<typename T, CanBeGCThread canBeGCThread>
+inline void ThreadSpecific<T, canBeGCThread>::replace(T* newPtr)
 {
     ASSERT(newPtr);
     Data* data = static_cast<Data*>(pthread_getspecific(m_key));
index 215e2ea..985f456 100644 (file)
@@ -61,7 +61,7 @@ struct ThreadData {
     ThreadData* queueTail { nullptr };
 };
 
-ThreadSpecific<ThreadData>* threadData;
+ThreadSpecific<ThreadData, CanBeGCThread::True>* threadData;
 
 ThreadData* myThreadData()
 {
@@ -69,7 +69,7 @@ ThreadData* myThreadData()
     std::call_once(
         initializeOnce,
         [] {
-            threadData = new ThreadSpecific<ThreadData>();
+            threadData = new ThreadSpecific<ThreadData, CanBeGCThread::True>();
         });
 
     return *threadData;
index ac60b63..f75654b 100644 (file)
 #include "AtomicStringImpl.h"
 
 #include "AtomicStringTable.h"
+#include "CommaPrinter.h"
+#include "DataLog.h"
 #include "HashSet.h"
 #include "IntegerToStringConversion.h"
 #include "StringHash.h"
+#include "StringPrintStream.h"
 #include "Threading.h"
 #include "WTFThreadData.h"
 #include <wtf/unicode/UTF8.h>
@@ -75,7 +78,8 @@ static inline Ref<AtomicStringImpl> addToStringTable(const T& value)
 {
     AtomicStringTableLocker locker;
 
-    HashSet<StringImpl*>::AddResult addResult = stringTable().add<HashTranslator>(value);
+    HashSet<StringImpl*>& atomicStringTable = stringTable();
+    HashSet<StringImpl*>::AddResult addResult = atomicStringTable.add<HashTranslator>(value);
 
     // If the string is newly-translated, then we need to adopt it.
     // The boolean in the pair tells us if that is so.
@@ -451,6 +455,7 @@ void AtomicStringImpl::remove(AtomicStringImpl* string)
     HashSet<StringImpl*>& atomicStringTable = stringTable();
     HashSet<StringImpl*>::iterator iterator = atomicStringTable.find(string);
     ASSERT_WITH_MESSAGE(iterator != atomicStringTable.end(), "The string being removed is atomic in the string table of another thread!");
+    ASSERT(string == *iterator);
     atomicStringTable.remove(iterator);
 }
 
index 6b3146a..8fe0c4c 100644 (file)
@@ -1,3 +1,18 @@
+2016-11-02  Filip Pizlo  <fpizlo@apple.com>
+
+        The GC should be in a thread
+        https://bugs.webkit.org/show_bug.cgi?id=163562
+
+        Reviewed by Geoffrey Garen and Andreas Kling.
+
+        No new tests because existing tests cover this.
+        
+        We now need to be more careful about using JSLock. This fixes some places that were not
+        holding it. New assertions in the GC are more likely to catch this than before.
+
+        * bindings/js/WorkerScriptController.cpp:
+        (WebCore::WorkerScriptController::WorkerScriptController):
+
 2016-11-02  Joseph Pecoraro  <pecoraro@apple.com>
 
         Web Inspector: Include DebuggerAgent in Workers - see, pause, and step through scripts
index 6c2e5c1..f8d206f 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2015, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -55,6 +55,7 @@ IDBDatabase::IDBDatabase(ScriptExecutionContext& context, IDBClient::IDBConnecti
     , m_connectionProxy(connectionProxy)
     , m_info(resultData.databaseInfo())
     , m_databaseConnectionIdentifier(resultData.databaseConnectionIdentifier())
+    , m_eventNames(eventNames())
 {
     LOG(IndexedDB, "IDBDatabase::IDBDatabase - Creating database %s with version %" PRIu64 " connection %" PRIu64 " (%p)", m_info.name().utf8().data(), m_info.version(), m_databaseConnectionIdentifier, this);
     suspendIfNeeded();
@@ -73,7 +74,7 @@ IDBDatabase::~IDBDatabase()
 
 bool IDBDatabase::hasPendingActivity() const
 {
-    ASSERT(currentThread() == originThreadID());
+    ASSERT(currentThread() == originThreadID() || mayBeGCThread());
 
     if (m_closedInServer)
         return false;
@@ -81,7 +82,7 @@ bool IDBDatabase::hasPendingActivity() const
     if (!m_activeTransactions.isEmpty() || !m_committingTransactions.isEmpty() || !m_abortingTransactions.isEmpty())
         return true;
 
-    return hasEventListeners(eventNames().abortEvent) || hasEventListeners(eventNames().errorEvent) || hasEventListeners(eventNames().versionchangeEvent);
+    return hasEventListeners(m_eventNames.abortEvent) || hasEventListeners(m_eventNames.errorEvent) || hasEventListeners(m_eventNames.versionchangeEvent);
 }
 
 const String IDBDatabase::name() const
@@ -252,7 +253,7 @@ void IDBDatabase::connectionToServerLost(const IDBError& error)
     for (auto& transaction : m_activeTransactions.values())
         transaction->connectionClosedFromServer(error);
 
-    Ref<Event> event = Event::create(eventNames().errorEvent, true, false);
+    Ref<Event> event = Event::create(m_eventNames.errorEvent, true, false);
     event->setTarget(this);
 
     if (auto* context = scriptExecutionContext())
@@ -446,8 +447,8 @@ void IDBDatabase::fireVersionChangeEvent(const IDBResourceIdentifier& requestIde
         connectionProxy().didFireVersionChangeEvent(m_databaseConnectionIdentifier, requestIdentifier);
         return;
     }
-
-    Ref<Event> event = IDBVersionChangeEvent::create(requestIdentifier, currentVersion, requestedVersion, eventNames().versionchangeEvent);
+    
+    Ref<Event> event = IDBVersionChangeEvent::create(requestIdentifier, currentVersion, requestedVersion, m_eventNames.versionchangeEvent);
     event->setTarget(this);
     scriptExecutionContext()->eventQueue().enqueueEvent(WTFMove(event));
 }
@@ -459,7 +460,7 @@ bool IDBDatabase::dispatchEvent(Event& event)
 
     bool result = EventTargetWithInlineData::dispatchEvent(event);
 
-    if (event.isVersionChangeEvent() && event.type() == eventNames().versionchangeEvent)
+    if (event.isVersionChangeEvent() && event.type() == m_eventNames.versionchangeEvent)
         connectionProxy().didFireVersionChangeEvent(m_databaseConnectionIdentifier, downcast<IDBVersionChangeEvent>(event).requestIdentifier());
 
     return result;
index 7184f83..39894df 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2015, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -45,6 +45,7 @@ class IDBOpenDBRequest;
 class IDBResultData;
 class IDBTransaction;
 class IDBTransactionInfo;
+struct EventNames;
 
 class IDBDatabase : public ThreadSafeRefCounted<IDBDatabase>, public EventTargetWithInlineData, public IDBActiveDOMObject {
 public:
@@ -129,6 +130,8 @@ private:
     HashMap<IDBResourceIdentifier, RefPtr<IDBTransaction>> m_activeTransactions;
     HashMap<IDBResourceIdentifier, RefPtr<IDBTransaction>> m_committingTransactions;
     HashMap<IDBResourceIdentifier, RefPtr<IDBTransaction>> m_abortingTransactions;
+    
+    const EventNames& m_eventNames; // Need to cache this so we can use it from GC threads.
 };
 
 } // namespace WebCore
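The `m_eventNames` member above caches the `eventNames()` reference at construction time, because the lookup relies on thread-confined data that `hasPendingActivity()` can no longer reach once the GC calls it from a collector thread. A sketch of that caching pattern, with a hypothetical `EventNamesLike` struct in place of WebCore's `EventNames`:

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-in for WebCore's eventNames(): in the real code this
// reads state that only the owner thread may touch.
struct EventNamesLike {
    std::string errorEvent = "error";
};

inline const EventNamesLike& eventNamesOwnerThreadOnly()
{
    static EventNamesLike names;
    return names;
}

// Same shape as the IDBDatabase change above: resolve the reference once on
// the owning thread, then read the cached member from any thread (including
// a GC thread calling hasPendingActivity()).
class DatabaseLike {
public:
    DatabaseLike()
        : m_eventNames(eventNamesOwnerThreadOnly()) // constructed on the owner thread
    {
    }
    const std::string& errorEventName() const { return m_eventNames.errorEvent; }

private:
    const EventNamesLike& m_eventNames;
};
```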
index ee3702f..fbad35f 100644 (file)
@@ -228,7 +228,7 @@ bool IDBRequest::canSuspendForDocumentSuspension() const
 
 bool IDBRequest::hasPendingActivity() const
 {
-    ASSERT(currentThread() == originThreadID());
+    ASSERT(currentThread() == originThreadID() || mayBeGCThread());
     return m_hasPendingActivity;
 }
 
index 8082448..2d81192 100644 (file)
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2015, 2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -264,7 +264,7 @@ bool IDBTransaction::canSuspendForDocumentSuspension() const
 
 bool IDBTransaction::hasPendingActivity() const
 {
-    ASSERT(currentThread() == m_database->originThreadID());
+    ASSERT(currentThread() == m_database->originThreadID() || mayBeGCThread());
     return !m_contextStopped && m_state != IndexedDB::TransactionState::Finished;
 }
 
index 56a81a2..3e0937a 100644 (file)
                A456FA2611AD4A830020B420 /* LabelsNodeList.cpp in Sources */ = {isa = PBXBuildFile; fileRef = A456FA2411AD4A830020B420 /* LabelsNodeList.cpp */; };
                A456FA2711AD4A830020B420 /* LabelsNodeList.h in Headers */ = {isa = PBXBuildFile; fileRef = A456FA2511AD4A830020B420 /* LabelsNodeList.h */; };
                A501920E132EBF2E008BFE55 /* Autocapitalize.h in Headers */ = {isa = PBXBuildFile; fileRef = A501920C132EBF2E008BFE55 /* Autocapitalize.h */; settings = {ATTRIBUTES = (Private, ); }; };
-               A502C5DF13049B3500FC7D53 /* WebSafeGCActivityCallbackIOS.h in Headers */ = {isa = PBXBuildFile; fileRef = A502C5DD13049B3500FC7D53 /* WebSafeGCActivityCallbackIOS.h */; };
                A5071E801C506B66009951BE /* InspectorMemoryAgent.cpp in Sources */ = {isa = PBXBuildFile; fileRef = A5071E7E1C5067A0009951BE /* InspectorMemoryAgent.cpp */; };
                A5071E811C506B69009951BE /* InspectorMemoryAgent.h in Headers */ = {isa = PBXBuildFile; fileRef = A5071E7F1C5067A0009951BE /* InspectorMemoryAgent.h */; };
                A5071E851C56D0DC009951BE /* ResourceUsageData.h in Headers */ = {isa = PBXBuildFile; fileRef = A5071E821C56D079009951BE /* ResourceUsageData.h */; };
                CE7B2DB51586ABAD0098B3FA /* TextAlternativeWithRange.h in Headers */ = {isa = PBXBuildFile; fileRef = CE7B2DB11586ABAD0098B3FA /* TextAlternativeWithRange.h */; settings = {ATTRIBUTES = (Private, ); }; };
                CE7B2DB61586ABAD0098B3FA /* TextAlternativeWithRange.mm in Sources */ = {isa = PBXBuildFile; fileRef = CE7B2DB21586ABAD0098B3FA /* TextAlternativeWithRange.mm */; };
                CE7E17831C83A49100AD06AF /* ContentSecurityPolicyHash.h in Headers */ = {isa = PBXBuildFile; fileRef = CE7E17821C83A49100AD06AF /* ContentSecurityPolicyHash.h */; };
-               CE95208A1811B475007A5392 /* WebSafeIncrementalSweeperIOS.h in Headers */ = {isa = PBXBuildFile; fileRef = C2C4CB1D161A131200D214DA /* WebSafeIncrementalSweeperIOS.h */; };
                CEC337AD1A46071F009B8523 /* ServersSPI.h in Headers */ = {isa = PBXBuildFile; fileRef = CEC337AC1A46071F009B8523 /* ServersSPI.h */; settings = {ATTRIBUTES = (Private, ); }; };
                CEC337AF1A46086D009B8523 /* GraphicsServicesSPI.h in Headers */ = {isa = PBXBuildFile; fileRef = CEC337AE1A46086D009B8523 /* GraphicsServicesSPI.h */; settings = {ATTRIBUTES = (Private, ); }; };
                CECADFC6153778FF00E37068 /* DictationAlternative.cpp in Sources */ = {isa = PBXBuildFile; fileRef = CECADFC2153778FF00E37068 /* DictationAlternative.cpp */; };
                A456FA2411AD4A830020B420 /* LabelsNodeList.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = LabelsNodeList.cpp; sourceTree = "<group>"; };
                A456FA2511AD4A830020B420 /* LabelsNodeList.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = LabelsNodeList.h; sourceTree = "<group>"; };
                A501920C132EBF2E008BFE55 /* Autocapitalize.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = Autocapitalize.h; sourceTree = "<group>"; };
-               A502C5DD13049B3500FC7D53 /* WebSafeGCActivityCallbackIOS.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WebSafeGCActivityCallbackIOS.h; sourceTree = "<group>"; };
                A5071E7E1C5067A0009951BE /* InspectorMemoryAgent.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = InspectorMemoryAgent.cpp; sourceTree = "<group>"; };
                A5071E7F1C5067A0009951BE /* InspectorMemoryAgent.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = InspectorMemoryAgent.h; sourceTree = "<group>"; };
                A5071E821C56D079009951BE /* ResourceUsageData.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ResourceUsageData.h; sourceTree = "<group>"; };
                C280833D1C6DC22C001451B6 /* JSFontFace.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSFontFace.cpp; sourceTree = "<group>"; };
                C280833E1C6DC22C001451B6 /* JSFontFace.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = JSFontFace.h; sourceTree = "<group>"; };
                C28083411C6DC96A001451B6 /* JSFontFaceCustom.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = JSFontFaceCustom.cpp; sourceTree = "<group>"; };
-               C2C4CB1D161A131200D214DA /* WebSafeIncrementalSweeperIOS.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WebSafeIncrementalSweeperIOS.h; sourceTree = "<group>"; };
                C330A22113EC196B0000B45B /* ColorChooser.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = ColorChooser.h; sourceTree = "<group>"; };
                C33EE5C214FB49610002095A /* BaseClickableWithKeyInputType.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = BaseClickableWithKeyInputType.cpp; sourceTree = "<group>"; };
                C33EE5C314FB49610002095A /* BaseClickableWithKeyInputType.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = BaseClickableWithKeyInputType.h; sourceTree = "<group>"; };
                                FE0D84EA1048436E001A179E /* WebEvent.mm */,
                                CDA29A2E1CBF73FC00901CCF /* WebPlaybackSessionInterfaceAVKit.h */,
                                CDA29A2F1CBF73FC00901CCF /* WebPlaybackSessionInterfaceAVKit.mm */,
-                               A502C5DD13049B3500FC7D53 /* WebSafeGCActivityCallbackIOS.h */,
-                               C2C4CB1D161A131200D214DA /* WebSafeIncrementalSweeperIOS.h */,
                                3F42B31B1881191B00278AAC /* WebVideoFullscreenControllerAVKit.h */,
                                3F42B31C1881191B00278AAC /* WebVideoFullscreenControllerAVKit.mm */,
                                3FBC4AF2189881560046EE38 /* WebVideoFullscreenInterfaceAVKit.h */,
                                CDA29A0B1CBD9A7400901CCF /* WebPlaybackSessionModel.h in Headers */,
                                CDA29A0F1CBD9CFE00901CCF /* WebPlaybackSessionModelMediaElement.h in Headers */,
                                99CC0B6B18BEA1FF006CEBCC /* WebReplayInputs.h in Headers */,
-                               A502C5DF13049B3500FC7D53 /* WebSafeGCActivityCallbackIOS.h in Headers */,
-                               CE95208A1811B475007A5392 /* WebSafeIncrementalSweeperIOS.h in Headers */,
                                1CAF34810A6C405200ABE06E /* WebScriptObject.h in Headers */,
                                1CAF34830A6C405200ABE06E /* WebScriptObjectPrivate.h in Headers */,
                                1A569D1B0D7E2B82007C3983 /* WebScriptObjectProtocol.h in Headers */,
index 0c79641..05a0f00 100644
@@ -1,7 +1,7 @@
 /*
  *  Copyright (C) 2000 Harri Porten (porten@kde.org)
  *  Copyright (C) 2006 Jon Shier (jshier@iastate.edu)
- *  Copyright (C) 2003-2009, 2014 Apple Inc. All rights reseved.
+ *  Copyright (C) 2003-2009, 2014, 2016 Apple Inc. All rights reserved.
  *  Copyright (C) 2006 Alexey Proskuryakov (ap@webkit.org)
  *  Copyright (c) 2015 Canon Inc. All rights reserved.
  *
@@ -49,8 +49,6 @@
 
 #if PLATFORM(IOS)
 #include "ChromeClient.h"
-#include "WebSafeGCActivityCallbackIOS.h"
-#include "WebSafeIncrementalSweeperIOS.h"
 #endif
 
 using namespace JSC;
@@ -244,13 +242,11 @@ VM& JSDOMWindowBase::commonVM()
     if (!vm) {
         ScriptController::initializeThreading();
         vm = &VM::createLeaked(LargeHeap).leakRef();
+        vm->heap.acquireAccess(); // At any time, we may do things that affect the GC.
 #if !PLATFORM(IOS)
         vm->setExclusiveThread(std::this_thread::get_id());
 #else
-        vm->heap.setFullActivityCallback(WebSafeFullGCActivityCallback::create(&vm->heap));
-        vm->heap.setEdenActivityCallback(WebSafeEdenGCActivityCallback::create(&vm->heap));
-
-        vm->heap.setIncrementalSweeper(std::make_unique<WebSafeIncrementalSweeper>(&vm->heap));
+        vm->heap.setRunLoop(WebThreadRunLoop());
         vm->heap.machineThreads().addCurrentThread();
 #endif
 
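The `acquireAccess()` call added in this hunk is the mutator side of the concurrent collector this change introduces. The change log describes the scheduler as ticket-based, like a DMV: requesting a collection grants a ticket ("last granted"), each completed collection bumps "last served", and `collectSync()` waits until its ticket has been served. A minimal single-file sketch of that discipline (the `TicketedScheduler` class and its method names are illustrative, not JSC's actual API):

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>

// Sketch of the DMV-style ticket scheduling the change log describes.
// Requesting a collection increments "last granted" and hands back the new
// value as a ticket; each finished collection increments "last served".
// A synchronous request waits until last served catches up to its ticket.
class TicketedScheduler {
public:
    uint64_t requestCollection()
    {
        std::lock_guard<std::mutex> lock(m_lock);
        return ++m_lastGranted;
    }

    void collectionFinished()
    {
        {
            std::lock_guard<std::mutex> lock(m_lock);
            ++m_lastServed;
        }
        m_condition.notify_all();
    }

    void waitForTicket(uint64_t ticket)
    {
        std::unique_lock<std::mutex> lock(m_lock);
        m_condition.wait(lock, [&] { return m_lastServed >= ticket; });
    }

private:
    std::mutex m_lock;
    std::condition_variable m_condition;
    uint64_t m_lastGranted { 0 };
    uint64_t m_lastServed { 0 };
};
```

Because tickets are granted monotonically, a second `collectSync()` issued while one is pending simply gets a later ticket and waits behind it; a redundant `collectAsync()` can observe that a grant is already outstanding and do nothing.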
index bdb15c6..252f872 100644
@@ -51,6 +51,7 @@ WorkerScriptController::WorkerScriptController(WorkerGlobalScope* workerGlobalSc
     , m_workerGlobalScope(workerGlobalScope)
     , m_workerGlobalScopeWrapper(*m_vm)
 {
+    m_vm->heap.acquireAccess(); // It's not clear that we have good discipline for heap access, so turn it on permanently.
     m_vm->ensureWatchdog();
     initNormalWorldClientData(m_vm.get());
 }
@@ -188,6 +189,16 @@ void WorkerScriptController::disableEval(const String& errorMessage)
     m_workerGlobalScopeWrapper->setEvalEnabled(false, errorMessage);
 }
 
+void WorkerScriptController::releaseHeapAccess()
+{
+    m_vm->heap.releaseAccess();
+}
+
+void WorkerScriptController::acquireHeapAccess()
+{
+    m_vm->heap.acquireAccess();
+}
+
 void WorkerScriptController::attachDebugger(JSC::Debugger* debugger)
 {
     initScriptIfNeeded();
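The `releaseHeapAccess()`/`acquireHeapAccess()` helpers added above let a worker thread tell the collector whether it currently holds heap access: a thread that has released access can be ignored by a stop-the-world phase instead of being stopped. A toy model of the underlying flag (the `HeapAccessFlag` type is an illustration, not JSC's `Heap`):

```cpp
#include <atomic>

// Hypothetical model of a per-VM heap-access flag. A mutator that "has
// access" must be brought to a safepoint before the collector can proceed;
// a mutator that has released access is guaranteed not to touch the heap,
// so the collector can skip it entirely.
class HeapAccessFlag {
public:
    void acquireAccess() { m_hasAccess.store(true, std::memory_order_seq_cst); }
    void releaseAccess() { m_hasAccess.store(false, std::memory_order_seq_cst); }
    bool hasAccess() const { return m_hasAccess.load(std::memory_order_seq_cst); }

private:
    std::atomic<bool> m_hasAccess { false };
};
```

The comment in the hunk above ("turn it on permanently") reflects that workers acquire access at construction and only drop it around known-safe blocking points.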
index 5937899..4604552 100644
@@ -1,5 +1,5 @@
 /*
- * Copyright (C) 2008, 2015 Apple Inc. All Rights Reserved.
+ * Copyright (C) 2008, 2015, 2016 Apple Inc. All Rights Reserved.
  * Copyright (C) 2012 Google Inc. All Rights Reserved.
  *
  * Redistribution and use in source and binary forms, with or without
@@ -76,6 +76,9 @@ namespace WebCore {
         void disableEval(const String& errorMessage);
 
         JSC::VM& vm() { return *m_vm; }
+        
+        void releaseHeapAccess();
+        void acquireHeapAccess();
 
         void attachDebugger(JSC::Debugger*);
         void detachDebugger(JSC::Debugger*);
index 447467a..c98c15a 100644
@@ -131,7 +131,8 @@ bool EventTarget::setAttributeEventListener(const AtomicString& eventType, RefPt
         eventTargetData()->eventListenerMap.replace(eventType, *existingListener, listener.releaseNonNull(), { });
         return true;
     }
-    return addEventListener(eventType, listener.releaseNonNull());}
+    return addEventListener(eventType, listener.releaseNonNull());
+}
 
 EventListener* EventTarget::getAttributeEventListener(const AtomicString& eventType)
 {
diff --git a/Source/WebCore/platform/ios/WebSafeGCActivityCallbackIOS.h b/Source/WebCore/platform/ios/WebSafeGCActivityCallbackIOS.h
deleted file mode 100644
index 8f9f59b..0000000
+++ /dev/null
@@ -1,70 +0,0 @@
-/*
- * Copyright (C) 2011 Apple Inc. All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- * 1. Redistributions of source code must retain the above copyright
- *    notice, this list of conditions and the following disclaimer.
- * 2. Redistributions in binary form must reproduce the above copyright
- *    notice, this list of conditions and the following disclaimer in the
- *    documentation and/or other materials provided with the distribution.
- *
- * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
- * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
- * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
- * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
- * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
- * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
- * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
- * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
- * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
- * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
- * THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef WebSafeGCActivityCallbackIOS_h
-#define WebSafeGCActivityCallbackIOS_h
-
-#include "WebCoreThread.h"
-#include <JavaScriptCore/EdenGCActivityCallback.h>
-#include <JavaScriptCore/FullGCActivityCallback.h>
-
-namespace WebCore {
-
-class WebSafeFullGCActivityCallback final : public JSC::FullGCActivityCallback {
-public:
-    static PassRefPtr<WebSafeFullGCActivityCallback> create(JSC::Heap* heap)
-    {
-        return adoptRef(new WebSafeFullGCActivityCallback(heap));
-    }
-
-    ~WebSafeFullGCActivityCallback() override { }
-
-private:
-    WebSafeFullGCActivityCallback(JSC::Heap* heap)
-        : JSC::FullGCActivityCallback(heap)
-    {
-        setRunLoop(WebThreadRunLoop());
-    }
-};
-
-class WebSafeEdenGCActivityCallback final : public JSC::EdenGCActivityCallback {
-public:
-    static PassRefPtr<WebSafeEdenGCActivityCallback> create(JSC::Heap* heap)
-    {
-        return adoptRef(new WebSafeEdenGCActivityCallback(heap));
-    }
-
-    ~WebSafeEdenGCActivityCallback() override { }
-
-private:
-    WebSafeEdenGCActivityCallback(JSC::Heap* heap)
-        : JSC::EdenGCActivityCallback(heap)
-    {
-        setRunLoop(WebThreadRunLoop());
-    }
-};
-} // namespace WebCore
-
-#endif // WebSafeGCActivityCallbackIOS_h
index bbc3428..5917fa0 100644
@@ -1,6 +1,6 @@
 /*
  * Copyright (C) 2012 Google Inc. All rights reserved.
- * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -3293,4 +3293,9 @@ bool Internals::userPrefersReducedMotion() const
 
 #endif
 
+void Internals::reportBacktrace()
+{
+    WTFReportBacktrace();
+}
+
 } // namespace WebCore
index e04958f..f9cca6e 100644
@@ -496,6 +496,8 @@ public:
     void setUserInterfaceLayoutDirection(UserInterfaceLayoutDirection);
 
     bool userPrefersReducedMotion() const;
+    
+    void reportBacktrace();
 
 private:
     explicit Internals(Document&);
index aa3917f..03d5358 100644
@@ -1,6 +1,6 @@
 /*
  * Copyright (C) 2012 Google Inc. All rights reserved.
- * Copyright (C) 2013-2015 Apple Inc. All rights reserved.
+ * Copyright (C) 2013-2016 Apple Inc. All rights reserved.
  *
  * Redistribution and use in source and binary forms, with or without
  * modification, are permitted provided that the following conditions
@@ -468,4 +468,6 @@ enum UserInterfaceLayoutDirection {
     void setUserInterfaceLayoutDirection(UserInterfaceLayoutDirection userInterfaceLayoutDirection);
 
     boolean userPrefersReducedMotion();
+    
+    void reportBacktrace();
 };
index cc36119..978112b 100644
@@ -173,7 +173,11 @@ MessageQueueWaitResult WorkerRunLoop::runInMode(WorkerGlobalScope* context, cons
             absoluteTime = deadline;
     }
     MessageQueueWaitResult result;
+    if (WorkerScriptController* script = context->script())
+        script->releaseHeapAccess();
     auto task = m_messageQueue.waitForMessageFilteredWithTimeout(result, predicate, absoluteTime);
+    if (WorkerScriptController* script = context->script())
+        script->acquireHeapAccess();
 
     // If the context is closing, don't execute any further JavaScript tasks (per section 4.1.1 of the Web Workers spec).  However, there may be implementation cleanup tasks in the queue, so keep running through it.
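This last hunk hand-writes the release/wait/reacquire pattern around the blocking `waitForMessageFilteredWithTimeout()` call, so the collector need not stop a worker that is merely idle in its message queue. The same pattern expressed as a RAII guard, which makes the reacquire unforgettable on every exit path (the `Heap` and `DropHeapAccess` types here are stand-ins, not WebKit classes):

```cpp
#include <functional>

// Stand-in for the heap-access bookkeeping: a depth counter, where
// depth > 0 means this thread may touch the heap.
struct Heap {
    int accessDepth = 0;
    void acquireAccess() { ++accessDepth; }
    void releaseAccess() { --accessDepth; }
};

// RAII guard: release heap access on entry, reacquire on scope exit.
// Useful around any call that can block for an unbounded time.
class DropHeapAccess {
public:
    explicit DropHeapAccess(Heap& heap)
        : m_heap(heap)
    {
        m_heap.releaseAccess();
    }
    ~DropHeapAccess() { m_heap.acquireAccess(); }

private:
    Heap& m_heap;
};

// Sketch of the run-loop wait: while we block, the collector is free to
// run without stopping this thread.
int waitForMessage(Heap& heap, std::function<int()> blockingWait)
{
    DropHeapAccess drop(heap); // access is released for the duration of the wait
    return blockingWait();
}
```

The patch instead pairs the calls explicitly because the wait sits between two statements in an existing function; the invariant is the same either way: heap access is held everywhere except inside the blocking wait.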