[JSC] Emit write barrier after storing instead of before storing
author: ysuzuki@apple.com <ysuzuki@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 31 Jul 2019 01:35:40 +0000 (01:35 +0000)
committer: ysuzuki@apple.com <ysuzuki@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Wed, 31 Jul 2019 01:35:40 +0000 (01:35 +0000)
https://bugs.webkit.org/show_bug.cgi?id=200193

Reviewed by Saam Barati.

I reviewed tricky GC-related code, including visitChildren implementations and manual writeBarrier calls, and found several problems with write-barriers.

1. Some write-barriers are emitted before stores happen

    Some code, like LazyProperty, emits the write-barrier before we store the value. This is wrong since JSC has a concurrent collector. Consider the following situation.

        1. Cell "A" is not marked yet
        2. A write-barrier is emitted on "A"
        3. The concurrent collector scans "A"
        4. A store to "A"'s field happens
        5. The field stored in (4) is never rescanned

    We should emit write-barriers after stores. This patch moves the write-barriers to after the stores happen.
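The ordering problem above can be sketched as follows (a minimal illustration with hypothetical stand-in types, not JSC's real Heap or JSCell API):

    ```cpp
    #include <atomic>

    // A toy cell: one reference field, plus a "remembered" flag that stands in
    // for what a real write-barrier does (re-grey the cell so the collector
    // revisits it before the current cycle ends).
    struct Cell {
        std::atomic<Cell*> field { nullptr };
        std::atomic<bool> remembered { false };
    };

    void writeBarrier(Cell* owner)
    {
        // Ask the collector to rescan "owner" during the current GC cycle.
        owner->remembered.store(true, std::memory_order_release);
    }

    void setField(Cell* owner, Cell* value)
    {
        // If writeBarrier(owner) ran *before* this store, a concurrent scan
        // could consume the barrier, then miss the store below, leaving
        // "value" unmarked. Storing first closes that window.
        owner->field.store(value, std::memory_order_release);
        writeBarrier(owner); // correct: barrier after the store
    }
    ```

With barrier-after-store, any collector scan that happens before the store either sees no barrier yet (and the barrier emitted afterwards re-greys the owner) or sees both the store and the barrier.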

2. Write-barriers should be emitted after the stored fields become reachable from the owner.

    We have code that is logically the same as the following.

        ```
        auto data = std::make_unique<XXX>();
        data->m_field.set(vm, owner, value);

        storeStoreBarrier();
        owner->m_data = WTFMove(data);
        ```

    This is not correct: when the write-barrier is emitted, the owner cannot yet reach the field that is stored.
    The actual example is AccessCase. We are emitting write-barriers on the owner when creating an AccessCase, but this is not
    effective until the AccessCase is chained to a StructureStubInfo, which is reachable from the CodeBlock.

    I don't think this is actually an issue right now, because AccessCase generation is guarded by CodeBlock's m_lock, and CodeBlock::visitChildren takes this lock.
    But emitting the write-barrier at the right place is still better. This patch emits the write-barrier when StructureStubInfo::addAccessCase is called.

This is a speculative GC fix; the crash was hard to reproduce since we would need to control the scheduling of the concurrent collector and the main thread at instruction level.
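The shape this patch gives to StructureStubInfo::addAccessCase and DFG::Plan::finalizeWithoutNotifyingCallback can be sketched as follows (hypothetical minimal types; the real code asserts vm.heap.isDeferred() and barriers the CodeBlock): do all stores inside a lambda so every early return still funnels through the single write-barrier at the end.

    ```cpp
    #include <cassert>

    struct Heap {
        bool deferred = true;    // stand-in for vm.heap.isDeferred()
        bool remembered = false; // set by writeBarrier()
        void writeBarrier() { remembered = true; }
    };

    struct Owner { Heap* heap; int field = 0; };

    int update(Owner& owner, int value)
    {
        // No GC may run between the stores and the barrier below.
        assert(owner.heap->deferred);
        int result = [&] {
            if (value < 0)
                return -1;       // early return still reaches the barrier below
            owner.field = value; // the store(s)
            return 0;
        }();
        owner.heap->writeBarrier(); // one barrier on every exit path, after stores
        return result;
    }
    ```

Funneling the exits through one barrier avoids sprinkling a barrier before each `return` and makes the store-then-barrier ordering structurally obvious.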

* bytecode/BytecodeList.rb:
* bytecode/CodeBlock.cpp:
(JSC::CodeBlock::finishCreation):
* bytecode/StructureStubInfo.cpp:
(JSC::StructureStubInfo::addAccessCase):
* bytecode/StructureStubInfo.h:
(JSC::StructureStubInfo::considerCaching):
* dfg/DFGPlan.cpp:
(JSC::DFG::Plan::finalizeWithoutNotifyingCallback):
* jit/JITOperations.cpp:
* llint/LLIntSlowPaths.cpp:
(JSC::LLInt::LLINT_SLOW_PATH_DECL):
(JSC::LLInt::setupGetByIdPrototypeCache):
* runtime/CommonSlowPaths.cpp:
(JSC::SLOW_PATH_DECL):
* runtime/LazyPropertyInlines.h:
(JSC::ElementType>::setMayBeNull):
* runtime/RegExpCachedResult.h:
(JSC::RegExpCachedResult::record):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@248027 268f45cc-cd09-0410-ab3c-d52691b4dbfc

Source/JavaScriptCore/ChangeLog
Source/JavaScriptCore/bytecode/BytecodeList.rb
Source/JavaScriptCore/bytecode/CodeBlock.cpp
Source/JavaScriptCore/bytecode/StructureStubInfo.cpp
Source/JavaScriptCore/bytecode/StructureStubInfo.h
Source/JavaScriptCore/dfg/DFGPlan.cpp
Source/JavaScriptCore/jit/JITOperations.cpp
Source/JavaScriptCore/llint/LLIntSlowPaths.cpp
Source/JavaScriptCore/runtime/CommonSlowPaths.cpp
Source/JavaScriptCore/runtime/LazyPropertyInlines.h
Source/JavaScriptCore/runtime/RegExpCachedResult.h

index 2ae0b99..a84b563 100644 (file)
@@ -1,5 +1,67 @@
 2019-07-30  Yusuke Suzuki  <ysuzuki@apple.com>
 
+        [JSC] Emit write barrier after storing instead of before storing
+        https://bugs.webkit.org/show_bug.cgi?id=200193
+
+        Reviewed by Saam Barati.
+
+        I reviewed tricky GC-related code, including visitChildren implementations and manual writeBarrier calls, and found several problems with write-barriers.
+
+        1. Some write-barriers are emitted before stores happen
+
+            Some code, like LazyProperty, emits the write-barrier before we store the value. This is wrong since JSC has a concurrent collector. Consider the following situation.
+
+                1. Cell "A" is not marked yet
+                2. A write-barrier is emitted on "A"
+                3. The concurrent collector scans "A"
+                4. A store to "A"'s field happens
+                5. The field stored in (4) is never rescanned
+
+            We should emit write-barriers after stores. This patch moves the write-barriers to after the stores happen.
+
+        2. Write-barriers should be emitted after the stored fields become reachable from the owner.
+
+            We have code that is logically the same as the following.
+
+                ```
+                auto data = std::make_unique<XXX>();
+                data->m_field.set(vm, owner, value);
+
+                storeStoreBarrier();
+                owner->m_data = WTFMove(data);
+                ```
+
+            This is not correct: when the write-barrier is emitted, the owner cannot yet reach the field that is stored.
+            The actual example is AccessCase. We are emitting write-barriers on the owner when creating an AccessCase, but this is not
+            effective until the AccessCase is chained to a StructureStubInfo, which is reachable from the CodeBlock.
+
+            I don't think this is actually an issue right now, because AccessCase generation is guarded by CodeBlock's m_lock, and CodeBlock::visitChildren takes this lock.
+            But emitting the write-barrier at the right place is still better. This patch emits the write-barrier when StructureStubInfo::addAccessCase is called.
+
+        This is a speculative GC fix; the crash was hard to reproduce since we would need to control the scheduling of the concurrent collector and the main thread at instruction level.
+
+        * bytecode/BytecodeList.rb:
+        * bytecode/CodeBlock.cpp:
+        (JSC::CodeBlock::finishCreation):
+        * bytecode/StructureStubInfo.cpp:
+        (JSC::StructureStubInfo::addAccessCase):
+        * bytecode/StructureStubInfo.h:
+        (JSC::StructureStubInfo::considerCaching):
+        * dfg/DFGPlan.cpp:
+        (JSC::DFG::Plan::finalizeWithoutNotifyingCallback):
+        * jit/JITOperations.cpp:
+        * llint/LLIntSlowPaths.cpp:
+        (JSC::LLInt::LLINT_SLOW_PATH_DECL):
+        (JSC::LLInt::setupGetByIdPrototypeCache):
+        * runtime/CommonSlowPaths.cpp:
+        (JSC::SLOW_PATH_DECL):
+        * runtime/LazyPropertyInlines.h:
+        (JSC::ElementType>::setMayBeNull):
+        * runtime/RegExpCachedResult.h:
+        (JSC::RegExpCachedResult::record):
+
+2019-07-30  Yusuke Suzuki  <ysuzuki@apple.com>
+
         [JSC] Make StructureChain less-tricky by using Auxiliary Buffer
         https://bugs.webkit.org/show_bug.cgi?id=200192
 
index 8f769bd..29d7e4b 100644 (file)
@@ -841,8 +841,8 @@ op :resolve_scope,
              constantScope: WriteBarrierBase[JSScope],
 
              # written from the slow path
-             globalLexicalEnvironment: JSGlobalLexicalEnvironment.*,
-             globalObject: JSGlobalObject.*,
+             globalLexicalEnvironment: WriteBarrierBase[JSGlobalLexicalEnvironment],
+             globalObject: WriteBarrierBase[JSGlobalObject],
         },
     }
 
index 93d0fa9..ef1cabf 100644 (file)
@@ -603,7 +603,7 @@ bool CodeBlock::finishCreation(VM& vm, ScriptExecutable* ownerExecutable, Unlink
                 if (op.type == GlobalProperty || op.type == GlobalPropertyWithVarInjectionChecks)
                     metadata.m_globalLexicalBindingEpoch = m_globalObject->globalLexicalBindingEpoch();
             } else
-                metadata.m_globalObject = nullptr;
+                metadata.m_globalObject.clear();
             break;
         }
 
index 77f2c2a..7222d48 100644 (file)
@@ -136,92 +136,96 @@ AccessGenerationResult StructureStubInfo::addAccessCase(
     const GCSafeConcurrentJSLocker& locker, CodeBlock* codeBlock, const Identifier& ident, std::unique_ptr<AccessCase> accessCase)
 {
     VM& vm = *codeBlock->vm();
-    
-    if (StructureStubInfoInternal::verbose)
-        dataLog("Adding access case: ", accessCase, "\n");
-    
-    if (!accessCase)
-        return AccessGenerationResult::GaveUp;
-    
-    AccessGenerationResult result;
-    
-    if (cacheType == CacheType::Stub) {
-        result = u.stub->addCase(locker, vm, codeBlock, *this, ident, WTFMove(accessCase));
-        
+    ASSERT(vm.heap.isDeferred());
+    AccessGenerationResult result = ([&] () -> AccessGenerationResult {
         if (StructureStubInfoInternal::verbose)
-            dataLog("Had stub, result: ", result, "\n");
-
-        if (result.shouldResetStubAndFireWatchpoints())
-            return result;
-
-        if (!result.buffered()) {
-            bufferedStructures.clear();
-            return result;
-        }
-    } else {
-        std::unique_ptr<PolymorphicAccess> access = std::make_unique<PolymorphicAccess>();
+            dataLog("Adding access case: ", accessCase, "\n");
         
-        Vector<std::unique_ptr<AccessCase>, 2> accessCases;
+        if (!accessCase)
+            return AccessGenerationResult::GaveUp;
         
-        std::unique_ptr<AccessCase> previousCase =
-            AccessCase::fromStructureStubInfo(vm, codeBlock, *this);
-        if (previousCase)
-            accessCases.append(WTFMove(previousCase));
+        AccessGenerationResult result;
         
-        accessCases.append(WTFMove(accessCase));
+        if (cacheType == CacheType::Stub) {
+            result = u.stub->addCase(locker, vm, codeBlock, *this, ident, WTFMove(accessCase));
+            
+            if (StructureStubInfoInternal::verbose)
+                dataLog("Had stub, result: ", result, "\n");
+
+            if (result.shouldResetStubAndFireWatchpoints())
+                return result;
+
+            if (!result.buffered()) {
+                bufferedStructures.clear();
+                return result;
+            }
+        } else {
+            std::unique_ptr<PolymorphicAccess> access = std::make_unique<PolymorphicAccess>();
+            
+            Vector<std::unique_ptr<AccessCase>, 2> accessCases;
+            
+            std::unique_ptr<AccessCase> previousCase =
+                AccessCase::fromStructureStubInfo(vm, codeBlock, *this);
+            if (previousCase)
+                accessCases.append(WTFMove(previousCase));
+            
+            accessCases.append(WTFMove(accessCase));
+            
+            result = access->addCases(locker, vm, codeBlock, *this, ident, WTFMove(accessCases));
+            
+            if (StructureStubInfoInternal::verbose)
+                dataLog("Created stub, result: ", result, "\n");
+
+            if (result.shouldResetStubAndFireWatchpoints())
+                return result;
+
+            if (!result.buffered()) {
+                bufferedStructures.clear();
+                return result;
+            }
+            
+            cacheType = CacheType::Stub;
+            u.stub = access.release();
+        }
         
-        result = access->addCases(locker, vm, codeBlock, *this, ident, WTFMove(accessCases));
+        RELEASE_ASSERT(!result.generatedSomeCode());
         
-        if (StructureStubInfoInternal::verbose)
-            dataLog("Created stub, result: ", result, "\n");
-
-        if (result.shouldResetStubAndFireWatchpoints())
-            return result;
-
+        // If we didn't buffer any cases then bail. If this made no changes then we'll just try again
+        // subject to cool-down.
         if (!result.buffered()) {
+            if (StructureStubInfoInternal::verbose)
+                dataLog("Didn't buffer anything, bailing.\n");
             bufferedStructures.clear();
             return result;
         }
         
-        cacheType = CacheType::Stub;
-        u.stub = access.release();
-    }
-    
-    RELEASE_ASSERT(!result.generatedSomeCode());
-    
-    // If we didn't buffer any cases then bail. If this made no changes then we'll just try again
-    // subject to cool-down.
-    if (!result.buffered()) {
-        if (StructureStubInfoInternal::verbose)
-            dataLog("Didn't buffer anything, bailing.\n");
+        // The buffering countdown tells us if we should be repatching now.
+        if (bufferingCountdown) {
+            if (StructureStubInfoInternal::verbose)
+                dataLog("Countdown is too high: ", bufferingCountdown, ".\n");
+            return result;
+        }
+        
+        // Forget the buffered structures so that all future attempts to cache get fully handled by the
+        // PolymorphicAccess.
         bufferedStructures.clear();
-        return result;
-    }
-    
-    // The buffering countdown tells us if we should be repatching now.
-    if (bufferingCountdown) {
+        
+        result = u.stub->regenerate(locker, vm, codeBlock, *this, ident);
+        
         if (StructureStubInfoInternal::verbose)
-            dataLog("Countdown is too high: ", bufferingCountdown, ".\n");
-        return result;
-    }
-    
-    // Forget the buffered structures so that all future attempts to cache get fully handled by the
-    // PolymorphicAccess.
-    bufferedStructures.clear();
-    
-    result = u.stub->regenerate(locker, vm, codeBlock, *this, ident);
-    
-    if (StructureStubInfoInternal::verbose)
-        dataLog("Regeneration result: ", result, "\n");
-    
-    RELEASE_ASSERT(!result.buffered());
-    
-    if (!result.generatedSomeCode())
+            dataLog("Regeneration result: ", result, "\n");
+        
+        RELEASE_ASSERT(!result.buffered());
+        
+        if (!result.generatedSomeCode())
+            return result;
+        
+        // If we generated some code then we don't want to attempt to repatch in the future until we
+        // gather enough cases.
+        bufferingCountdown = Options::repatchBufferingCountdown();
         return result;
-    
-    // If we generated some code then we don't want to attempt to repatch in the future until we
-    // gather enough cases.
-    bufferingCountdown = Options::repatchBufferingCountdown();
+    })();
+    vm.heap.writeBarrier(codeBlock);
     return result;
 }
 
index d3a35ac..6be6de8 100644 (file)
@@ -92,8 +92,10 @@ public:
     // This returns true if it has marked everything that it will ever mark.
     bool propagateTransitions(SlotVisitor&);
         
-    ALWAYS_INLINE bool considerCaching(CodeBlock* codeBlock, Structure* structure)
+    ALWAYS_INLINE bool considerCaching(VM& vm, CodeBlock* codeBlock, Structure* structure)
     {
+        DisallowGC disallowGC;
+
         // We never cache non-cells.
         if (!structure) {
             sawNonCell = true;
@@ -151,10 +153,8 @@ public:
             // the base's structure. That seems unlikely for the canonical use of instanceof, where
             // the prototype is fixed.
             bool isNewlyAdded = bufferedStructures.add(structure);
-            if (isNewlyAdded) {
-                VM& vm = *codeBlock->vm();
+            if (isNewlyAdded)
                 vm.heap.writeBarrier(codeBlock);
-            }
             return isNewlyAdded;
         }
         countdown--;
index 5758a6e..e413ea4 100644 (file)
@@ -581,49 +581,56 @@ bool Plan::isStillValidOnMainThread()
 
 CompilationResult Plan::finalizeWithoutNotifyingCallback()
 {
-    // We will establish new references from the code block to things. So, we need a barrier.
-    m_vm->heap.writeBarrier(m_codeBlock);
+    // We perform multiple stores before emitting a write-barrier. To ensure that no GC happens between store and write-barrier, we should ensure that
+    // GC is deferred when this function is called.
+    ASSERT(m_vm->heap.isDeferred());
+
+    CompilationResult result = [&] {
+        if (!isStillValidOnMainThread() || !isStillValid()) {
+            CODEBLOCK_LOG_EVENT(m_codeBlock, "dfgFinalize", ("invalidated"));
+            return CompilationInvalidated;
+        }
 
-    if (!isStillValidOnMainThread() || !isStillValid()) {
-        CODEBLOCK_LOG_EVENT(m_codeBlock, "dfgFinalize", ("invalidated"));
-        return CompilationInvalidated;
-    }
+        bool result;
+        if (m_codeBlock->codeType() == FunctionCode)
+            result = m_finalizer->finalizeFunction();
+        else
+            result = m_finalizer->finalize();
 
-    bool result;
-    if (m_codeBlock->codeType() == FunctionCode)
-        result = m_finalizer->finalizeFunction();
-    else
-        result = m_finalizer->finalize();
+        if (!result) {
+            CODEBLOCK_LOG_EVENT(m_codeBlock, "dfgFinalize", ("failed"));
+            return CompilationFailed;
+        }
 
-    if (!result) {
-        CODEBLOCK_LOG_EVENT(m_codeBlock, "dfgFinalize", ("failed"));
-        return CompilationFailed;
-    }
+        reallyAdd(m_codeBlock->jitCode()->dfgCommon());
 
-    reallyAdd(m_codeBlock->jitCode()->dfgCommon());
+        if (validationEnabled()) {
+            TrackedReferences trackedReferences;
 
-    if (validationEnabled()) {
-        TrackedReferences trackedReferences;
+            for (WriteBarrier<JSCell>& reference : m_codeBlock->jitCode()->dfgCommon()->weakReferences)
+                trackedReferences.add(reference.get());
+            for (WriteBarrier<Structure>& reference : m_codeBlock->jitCode()->dfgCommon()->weakStructureReferences)
+                trackedReferences.add(reference.get());
+            for (WriteBarrier<Unknown>& constant : m_codeBlock->constants())
+                trackedReferences.add(constant.get());
 
-        for (WriteBarrier<JSCell>& reference : m_codeBlock->jitCode()->dfgCommon()->weakReferences)
-            trackedReferences.add(reference.get());
-        for (WriteBarrier<Structure>& reference : m_codeBlock->jitCode()->dfgCommon()->weakStructureReferences)
-            trackedReferences.add(reference.get());
-        for (WriteBarrier<Unknown>& constant : m_codeBlock->constants())
-            trackedReferences.add(constant.get());
+            for (auto* inlineCallFrame : *m_inlineCallFrames) {
+                ASSERT(inlineCallFrame->baselineCodeBlock.get());
+                trackedReferences.add(inlineCallFrame->baselineCodeBlock.get());
+            }
 
-        for (auto* inlineCallFrame : *m_inlineCallFrames) {
-            ASSERT(inlineCallFrame->baselineCodeBlock.get());
-            trackedReferences.add(inlineCallFrame->baselineCodeBlock.get());
+            // Check that any other references that we have anywhere in the JITCode are also
+            // tracked either strongly or weakly.
+            m_codeBlock->jitCode()->validateReferences(trackedReferences);
         }
 
-        // Check that any other references that we have anywhere in the JITCode are also
-        // tracked either strongly or weakly.
-        m_codeBlock->jitCode()->validateReferences(trackedReferences);
-    }
+        CODEBLOCK_LOG_EVENT(m_codeBlock, "dfgFinalize", ("succeeded"));
+        return CompilationSuccessful;
+    }();
 
-    CODEBLOCK_LOG_EVENT(m_codeBlock, "dfgFinalize", ("succeeded"));
-    return CompilationSuccessful;
+    // We will establish new references from the code block to things. So, we need a barrier.
+    m_vm->heap.writeBarrier(m_codeBlock);
+    return result;
 }
 
 void Plan::finalizeAndNotifyCallback()
index da20c85..e9eed61 100644 (file)
@@ -182,7 +182,7 @@ EncodedJSValue JIT_OPERATION operationTryGetByIdOptimize(ExecState* exec, Struct
     baseValue.getPropertySlot(exec, ident, slot);
     RETURN_IF_EXCEPTION(scope, encodedJSValue());
 
-    if (stubInfo->considerCaching(exec->codeBlock(), baseValue.structureOrNull()) && !slot.isTaintedByOpaqueObject() && (slot.isCacheableValue() || slot.isCacheableGetter() || slot.isUnset()))
+    if (stubInfo->considerCaching(*vm, exec->codeBlock(), baseValue.structureOrNull()) && !slot.isTaintedByOpaqueObject() && (slot.isCacheableValue() || slot.isCacheableGetter() || slot.isUnset()))
         repatchGetByID(exec, baseValue, ident, slot, *stubInfo, GetByIDKind::Try);
 
     return JSValue::encode(slot.getPureResult());
@@ -234,7 +234,7 @@ EncodedJSValue JIT_OPERATION operationGetByIdDirectOptimize(ExecState* exec, Str
     bool found = baseValue.getOwnPropertySlot(exec, ident, slot);
     RETURN_IF_EXCEPTION(scope, encodedJSValue());
 
-    if (stubInfo->considerCaching(exec->codeBlock(), baseValue.structureOrNull()))
+    if (stubInfo->considerCaching(vm, exec->codeBlock(), baseValue.structureOrNull()))
         repatchGetByID(exec, baseValue, ident, slot, *stubInfo, GetByIDKind::Direct);
 
     RELEASE_AND_RETURN(scope, JSValue::encode(found ? slot.getValue(exec, ident) : jsUndefined()));
@@ -290,7 +290,7 @@ EncodedJSValue JIT_OPERATION operationGetByIdOptimize(ExecState* exec, Structure
         
         LOG_IC((ICEvent::OperationGetByIdOptimize, baseValue.classInfoOrNull(*vm), ident, baseValue == slot.slotBase()));
         
-        if (stubInfo->considerCaching(exec->codeBlock(), baseValue.structureOrNull()))
+        if (stubInfo->considerCaching(*vm, exec->codeBlock(), baseValue.structureOrNull()))
             repatchGetByID(exec, baseValue, ident, slot, *stubInfo, GetByIDKind::Normal);
         return found ? slot.getValue(exec, ident) : jsUndefined();
     }));
@@ -343,7 +343,7 @@ EncodedJSValue JIT_OPERATION operationGetByIdWithThisOptimize(ExecState* exec, S
     return JSValue::encode(baseValue.getPropertySlot(exec, ident, slot, [&] (bool found, PropertySlot& slot) -> JSValue {
         LOG_IC((ICEvent::OperationGetByIdWithThisOptimize, baseValue.classInfoOrNull(*vm), ident, baseValue == slot.slotBase()));
         
-        if (stubInfo->considerCaching(exec->codeBlock(), baseValue.structureOrNull()))
+        if (stubInfo->considerCaching(*vm, exec->codeBlock(), baseValue.structureOrNull()))
             repatchGetByID(exec, baseValue, ident, slot, *stubInfo, GetByIDKind::WithThis);
         return found ? slot.getValue(exec, ident) : jsUndefined();
     }));
@@ -421,7 +421,7 @@ EncodedJSValue JIT_OPERATION operationInByIdOptimize(ExecState* exec, StructureS
     scope.release();
     PropertySlot slot(baseObject, PropertySlot::InternalMethodType::HasProperty);
     bool found = baseObject->getPropertySlot(exec, ident, slot);
-    if (stubInfo->considerCaching(exec->codeBlock(), baseObject->structure(vm)))
+    if (stubInfo->considerCaching(vm, exec->codeBlock(), baseObject->structure(vm)))
         repatchInByID(exec, baseObject, ident, found, slot, *stubInfo);
     return JSValue::encode(jsBoolean(found));
 }
@@ -530,7 +530,7 @@ void JIT_OPERATION operationPutByIdStrictOptimize(ExecState* exec, StructureStub
     if (accessType != static_cast<AccessType>(stubInfo->accessType))
         return;
     
-    if (stubInfo->considerCaching(codeBlock, structure))
+    if (stubInfo->considerCaching(*vm, codeBlock, structure))
         repatchPutByID(exec, baseValue, structure, ident, slot, *stubInfo, NotDirect);
 }
 
@@ -560,7 +560,7 @@ void JIT_OPERATION operationPutByIdNonStrictOptimize(ExecState* exec, StructureS
     if (accessType != static_cast<AccessType>(stubInfo->accessType))
         return;
     
-    if (stubInfo->considerCaching(codeBlock, structure))
+    if (stubInfo->considerCaching(*vm, codeBlock, structure))
         repatchPutByID(exec, baseValue, structure, ident, slot, *stubInfo, NotDirect);
 }
 
@@ -589,7 +589,7 @@ void JIT_OPERATION operationPutByIdDirectStrictOptimize(ExecState* exec, Structu
     if (accessType != static_cast<AccessType>(stubInfo->accessType))
         return;
     
-    if (stubInfo->considerCaching(codeBlock, structure))
+    if (stubInfo->considerCaching(vm, codeBlock, structure))
         repatchPutByID(exec, baseObject, structure, ident, slot, *stubInfo, Direct);
 }
 
@@ -618,7 +618,7 @@ void JIT_OPERATION operationPutByIdDirectNonStrictOptimize(ExecState* exec, Stru
     if (accessType != static_cast<AccessType>(stubInfo->accessType))
         return;
     
-    if (stubInfo->considerCaching(codeBlock, structure))
+    if (stubInfo->considerCaching(vm, codeBlock, structure))
         repatchPutByID(exec, baseObject, structure, ident, slot, *stubInfo, Direct);
 }
 
@@ -2229,7 +2229,7 @@ EncodedJSValue JIT_OPERATION operationInstanceOfOptimize(ExecState* exec, Struct
     bool result = JSObject::defaultHasInstance(exec, value, proto);
     RETURN_IF_EXCEPTION(scope, JSValue::encode(jsUndefined()));
     
-    if (stubInfo->considerCaching(exec->codeBlock(), value.structureOrNull()))
+    if (stubInfo->considerCaching(vm, exec->codeBlock(), value.structureOrNull()))
         repatchInstanceOf(exec, value, proto, *stubInfo, result);
     
     return JSValue::encode(jsBoolean(result));
index fcfa4d3..d621ce4 100644 (file)
@@ -677,14 +677,13 @@ LLINT_SLOW_PATH_DECL(slow_path_get_by_id_direct)
             metadata.m_structureID = 0;
             metadata.m_offset = 0;
 
-            if (structure->propertyAccessesAreCacheable()
-                && !structure->needImpurePropertyWatchpoint()) {
+            if (structure->propertyAccessesAreCacheable() && !structure->needImpurePropertyWatchpoint()) {
+                {
+                    ConcurrentJSLocker locker(codeBlock->m_lock);
+                    metadata.m_structureID = structure->id();
+                    metadata.m_offset = slot.cachedOffset();
+                }
                 vm.heap.writeBarrier(codeBlock);
-
-                ConcurrentJSLocker locker(codeBlock->m_lock);
-
-                metadata.m_structureID = structure->id();
-                metadata.m_offset = slot.cachedOffset();
             }
         }
     }
@@ -737,15 +736,16 @@ static void setupGetByIdPrototypeCache(ExecState* exec, VM& vm, const Instructio
     auto result = watchpointMap.add(std::make_tuple(structure->id(), bytecodeOffset), WTFMove(watchpoints));
     ASSERT_UNUSED(result, result.isNewEntry);
 
-    ConcurrentJSLocker locker(codeBlock->m_lock);
-
-    if (slot.isUnset()) {
-        metadata.m_modeMetadata.setUnsetMode(structure);
-        return;
+    {
+        ConcurrentJSLocker locker(codeBlock->m_lock);
+        if (slot.isUnset())
+            metadata.m_modeMetadata.setUnsetMode(structure);
+        else {
+            ASSERT(slot.isValue());
+            metadata.m_modeMetadata.setProtoLoadMode(structure, offset, slot.slotBase());
+        }
     }
-    ASSERT(slot.isValue());
-
-    metadata.m_modeMetadata.setProtoLoadMode(structure, offset, slot.slotBase());
+    vm.heap.writeBarrier(codeBlock);
 }
 
 
@@ -796,19 +796,16 @@ LLINT_SLOW_PATH_DECL(slow_path_get_by_id)
         Structure* structure = baseCell->structure(vm);
         if (slot.isValue() && slot.slotBase() == baseValue) {
             ConcurrentJSLocker locker(codeBlock->m_lock);
-
             // Start out by clearing out the old cache.
             metadata.m_modeMetadata.clearToDefaultModeWithoutCache();
 
             // Prevent the prototype cache from ever happening.
             metadata.m_modeMetadata.hitCountForLLIntCaching = 0;
         
-            if (structure->propertyAccessesAreCacheable()
-                && !structure->needImpurePropertyWatchpoint()) {
-                vm.heap.writeBarrier(codeBlock);
-
+            if (structure->propertyAccessesAreCacheable() && !structure->needImpurePropertyWatchpoint()) {
                 metadata.m_modeMetadata.defaultMode.structureID = structure->id();
                 metadata.m_modeMetadata.defaultMode.cachedOffset = slot.cachedOffset();
+                vm.heap.writeBarrier(codeBlock);
             }
         } else if (UNLIKELY(metadata.m_modeMetadata.hitCountForLLIntCaching && (slot.isValue() || slot.isUnset()))) {
             ASSERT(slot.slotBase() != baseValue);
@@ -816,12 +813,13 @@ LLINT_SLOW_PATH_DECL(slow_path_get_by_id)
             if (!(--metadata.m_modeMetadata.hitCountForLLIntCaching))
                 setupGetByIdPrototypeCache(exec, vm, pc, metadata, baseCell, slot, ident);
         }
-    } else if (!LLINT_ALWAYS_ACCESS_SLOW
-        && isJSArray(baseValue)
-        && ident == vm.propertyNames->length) {
-        ConcurrentJSLocker locker(codeBlock->m_lock);
-        metadata.m_modeMetadata.setArrayLengthMode();
-        metadata.m_modeMetadata.arrayLengthMode.arrayProfile.observeStructure(baseValue.asCell()->structure(vm));
+    } else if (!LLINT_ALWAYS_ACCESS_SLOW && isJSArray(baseValue) && ident == vm.propertyNames->length) {
+        {
+            ConcurrentJSLocker locker(codeBlock->m_lock);
+            metadata.m_modeMetadata.setArrayLengthMode();
+            metadata.m_modeMetadata.arrayLengthMode.arrayProfile.observeStructure(baseValue.asCell()->structure(vm));
+        }
+        vm.heap.writeBarrier(codeBlock);
     }
 
     LLINT_PROFILE_VALUE(result);
@@ -872,15 +870,9 @@ LLINT_SLOW_PATH_DECL(slow_path_put_by_id)
         JSCell* baseCell = baseValue.asCell();
         Structure* structure = baseCell->structure(vm);
         
-        if (!structure->isUncacheableDictionary()
-            && !structure->typeInfo().prohibitsPropertyCaching()
-            && baseCell == slot.base()) {
-
-            vm.heap.writeBarrier(codeBlock);
-            
+        if (!structure->isUncacheableDictionary() && !structure->typeInfo().prohibitsPropertyCaching() && baseCell == slot.base()) {
             if (slot.type() == PutPropertySlot::NewProperty) {
                 GCSafeConcurrentJSLocker locker(codeBlock->m_lock, vm.heap);
-            
                 if (!structure->isDictionary() && structure->previousID()->outOfLineCapacity() == structure->outOfLineCapacity()) {
                     ASSERT(structure->previousID()->transitionWatchpointSetHasBeenInvalidated());
 
@@ -896,12 +888,17 @@ LLINT_SLOW_PATH_DECL(slow_path_put_by_id)
                             ASSERT(chain);
                             metadata.m_structureChain.set(vm, codeBlock, chain);
                         }
+                        vm.heap.writeBarrier(codeBlock);
                     }
                 }
             } else {
                 structure->didCachePropertyReplacement(vm, slot.cachedOffset());
-                metadata.m_oldStructureID = structure->id();
-                metadata.m_offset = slot.cachedOffset();
+                {
+                    ConcurrentJSLocker locker(codeBlock->m_lock);
+                    metadata.m_oldStructureID = structure->id();
+                    metadata.m_offset = slot.cachedOffset();
+                }
+                vm.heap.writeBarrier(codeBlock);
             }
         }
     }
index 19370f7..c5fda30 100644 (file)
@@ -1072,7 +1072,8 @@ SLOW_PATH_DECL(slow_path_resolve_scope)
     BEGIN();
     auto bytecode = pc->as<OpResolveScope>();
     auto& metadata = bytecode.metadata(exec);
-    const Identifier& ident = exec->codeBlock()->identifier(bytecode.m_var);
+    CodeBlock* codeBlock = exec->codeBlock();
+    const Identifier& ident = codeBlock->identifier(bytecode.m_var);
     JSScope* scope = exec->uncheckedR(bytecode.m_scope.offset()).Register::scope();
     JSObject* resolvedScope = JSScope::resolve(exec, scope, ident);
     // Proxy can throw an error here, e.g. Proxy in with statement's @unscopables.
@@ -1093,16 +1094,16 @@ SLOW_PATH_DECL(slow_path_resolve_scope)
             bool hasProperty = globalObject->hasProperty(exec, ident);
             CHECK_EXCEPTION();
             if (hasProperty) {
-                ConcurrentJSLocker locker(exec->codeBlock()->m_lock);
+                ConcurrentJSLocker locker(codeBlock->m_lock);
                 metadata.m_resolveType = needsVarInjectionChecks(resolveType) ? GlobalPropertyWithVarInjectionChecks : GlobalProperty;
-                metadata.m_globalObject = globalObject;
+                metadata.m_globalObject.set(vm, codeBlock, globalObject);
                 metadata.m_globalLexicalBindingEpoch = globalObject->globalLexicalBindingEpoch();
             }
         } else if (resolvedScope->isGlobalLexicalEnvironment()) {
             JSGlobalLexicalEnvironment* globalLexicalEnvironment = jsCast<JSGlobalLexicalEnvironment*>(resolvedScope);
-            ConcurrentJSLocker locker(exec->codeBlock()->m_lock);
+            ConcurrentJSLocker locker(codeBlock->m_lock);
             metadata.m_resolveType = needsVarInjectionChecks(resolveType) ? GlobalLexicalVarWithVarInjectionChecks : GlobalLexicalVar;
-            metadata.m_globalLexicalEnvironment = globalLexicalEnvironment;
+            metadata.m_globalLexicalEnvironment.set(vm, codeBlock, globalLexicalEnvironment);
         }
         break;
     }
index 599d843..5d5b796 100644 (file)
@@ -53,9 +53,9 @@ void LazyProperty<OwnerType, ElementType>::initLater(const Func&)
 template<typename OwnerType, typename ElementType>
 void LazyProperty<OwnerType, ElementType>::setMayBeNull(VM& vm, const OwnerType* owner, ElementType* value)
 {
-    vm.heap.writeBarrier(owner, value);
     m_pointer = bitwise_cast<uintptr_t>(value);
     RELEASE_ASSERT(!(m_pointer & lazyTag));
+    vm.heap.writeBarrier(owner, value);
 }
 
 template<typename OwnerType, typename ElementType>
index 5aa8a39..96d96bc 100644 (file)
@@ -47,11 +47,11 @@ class RegExpCachedResult {
 public:
     ALWAYS_INLINE void record(VM& vm, JSObject* owner, RegExp* regExp, JSString* input, MatchResult result)
     {
-        vm.heap.writeBarrier(owner);
         m_lastRegExp.setWithoutWriteBarrier(regExp);
         m_lastInput.setWithoutWriteBarrier(input);
         m_result = result;
         m_reified = false;
+        vm.heap.writeBarrier(owner);
     }
 
     JSArray* lastResult(ExecState*, JSObject* owner);