WebRTC MediaStreamTrack Enable / Disable causes video delay / lag
author    youenn@apple.com <youenn@apple.com>
Fri, 6 Jul 2018 23:47:00 +0000 (23:47 +0000)
committer youenn@apple.com <youenn@apple.com>
Fri, 6 Jul 2018 23:47:00 +0000 (23:47 +0000)
https://bugs.webkit.org/show_bug.cgi?id=186889
<rdar://problem/41370285>

Reviewed by Eric Carlson.

Source/WebCore:

Libwebrtc expects a continuous flow of calls for audio data since the API
does not provide any possibility to give timestamps.

We were previously optimizing the muted case by not transmitting audio data at all.
This breaks synchronization between audio and video frames (which are timestamped).

This patch reverts the optimization and instead makes sure to send zeros for silenced audio tracks.

This requires MediaStreamTrackPrivate to send audio data even if disabled,
so that RealtimeOutgoingAudioSource will continue sending zeros at the correct pace.
This also requires WebAudioSourceProviderAVFObjC to exit early if its track is disabled.

Covered by existing tests.
Manual testing shows that synchronization is kept.
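The approach above can be sketched as follows. This is a minimal, hypothetical illustration (the class and member names are illustrative, not WebCore's): the sink must receive a chunk on every pull regardless of mute state, so a silenced track delivers a zero-filled buffer instead of skipping the callback and starving the timeline.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch of the "always pull, zero-fill when silenced" pattern.
// Names are illustrative; this is not the WebCore implementation.
class OutgoingAudioSketch {
public:
    explicit OutgoingAudioSketch(size_t chunkSampleCount)
        : m_buffer(chunkSampleCount, 0) { }

    void setSilenced(bool silenced) { m_silenced = silenced; }

    // Called on every timer tick; always produces a full chunk so the
    // consumer's clock keeps advancing at the correct pace.
    const std::vector<int16_t>& pullAudioData()
    {
        if (m_silenced)
            std::fill(m_buffer.begin(), m_buffer.end(), int16_t { 0 });
        else {
            // Stand-in for real captured samples.
            for (auto& sample : m_buffer)
                sample = 1000;
        }
        ++m_pullCount;
        return m_buffer;
    }

    size_t pullCount() const { return m_pullCount; }

private:
    std::vector<int16_t> m_buffer;
    bool m_silenced { false };
    size_t m_pullCount { 0 };
};
```

Because every pull emits a chunk of the same size, downstream consumers that infer timing from the data cadence (as libwebrtc does, lacking explicit timestamps) stay in sync with the timestamped video frames.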

* platform/mediastream/MediaStreamTrackPrivate.cpp:
(WebCore::MediaStreamTrackPrivate::audioSamplesAvailable):
* platform/mediastream/RealtimeOutgoingAudioSource.cpp:
(WebCore::RealtimeOutgoingAudioSource::RealtimeOutgoingAudioSource):
(WebCore::RealtimeOutgoingAudioSource::initializeConverter):
(WebCore::RealtimeOutgoingAudioSource::stop):
(WebCore::RealtimeOutgoingAudioSource::sourceMutedChanged):
(WebCore::RealtimeOutgoingAudioSource::sourceEnabledChanged):
(WebCore::RealtimeOutgoingAudioSource::handleMutedIfNeeded): Deleted.
* platform/mediastream/RealtimeOutgoingAudioSource.h:
(WebCore::RealtimeOutgoingAudioSource::pullAudioData):
(WebCore::RealtimeOutgoingAudioSource::isSilenced const):
(WebCore::RealtimeOutgoingAudioSource::sendSilence): Deleted.
* platform/mediastream/mac/RealtimeOutgoingAudioSourceCocoa.cpp:
(WebCore::RealtimeOutgoingAudioSourceCocoa::pullAudioData):
(WebCore::RealtimeOutgoingAudioSourceCocoa::handleMutedIfNeeded): Deleted.
(WebCore::RealtimeOutgoingAudioSourceCocoa::sendSilence): Deleted.
* platform/mediastream/mac/RealtimeOutgoingAudioSourceCocoa.h:
* platform/mediastream/mac/WebAudioSourceProviderAVFObjC.mm:
(WebCore::WebAudioSourceProviderAVFObjC::audioSamplesAvailable):

LayoutTests:

Test is no longer valid since we now transmit zero-filled audio data for disabled tracks.

* webrtc/audio-muted-stats2-expected.txt: Removed.
* webrtc/audio-muted-stats2.html: Removed.

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@233604 268f45cc-cd09-0410-ab3c-d52691b4dbfc

12 files changed:
LayoutTests/ChangeLog
LayoutTests/webrtc/audio-muted-stats2-expected.txt [deleted file]
LayoutTests/webrtc/audio-muted-stats2.html [deleted file]
Source/WebCore/ChangeLog
Source/WebCore/platform/mediastream/MediaStreamTrackPrivate.cpp
Source/WebCore/platform/mediastream/RealtimeOutgoingAudioSource.cpp
Source/WebCore/platform/mediastream/RealtimeOutgoingAudioSource.h
Source/WebCore/platform/mediastream/gstreamer/RealtimeOutgoingAudioSourceLibWebRTC.cpp
Source/WebCore/platform/mediastream/gstreamer/RealtimeOutgoingAudioSourceLibWebRTC.h
Source/WebCore/platform/mediastream/mac/RealtimeOutgoingAudioSourceCocoa.cpp
Source/WebCore/platform/mediastream/mac/RealtimeOutgoingAudioSourceCocoa.h
Source/WebCore/platform/mediastream/mac/WebAudioSourceProviderAVFObjC.mm

index 33f2617..13a7b1f 100644 (file)
@@ -1,3 +1,16 @@
+2018-07-06  Youenn Fablet  <youenn@apple.com>
+
+        WebRTC MediaStreamTrack Enable / Disable causes video delay / lag
+        https://bugs.webkit.org/show_bug.cgi?id=186889
+        <rdar://problem/41370285>
+
+        Reviewed by Eric Carlson.
+
+        Test is no longer valid since we now transmit zero-filled audio data for disabled tracks.
+
+        * webrtc/audio-muted-stats2-expected.txt: Removed.
+        * webrtc/audio-muted-stats2.html: Removed.
+
 2018-07-06  Ryan Haddad  <ryanhaddad@apple.com>
 
         Skip imported/w3c/web-platform-tests/infrastructure/assumptions/html-elements.html on debug.
diff --git a/LayoutTests/webrtc/audio-muted-stats2-expected.txt b/LayoutTests/webrtc/audio-muted-stats2-expected.txt
deleted file mode 100644 (file)
index 621b5ff..0000000
+++ /dev/null
@@ -1,3 +0,0 @@
-
-PASS Audio silent data not being sent in case track is muted from the start 
-
diff --git a/LayoutTests/webrtc/audio-muted-stats2.html b/LayoutTests/webrtc/audio-muted-stats2.html
deleted file mode 100644 (file)
index 789a1b2..0000000
+++ /dev/null
@@ -1,64 +0,0 @@
-<!doctype html>
-<html>
-    <head>
-        <meta charset="utf-8">
-        <title>Testing basic video exchange from offerer to receiver</title>
-        <script src="../resources/testharness.js"></script>
-        <script src="../resources/testharnessreport.js"></script>
-    </head>
-    <body>
-        <script src ="routines.js"></script>
-        <script>
-function getOutboundRTPStats(connection)
-{
-    return connection.getStats().then((report) => {
-        var stats;
-        report.forEach((statItem) => {
-            if (statItem.type === "outbound-rtp") {
-                stats = statItem;
-            }
-        });
-        return stats;
-    });
-}
-
-function checkOutboundBytesSentNotIncreasing(firstConnection, statsFirstConnection, count)
-{
-    return getOutboundRTPStats(firstConnection).then((stats) => {
-       if (stats.bytesSent > statsFirstConnection.bytesSent)
-            return Promise.reject("outbound stats bytes sent increasing");
-        if (++count === 10)
-            return;
-        return waitFor(50).then(() => {
-            return checkOutboundBytesSentNotIncreasing(firstConnection, statsFirstConnection, count);
-        });
-    });
-}
-
-var track, firstConnection;
-promise_test((test) => {
-    if (window.testRunner)
-        testRunner.setUserMediaPermission(true);
-
-    return navigator.mediaDevices.getUserMedia({ audio: true}).then((stream) => {
-       track = stream.getAudioTracks()[0];
-       track.enabled = false;
-        return new Promise((resolve, reject) => {
-            createConnections((connection) => {
-                firstConnection = connection;
-                firstConnection.addTrack(track, stream);
-            }, (connection) => {
-                connection.ontrack = resolve;
-            });
-            setTimeout(() => reject("Test timed out"), 5000);
-        });
-    }).then(() => {
-        return getOutboundRTPStats(firstConnection);
-    }).then((stats) => {
-        statsFirstConnection = stats;
-        return checkOutboundBytesSentNotIncreasing(firstConnection, statsFirstConnection, 0);
-    });
-}, "Audio silent data not being sent in case track is muted from the start");
-        </script>
-    </body>
-</html>
index e2b738b..4fc9a7f 100644 (file)
@@ -1,5 +1,49 @@
 2018-07-06  Youenn Fablet  <youenn@apple.com>
 
+        WebRTC MediaStreamTrack Enable / Disable causes video delay / lag
+        https://bugs.webkit.org/show_bug.cgi?id=186889
+        <rdar://problem/41370285>
+
+        Reviewed by Eric Carlson.
+
+        Libwebrtc expects a continuous flow of calls for audio data since the API
+        does not provide any possibility to give timestamps.
+
+        We were previously optimizing the muted case by not transmitting audio data at all.
+        This breaks synchronization between audio and video frames (which are timestamped).
+
+        This patch reverts the optimization and instead makes sure to send zeros for silenced audio tracks.
+
+        This requires MediaStreamTrackPrivate to send audio data even if disabled,
+        so that RealtimeOutgoingAudioSource will continue sending zeros at the correct pace.
+        This also requires WebAudioSourceProviderAVFObjC to exit early if its track is disabled.
+
+        Covered by existing tests.
+        Manual testing shows that synchronization is kept.
+
+        * platform/mediastream/MediaStreamTrackPrivate.cpp:
+        (WebCore::MediaStreamTrackPrivate::audioSamplesAvailable):
+        * platform/mediastream/RealtimeOutgoingAudioSource.cpp:
+        (WebCore::RealtimeOutgoingAudioSource::RealtimeOutgoingAudioSource):
+        (WebCore::RealtimeOutgoingAudioSource::initializeConverter):
+        (WebCore::RealtimeOutgoingAudioSource::stop):
+        (WebCore::RealtimeOutgoingAudioSource::sourceMutedChanged):
+        (WebCore::RealtimeOutgoingAudioSource::sourceEnabledChanged):
+        (WebCore::RealtimeOutgoingAudioSource::handleMutedIfNeeded): Deleted.
+        * platform/mediastream/RealtimeOutgoingAudioSource.h:
+        (WebCore::RealtimeOutgoingAudioSource::pullAudioData):
+        (WebCore::RealtimeOutgoingAudioSource::isSilenced const):
+        (WebCore::RealtimeOutgoingAudioSource::sendSilence): Deleted.
+        * platform/mediastream/mac/RealtimeOutgoingAudioSourceCocoa.cpp:
+        (WebCore::RealtimeOutgoingAudioSourceCocoa::pullAudioData):
+        (WebCore::RealtimeOutgoingAudioSourceCocoa::handleMutedIfNeeded): Deleted.
+        (WebCore::RealtimeOutgoingAudioSourceCocoa::sendSilence): Deleted.
+        * platform/mediastream/mac/RealtimeOutgoingAudioSourceCocoa.h:
+        * platform/mediastream/mac/WebAudioSourceProviderAVFObjC.mm:
+        (WebCore::WebAudioSourceProviderAVFObjC::audioSamplesAvailable):
+
+2018-07-06  Youenn Fablet  <youenn@apple.com>
+
         Make RealtimeOutgoingVideoSource use DestructionThread::Main
         https://bugs.webkit.org/show_bug.cgi?id=187402
 
index fc4f4ce..d7abbdf 100644 (file)
@@ -216,9 +216,6 @@ void MediaStreamTrackPrivate::audioSamplesAvailable(const MediaTime& mediaTime,
         updateReadyState();
     }
 
-    if (!enabled())
-        return;
-
     for (auto& observer : m_observers)
         observer->audioSamplesAvailable(*this, mediaTime, data, description, sampleCount);
 }
index d236132..6090f0e 100644 (file)
@@ -38,7 +38,6 @@ namespace WebCore {
 
 RealtimeOutgoingAudioSource::RealtimeOutgoingAudioSource(Ref<MediaStreamTrackPrivate>&& audioSource)
     : m_audioSource(WTFMove(audioSource))
-    , m_silenceAudioTimer(*this, &RealtimeOutgoingAudioSource::sendSilence)
 {
     m_audioSource->addObserver(*this);
     initializeConverter();
@@ -58,35 +57,22 @@ void RealtimeOutgoingAudioSource::initializeConverter()
 {
     m_muted = m_audioSource->muted();
     m_enabled = m_audioSource->enabled();
-    handleMutedIfNeeded();
 }
 
 void RealtimeOutgoingAudioSource::stop()
 {
     ASSERT(isMainThread());
-    m_silenceAudioTimer.stop();
     m_audioSource->removeObserver(*this);
 }
 
 void RealtimeOutgoingAudioSource::sourceMutedChanged()
 {
     m_muted = m_audioSource->muted();
-    handleMutedIfNeeded();
 }
 
 void RealtimeOutgoingAudioSource::sourceEnabledChanged()
 {
     m_enabled = m_audioSource->enabled();
-    handleMutedIfNeeded();
-}
-
-void RealtimeOutgoingAudioSource::handleMutedIfNeeded()
-{
-    bool isSilenced = m_muted || !m_enabled;
-    if (isSilenced && !m_silenceAudioTimer.isActive())
-        m_silenceAudioTimer.startRepeating(1_s);
-    if (!isSilenced && m_silenceAudioTimer.isActive())
-        m_silenceAudioTimer.stop();
 }
 
 } // namespace WebCore
index aef0e88..76a61f8 100644 (file)
@@ -58,13 +58,11 @@ public:
 protected:
     explicit RealtimeOutgoingAudioSource(Ref<MediaStreamTrackPrivate>&&);
 
-    virtual void handleMutedIfNeeded();
-    virtual void sendSilence() { };
-    virtual void pullAudioData() { };
+    virtual void pullAudioData() { }
+
+    bool isSilenced() const { return m_muted || !m_enabled; }
 
     Vector<webrtc::AudioTrackSinkInterface*> m_sinks;
-    bool m_muted { false };
-    bool m_enabled { true };
 
 private:
     virtual void AddSink(webrtc::AudioTrackSinkInterface* sink) { m_sinks.append(sink); }
@@ -102,8 +100,8 @@ private:
     void initializeConverter();
 
     Ref<MediaStreamTrackPrivate> m_audioSource;
-
-    Timer m_silenceAudioTimer;
+    bool m_muted { false };
+    bool m_enabled { true };
 };
 
 } // namespace WebCore
index 27bdb2f..3d85ea0 100644 (file)
@@ -43,14 +43,6 @@ void RealtimeOutgoingAudioSourceLibWebRTC::audioSamplesAvailable(const MediaTime
 {
 }
 
-void RealtimeOutgoingAudioSourceLibWebRTC::handleMutedIfNeeded()
-{
-}
-
-void RealtimeOutgoingAudioSourceLibWebRTC::sendSilence()
-{
-}
-
 void RealtimeOutgoingAudioSourceLibWebRTC::pullAudioData()
 {
 }
index 1700b18..ee56d52 100644 (file)
@@ -43,9 +43,6 @@ private:
     bool hasBufferedEnoughData() final;
 
     void pullAudioData() final;
-
-    void handleMutedIfNeeded() final;
-    void sendSilence() final;
 };
 
 } // namespace WebCore
index cc6c775..2366676 100644 (file)
@@ -54,30 +54,6 @@ Ref<RealtimeOutgoingAudioSource> RealtimeOutgoingAudioSource::create(Ref<MediaSt
     return RealtimeOutgoingAudioSourceCocoa::create(WTFMove(audioSource));
 }
 
-void RealtimeOutgoingAudioSourceCocoa::handleMutedIfNeeded()
-{
-    bool isSilenced = m_muted || !m_enabled;
-    m_sampleConverter->setMuted(isSilenced);
-
-    RealtimeOutgoingAudioSource::handleMutedIfNeeded();
-}
-
-void RealtimeOutgoingAudioSourceCocoa::sendSilence()
-{
-    LibWebRTCProvider::callOnWebRTCSignalingThread([this, protectedThis = makeRef(*this)] {
-        size_t chunkSampleCount = m_outputStreamDescription.sampleRate() / 100;
-        size_t bufferSize = chunkSampleCount * LibWebRTCAudioFormat::sampleByteSize * m_outputStreamDescription.numberOfChannels();
-
-        if (!bufferSize)
-            return;
-
-        m_audioBuffer.grow(bufferSize);
-        memset(m_audioBuffer.data(), 0, bufferSize);
-        for (auto sink : m_sinks)
-            sink->OnData(m_audioBuffer.data(), LibWebRTCAudioFormat::sampleSize, m_outputStreamDescription.sampleRate(), m_outputStreamDescription.numberOfChannels(), chunkSampleCount);
-    });
-}
-
 bool RealtimeOutgoingAudioSourceCocoa::isReachingBufferedAudioDataHighLimit()
 {
     auto writtenAudioDuration = m_writeCount / m_inputStreamDescription.sampleRate();
@@ -153,6 +129,9 @@ void RealtimeOutgoingAudioSourceCocoa::pullAudioData()
     bufferList.mBuffers[0].mDataByteSize = bufferSize;
     bufferList.mBuffers[0].mData = m_audioBuffer.data();
 
+    if (isSilenced() != m_sampleConverter->muted())
+        m_sampleConverter->setMuted(isSilenced());
+
     m_sampleConverter->pullAvalaibleSamplesAsChunks(bufferList, chunkSampleCount, m_readCount, [this, chunkSampleCount] {
         m_readCount += chunkSampleCount;
         for (auto sink : m_sinks)
index 76fa5ad..8500c53 100644 (file)
@@ -52,9 +52,6 @@ private:
 
     void pullAudioData() final;
 
-    void handleMutedIfNeeded() final;
-    void sendSilence() final;
-
     Ref<AudioSampleDataSource> m_sampleConverter;
     CAAudioStreamDescription m_inputStreamDescription;
     CAAudioStreamDescription m_outputStreamDescription;
index 71c7e22..d4e973b 100644 (file)
@@ -159,8 +159,11 @@ void WebAudioSourceProviderAVFObjC::unprepare()
     }
 }
 
-void WebAudioSourceProviderAVFObjC::audioSamplesAvailable(MediaStreamTrackPrivate&, const MediaTime&, const PlatformAudioData& data, const AudioStreamDescription& description, size_t frameCount)
+void WebAudioSourceProviderAVFObjC::audioSamplesAvailable(MediaStreamTrackPrivate& track, const MediaTime&, const PlatformAudioData& data, const AudioStreamDescription& description, size_t frameCount)
 {
+    if (!track.enabled())
+        return;
+
     ASSERT(description.platformDescription().type == PlatformDescription::CAAudioStreamBasicType);
     auto& basicDescription = *WTF::get<const AudioStreamBasicDescription*>(description.platformDescription().description);
     if (!m_inputDescription || m_inputDescription->streamDescription() != basicDescription)