Implement incoming webrtc data based on tracks
author    commit-queue@webkit.org <commit-queue@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Sat, 18 Mar 2017 02:36:24 +0000 (02:36 +0000)
committer commit-queue@webkit.org <commit-queue@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Sat, 18 Mar 2017 02:36:24 +0000 (02:36 +0000)
https://bugs.webkit.org/show_bug.cgi?id=169836

Patch by Youenn Fablet <youenn@apple.com> on 2017-03-17
Reviewed by Eric Carlson.

Source/WebCore:

Test: webrtc/video-with-receiver.html

Construct incoming tracks based on libwebrtc OnAddTrack.
Construct incoming media streams based on libwebrtc OnAddStream.
Fire the addstream event only if the legacy API flag is on.

Ensure that the relationship between media streams and media stream tracks remains correctly implemented.
For that, we keep a map relating libwebrtc media streams to WebCore media streams.
Add the ability to get the receiver related to the track of a track event.

Implement the possibility of creating a transceiver ahead of track arrival.
Transceivers that are not yet related to any real source are kept in the peer connection backend.
When a libwebrtc track appears, it is associated with the track source of the corresponding transceiver, based on track type.

Add the ability to create empty realtime sources and to set their libwebrtc source track once it becomes available.

* Modules/mediastream/MediaStream.cpp:
(WebCore::MediaStream::addTrackFromPlatform):
* Modules/mediastream/MediaStream.h:
* Modules/mediastream/libwebrtc/LibWebRTCMediaEndpoint.cpp:
(WebCore::LibWebRTCMediaEndpoint::mediaStreamFromRTCStream):
(WebCore::LibWebRTCMediaEndpoint::addRemoteStream):
(WebCore::LibWebRTCMediaEndpoint::addRemoteTrack):
(WebCore::LibWebRTCMediaEndpoint::removeRemoteStream):
(WebCore::LibWebRTCMediaEndpoint::OnAddStream):
(WebCore::LibWebRTCMediaEndpoint::OnRemoveStream):
(WebCore::LibWebRTCMediaEndpoint::OnAddTrack):
(WebCore::LibWebRTCMediaEndpoint::stop):
(WebCore::createMediaStreamTrack): Deleted.
(WebCore::LibWebRTCMediaEndpoint::addStream): Deleted.
* Modules/mediastream/libwebrtc/LibWebRTCMediaEndpoint.h:
* Modules/mediastream/libwebrtc/LibWebRTCPeerConnectionBackend.cpp:
(WebCore::LibWebRTCPeerConnectionBackend::doStop):
(WebCore::createReceiverForSource):
(WebCore::createEmptySource):
(WebCore::LibWebRTCPeerConnectionBackend::createReceiver):
(WebCore::LibWebRTCPeerConnectionBackend::videoReceiver):
(WebCore::LibWebRTCPeerConnectionBackend::audioReceiver):
(WebCore::LibWebRTCPeerConnectionBackend::removeRemoteStream):
(WebCore::LibWebRTCPeerConnectionBackend::addRemoteStream):
* Modules/mediastream/libwebrtc/LibWebRTCPeerConnectionBackend.h:
* platform/mediastream/mac/RealtimeIncomingAudioSource.cpp:
(WebCore::RealtimeIncomingAudioSource::setSourceTrack):
* platform/mediastream/mac/RealtimeIncomingAudioSource.h:
* platform/mediastream/mac/RealtimeIncomingVideoSource.cpp:
(WebCore::RealtimeIncomingVideoSource::setSourceTrack):
* platform/mediastream/mac/RealtimeIncomingVideoSource.h:

LayoutTests:

* webrtc/video-with-receiver-expected.txt: Added.
* webrtc/video-with-receiver.html: Copied from LayoutTests/webrtc/video.html.
* webrtc/video.html:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@214132 268f45cc-cd09-0410-ab3c-d52691b4dbfc

15 files changed:
LayoutTests/ChangeLog
LayoutTests/webrtc/video-with-receiver-expected.txt [new file with mode: 0644]
LayoutTests/webrtc/video-with-receiver.html [new file with mode: 0644]
LayoutTests/webrtc/video.html
Source/WebCore/ChangeLog
Source/WebCore/Modules/mediastream/MediaStream.cpp
Source/WebCore/Modules/mediastream/MediaStream.h
Source/WebCore/Modules/mediastream/libwebrtc/LibWebRTCMediaEndpoint.cpp
Source/WebCore/Modules/mediastream/libwebrtc/LibWebRTCMediaEndpoint.h
Source/WebCore/Modules/mediastream/libwebrtc/LibWebRTCPeerConnectionBackend.cpp
Source/WebCore/Modules/mediastream/libwebrtc/LibWebRTCPeerConnectionBackend.h
Source/WebCore/platform/mediastream/mac/RealtimeIncomingAudioSource.cpp
Source/WebCore/platform/mediastream/mac/RealtimeIncomingAudioSource.h
Source/WebCore/platform/mediastream/mac/RealtimeIncomingVideoSource.cpp
Source/WebCore/platform/mediastream/mac/RealtimeIncomingVideoSource.h

index 0c90ae9..d89d47f 100644 (file)
@@ -1,3 +1,14 @@
+2017-03-17  Youenn Fablet  <youenn@apple.com>
+
+        Implement incoming webrtc data based on tracks
+        https://bugs.webkit.org/show_bug.cgi?id=169836
+
+        Reviewed by Eric Carlson.
+
+        * webrtc/video-with-receiver-expected.txt: Added.
+        * webrtc/video-with-receiver.html: Copied from LayoutTests/webrtc/video.html.
+        * webrtc/video.html:
+
 2017-03-17  Ryan Haddad  <ryanhaddad@apple.com>
 
         Remove TestExpectation for a test that is no longer in the tree.
diff --git a/LayoutTests/webrtc/video-with-receiver-expected.txt b/LayoutTests/webrtc/video-with-receiver-expected.txt
new file mode 100644 (file)
index 0000000..ce31e19
--- /dev/null
@@ -0,0 +1,4 @@
+
+
+PASS Basic video exchange 
+
diff --git a/LayoutTests/webrtc/video-with-receiver.html b/LayoutTests/webrtc/video-with-receiver.html
new file mode 100644 (file)
index 0000000..4cdde04
--- /dev/null
@@ -0,0 +1,69 @@
+<!doctype html>
+<html>
+    <head>
+        <meta charset="utf-8">
+        <title>Testing basic video exchange from offerer to receiver</title>
+        <script src="../resources/testharness.js"></script>
+        <script src="../resources/testharnessreport.js"></script>
+    </head>
+    <body>
+        <video id="video" autoplay=""></video>
+        <canvas id="canvas" width="640" height="480"></canvas>
+        <script src ="routines.js"></script>
+        <script>
+video = document.getElementById("video");
+canvas = document.getElementById("canvas");
+
+function testImage()
+{
+    canvas.width = video.videoWidth;
+    canvas.height = video.videoHeight;
+    canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
+
+    imageData = canvas.getContext('2d').getImageData(10, 325, 250, 1);
+    data = imageData.data;
+
+    var index = 20;
+    assert_true(data[index] < 100);
+    assert_true(data[index + 1] < 100);
+    assert_true(data[index + 2] < 100);
+
+    index = 80;
+    assert_true(data[index] > 200);
+    assert_true(data[index + 1] > 200);
+    assert_true(data[index + 2] > 200);
+
+    index += 80;
+    assert_true(data[index] > 200);
+    assert_true(data[index + 1] > 200);
+    assert_true(data[index + 2] < 100);
+}
+
+promise_test((test) => {
+    if (window.testRunner)
+        testRunner.setUserMediaPermission(true);
+
+    return navigator.mediaDevices.getUserMedia({ video: true}).then((stream) => {
+        return new Promise((resolve, reject) => {
+            if (window.internals)
+                internals.useMockRTCPeerConnectionFactory("TwoRealPeerConnections");
+
+            createConnections((firstConnection) => {
+                firstConnection.addStream(stream);
+            }, (secondConnection) => {
+                resolve(secondConnection.addTransceiver("video").receiver.track);
+            });
+            setTimeout(() => reject("Test timed out"), 5000);
+        });
+    }).then((track) => {
+        video.srcObject = new MediaStream([track]);
+        return waitFor(500);
+    }).then(() => {
+        return video.play();
+    }).then(() => {
+        testImage();
+    });
+}, "Basic video exchange");
+        </script>
+    </body>
+</html>
index 5aad5dd..33111e7 100644 (file)
@@ -13,7 +13,6 @@
         <script>
 video = document.getElementById("video");
 canvas = document.getElementById("canvas");
-// FIXME: We should use tracks
 
 function testImage()
 {
@@ -52,7 +51,13 @@ promise_test((test) => {
             createConnections((firstConnection) => {
                 firstConnection.addStream(stream);
             }, (secondConnection) => {
-                secondConnection.onaddstream = (streamEvent) => { resolve(streamEvent.stream); };
+                secondConnection.ontrack = (trackEvent) => {
+                    assert_true(trackEvent.track instanceof MediaStreamTrack);
+                    assert_true(trackEvent.receiver instanceof RTCRtpReceiver);
+                    assert_true(Array.isArray(trackEvent.streams), "Array.isArray() should return true")
+                    assert_true(Object.isFrozen(trackEvent.streams), "Object.isFrozen() should return true")
+                    resolve(trackEvent.streams[0]);
+                };
             });
             setTimeout(() => reject("Test timed out"), 5000);
         });
index 3f6531e..7e6cfed 100644 (file)
@@ -1,3 +1,58 @@
+2017-03-17  Youenn Fablet  <youenn@apple.com>
+
+        Implement incoming webrtc data based on tracks
+        https://bugs.webkit.org/show_bug.cgi?id=169836
+
+        Reviewed by Eric Carlson.
+
+        Test: webrtc/video-with-receiver.html
+
+        Construct incoming tracks based on libwebrtc OnAddTrack.
+        Construct incoming media streams based on libwebrtc OnAddStream.
+        Fire the addstream event only if the legacy API flag is on.
+
+        Ensure that the relationship between media streams and media stream tracks remains correctly implemented.
+        For that, we keep a map relating libwebrtc media streams to WebCore media streams.
+        Add the ability to get the receiver related to the track of a track event.
+
+        Implement the possibility of creating a transceiver ahead of track arrival.
+        Transceivers that are not yet related to any real source are kept in the peer connection backend.
+        When a libwebrtc track appears, it is associated with the track source of the corresponding transceiver, based on track type.
+
+        Add the ability to create empty realtime sources and to set their libwebrtc source track once it becomes available.
+
+        * Modules/mediastream/MediaStream.cpp:
+        (WebCore::MediaStream::addTrackFromPlatform):
+        * Modules/mediastream/MediaStream.h:
+        * Modules/mediastream/libwebrtc/LibWebRTCMediaEndpoint.cpp:
+        (WebCore::LibWebRTCMediaEndpoint::mediaStreamFromRTCStream):
+        (WebCore::LibWebRTCMediaEndpoint::addRemoteStream):
+        (WebCore::LibWebRTCMediaEndpoint::addRemoteTrack):
+        (WebCore::LibWebRTCMediaEndpoint::removeRemoteStream):
+        (WebCore::LibWebRTCMediaEndpoint::OnAddStream):
+        (WebCore::LibWebRTCMediaEndpoint::OnRemoveStream):
+        (WebCore::LibWebRTCMediaEndpoint::OnAddTrack):
+        (WebCore::LibWebRTCMediaEndpoint::stop):
+        (WebCore::createMediaStreamTrack): Deleted.
+        (WebCore::LibWebRTCMediaEndpoint::addStream): Deleted.
+        * Modules/mediastream/libwebrtc/LibWebRTCMediaEndpoint.h:
+        * Modules/mediastream/libwebrtc/LibWebRTCPeerConnectionBackend.cpp:
+        (WebCore::LibWebRTCPeerConnectionBackend::doStop):
+        (WebCore::createReceiverForSource):
+        (WebCore::createEmptySource):
+        (WebCore::LibWebRTCPeerConnectionBackend::createReceiver):
+        (WebCore::LibWebRTCPeerConnectionBackend::videoReceiver):
+        (WebCore::LibWebRTCPeerConnectionBackend::audioReceiver):
+        (WebCore::LibWebRTCPeerConnectionBackend::removeRemoteStream):
+        (WebCore::LibWebRTCPeerConnectionBackend::addRemoteStream):
+        * Modules/mediastream/libwebrtc/LibWebRTCPeerConnectionBackend.h:
+        * platform/mediastream/mac/RealtimeIncomingAudioSource.cpp:
+        (WebCore::RealtimeIncomingAudioSource::setSourceTrack):
+        * platform/mediastream/mac/RealtimeIncomingAudioSource.h:
+        * platform/mediastream/mac/RealtimeIncomingVideoSource.cpp:
+        (WebCore::RealtimeIncomingVideoSource::setSourceTrack):
+        * platform/mediastream/mac/RealtimeIncomingVideoSource.h:
+
 2017-03-17  Eric Carlson  <eric.carlson@apple.com>
 
         [MediaStream] Compensate for video capture orientation
index e44886c..1dc6350 100644 (file)
@@ -207,6 +207,12 @@ void MediaStream::didRemoveTrack(MediaStreamTrackPrivate& trackPrivate)
     internalRemoveTrack(trackPrivate.id(), StreamModifier::Platform);
 }
 
+void MediaStream::addTrackFromPlatform(Ref<MediaStreamTrack>&& track)
+{
+    m_private->addTrack(&track->privateTrack(), MediaStreamPrivate::NotifyClientOption::Notify);
+    internalAddTrack(WTFMove(track), StreamModifier::Platform);
+}
+
 bool MediaStream::internalAddTrack(Ref<MediaStreamTrack>&& trackToAdd, StreamModifier streamModifier)
 {
     auto result = m_trackSet.add(trackToAdd->id(), WTFMove(trackToAdd));
index 816d706..d1c8fc0 100644 (file)
@@ -102,6 +102,8 @@ public:
     void addObserver(Observer*);
     void removeObserver(Observer*);
 
+    void addTrackFromPlatform(Ref<MediaStreamTrack>&&);
+
 protected:
     MediaStream(ScriptExecutionContext&, const MediaStreamTrackVector&);
     MediaStream(ScriptExecutionContext&, RefPtr<MediaStreamPrivate>&&);
index 26a5fa9..cd74f67 100644 (file)
@@ -45,6 +45,7 @@
 #include "RTCTrackEvent.h"
 #include "RealtimeIncomingAudioSource.h"
 #include "RealtimeIncomingVideoSource.h"
+#include "RuntimeEnabledFeatures.h"
 #include <webrtc/base/physicalsocketserver.h>
 #include <webrtc/p2p/base/basicpacketsocketfactory.h>
 #include <webrtc/p2p/client/basicportallocator.h>
@@ -382,35 +383,71 @@ static inline String trackId(webrtc::MediaStreamTrackInterface& videoTrack)
     return String(videoTrack.id().data(), videoTrack.id().size());
 }
 
-static inline Ref<MediaStreamTrack> createMediaStreamTrack(ScriptExecutionContext& context, Ref<RealtimeMediaSource>&& remoteSource)
+MediaStream& LibWebRTCMediaEndpoint::mediaStreamFromRTCStream(webrtc::MediaStreamInterface* rtcStream)
 {
-    String trackId = remoteSource->id();
-    return MediaStreamTrack::create(context, MediaStreamTrackPrivate::create(WTFMove(remoteSource), WTFMove(trackId)));
+    auto mediaStream = m_streams.ensure(rtcStream, [this] {
+        auto stream = MediaStream::create(*m_peerConnectionBackend.connection().scriptExecutionContext());
+        auto streamPointer = stream.ptr();
+        m_peerConnectionBackend.addRemoteStream(WTFMove(stream));
+        return streamPointer;
+    });
+    return *mediaStream.iterator->value;
 }
 
-void LibWebRTCMediaEndpoint::addStream(webrtc::MediaStreamInterface& stream)
+void LibWebRTCMediaEndpoint::addRemoteStream(webrtc::MediaStreamInterface& rtcStream)
 {
-    MediaStreamTrackVector tracks;
-    for (auto& videoTrack : stream.GetVideoTracks()) {
-        ASSERT(videoTrack);
-        String id = trackId(*videoTrack);
-        auto remoteSource = RealtimeIncomingVideoSource::create(WTFMove(videoTrack), WTFMove(id));
-        tracks.append(createMediaStreamTrack(*m_peerConnectionBackend.connection().scriptExecutionContext(), WTFMove(remoteSource)));
+    if (!RuntimeEnabledFeatures::sharedFeatures().webRTCLegacyAPIEnabled())
+        return;
+
+    auto& mediaStream = mediaStreamFromRTCStream(&rtcStream);
+    m_peerConnectionBackend.connection().fireEvent(MediaStreamEvent::create(eventNames().addstreamEvent, false, false, &mediaStream));
+}
+
+void LibWebRTCMediaEndpoint::addRemoteTrack(const webrtc::RtpReceiverInterface& rtcReceiver, const std::vector<rtc::scoped_refptr<webrtc::MediaStreamInterface>>& rtcStreams)
+{
+    RefPtr<RTCRtpReceiver> receiver;
+    RefPtr<RealtimeMediaSource> remoteSource;
+
+    auto* rtcTrack = rtcReceiver.track().get();
+
+    switch (rtcReceiver.media_type()) {
+    case cricket::MEDIA_TYPE_DATA:
+        return;
+    case cricket::MEDIA_TYPE_AUDIO: {
+        rtc::scoped_refptr<webrtc::AudioTrackInterface> audioTrack = static_cast<webrtc::AudioTrackInterface*>(rtcTrack);
+        auto audioReceiver = m_peerConnectionBackend.audioReceiver(trackId(*rtcTrack));
+
+        receiver = WTFMove(audioReceiver.receiver);
+        audioReceiver.source->setSourceTrack(WTFMove(audioTrack));
+        break;
+    }
+    case cricket::MEDIA_TYPE_VIDEO: {
+        rtc::scoped_refptr<webrtc::VideoTrackInterface> videoTrack = static_cast<webrtc::VideoTrackInterface*>(rtcTrack);
+        auto videoReceiver = m_peerConnectionBackend.videoReceiver(trackId(*rtcTrack));
+
+        receiver = WTFMove(videoReceiver.receiver);
+        videoReceiver.source->setSourceTrack(WTFMove(videoTrack));
+        break;
     }
-    for (auto& audioTrack : stream.GetAudioTracks()) {
-        ASSERT(audioTrack);
-        String id = trackId(*audioTrack);
-        auto remoteSource = RealtimeIncomingAudioSource::create(WTFMove(audioTrack), WTFMove(id));
-        tracks.append(createMediaStreamTrack(*m_peerConnectionBackend.connection().scriptExecutionContext(), WTFMove(remoteSource)));
     }
 
-    auto newStream = MediaStream::create(*m_peerConnectionBackend.connection().scriptExecutionContext(), WTFMove(tracks));
-    m_peerConnectionBackend.connection().fireEvent(MediaStreamEvent::create(eventNames().addstreamEvent, false, false, newStream.copyRef()));
+    auto* track = receiver->track();
+    ASSERT(track);
 
     Vector<RefPtr<MediaStream>> streams;
-    streams.append(newStream.copyRef());
-    for (auto& track : newStream->getTracks())
-        m_peerConnectionBackend.connection().fireEvent(RTCTrackEvent::create(eventNames().trackEvent, false, false, nullptr, track.get(), Vector<RefPtr<MediaStream>>(streams), nullptr));
+    for (auto& rtcStream : rtcStreams) {
+        auto& mediaStream = mediaStreamFromRTCStream(rtcStream.get());
+        streams.append(&mediaStream);
+        mediaStream.addTrackFromPlatform(*track);
+    }
+    m_peerConnectionBackend.connection().fireEvent(RTCTrackEvent::create(eventNames().trackEvent, false, false, WTFMove(receiver), track, WTFMove(streams), nullptr));
+}
+
+void LibWebRTCMediaEndpoint::removeRemoteStream(webrtc::MediaStreamInterface& rtcStream)
+{
+    auto* mediaStream = m_streams.take(&rtcStream);
+    if (mediaStream)
+        m_peerConnectionBackend.removeRemoteStream(mediaStream);
 }
 
 void LibWebRTCMediaEndpoint::OnAddStream(rtc::scoped_refptr<webrtc::MediaStreamInterface> stream)
@@ -419,13 +456,28 @@ void LibWebRTCMediaEndpoint::OnAddStream(rtc::scoped_refptr<webrtc::MediaStreamI
         if (protectedThis->isStopped())
             return;
         ASSERT(stream);
-        protectedThis->addStream(*stream.get());
+        protectedThis->addRemoteStream(*stream.get());
     });
 }
 
-void LibWebRTCMediaEndpoint::OnRemoveStream(rtc::scoped_refptr<webrtc::MediaStreamInterface>)
+void LibWebRTCMediaEndpoint::OnRemoveStream(rtc::scoped_refptr<webrtc::MediaStreamInterface> stream)
 {
-    notImplemented();
+    callOnMainThread([protectedThis = makeRef(*this), stream = WTFMove(stream)] {
+        if (protectedThis->isStopped())
+            return;
+        ASSERT(stream);
+        protectedThis->removeRemoteStream(*stream.get());
+    });
+}
+
+void LibWebRTCMediaEndpoint::OnAddTrack(rtc::scoped_refptr<webrtc::RtpReceiverInterface> receiver, const std::vector<rtc::scoped_refptr<webrtc::MediaStreamInterface>>& streams)
+{
+    callOnMainThread([protectedThis = makeRef(*this), receiver = WTFMove(receiver), streams] {
+        if (protectedThis->isStopped())
+            return;
+        ASSERT(receiver);
+        protectedThis->addRemoteTrack(*receiver, streams);
+    });
 }
 
 std::unique_ptr<RTCDataChannelHandler> LibWebRTCMediaEndpoint::createDataChannel(const String& label, const RTCDataChannelInit& options)
@@ -485,6 +537,7 @@ void LibWebRTCMediaEndpoint::stop()
     ASSERT(m_backend);
     m_backend->Close();
     m_backend = nullptr;
+    m_streams.clear();
 }
 
 void LibWebRTCMediaEndpoint::OnRenegotiationNeeded()
index 6e6c765..4355924 100644 (file)
@@ -89,6 +89,7 @@ private:
     void OnAddStream(rtc::scoped_refptr<webrtc::MediaStreamInterface>) final;
     void OnRemoveStream(rtc::scoped_refptr<webrtc::MediaStreamInterface>) final;
     void OnDataChannel(rtc::scoped_refptr<webrtc::DataChannelInterface>) final;
+    void OnAddTrack(rtc::scoped_refptr<webrtc::RtpReceiverInterface>, const std::vector<rtc::scoped_refptr<webrtc::MediaStreamInterface>>&) final;
     void OnRenegotiationNeeded() final;
     void OnIceConnectionChange(webrtc::PeerConnectionInterface::IceConnectionState) final;
     void OnIceGatheringChange(webrtc::PeerConnectionInterface::IceGatheringState) final;
@@ -101,9 +102,13 @@ private:
     void setLocalSessionDescriptionFailed(const std::string&);
     void setRemoteSessionDescriptionSucceeded();
     void setRemoteSessionDescriptionFailed(const std::string&);
-    void addStream(webrtc::MediaStreamInterface&);
+    void addRemoteStream(webrtc::MediaStreamInterface&);
+    void addRemoteTrack(const webrtc::RtpReceiverInterface&, const std::vector<rtc::scoped_refptr<webrtc::MediaStreamInterface>>&);
+    void removeRemoteStream(webrtc::MediaStreamInterface&);
     void addDataChannel(rtc::scoped_refptr<webrtc::DataChannelInterface>&&);
 
+    MediaStream& mediaStreamFromRTCStream(webrtc::MediaStreamInterface*);
+
     int AddRef() const { ref(); return static_cast<int>(refCount()); }
     int Release() const { deref(); return static_cast<int>(refCount()); }
 
@@ -169,6 +174,7 @@ private:
     CreateSessionDescriptionObserver m_createSessionDescriptionObserver;
     SetLocalSessionDescriptionObserver m_setLocalSessionDescriptionObserver;
     SetRemoteSessionDescriptionObserver m_setRemoteSessionDescriptionObserver;
+    HashMap<webrtc::MediaStreamInterface*, MediaStream*> m_streams;
 
     bool m_isInitiator { false };
 };
index 56664c8..9133370 100644 (file)
@@ -165,6 +165,9 @@ void LibWebRTCPeerConnectionBackend::doCreateAnswer(RTCAnswerOptions&&)
 void LibWebRTCPeerConnectionBackend::doStop()
 {
     m_endpoint->stop();
+
+    m_remoteStreams.clear();
+    m_pendingReceivers.clear();
 }
 
 void LibWebRTCPeerConnectionBackend::doAddIceCandidate(RTCIceCandidate& candidate)
@@ -200,16 +203,64 @@ void LibWebRTCPeerConnectionBackend::addVideoSource(Ref<RealtimeOutgoingVideoSou
     m_videoSources.append(WTFMove(source));
 }
 
-Ref<RTCRtpReceiver> LibWebRTCPeerConnectionBackend::createReceiver(const String&, const String& trackKind, const String& trackId)
+static inline Ref<RTCRtpReceiver> createReceiverForSource(ScriptExecutionContext& context, Ref<RealtimeMediaSource>&& source)
 {
-    // FIXME: We need to create a source that will get fueled once we will receive OnAddStream.
-    // For the moment, we create an empty one.
-    auto remoteTrackPrivate = (trackKind == "audio") ? MediaStreamTrackPrivate::create(RealtimeIncomingAudioSource::create(nullptr, String(trackId))) : MediaStreamTrackPrivate::create(RealtimeIncomingVideoSource::create(nullptr, String(trackId)));
-    auto remoteTrack = MediaStreamTrack::create(*m_peerConnection.scriptExecutionContext(), WTFMove(remoteTrackPrivate));
+    auto remoteTrackPrivate = MediaStreamTrackPrivate::create(WTFMove(source));
+    auto remoteTrack = MediaStreamTrack::create(context, WTFMove(remoteTrackPrivate));
 
     return RTCRtpReceiver::create(WTFMove(remoteTrack));
 }
 
+static inline Ref<RealtimeMediaSource> createEmptySource(const String& trackKind, String&& trackId)
+{
+    // FIXME: trackKind should be an enumeration
+    if (trackKind == "audio")
+        return RealtimeIncomingAudioSource::create(nullptr, WTFMove(trackId));
+    ASSERT(trackKind == "video");
+    return RealtimeIncomingVideoSource::create(nullptr, WTFMove(trackId));
+}
+
+Ref<RTCRtpReceiver> LibWebRTCPeerConnectionBackend::createReceiver(const String&, const String& trackKind, const String& trackId)
+{
+    auto receiver = createReceiverForSource(*m_peerConnection.scriptExecutionContext(), createEmptySource(trackKind, String(trackId)));
+    m_pendingReceivers.append(receiver.copyRef());
+    return receiver;
+}
+
+LibWebRTCPeerConnectionBackend::VideoReceiver LibWebRTCPeerConnectionBackend::videoReceiver(String&& trackId)
+{
+    // FIXME: Add to Vector a utility routine for that take-or-create pattern.
+    // FIXME: We should be selecting the receiver based on track id.
+    for (size_t cptr = 0; cptr < m_pendingReceivers.size(); ++cptr) {
+        if (m_pendingReceivers[cptr]->track()->source().type() == RealtimeMediaSource::Type::Video) {
+            Ref<RTCRtpReceiver> receiver = m_pendingReceivers[cptr].copyRef();
+            m_pendingReceivers.remove(cptr);
+            Ref<RealtimeIncomingVideoSource> source = static_cast<RealtimeIncomingVideoSource&>(receiver->track()->source());
+            return { WTFMove(receiver), WTFMove(source) };
+        }
+    }
+    auto source = RealtimeIncomingVideoSource::create(nullptr, WTFMove(trackId));
+    auto receiver = createReceiverForSource(*m_peerConnection.scriptExecutionContext(), source.copyRef());
+    return { WTFMove(receiver), WTFMove(source) };
+}
+
+LibWebRTCPeerConnectionBackend::AudioReceiver LibWebRTCPeerConnectionBackend::audioReceiver(String&& trackId)
+{
+    // FIXME: Add to Vector a utility routine for that take-or-create pattern.
+    // FIXME: We should be selecting the receiver based on track id.
+    for (size_t cptr = 0; cptr < m_pendingReceivers.size(); ++cptr) {
+        if (m_pendingReceivers[cptr]->track()->source().type() == RealtimeMediaSource::Type::Audio) {
+            Ref<RTCRtpReceiver> receiver = m_pendingReceivers[cptr].copyRef();
+            m_pendingReceivers.remove(cptr);
+            Ref<RealtimeIncomingAudioSource> source = static_cast<RealtimeIncomingAudioSource&>(receiver->track()->source());
+            return { WTFMove(receiver), WTFMove(source) };
+        }
+    }
+    auto source = RealtimeIncomingAudioSource::create(nullptr, WTFMove(trackId));
+    auto receiver = createReceiverForSource(*m_peerConnection.scriptExecutionContext(), source.copyRef());
+    return { WTFMove(receiver), WTFMove(source) };
+}
+
 std::unique_ptr<RTCDataChannelHandler> LibWebRTCPeerConnectionBackend::createDataChannelHandler(const String& label, const RTCDataChannelInit& options)
 {
     return m_endpoint->createDataChannel(label, options);
@@ -251,6 +302,18 @@ void LibWebRTCPeerConnectionBackend::notifyAddedTrack(RTCRtpSender& sender)
     m_endpoint->addTrack(*sender.track(), sender.mediaStreamIds());
 }
 
+void LibWebRTCPeerConnectionBackend::removeRemoteStream(MediaStream* mediaStream)
+{
+    m_remoteStreams.removeFirstMatching([mediaStream](const auto& item) {
+        return item.get() == mediaStream;
+    });
+}
+
+void LibWebRTCPeerConnectionBackend::addRemoteStream(Ref<MediaStream>&& mediaStream)
+{
+    m_remoteStreams.append(WTFMove(mediaStream));
+}
+
 } // namespace WebCore
 
 #endif // USE(LIBWEBRTC)
index 8560053..2598639 100644 (file)
@@ -39,6 +39,8 @@ class LibWebRTCMediaEndpoint;
 class RTCRtpReceiver;
 class RTCSessionDescription;
 class RTCStatsReport;
+class RealtimeIncomingAudioSource;
+class RealtimeIncomingVideoSource;
 class RealtimeOutgoingAudioSource;
 class RealtimeOutgoingVideoSource;
 
@@ -67,9 +69,7 @@ private:
     RefPtr<RTCSessionDescription> currentRemoteDescription() const final;
     RefPtr<RTCSessionDescription> pendingRemoteDescription() const final;
 
-    void notifyAddedTrack(RTCRtpSender&) final;
     // FIXME: API to implement for real
-    Vector<RefPtr<MediaStream>> getRemoteStreams() const final { return { }; }
     void replaceTrack(RTCRtpSender&, Ref<MediaStreamTrack>&&, DOMPromise<void>&&) final { }
 
     void emulatePlatformEvent(const String&) final { }
@@ -82,15 +82,35 @@ private:
     void getStatsSucceeded(const DeferredPromise&, Ref<RTCStatsReport>&&);
     void getStatsFailed(const DeferredPromise&, Exception&&);
 
+    Vector<RefPtr<MediaStream>> getRemoteStreams() const final { return m_remoteStreams; }
+    void removeRemoteStream(MediaStream*);
+    void addRemoteStream(Ref<MediaStream>&&);
+
+    void notifyAddedTrack(RTCRtpSender&) final;
+
+    struct VideoReceiver {
+        Ref<RTCRtpReceiver> receiver;
+        Ref<RealtimeIncomingVideoSource> source;
+    };
+    struct AudioReceiver {
+        Ref<RTCRtpReceiver> receiver;
+        Ref<RealtimeIncomingAudioSource> source;
+    };
+    VideoReceiver videoReceiver(String&& trackId);
+    AudioReceiver audioReceiver(String&& trackId);
+
 private:
     Ref<LibWebRTCMediaEndpoint> m_endpoint;
     bool m_isLocalDescriptionSet { false };
     bool m_isRemoteDescriptionSet { false };
 
+    // FIXME: Make m_remoteStreams a Vector of Ref.
+    Vector<RefPtr<MediaStream>> m_remoteStreams;
     Vector<std::unique_ptr<webrtc::IceCandidateInterface>> m_pendingCandidates;
     Vector<Ref<RealtimeOutgoingAudioSource>> m_audioSources;
     Vector<Ref<RealtimeOutgoingVideoSource>> m_videoSources;
     HashMap<const DeferredPromise*, Ref<DeferredPromise>> m_statsPromises;
+    Vector<Ref<RTCRtpReceiver>> m_pendingReceivers;
 };
 
 } // namespace WebCore
index 7df1a62..ecfd78e 100644 (file)
@@ -130,6 +130,15 @@ void RealtimeIncomingAudioSource::stopProducingData()
         m_audioTrack->RemoveSink(this);
 }
 
+void RealtimeIncomingAudioSource::setSourceTrack(rtc::scoped_refptr<webrtc::AudioTrackInterface>&& track)
+{
+    ASSERT(!m_audioTrack);
+    ASSERT(track);
+
+    m_audioTrack = WTFMove(track);
+    if (m_isProducingData)
+        m_audioTrack->AddSink(this);
+}
 
 RefPtr<RealtimeMediaSourceCapabilities> RealtimeIncomingAudioSource::capabilities() const
 {
index c7aa58b..7ddbf22 100644 (file)
@@ -48,6 +48,8 @@ class RealtimeIncomingAudioSource final : public RealtimeMediaSource, private we
 public:
     static Ref<RealtimeIncomingAudioSource> create(rtc::scoped_refptr<webrtc::AudioTrackInterface>&&, String&&);
 
+    void setSourceTrack(rtc::scoped_refptr<webrtc::AudioTrackInterface>&&);
+
 private:
     RealtimeIncomingAudioSource(rtc::scoped_refptr<webrtc::AudioTrackInterface>&&, String&&);
     ~RealtimeIncomingAudioSource();
index 92ee19e..eb526d1 100644 (file)
@@ -82,6 +82,17 @@ void RealtimeIncomingVideoSource::startProducingData()
         m_videoTrack->AddOrUpdateSink(this, rtc::VideoSinkWants());
 }
 
+void RealtimeIncomingVideoSource::setSourceTrack(rtc::scoped_refptr<webrtc::VideoTrackInterface>&& track)
+{
+    ASSERT(!m_videoTrack);
+    ASSERT(track);
+
+    m_muted = false;
+    m_videoTrack = track;
+    if (m_isProducingData)
+        m_videoTrack->AddOrUpdateSink(this, rtc::VideoSinkWants());
+}
+
 void RealtimeIncomingVideoSource::stopProducingData()
 {
     if (!m_isProducingData)
index ee5e82b..29d7076 100644 (file)
@@ -49,6 +49,8 @@ public:
     static Ref<RealtimeIncomingVideoSource> create(rtc::scoped_refptr<webrtc::VideoTrackInterface>&&, String&&);
     ~RealtimeIncomingVideoSource() { stopProducingData(); }
 
+    void setSourceTrack(rtc::scoped_refptr<webrtc::VideoTrackInterface>&&);
+
 private:
     RealtimeIncomingVideoSource(rtc::scoped_refptr<webrtc::VideoTrackInterface>&&, String&&, CFMutableDictionaryRef);