[WPE][GTK] Implement WebAudioSourceProviderGStreamer to allow bridging MediaStream and the WebAudio APIs
author    commit-queue@webkit.org <commit-queue@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Fri, 7 Dec 2018 10:48:56 +0000 (10:48 +0000)
committer commit-queue@webkit.org <commit-queue@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Fri, 7 Dec 2018 10:48:56 +0000 (10:48 +0000)
https://bugs.webkit.org/show_bug.cgi?id=186933

Source/WebCore:

Reused the AudioSourceProviderGStreamer itself, as it was already doing almost everything we needed:
just added a constructor so it can be created from a MediaStreamTrackPrivate, and made it a
WebAudioSourceProvider, which only means it is now ThreadSafeRefCounted.
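
As an illustration (not part of the patch), the new creation path is meant to be used
along these lines, where "track" stands for a live audio MediaStreamTrackPrivate and
"client" for an AudioSourceProviderClient:

    // Sketch only: MediaStreamTrackPrivate::audioSourceProvider() now vends a
    // GStreamer-backed provider (see the MediaStreamTrackPrivate.cpp hunk below).
    auto provider = AudioSourceProviderGStreamer::create(track);
    // Setting a client (for instance the one WebAudio wires up for a
    // MediaStreamAudioSourceNode) brings the internal pipeline to PLAYING,
    // so samples start flowing to provideInput().
    provider->setClient(&client);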

Sensibly refactored GStreamerMediaStreamSource so that we can reuse it to handle a single
MediaStreamTrack.
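
Concretely, the single-track mode (used by the new provider constructor shown in the
diff below) boils down to:

    // Sketch: wrap a single track, rather than a whole MediaStreamPrivate, in a
    // webkitmediastreamsrc element. Passing onlyTrack=true exposes the track's
    // pad directly and makes the element keep a reference to that one track.
    GstElement* src = webkitMediaStreamSrcNew();
    webkitMediaStreamSrcAddTrack(WEBKIT_MEDIA_STREAM_SRC(src), &track, true);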

Patch by Thibault Saunier <tsaunier@igalia.com> on 2018-12-07
Reviewed by Philippe Normand.

Enabled all tests depending on that feature.

* platform/audio/gstreamer/AudioSourceProviderGStreamer.cpp:
(WebCore::AudioSourceProviderGStreamer::AudioSourceProviderGStreamer):
(WebCore::AudioSourceProviderGStreamer::~AudioSourceProviderGStreamer):
(WebCore::AudioSourceProviderGStreamer::setClient):
* platform/audio/gstreamer/AudioSourceProviderGStreamer.h:
* platform/mediastream/MediaStreamTrackPrivate.cpp:
(WebCore::MediaStreamTrackPrivate::audioSourceProvider):
* platform/mediastream/gstreamer/GStreamerAudioCapturer.cpp:
(WebCore::GStreamerAudioCapturer::GStreamerAudioCapturer):
* platform/mediastream/gstreamer/GStreamerAudioStreamDescription.h:
* platform/mediastream/gstreamer/GStreamerMediaStreamSource.cpp:
(WebCore::webkitMediaStreamSrcSetupSrc):
(WebCore::webkitMediaStreamSrcSetupAppSrc):
(WebCore::webkitMediaStreamSrcAddTrack):
(WebCore::webkitMediaStreamSrcSetStream):
(WebCore::webkitMediaStreamSrcNew):
* platform/mediastream/gstreamer/GStreamerMediaStreamSource.h:
* platform/mediastream/gstreamer/MockGStreamerAudioCaptureSource.cpp:
(WebCore::WrappedMockRealtimeAudioSource::WrappedMockRealtimeAudioSource):
(WebCore::WrappedMockRealtimeAudioSource::start):
(WebCore::WrappedMockRealtimeAudioSource::addHum):
(WebCore::WrappedMockRealtimeAudioSource::render):
(WebCore::WrappedMockRealtimeAudioSource::settingsDidChange):
(WebCore::MockGStreamerAudioCaptureSource::startProducingData):
* platform/mediastream/gstreamer/RealtimeOutgoingAudioSourceLibWebRTC.cpp:
(WebCore::RealtimeOutgoingAudioSourceLibWebRTC::pullAudioData): Handle the case where input buffers
  are "big" and process all the data we can on each run of the method.

LayoutTests:

Patch by Thibault Saunier <tsaunier@igalia.com> on 2018-12-07
Reviewed by Philippe Normand.

Enabled all tests depending on that feature.

* platform/gtk/TestExpectations:
* webrtc/clone-audio-track.html:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@238951 268f45cc-cd09-0410-ab3c-d52691b4dbfc

15 files changed:
LayoutTests/ChangeLog
LayoutTests/platform/gtk/TestExpectations
LayoutTests/webrtc/clone-audio-track.html
Source/WebCore/ChangeLog
Source/WebCore/platform/audio/gstreamer/AudioSourceProviderGStreamer.cpp
Source/WebCore/platform/audio/gstreamer/AudioSourceProviderGStreamer.h
Source/WebCore/platform/mediastream/MediaStreamTrackPrivate.cpp
Source/WebCore/platform/mediastream/gstreamer/GStreamerAudioCapturer.cpp
Source/WebCore/platform/mediastream/gstreamer/GStreamerAudioCapturer.h
Source/WebCore/platform/mediastream/gstreamer/GStreamerAudioStreamDescription.h
Source/WebCore/platform/mediastream/gstreamer/GStreamerMediaStreamSource.cpp
Source/WebCore/platform/mediastream/gstreamer/GStreamerMediaStreamSource.h
Source/WebCore/platform/mediastream/gstreamer/MockGStreamerAudioCaptureSource.cpp
Source/WebCore/platform/mediastream/gstreamer/RealtimeOutgoingAudioSourceLibWebRTC.cpp
Source/WebCore/platform/mediastream/gstreamer/RealtimeOutgoingAudioSourceLibWebRTC.h

diff --git a/LayoutTests/ChangeLog b/LayoutTests/ChangeLog
index df26188..ba32eaf 100644
@@ -1,3 +1,15 @@
+2018-12-07  Thibault Saunier  <tsaunier@igalia.com>
+
+        [WPE][GTK] Implement WebAudioSourceProviderGStreamer to allow bridging MediaStream and the WebAudio APIs
+        https://bugs.webkit.org/show_bug.cgi?id=186933
+
+        Reviewed by Philippe Normand.
+
+        Enabled all tests depending on that feature.
+
+        * platform/gtk/TestExpectations:
+        * webrtc/clone-audio-track.html:
+
 2018-12-06  Yongjun Zhang  <yongjun_zhang@apple.com>
 
         We should ignore minimumEffectiveDeviceWidth if the page specifies device-width in viewport meta-tag.
diff --git a/LayoutTests/platform/gtk/TestExpectations b/LayoutTests/platform/gtk/TestExpectations
index 3181906..46aede2 100644
@@ -586,16 +586,6 @@ webkit.org/b/187064 webrtc/libwebrtc/descriptionGetters.html
 webkit.org/b/177533 webrtc/video-interruption.html
 
 webkit.org/b/186933 webrtc/peer-connection-createMediaStreamDestination.html
-webkit.org/b/186933 webrtc/peer-connection-remote-audio-mute2.html
-webkit.org/b/186933 webrtc/peer-connection-audio-unmute.html
-webkit.org/b/186933 webrtc/peer-connection-audio-mute2.html
-webkit.org/b/186933 webrtc/peer-connection-audio-mute.html
-webkit.org/b/186933 webrtc/peer-connection-remote-audio-mute.html
-webkit.org/b/186933 webrtc/clone-audio-track.html
-webkit.org/b/186933 webrtc/audio-replace-track.html
-webkit.org/b/186933 webrtc/audio-peer-connection-webaudio.html
-webkit.org/b/186933 webrtc/audio-muted-stats.html
-webkit.org/b/186933 webrtc/getUserMedia-webaudio-autoplay.html
 
 imported/w3c/web-platform-tests/webrtc/ [ Skip ]
 http/tests/webrtc [ Skip ]
diff --git a/LayoutTests/webrtc/clone-audio-track.html b/LayoutTests/webrtc/clone-audio-track.html
index 649b446..0f93748 100644
@@ -33,7 +33,7 @@
                 });
             }).then(() => {
                 return analyseAudio(remoteStream, 200, context).then((results) => {
-                    assert_false(results.heardHum, "Did not heard hum from remote enabled track");
+                    assert_false(results.heardHum, "Did not hear hum from remote disabled track");
                 });
             }).then(() => {
                 return analyseAudio(new MediaStream([clonedTrack]), 200, context).then((results) => {
diff --git a/Source/WebCore/ChangeLog b/Source/WebCore/ChangeLog
index f05d16d..cfd3fe4 100644
@@ -1,3 +1,47 @@
+2018-12-07  Thibault Saunier  <tsaunier@igalia.com>
+
+        [WPE][GTK] Implement WebAudioSourceProviderGStreamer to allow bridging MediaStream and the WebAudio APIs
+        https://bugs.webkit.org/show_bug.cgi?id=186933
+
+        Reused the AudioSourceProviderGStreamer itself, as it was already doing almost everything we needed:
+        just added a constructor so it can be created from a MediaStreamTrackPrivate, and made it a
+        WebAudioSourceProvider, which only means it is now ThreadSafeRefCounted.
+
+        Sensibly refactored GStreamerMediaStreamSource so that we can reuse it to handle a single
+        MediaStreamTrack.
+
+        Reviewed by Philippe Normand.
+
+        Enabled all tests depending on that feature.
+
+        * platform/audio/gstreamer/AudioSourceProviderGStreamer.cpp:
+        (WebCore::AudioSourceProviderGStreamer::AudioSourceProviderGStreamer):
+        (WebCore::AudioSourceProviderGStreamer::~AudioSourceProviderGStreamer):
+        (WebCore::AudioSourceProviderGStreamer::setClient):
+        * platform/audio/gstreamer/AudioSourceProviderGStreamer.h:
+        * platform/mediastream/MediaStreamTrackPrivate.cpp:
+        (WebCore::MediaStreamTrackPrivate::audioSourceProvider):
+        * platform/mediastream/gstreamer/GStreamerAudioCapturer.cpp:
+        (WebCore::GStreamerAudioCapturer::GStreamerAudioCapturer):
+        * platform/mediastream/gstreamer/GStreamerAudioStreamDescription.h:
+        * platform/mediastream/gstreamer/GStreamerMediaStreamSource.cpp:
+        (WebCore::webkitMediaStreamSrcSetupSrc):
+        (WebCore::webkitMediaStreamSrcSetupAppSrc):
+        (WebCore::webkitMediaStreamSrcAddTrack):
+        (WebCore::webkitMediaStreamSrcSetStream):
+        (WebCore::webkitMediaStreamSrcNew):
+        * platform/mediastream/gstreamer/GStreamerMediaStreamSource.h:
+        * platform/mediastream/gstreamer/MockGStreamerAudioCaptureSource.cpp:
+        (WebCore::WrappedMockRealtimeAudioSource::WrappedMockRealtimeAudioSource):
+        (WebCore::WrappedMockRealtimeAudioSource::start):
+        (WebCore::WrappedMockRealtimeAudioSource::addHum):
+        (WebCore::WrappedMockRealtimeAudioSource::render):
+        (WebCore::WrappedMockRealtimeAudioSource::settingsDidChange):
+        (WebCore::MockGStreamerAudioCaptureSource::startProducingData):
+        * platform/mediastream/gstreamer/RealtimeOutgoingAudioSourceLibWebRTC.cpp:
+        (WebCore::RealtimeOutgoingAudioSourceLibWebRTC::pullAudioData): Handle the case where input buffers
+          are "big" and process all the data we can on each run of the method.
+
 2018-12-06  Alexey Proskuryakov  <ap@apple.com>
 
         Move USE_NEW_THEME out of WebCore's config.h
diff --git a/Source/WebCore/platform/audio/gstreamer/AudioSourceProviderGStreamer.cpp b/Source/WebCore/platform/audio/gstreamer/AudioSourceProviderGStreamer.cpp
index d4a0ca2..e7111f3 100644
 #include <gst/audio/audio-info.h>
 #include <gst/base/gstadapter.h>
 
+#if ENABLE(MEDIA_STREAM) && USE(LIBWEBRTC)
+#include "GStreamerAudioData.h"
+#include "GStreamerMediaStreamSource.h"
+#endif
 
 namespace WebCore {
 
@@ -94,6 +98,31 @@ AudioSourceProviderGStreamer::AudioSourceProviderGStreamer()
     m_frontRightAdapter = gst_adapter_new();
 }
 
+#if ENABLE(MEDIA_STREAM) && USE(LIBWEBRTC)
+AudioSourceProviderGStreamer::AudioSourceProviderGStreamer(MediaStreamTrackPrivate& source)
+    : m_notifier(MainThreadNotifier<MainThreadNotification>::create())
+    , m_client(nullptr)
+    , m_deinterleaveSourcePads(0)
+    , m_deinterleavePadAddedHandlerId(0)
+    , m_deinterleaveNoMorePadsHandlerId(0)
+    , m_deinterleavePadRemovedHandlerId(0)
+{
+    m_frontLeftAdapter = gst_adapter_new();
+    m_frontRightAdapter = gst_adapter_new();
+    auto pipelineName = String::format("WebAudioProvider_MediaStreamTrack_%s", source.id().utf8().data());
+    m_pipeline = adoptGRef(GST_ELEMENT(g_object_ref_sink(gst_element_factory_make("pipeline", pipelineName.utf8().data()))));
+    auto src = webkitMediaStreamSrcNew();
+    webkitMediaStreamSrcAddTrack(WEBKIT_MEDIA_STREAM_SRC(src), &source, true);
+
+    m_audioSinkBin = adoptGRef(GST_ELEMENT(g_object_ref_sink(gst_parse_bin_from_description("tee name=audioTee", true, nullptr))));
+
+    gst_bin_add_many(GST_BIN(m_pipeline.get()), src, m_audioSinkBin.get(), nullptr);
+    gst_element_link(src, m_audioSinkBin.get());
+
+    connectSimpleBusMessageCallback(m_pipeline.get());
+}
+#endif
+
 AudioSourceProviderGStreamer::~AudioSourceProviderGStreamer()
 {
     m_notifier->invalidate();
@@ -105,6 +134,9 @@ AudioSourceProviderGStreamer::~AudioSourceProviderGStreamer()
         g_signal_handler_disconnect(deinterleave.get(), m_deinterleavePadRemovedHandlerId);
     }
 
+    if (m_pipeline)
+        gst_element_set_state(m_pipeline.get(), GST_STATE_NULL);
+
     g_object_unref(m_frontLeftAdapter);
     g_object_unref(m_frontRightAdapter);
 }
@@ -205,12 +237,17 @@ void AudioSourceProviderGStreamer::setClient(AudioSourceProviderClient* client)
     ASSERT(client);
     m_client = client;
 
+    if (m_pipeline)
+        gst_element_set_state(m_pipeline.get(), GST_STATE_PLAYING);
+
     // The volume element is used to mute audio playback towards the
     // autoaudiosink. This is needed to avoid double playback of audio
     // from our audio sink and from the WebAudio AudioDestination node
     // supposedly configured already by application side.
     GRefPtr<GstElement> volumeElement = adoptGRef(gst_bin_get_by_name(GST_BIN(m_audioSinkBin.get()), "volume"));
-    g_object_set(volumeElement.get(), "mute", TRUE, nullptr);
+
+    if (volumeElement)
+        g_object_set(volumeElement.get(), "mute", TRUE, nullptr);
 
     // The audioconvert and audioresample elements are needed to
     // ensure deinterleave and the sinks downstream receive buffers in
diff --git a/Source/WebCore/platform/audio/gstreamer/AudioSourceProviderGStreamer.h b/Source/WebCore/platform/audio/gstreamer/AudioSourceProviderGStreamer.h
index ab60198..92d0a86 100644
 #include <wtf/Forward.h>
 #include <wtf/Noncopyable.h>
 
+#if ENABLE(MEDIA_STREAM) && USE(LIBWEBRTC)
+#include "GStreamerAudioStreamDescription.h"
+#include "MediaStreamTrackPrivate.h"
+#include "WebAudioSourceProvider.h"
+#endif
+
 typedef struct _GstAdapter GstAdapter;
 typedef struct _GstAppSink GstAppSink;
 
 namespace WebCore {
 
+#if ENABLE(MEDIA_STREAM) && USE(LIBWEBRTC)
+class AudioSourceProviderGStreamer final : public WebAudioSourceProvider {
+public:
+    static Ref<AudioSourceProviderGStreamer> create(MediaStreamTrackPrivate& source)
+    {
+        return adoptRef(*new AudioSourceProviderGStreamer(source));
+    }
+    AudioSourceProviderGStreamer(MediaStreamTrackPrivate&);
+#else
 class AudioSourceProviderGStreamer : public AudioSourceProvider {
     WTF_MAKE_NONCOPYABLE(AudioSourceProviderGStreamer);
 public:
+#endif
+
     AudioSourceProviderGStreamer();
     ~AudioSourceProviderGStreamer();
 
@@ -54,6 +71,7 @@ public:
     void clearAdapters();
 
 private:
+    GRefPtr<GstElement> m_pipeline;
     enum MainThreadNotification {
         DeinterleavePadsConfigured = 1 << 0,
     };
diff --git a/Source/WebCore/platform/mediastream/MediaStreamTrackPrivate.cpp b/Source/WebCore/platform/mediastream/MediaStreamTrackPrivate.cpp
index 5166de6..95fa95e 100644
@@ -35,6 +35,8 @@
 
 #if PLATFORM(COCOA)
 #include "WebAudioSourceProviderAVFObjC.h"
+#elif ENABLE(WEB_AUDIO) && ENABLE(MEDIA_STREAM) && USE(LIBWEBRTC) && USE(GSTREAMER)
+#include "AudioSourceProviderGStreamer.h"
 #else
 #include "WebAudioSourceProvider.h"
 #endif
@@ -178,6 +180,9 @@ AudioSourceProvider* MediaStreamTrackPrivate::audioSourceProvider()
 #if PLATFORM(COCOA)
     if (!m_audioSourceProvider)
         m_audioSourceProvider = WebAudioSourceProviderAVFObjC::create(*this);
+#elif USE(LIBWEBRTC) && USE(GSTREAMER)
+    if (!m_audioSourceProvider)
+        m_audioSourceProvider = AudioSourceProviderGStreamer::create(*this);
 #endif
     return m_audioSourceProvider.get();
 }
diff --git a/Source/WebCore/platform/mediastream/gstreamer/GStreamerAudioCapturer.cpp b/Source/WebCore/platform/mediastream/gstreamer/GStreamerAudioCapturer.cpp
index 8df758a..14e2318 100644
@@ -36,20 +36,10 @@ GStreamerAudioCapturer::GStreamerAudioCapturer(GStreamerCaptureDevice device)
 }
 
 GStreamerAudioCapturer::GStreamerAudioCapturer()
-    : GStreamerCapturer("audiotestsrc", adoptGRef(gst_caps_new_simple("audio/x-raw", "rate", G_TYPE_INT, LibWebRTCAudioFormat::sampleRate, nullptr)))
+    : GStreamerCapturer("appsrc", adoptGRef(gst_caps_new_simple("audio/x-raw", "rate", G_TYPE_INT, LibWebRTCAudioFormat::sampleRate, nullptr)))
 {
 }
 
-GstElement* GStreamerAudioCapturer::createSource()
-{
-    GstElement* source = GStreamerCapturer::createSource();
-
-    if (!m_device)
-        gst_util_set_object_arg(G_OBJECT(m_src.get()), "wave", "ticks");
-
-    return source;
-}
-
 GstElement* GStreamerAudioCapturer::createConverter()
 {
     auto converter = gst_parse_bin_from_description("audioconvert ! audioresample", TRUE, nullptr);
@@ -62,14 +52,15 @@ GstElement* GStreamerAudioCapturer::createConverter()
 bool GStreamerAudioCapturer::setSampleRate(int sampleRate)
 {
 
-    if (sampleRate > 0) {
-        GST_INFO_OBJECT(m_pipeline.get(), "Setting SampleRate %d", sampleRate);
-        m_caps = adoptGRef(gst_caps_new_simple("audio/x-raw", "rate", G_TYPE_INT, sampleRate, nullptr));
-    } else {
+    if (sampleRate <= 0) {
         GST_INFO_OBJECT(m_pipeline.get(), "Not forcing sample rate");
-        m_caps = adoptGRef(gst_caps_new_empty_simple("audio/x-raw"));
+
+        return false;
     }
 
+    GST_INFO_OBJECT(m_pipeline.get(), "Setting SampleRate %d", sampleRate);
+    m_caps = adoptGRef(gst_caps_new_simple("audio/x-raw", "rate", G_TYPE_INT, sampleRate, nullptr));
+
     if (!m_capsfilter.get())
         return false;
 
diff --git a/Source/WebCore/platform/mediastream/gstreamer/GStreamerAudioCapturer.h b/Source/WebCore/platform/mediastream/gstreamer/GStreamerAudioCapturer.h
index 7b279ee..9aaf761 100644
@@ -32,7 +32,6 @@ public:
     GStreamerAudioCapturer(GStreamerCaptureDevice);
     GStreamerAudioCapturer();
 
-    GstElement* createSource() final;
     GstElement* createConverter() final;
     const char* name() final { return "Audio"; }
 
diff --git a/Source/WebCore/platform/mediastream/gstreamer/GStreamerAudioStreamDescription.h b/Source/WebCore/platform/mediastream/gstreamer/GStreamerAudioStreamDescription.h
index 861adef..64fced4 100644
@@ -82,6 +82,7 @@ public:
     bool isSignedInteger() const final { return GST_AUDIO_INFO_IS_INTEGER(&m_info); }
     bool isNativeEndian() const final { return GST_AUDIO_INFO_ENDIANNESS(&m_info) == G_BYTE_ORDER; }
     bool isFloat() const final { return GST_AUDIO_INFO_IS_FLOAT(&m_info); }
+    int bytesPerFrame() { return GST_AUDIO_INFO_BPF(&m_info);  }
 
     uint32_t numberOfInterleavedChannels() const final { return isInterleaved() ? GST_AUDIO_INFO_CHANNELS(&m_info) : TRUE; }
     uint32_t numberOfChannelStreams() const final { return GST_AUDIO_INFO_CHANNELS(&m_info); }
diff --git a/Source/WebCore/platform/mediastream/gstreamer/GStreamerMediaStreamSource.cpp b/Source/WebCore/platform/mediastream/gstreamer/GStreamerMediaStreamSource.cpp
index 2513ca9..927d986 100644
@@ -41,7 +41,6 @@ namespace WebCore {
 static void webkitMediaStreamSrcPushVideoSample(WebKitMediaStreamSrc* self, GstSample* gstsample);
 static void webkitMediaStreamSrcPushAudioSample(WebKitMediaStreamSrc* self, GstSample* gstsample);
 static void webkitMediaStreamSrcTrackEnded(WebKitMediaStreamSrc* self, MediaStreamTrackPrivate&);
-static void webkitMediaStreamSrcAddTrack(WebKitMediaStreamSrc* self, MediaStreamTrackPrivate*);
 static void webkitMediaStreamSrcRemoveTrackByType(WebKitMediaStreamSrc* self, RealtimeMediaSource::Type trackType);
 
 static GstStaticPadTemplate videoSrcTemplate = GST_STATIC_PAD_TEMPLATE("video_src",
@@ -156,7 +155,7 @@ public:
 
     void didAddTrack(MediaStreamTrackPrivate& track) final
     {
-        webkitMediaStreamSrcAddTrack(m_mediaStreamSrc.get(), &track);
+        webkitMediaStreamSrcAddTrack(m_mediaStreamSrc.get(), &track, false);
     }
 
     void didRemoveTrack(MediaStreamTrackPrivate& track) final
@@ -182,8 +181,8 @@ struct _WebKitMediaStreamSrc {
     std::unique_ptr<WebKitMediaStreamTrackObserver> mediaStreamTrackObserver;
     std::unique_ptr<WebKitMediaStreamObserver> mediaStreamObserver;
     volatile gint npads;
-    gulong probeid;
     RefPtr<MediaStreamPrivate> stream;
+    RefPtr<MediaStreamTrackPrivate> track;
 
     GstFlowCombiner* flowCombiner;
     GRefPtr<GstStreamCollection> streamCollection;
@@ -314,8 +313,11 @@ static GstStateChangeReturn webkitMediaStreamSrcChangeState(GstElement* element,
     if (transition == GST_STATE_CHANGE_PAUSED_TO_READY) {
 
         GST_OBJECT_LOCK(self);
-        for (auto& track : self->stream->tracks())
-            track->removeObserver(*self->mediaStreamTrackObserver.get());
+        if (self->stream) {
+            for (auto& track : self->stream->tracks())
+                track->removeObserver(*self->mediaStreamTrackObserver.get());
+        } else if (self->track)
+            self->track->removeObserver(*self->mediaStreamTrackObserver.get());
         GST_OBJECT_UNLOCK(self);
     }
 
@@ -435,22 +437,26 @@ static GstPadProbeReturn webkitMediaStreamSrcPadProbeCb(GstPad* pad, GstPadProbe
 
 static gboolean webkitMediaStreamSrcSetupSrc(WebKitMediaStreamSrc* self,
     MediaStreamTrackPrivate* track, GstElement* element,
-    GstStaticPadTemplate* pad_template, gboolean observe_track)
+    GstStaticPadTemplate* pad_template, gboolean observe_track,
+    bool onlyTrack)
 {
     auto pad = adoptGRef(gst_element_get_static_pad(element, "src"));
 
     gst_bin_add(GST_BIN(self), element);
 
-    ProbeData* data = new ProbeData;
-    data->self = WEBKIT_MEDIA_STREAM_SRC(self);
-    data->pad_template = pad_template;
-    data->track = track;
-
-    self->probeid = gst_pad_add_probe(pad.get(), (GstPadProbeType)GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
-        (GstPadProbeCallback)webkitMediaStreamSrcPadProbeCb, data,
-        [](gpointer data) {
-            delete (ProbeData*)data;
-        });
+    if (!onlyTrack) {
+        ProbeData* data = new ProbeData;
+        data->self = WEBKIT_MEDIA_STREAM_SRC(self);
+        data->pad_template = pad_template;
+        data->track = track;
+
+        gst_pad_add_probe(pad.get(), (GstPadProbeType)GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
+            (GstPadProbeCallback)webkitMediaStreamSrcPadProbeCb, data,
+            [](gpointer data) {
+                delete (ProbeData*)data;
+            });
+    } else
+        webkitMediaStreamSrcAddPad(self, pad.get(), pad_template);
 
     if (observe_track)
         track->addObserver(*self->mediaStreamTrackObserver.get());
@@ -461,12 +467,12 @@ static gboolean webkitMediaStreamSrcSetupSrc(WebKitMediaStreamSrc* self,
 
 static gboolean webkitMediaStreamSrcSetupAppSrc(WebKitMediaStreamSrc* self,
     MediaStreamTrackPrivate* track, GstElement** element,
-    GstStaticPadTemplate* pad_template)
+    GstStaticPadTemplate* pad_template, bool onlyTrack)
 {
     *element = gst_element_factory_make("appsrc", nullptr);
     g_object_set(*element, "is-live", true, "format", GST_FORMAT_TIME, nullptr);
 
-    return webkitMediaStreamSrcSetupSrc(self, track, *element, pad_template, TRUE);
+    return webkitMediaStreamSrcSetupSrc(self, track, *element, pad_template, TRUE, onlyTrack);
 }
 
 static void webkitMediaStreamSrcPostStreamCollection(WebKitMediaStreamSrc* self, MediaStreamPrivate* stream)
@@ -484,14 +490,20 @@ static void webkitMediaStreamSrcPostStreamCollection(WebKitMediaStreamSrc* self,
         gst_message_new_stream_collection(GST_OBJECT(self), self->streamCollection.get()));
 }
 
-static void webkitMediaStreamSrcAddTrack(WebKitMediaStreamSrc* self, MediaStreamTrackPrivate* track)
+bool webkitMediaStreamSrcAddTrack(WebKitMediaStreamSrc* self, MediaStreamTrackPrivate* track, bool onlyTrack)
 {
+    bool res = false;
     if (track->type() == RealtimeMediaSource::Type::Audio)
-        webkitMediaStreamSrcSetupAppSrc(self, track, &self->audioSrc, &audioSrcTemplate);
+        res = webkitMediaStreamSrcSetupAppSrc(self, track, &self->audioSrc, &audioSrcTemplate, onlyTrack);
     else if (track->type() == RealtimeMediaSource::Type::Video)
-        webkitMediaStreamSrcSetupAppSrc(self, track, &self->videoSrc, &videoSrcTemplate);
+        res = webkitMediaStreamSrcSetupAppSrc(self, track, &self->videoSrc, &videoSrcTemplate, onlyTrack);
     else
         GST_INFO("Unsupported track type: %d", static_cast<int>(track->type()));
+
+    if (onlyTrack && res)
+        self->track = track;
+
+    return false;
 }
 
 static void webkitMediaStreamSrcRemoveTrackByType(WebKitMediaStreamSrc* self, RealtimeMediaSource::Type trackType)
@@ -524,7 +536,7 @@ bool webkitMediaStreamSrcSetStream(WebKitMediaStreamSrc* self, MediaStreamPrivat
     self->stream = stream;
     self->stream->addObserver(*self->mediaStreamObserver.get());
     for (auto& track : stream->tracks())
-        webkitMediaStreamSrcAddTrack(self, track.get());
+        webkitMediaStreamSrcAddTrack(self, track.get(), false);
 
     return TRUE;
 }
@@ -593,6 +605,11 @@ static void webkitMediaStreamSrcTrackEnded(WebKitMediaStreamSrc* self,
     gst_pad_push_event(pad.get(), gst_event_new_eos());
 }
 
+GstElement* webkitMediaStreamSrcNew(void)
+{
+    return GST_ELEMENT(g_object_new(webkit_media_stream_src_get_type(), nullptr));
+}
+
 } // WebCore
 #endif // GST_CHECK_VERSION(1, 10, 0)
 #endif // ENABLE(VIDEO) && ENABLE(MEDIA_STREAM) && USE(LIBWEBRTC)
diff --git a/Source/WebCore/platform/mediastream/gstreamer/GStreamerMediaStreamSource.h b/Source/WebCore/platform/mediastream/gstreamer/GStreamerMediaStreamSource.h
index 1599bae..c33bd02 100644
@@ -41,6 +41,8 @@ typedef struct _WebKitMediaStreamSrc WebKitMediaStreamSrc;
 #define WEBKIT_TYPE_MEDIA_STREAM_SRC (webkit_media_stream_src_get_type())
 GType webkit_media_stream_src_get_type(void) G_GNUC_CONST;
 bool webkitMediaStreamSrcSetStream(WebKitMediaStreamSrc*, MediaStreamPrivate*);
+bool webkitMediaStreamSrcAddTrack(WebKitMediaStreamSrc*, MediaStreamTrackPrivate*, bool onlyTrack);
+GstElement * webkitMediaStreamSrcNew(void);
 } // WebCore
 
 #endif // ENABLE(VIDEO) && ENABLE(MEDIA_STREAM) && USE(LIBWEBRTC)
diff --git a/Source/WebCore/platform/mediastream/gstreamer/MockGStreamerAudioCaptureSource.cpp b/Source/WebCore/platform/mediastream/gstreamer/MockGStreamerAudioCaptureSource.cpp
index fd52ebf..16bb68c 100644
 #if ENABLE(MEDIA_STREAM) && USE(LIBWEBRTC) && USE(GSTREAMER)
 #include "MockGStreamerAudioCaptureSource.h"
 
+#include "GStreamerAudioStreamDescription.h"
 #include "MockRealtimeAudioSource.h"
 
+#include <gst/app/gstappsrc.h>
+
 namespace WebCore {
 
+static const double s_Tau = 2 * M_PI;
+static const double s_BipBopDuration = 0.07;
+static const double s_BipBopVolume = 0.5;
+static const double s_BipFrequency = 1500;
+static const double s_BopFrequency = 500;
+static const double s_HumFrequency = 150;
+static const double s_HumVolume = 0.1;
+
 class WrappedMockRealtimeAudioSource : public MockRealtimeAudioSource {
 public:
     WrappedMockRealtimeAudioSource(String&& deviceID, String&& name, String&& hashSalt)
         : MockRealtimeAudioSource(WTFMove(deviceID), WTFMove(name), WTFMove(hashSalt))
+        , m_src(nullptr)
+    {
+    }
+
+    void start(GRefPtr<GstElement> src)
+    {
+        m_src = src;
+        if (m_streamFormat)
+            gst_app_src_set_caps(GST_APP_SRC(m_src.get()), m_streamFormat->caps());
+        MockRealtimeAudioSource::start();
+    }
+
+    void addHum(float amplitude, float frequency, float sampleRate, uint64_t start, float *p, uint64_t count)
+    {
+        float humPeriod = sampleRate / frequency;
+        for (uint64_t i = start, end = start + count; i < end; ++i) {
+            float a = amplitude * sin(i * s_Tau / humPeriod);
+            a += *p;
+            *p++ = a;
+        }
+    }
+
+    void render(Seconds delta)
     {
+        ASSERT(m_src);
+
+        uint32_t totalFrameCount = GST_ROUND_UP_16(static_cast<size_t>(delta.seconds() * sampleRate()));
+        uint32_t frameCount = std::min(totalFrameCount, m_maximiumFrameCount);
+        while (frameCount) {
+            uint32_t bipBopStart = m_samplesRendered % m_bipBopBuffer.size();
+            uint32_t bipBopRemain = m_bipBopBuffer.size() - bipBopStart;
+            uint32_t bipBopCount = std::min(frameCount, bipBopRemain);
+
+            GstBuffer* buffer = gst_buffer_new_allocate(nullptr, bipBopCount * m_streamFormat->bytesPerFrame(), nullptr);
+            {
+                GstMappedBuffer map(buffer, GST_MAP_WRITE);
+
+                if (!muted()) {
+                    memcpy(map.data(), &m_bipBopBuffer[bipBopStart], sizeof(float) * bipBopCount);
+                    addHum(s_HumVolume, s_HumFrequency, sampleRate(), m_samplesRendered, (float*)map.data(), bipBopCount);
+                } else
+                    memset(map.data(), 0, sizeof(float) * bipBopCount);
+            }
+
+            gst_app_src_push_buffer(GST_APP_SRC(m_src.get()), buffer);
+            m_samplesRendered += bipBopCount;
+            totalFrameCount -= bipBopCount;
+            frameCount = std::min(totalFrameCount, m_maximiumFrameCount);
+        }
     }
+
+    void settingsDidChange(OptionSet<RealtimeMediaSourceSettings::Flag> settings)
+    {
+        if (settings.contains(RealtimeMediaSourceSettings::Flag::SampleRate)) {
+            GstAudioInfo info;
+            auto rate = sampleRate();
+            size_t sampleCount = 2 * rate;
+
+            m_maximiumFrameCount = WTF::roundUpToPowerOfTwo(renderInterval().seconds() * sampleRate());
+            gst_audio_info_set_format(&info, GST_AUDIO_FORMAT_F32LE, rate, 1, nullptr);
+            m_streamFormat = GStreamerAudioStreamDescription(info);
+
+            if (m_src)
+                gst_app_src_set_caps(GST_APP_SRC(m_src.get()), m_streamFormat->caps());
+
+            m_bipBopBuffer.grow(sampleCount);
+            m_bipBopBuffer.fill(0);
+
+            size_t bipBopSampleCount = ceil(s_BipBopDuration * rate);
+            size_t bipStart = 0;
+            size_t bopStart = rate;
+
+            addHum(s_BipBopVolume, s_BipFrequency, rate, 0, static_cast<float*>(m_bipBopBuffer.data() + bipStart), bipBopSampleCount);
+            addHum(s_BipBopVolume, s_BopFrequency, rate, 0, static_cast<float*>(m_bipBopBuffer.data() + bopStart), bipBopSampleCount);
+        }
+
+        MockRealtimeAudioSource::settingsDidChange(settings);
+    }
+
+    GRefPtr<GstElement> m_src;
+    std::optional<GStreamerAudioStreamDescription> m_streamFormat;
+    Vector<float> m_bipBopBuffer;
+    uint32_t m_maximiumFrameCount;
+    uint64_t m_samplesEmitted { 0 };
+    uint64_t m_samplesRendered { 0 };
 };
 
 CaptureSourceOrError MockRealtimeAudioSource::create(String&& deviceID,
@@ -80,7 +174,7 @@ void MockGStreamerAudioCaptureSource::stopProducingData()
 void MockGStreamerAudioCaptureSource::startProducingData()
 {
     GStreamerAudioCaptureSource::startProducingData();
-    m_wrappedSource->start();
+    static_cast<WrappedMockRealtimeAudioSource*>(m_wrappedSource.get())->start(capturer()->source());
 }
 
 const RealtimeMediaSourceSettings& MockGStreamerAudioCaptureSource::settings()
diff --git a/Source/WebCore/platform/mediastream/gstreamer/RealtimeOutgoingAudioSourceLibWebRTC.cpp b/Source/WebCore/platform/mediastream/gstreamer/RealtimeOutgoingAudioSourceLibWebRTC.cpp
index 534af77..5e7f977 100644
@@ -108,33 +108,26 @@ void RealtimeOutgoingAudioSourceLibWebRTC::pullAudioData()
     size_t inChunkSampleCount = gst_audio_converter_get_in_frames(m_sampleConverter.get(), outChunkSampleCount);
     size_t inBufferSize = inChunkSampleCount * m_inputStreamDescription->getInfo()->bpf;
 
-    auto available = gst_adapter_available(m_adapter.get());
-    if (inBufferSize > available) {
-        GST_DEBUG("Not enough data: wanted: %ld > %ld available",
-            inBufferSize, available);
-
-        return;
-    }
-
-    auto inBuffer = adoptGRef(gst_adapter_take_buffer(m_adapter.get(), inBufferSize));
-    auto outBuffer = adoptGRef(gst_buffer_new_allocate(nullptr, outBufferSize, 0));
-    GstMappedBuffer outMap(outBuffer.get(), GST_MAP_WRITE);
-    if (isSilenced())
-        gst_audio_format_fill_silence(m_outputStreamDescription->getInfo()->finfo, outMap.data(), outMap.size());
-    else {
-        GstMappedBuffer inMap(inBuffer.get(), GST_MAP_READ);
-
-        gpointer in[1] = { inMap.data() };
-        gpointer out[1] = { outMap.data() };
-        if (!gst_audio_converter_samples(m_sampleConverter.get(), static_cast<GstAudioConverterFlags>(0), in, inChunkSampleCount, out, outChunkSampleCount)) {
-            GST_ERROR("Could not convert samples.");
-
-            return;
+    while (gst_adapter_available(m_adapter.get()) > inBufferSize) {
+        auto inBuffer = adoptGRef(gst_adapter_take_buffer(m_adapter.get(), inBufferSize));
+        m_audioBuffer.grow(outBufferSize);
+        if (isSilenced())
+            gst_audio_format_fill_silence(m_outputStreamDescription->getInfo()->finfo, m_audioBuffer.data(), outBufferSize);
+        else {
+            GstMappedBuffer inMap(inBuffer.get(), GST_MAP_READ);
+
+            gpointer in[1] = { inMap.data() };
+            gpointer out[1] = { m_audioBuffer.data() };
+            if (!gst_audio_converter_samples(m_sampleConverter.get(), static_cast<GstAudioConverterFlags>(0), in, inChunkSampleCount, out, outChunkSampleCount)) {
+                GST_ERROR("Could not convert samples.");
+
+                return;
+            }
         }
-    }
 
-    sendAudioFrames(outMap.data(), LibWebRTCAudioFormat::sampleSize, static_cast<int>(m_outputStreamDescription->sampleRate()),
-        static_cast<int>(m_outputStreamDescription->numberOfChannels()), outChunkSampleCount);
+        sendAudioFrames(m_audioBuffer.data(), LibWebRTCAudioFormat::sampleSize, static_cast<int>(m_outputStreamDescription->sampleRate()),
+            static_cast<int>(m_outputStreamDescription->numberOfChannels()), outChunkSampleCount);
+    }
 }
 
 bool RealtimeOutgoingAudioSourceLibWebRTC::isReachingBufferedAudioDataHighLimit()