[WebRTC] Fix remote audio rendering
author    jer.noble@apple.com <jer.noble@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Mon, 27 Feb 2017 18:22:49 +0000 (18:22 +0000)
committer jer.noble@apple.com <jer.noble@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Mon, 27 Feb 2017 18:22:49 +0000 (18:22 +0000)
https://bugs.webkit.org/show_bug.cgi?id=168898

Reviewed by Eric Carlson.

Source/WebCore:

Test: webrtc/audio-peer-connection-webaudio.html

Fix MediaStreamAudioSourceNode by not bailing out early if the input sample rate doesn't match
the AudioContext's sample rate; there's code in setFormat() to do the sample rate conversion
correctly.

* Modules/webaudio/MediaStreamAudioSourceNode.cpp:
(WebCore::MediaStreamAudioSourceNode::setFormat):

Fix AudioSampleBufferList by making the AudioConverter input proc a free function, and by
passing as its refCon a struct containing only the information the proc needs to perform its
task. Because the conversion may produce a different number of output samples than it is given
as input, just ask the converter to fill the entire capacity of the scratch buffer, and have
the input proc signal that the input buffer was fully consumed with a special return value.
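
In sketch form (condensed from the AudioSampleBufferList.cpp hunk below, with error checking
omitted), the callback pattern is:

    struct AudioConverterFromABLContext {
        const AudioBufferList& buffer;
        size_t packetsAvailableToConvert;
        size_t bytesPerPacket;
    };

    // Sentinel status: not a real error, just "input exhausted"; the caller treats it as success.
    static const OSStatus kRanOutOfInputDataStatus = '!mor';

    static OSStatus audioConverterFromABLCallback(AudioConverterRef, UInt32* ioNumberDataPackets,
        AudioBufferList* ioData, AudioStreamPacketDescription**, void* inRefCon)
    {
        auto& context = *static_cast<AudioConverterFromABLContext*>(inRefCon);
        if (!context.packetsAvailableToConvert) {
            *ioNumberDataPackets = 0;
            return kRanOutOfInputDataStatus; // Everything was handed out on an earlier call.
        }
        *ioNumberDataPackets = static_cast<UInt32>(context.packetsAvailableToConvert);
        for (uint32_t i = 0; i < ioData->mNumberBuffers; ++i) {
            ioData->mBuffers[i].mData = context.buffer.mBuffers[i].mData;
            ioData->mBuffers[i].mDataByteSize = context.packetsAvailableToConvert * context.bytesPerPacket;
        }
        context.packetsAvailableToConvert = 0; // Hand out all of the input in one shot.
        return 0;
    }

AudioConverterFillComplexBuffer() then runs until the scratch buffer's capacity is filled or
the callback returns kRanOutOfInputDataStatus, which copyFrom() treats as success.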

* platform/audio/mac/AudioSampleBufferList.cpp:
(WebCore::audioConverterFromABLCallback):
(WebCore::AudioSampleBufferList::copyFrom):
(WebCore::AudioSampleBufferList::convertInput): Deleted.
(WebCore::AudioSampleBufferList::audioConverterCallback): Deleted.
* platform/audio/mac/AudioSampleBufferList.h:

Fix AudioSampleDataSource by updating both the sampleCount and the sampleTime after doing a
sample rate conversion, since the conversion may change both the number of samples and the
timeScale of the sampleTime. Rescaling sampleTime can introduce small off-by-one rounding
errors, so remember what the next expected sampleTime should be, and correct sampleTime when it
is indeed off by one. If the pull operation has gotten ahead of the push operation, delay the
next pull by the amount of missing data by rolling back m_outputSampleOffset. Introduce the
same offset behavior during pull operations.
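
To see the off-by-one concretely, here is a standalone sketch (assumed rates and chunk size,
not WebCore code) of rescaling push timestamps from a 44.1 kHz input timescale to a 48 kHz
output timescale with truncating division:

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    // Truncating rescale, analogous to MediaTime::toTimeScale(..., RoundingFlags::TowardZero).
    static int64_t rescale(int64_t value, int64_t fromScale, int64_t toScale)
    {
        return value * toScale / fromScale;
    }

    int main()
    {
        const int64_t inputRate = 44100, outputRate = 48000, chunk = 128; // Assumed sizes.
        int64_t expectedNext = -1;
        for (int64_t t = 0; t <= 5 * chunk; t += chunk) {
            int64_t converted = rescale(t, inputRate, outputRate);
            // At t = 512 and t = 640 the truncated result lands one tick past the end of the
            // previous chunk; snapping to the remembered value keeps the pushes contiguous.
            if (expectedNext >= 0 && std::llabs(converted - expectedNext) == 1)
                converted = expectedNext;
            std::printf("push at %lld (output timescale)\n", static_cast<long long>(converted));
            expectedNext = converted + rescale(chunk, inputRate, outputRate);
        }
        return 0;
    }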

* platform/audio/mac/AudioSampleDataSource.h:
* platform/audio/mac/AudioSampleDataSource.mm:
(WebCore::AudioSampleDataSource::pushSamplesInternal):
(WebCore::AudioSampleDataSource::pullSamplesInternal):
(WebCore::AudioSampleDataSource::pullAvalaibleSamplesAsChunks):

Fix MediaPlayerPrivateMediaStreamAVFObjC by obeying the m_muted property.

* platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm:
(WebCore::MediaPlayerPrivateMediaStreamAVFObjC::setVolume):
(WebCore::MediaPlayerPrivateMediaStreamAVFObjC::setMuted):

Fix LibWebRTCAudioModule by sleeping for the correct amount of time after emitting frames.
Previously, LibWebRTCAudioModule would sleep for a fixed amount of time, so it would slowly
drift out of sync whenever emitting frames took a non-zero amount of time. Now the time
remaining before the next cycle is calculated, and LibWebRTCAudioModule sleeps for exactly
that long, waking up at the beginning of the next cycle.
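
A standalone sketch of the wakeup math (10 ms interval, with the WTF/rtc plumbing replaced by
std::chrono for illustration):

    #include <chrono>
    #include <cmath>
    #include <thread>

    int main()
    {
        using namespace std::chrono;
        const double pollIntervalMs = 10; // libwebrtc consumes audio in 10 ms chunks.
        const auto start = steady_clock::now();
        for (int cycle = 0; cycle < 100; ++cycle) {
            // ... emit one interval's worth of frames here; this takes a variable amount of time ...
            double elapsedMs = duration<double, std::milli>(steady_clock::now() - start).count();
            // Sleep only for what remains of the current interval, so wakeups stay aligned
            // to interval boundaries instead of drifting by however long the work took.
            double sleepForMs = pollIntervalMs - std::remainder(elapsedMs, pollIntervalMs);
            std::this_thread::sleep_for(duration<double, std::milli>(sleepForMs));
        }
        return 0;
    }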

* platform/mediastream/libwebrtc/LibWebRTCAudioModule.cpp:
(WebCore::LibWebRTCAudioModule::StartPlayoutOnAudioThread):

Fix AudioTrackPrivateMediaStreamCocoa by using the output unit's preferred format description
(with the current system sample rate), rather than whatever the current input description
happens to be.

* platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.cpp:
(WebCore::AudioTrackPrivateMediaStreamCocoa::createAudioUnit):
(WebCore::AudioTrackPrivateMediaStreamCocoa::audioSamplesAvailable):
* platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.h:

Fix RealtimeIncomingAudioSource by actually creating an AudioSourceProvider when asked.

* platform/mediastream/mac/RealtimeIncomingAudioSource.cpp:
(WebCore::RealtimeIncomingAudioSource::OnData):
(WebCore::RealtimeIncomingAudioSource::audioSourceProvider):
* platform/mediastream/mac/RealtimeIncomingAudioSource.h:

Fix RealtimeOutgoingAudioSource by using the outgoing format description rather than the
incoming one to determine the sample rate, channel count, sample byte size, etc., to use
when delivering data upstream to libWebRTC.
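
For illustration (rates assumed), the 10 ms chunk math now derives everything from the
outgoing description, so the buffer size and the OnData() arguments cannot disagree:

    #include <cstddef>
    #include <cstdint>

    int main()
    {
        const size_t outputSampleRate = 48000;         // Assumed libwebrtc-side rate.
        const size_t numberOfChannels = 1;             // Assumed.
        const size_t sampleByteSize = sizeof(int16_t); // 16-bit integer samples.

        // libwebrtc expects 10 ms chunks, i.e. sampleRate / 100 frames at a time.
        const size_t chunkSampleCount = outputSampleRate / 100; // 480 frames.
        const size_t bufferSize = chunkSampleCount * sampleByteSize * numberOfChannels; // 960 bytes.

        static_cast<void>(bufferSize);
        return 0;
    }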

* platform/mediastream/mac/RealtimeOutgoingAudioSource.cpp:
(WebCore::RealtimeOutgoingAudioSource::audioSamplesAvailable):
(WebCore::RealtimeOutgoingAudioSource::pullAudioData):
* platform/mediastream/mac/RealtimeOutgoingAudioSource.h:

Fix WebAudioSourceProviderAVFObjC by using an AudioSampleDataSource to do format and sample
rate conversion, rather than duplicating all of that code here with a CARingBuffer and an
AudioConverter used directly.
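
The float, non-interleaved output format that Web Audio expects is now built with CoreAudio's
FillOutASBDForLPCM() helper rather than by hand; roughly (a sketch of what prepare() does in
the hunk below):

    #include <CoreAudio/CoreAudioTypes.h>

    // Builds the provider's output format: native-endian float, non-interleaved,
    // matching the input's sample rate and channel count.
    static AudioStreamBasicDescription makeWebAudioOutputFormat(double sampleRate, UInt32 channels)
    {
        AudioStreamBasicDescription asbd { };
        const UInt32 bitsPerChannel = 8 * sizeof(Float32);
        FillOutASBDForLPCM(asbd, sampleRate, channels, bitsPerChannel, bitsPerChannel,
            true /* isFloat */, false /* isBigEndian */, true /* isNonInterleaved */);
        return asbd;
    }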

* platform/mediastream/mac/WebAudioSourceProviderAVFObjC.h:
* platform/mediastream/mac/WebAudioSourceProviderAVFObjC.mm:
(WebCore::WebAudioSourceProviderAVFObjC::~WebAudioSourceProviderAVFObjC):
(WebCore::WebAudioSourceProviderAVFObjC::provideInput):
(WebCore::WebAudioSourceProviderAVFObjC::prepare):
(WebCore::WebAudioSourceProviderAVFObjC::unprepare):
(WebCore::WebAudioSourceProviderAVFObjC::audioSamplesAvailable):

Fix the MockLibWebRTCAudioTrack by passing the AddSink() sink along to its AudioSourceInterface,
allowing the RealtimeOutgoingAudioSource to push data into the libWebRTC network stack. Also,
make sure m_enabled is initialized (to true) rather than left indeterminate.

* testing/MockLibWebRTCPeerConnection.h:

LayoutTests:

* webrtc/audio-peer-connection-webaudio-expected.txt: Added.
* webrtc/audio-peer-connection-webaudio.html: Added.

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@213080 268f45cc-cd09-0410-ab3c-d52691b4dbfc

20 files changed:
LayoutTests/ChangeLog
LayoutTests/webrtc/audio-peer-connection-webaudio-expected.txt [new file with mode: 0644]
LayoutTests/webrtc/audio-peer-connection-webaudio.html [new file with mode: 0644]
Source/WebCore/ChangeLog
Source/WebCore/Modules/webaudio/MediaStreamAudioSourceNode.cpp
Source/WebCore/platform/audio/mac/AudioSampleBufferList.cpp
Source/WebCore/platform/audio/mac/AudioSampleBufferList.h
Source/WebCore/platform/audio/mac/AudioSampleDataSource.h
Source/WebCore/platform/audio/mac/AudioSampleDataSource.mm
Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm
Source/WebCore/platform/mediastream/libwebrtc/LibWebRTCAudioModule.cpp
Source/WebCore/platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.cpp
Source/WebCore/platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.h
Source/WebCore/platform/mediastream/mac/RealtimeIncomingAudioSource.cpp
Source/WebCore/platform/mediastream/mac/RealtimeIncomingAudioSource.h
Source/WebCore/platform/mediastream/mac/RealtimeOutgoingAudioSource.cpp
Source/WebCore/platform/mediastream/mac/RealtimeOutgoingAudioSource.h
Source/WebCore/platform/mediastream/mac/WebAudioSourceProviderAVFObjC.h
Source/WebCore/platform/mediastream/mac/WebAudioSourceProviderAVFObjC.mm
Source/WebCore/testing/MockLibWebRTCPeerConnection.h

diff --git a/LayoutTests/ChangeLog b/LayoutTests/ChangeLog
index cafb694..0e2bccf 100644
--- a/LayoutTests/ChangeLog
+++ b/LayoutTests/ChangeLog
@@ -1,3 +1,13 @@
+2017-02-27  Jer Noble  <jer.noble@apple.com>
+
+        [WebRTC] Fix remote audio rendering
+        https://bugs.webkit.org/show_bug.cgi?id=168898
+
+        Reviewed by Eric Carlson.
+
+        * webrtc/audio-peer-connection-webaudio-expected.txt: Added.
+        * webrtc/audio-peer-connection-webaudio.html: Added.
+
 2017-02-27  Fujii Hironori  <Hironori.Fujii@sony.com>
 
         compositing/transitions/transform-on-large-layer.html : ImageDiff produced stderr output
diff --git a/LayoutTests/webrtc/audio-peer-connection-webaudio-expected.txt b/LayoutTests/webrtc/audio-peer-connection-webaudio-expected.txt
new file mode 100644
index 0000000..8b666b5
--- /dev/null
+++ b/LayoutTests/webrtc/audio-peer-connection-webaudio-expected.txt
@@ -0,0 +1,3 @@
+
+PASS Basic audio playback through a peer connection 
+
diff --git a/LayoutTests/webrtc/audio-peer-connection-webaudio.html b/LayoutTests/webrtc/audio-peer-connection-webaudio.html
new file mode 100644
index 0000000..a57eefd
--- /dev/null
+++ b/LayoutTests/webrtc/audio-peer-connection-webaudio.html
@@ -0,0 +1,84 @@
+<!DOCTYPE html>
+<html>
+<head>
+    <meta charset="utf-8">
+    <title>Testing local audio capture playback causes "playing" event to fire</title>
+    <script src="../resources/testharness.js"></script>
+    <script src="../resources/testharnessreport.js"></script>
+    <script src ="routines.js"></script>
+    <script>
+    var test = async_test(() => {
+        if (window.internals)
+            internals.useMockRTCPeerConnectionFactory("TwoRealPeerConnections");
+
+        if (window.testRunner)
+            testRunner.setUserMediaPermission(true);
+
+        var heardHum = false;
+        var heardBop = false;
+        var heardBip = false;
+
+        navigator.mediaDevices.getUserMedia({audio: true}).then((stream) => {
+            createConnections((firstConnection) => {
+                firstConnection.addStream(stream);
+            }, (secondConnection) => {
+                secondConnection.onaddstream = (streamEvent) => { 
+                    var context = new webkitAudioContext();
+                    var sourceNode = context.createMediaStreamSource(streamEvent.stream);
+                    var analyser = context.createAnalyser();
+                    var gain = context.createGain();
+
+                    analyser.fftSize = 2048;
+                    analyser.smoothingTimeConstant = 0;
+                    analyser.minDecibels = -100;
+                    analyser.maxDecibels = 0;
+                    gain.gain.value = 0;
+
+                    sourceNode.connect(analyser);
+                    analyser.connect(gain);
+                    gain.connect(context.destination);
+
+                    function analyse() {
+                        var freqDomain = new Uint8Array(analyser.frequencyBinCount);
+                        analyser.getByteFrequencyData(freqDomain);
+
+                        var hasFrequency = expectedFrequency => {
+                            var bin = Math.floor(expectedFrequency * analyser.fftSize / context.sampleRate);
+                            return bin < freqDomain.length && freqDomain[bin] >= 150;
+                        };
+
+                        if (!heardHum)
+                            heardHum = hasFrequency(150);
+
+                        if (!heardBip)
+                            heardBip = hasFrequency(1500);
+
+                        if (!heardBop)
+                            heardBop = hasFrequency(500);
+
+                        if (heardHum && heardBip && heardBop)
+                            done();
+                    };
+
+                    var done = () => {
+                        clearTimeout(timeout);
+                        clearInterval(interval);
+
+                        assert_true(heardHum);
+                        assert_true(heardBip);
+                        assert_true(heardBop);
+                        test.done();
+                    };
+
+                    var timeout = setTimeout(done, 3000);
+                    var interval = setInterval(analyse, 1000 / 30);
+                    analyse();
+                }
+            });
+        });
+    }, "Basic audio playback through a peer connection");
+    </script>
+</head>
+<body>
+</body>
+</html>
diff --git a/Source/WebCore/ChangeLog b/Source/WebCore/ChangeLog
index 28f090a..e7f5131 100644
--- a/Source/WebCore/ChangeLog
+++ b/Source/WebCore/ChangeLog
@@ -1,3 +1,104 @@
+2017-02-27  Jer Noble  <jer.noble@apple.com>
+
+        [WebRTC] Fix remote audio rendering
+        https://bugs.webkit.org/show_bug.cgi?id=168898
+
+        Reviewed by Eric Carlson.
+
+        Test: webrtc/audio-peer-connection-webaudio.html
+
+        Fix MediaStreamAudioSourceNode by not bailing out early if the input sample rate doesn't match
+        the AudioContext's sample rate; there's code in setFormat() to do the sample rate conversion
+        correctly.
+
+        * Modules/webaudio/MediaStreamAudioSourceNode.cpp:
+        (WebCore::MediaStreamAudioSourceNode::setFormat):
+
+        Fix AudioSampleBufferList by making the AudioConverter input proc a free function, and passing
+        its refCon a struct containing only the information it needs to perform its task. Because the
+        conversion may result in a different number of output samples than input ones, just ask to
+        generate the entire capacity of the scratch buffer, and signal that the input buffer was fully
+        converted with a special return value.
+
+        * platform/audio/mac/AudioSampleBufferList.cpp:
+        (WebCore::audioConverterFromABLCallback):
+        (WebCore::AudioSampleBufferList::copyFrom):
+        (WebCore::AudioSampleBufferList::convertInput): Deleted.
+        (WebCore::AudioSampleBufferList::audioConverterCallback): Deleted.
+        * platform/audio/mac/AudioSampleBufferList.h:
+
+        Fix AudioSampleDataSource by updating both the sampleCount and the sampleTime after doing
+        a sample rate conversion to take into account that both the number of samples may have changed,
+        as well as the timeScale of the sampleTime. This may result in small off-by-one rounding errors
+        due to the sample rate conversion of sampleTime, so remember what the next expected sampleTime
+        should be, and correct sampleTime if it is indeed off-by-one. If the pull operation has gotten
+        ahead of the push operation, delay the next pull by the empty amount by rolling back the
+        m_outputSampleOffset. Introduce the same offset behavior during pull operations.
+
+        * platform/audio/mac/AudioSampleDataSource.h:
+        * platform/audio/mac/AudioSampleDataSource.mm:
+        (WebCore::AudioSampleDataSource::pushSamplesInternal):
+        (WebCore::AudioSampleDataSource::pullSamplesInternal):
+        (WebCore::AudioSampleDataSource::pullAvalaibleSamplesAsChunks):
+
+        Fix MediaPlayerPrivateMediaStreamAVFObjC by obeying the m_muted property.
+
+        * platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm:
+        (WebCore::MediaPlayerPrivateMediaStreamAVFObjC::setVolume):
+        (WebCore::MediaPlayerPrivateMediaStreamAVFObjC::setMuted):
+
+        Fix LibWebRTCAudioModule by sleeping for the correct amount after emitting frames. Previously,
+        LibWebRTCAudioModule would sleep for a fixed amount of time, which meant it would get slowly out
+        of sync when emitting frames took a non-zero amount of time. Now, the amount of time before the
+        next cycle starts is correctly calculated, and then LibWebRTCAudioModule sleeps for a dynamic amount
+        of time in order to wake up correctly at the beginning of the next cycle.
+
+        * platform/mediastream/libwebrtc/LibWebRTCAudioModule.cpp:
+        (WebCore::LibWebRTCAudioModule::StartPlayoutOnAudioThread):
+
+        Fix AudioTrackPrivateMediaStreamCocoa by just using the output unit's preferred format
+        description (with the current system sample rate), rather than whatever is the current
+        input description.
+
+        * platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.cpp:
+        (WebCore::AudioTrackPrivateMediaStreamCocoa::createAudioUnit):
+        (WebCore::AudioTrackPrivateMediaStreamCocoa::audioSamplesAvailable):
+        * platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.h:
+
+        Fix RealtimeIncomingAudioSource by actually creating an AudioSourceProvider when asked.
+
+        * platform/mediastream/mac/RealtimeIncomingAudioSource.cpp:
+        (WebCore::RealtimeIncomingAudioSource::OnData):
+        (WebCore::RealtimeIncomingAudioSource::audioSourceProvider):
+        * platform/mediastream/mac/RealtimeIncomingAudioSource.h:
+
+        Fix RealtimeOutgoingAudioSource by using the outgoing format description rather than the
+        incoming one to determine the sample rate, channel count, sample byte size, etc., to use
+        when delivering data upstream to libWebRTC.
+
+        * platform/mediastream/mac/RealtimeOutgoingAudioSource.cpp:
+        (WebCore::RealtimeOutgoingAudioSource::audioSamplesAvailable):
+        (WebCore::RealtimeOutgoingAudioSource::pullAudioData):
+        * platform/mediastream/mac/RealtimeOutgoingAudioSource.h:
+
+        Fix WebAudioSourceProviderAVFObjC by using a AudioSampleDataSource to do format and sample
+        rate conversion rather than trying to duplicate all that code and use a CARingBuffer and 
+        AudioConverter directly.
+
+        * platform/mediastream/mac/WebAudioSourceProviderAVFObjC.h:
+        * platform/mediastream/mac/WebAudioSourceProviderAVFObjC.mm:
+        (WebCore::WebAudioSourceProviderAVFObjC::~WebAudioSourceProviderAVFObjC):
+        (WebCore::WebAudioSourceProviderAVFObjC::provideInput):
+        (WebCore::WebAudioSourceProviderAVFObjC::prepare):
+        (WebCore::WebAudioSourceProviderAVFObjC::unprepare):
+        (WebCore::WebAudioSourceProviderAVFObjC::audioSamplesAvailable):
+
+        Fix the MockLibWebRTCAudioTrack by passing along the AddSink() sink to its AudioSourceInterface,
+        allowing the RealtimeOutgoingAudioSource to push data into the libWebRTC network stack. Also,
+        make sure m_enabled is initialized to a good value.
+
+        * testing/MockLibWebRTCPeerConnection.h:
+
 2017-02-21  Jer Noble  <jer.noble@apple.com>
 
         AudioSampleDataSource should not exclusively lock its read and write threads.
diff --git a/Source/WebCore/Modules/webaudio/MediaStreamAudioSourceNode.cpp b/Source/WebCore/Modules/webaudio/MediaStreamAudioSourceNode.cpp
index f640a68..fe06147 100644
--- a/Source/WebCore/Modules/webaudio/MediaStreamAudioSourceNode.cpp
+++ b/Source/WebCore/Modules/webaudio/MediaStreamAudioSourceNode.cpp
@@ -73,7 +73,7 @@ void MediaStreamAudioSourceNode::setFormat(size_t numberOfChannels, float source
         return;
 
     // The sample-rate must be equal to the context's sample-rate.
-    if (!numberOfChannels || numberOfChannels > AudioContext::maxNumberOfChannels() || sourceSampleRate != sampleRate) {
+    if (!numberOfChannels || numberOfChannels > AudioContext::maxNumberOfChannels()) {
         // process() will generate silence for these uninitialized values.
         LOG(Media, "MediaStreamAudioSourceNode::setFormat(%u, %f) - unhandled format change", static_cast<unsigned>(numberOfChannels), sourceSampleRate);
         m_sourceNumberOfChannels = 0;
diff --git a/Source/WebCore/platform/audio/mac/AudioSampleBufferList.cpp b/Source/WebCore/platform/audio/mac/AudioSampleBufferList.cpp
index c19ef7d..6f8e090 100644
--- a/Source/WebCore/platform/audio/mac/AudioSampleBufferList.cpp
+++ b/Source/WebCore/platform/audio/mac/AudioSampleBufferList.cpp
@@ -218,42 +218,48 @@ void AudioSampleBufferList::zeroABL(AudioBufferList& buffer, size_t byteCount)
         memset(buffer.mBuffers[i].mData, 0, byteCount);
 }
 
-OSStatus AudioSampleBufferList::convertInput(UInt32* ioNumberDataPackets, AudioBufferList* ioData)
+struct AudioConverterFromABLContext {
+    const AudioBufferList& buffer;
+    size_t packetsAvailableToConvert;
+    size_t bytesPerPacket;
+};
+
+static const OSStatus kRanOutOfInputDataStatus = '!mor';
+
+static OSStatus audioConverterFromABLCallback(AudioConverterRef, UInt32* ioNumberDataPackets, AudioBufferList* ioData, AudioStreamPacketDescription**, void* inRefCon)
 {
-    if (!ioNumberDataPackets || !ioData || !m_converterInputBuffer) {
-        LOG_ERROR("AudioSampleBufferList::reconfigureInput(%p) invalid input to AudioConverterInput", this);
+    if (!ioNumberDataPackets || !ioData || !inRefCon) {
+        LOG_ERROR("AudioSampleBufferList::audioConverterCallback() invalid input to AudioConverterInput");
         return kAudioConverterErr_UnspecifiedError;
     }
 
-    size_t packetCount = m_converterInputBuffer->mBuffers[0].mDataByteSize / m_converterInputBytesPerPacket;
-    if (*ioNumberDataPackets > m_sampleCapacity) {
-        LOG_ERROR("AudioSampleBufferList::convertInput(%p) not enough internal storage: needed = %zu, available = %lu", this, (size_t)*ioNumberDataPackets, m_sampleCapacity);
-        return kAudioConverterErr_InvalidInputSize;
+    auto& context = *static_cast<AudioConverterFromABLContext*>(inRefCon);
+    if (!context.packetsAvailableToConvert) {
+        *ioNumberDataPackets = 0;
+        return kRanOutOfInputDataStatus;
     }
 
-    *ioNumberDataPackets = static_cast<UInt32>(packetCount);
+    *ioNumberDataPackets = static_cast<UInt32>(context.packetsAvailableToConvert);
+
     for (uint32_t i = 0; i < ioData->mNumberBuffers; ++i) {
-        ioData->mBuffers[i].mData = m_converterInputBuffer->mBuffers[i].mData;
-        ioData->mBuffers[i].mDataByteSize = m_converterInputBuffer->mBuffers[i].mDataByteSize;
+        ioData->mBuffers[i].mData = context.buffer.mBuffers[i].mData;
+        ioData->mBuffers[i].mDataByteSize = context.packetsAvailableToConvert * context.bytesPerPacket;
     }
+    context.packetsAvailableToConvert = 0;
 
     return 0;
 }
 
-OSStatus AudioSampleBufferList::audioConverterCallback(AudioConverterRef, UInt32* ioNumberDataPackets, AudioBufferList* ioData, AudioStreamPacketDescription**, void* inRefCon)
-{
-    return static_cast<AudioSampleBufferList*>(inRefCon)->convertInput(ioNumberDataPackets, ioData);
-}
-
-OSStatus AudioSampleBufferList::copyFrom(const AudioBufferList& source, AudioConverterRef converter)
+OSStatus AudioSampleBufferList::copyFrom(const AudioBufferList& source, size_t frameCount, AudioConverterRef converter)
 {
     reset();
 
     AudioStreamBasicDescription inputFormat;
     UInt32 propertyDataSize = sizeof(inputFormat);
     AudioConverterGetProperty(converter, kAudioConverterCurrentInputStreamDescription, &propertyDataSize, &inputFormat);
-    m_converterInputBytesPerPacket = inputFormat.mBytesPerPacket;
-    SetForScope<const AudioBufferList*> scopedInputBuffer(m_converterInputBuffer, &source);
+    ASSERT(frameCount <= source.mBuffers[0].mDataByteSize / inputFormat.mBytesPerPacket);
+
+    AudioConverterFromABLContext context { source, frameCount, inputFormat.mBytesPerPacket };
 
 #if !LOG_DISABLED
     AudioStreamBasicDescription outputFormat;
@@ -267,22 +273,22 @@ OSStatus AudioSampleBufferList::copyFrom(const AudioBufferList& source, AudioCon
     }
 #endif
 
-    UInt32 samplesConverted = static_cast<UInt32>(m_sampleCapacity);
-    OSStatus err = AudioConverterFillComplexBuffer(converter, audioConverterCallback, this, &samplesConverted, m_bufferList->list(), nullptr);
-    if (err) {
-        LOG_ERROR("AudioSampleBufferList::copyFrom(%p) AudioConverterFillComplexBuffer returned error %d (%.4s)", this, (int)err, (char*)&err);
-        m_sampleCount = std::min(m_sampleCapacity, static_cast<size_t>(samplesConverted));
-        zero();
-        return err;
+    UInt32 samplesConverted = m_sampleCapacity;
+    OSStatus err = AudioConverterFillComplexBuffer(converter, audioConverterFromABLCallback, &context, &samplesConverted, m_bufferList->list(), nullptr);
+    if (!err || err == kRanOutOfInputDataStatus) {
+        m_sampleCount = samplesConverted;
+        return 0;
     }
 
-    m_sampleCount = samplesConverted;
-    return 0;
+    LOG_ERROR("AudioSampleBufferList::copyFrom(%p) AudioConverterFillComplexBuffer returned error %d (%.4s)", this, (int)err, (char*)&err);
+    m_sampleCount = std::min(m_sampleCapacity, static_cast<size_t>(samplesConverted));
+    zero();
+    return err;
 }
 
-OSStatus AudioSampleBufferList::copyFrom(AudioSampleBufferList& source, AudioConverterRef converter)
+OSStatus AudioSampleBufferList::copyFrom(AudioSampleBufferList& source, size_t frameCount, AudioConverterRef converter)
 {
-    return copyFrom(source.bufferList(), converter);
+    return copyFrom(source.bufferList(), frameCount, converter);
 }
 
 OSStatus AudioSampleBufferList::copyFrom(CARingBuffer& ringBuffer, size_t sampleCount, uint64_t startFrame, CARingBuffer::FetchMode mode)
diff --git a/Source/WebCore/platform/audio/mac/AudioSampleBufferList.h b/Source/WebCore/platform/audio/mac/AudioSampleBufferList.h
index e105bd3..1bc3bad 100644
--- a/Source/WebCore/platform/audio/mac/AudioSampleBufferList.h
+++ b/Source/WebCore/platform/audio/mac/AudioSampleBufferList.h
@@ -51,8 +51,8 @@ public:
     void applyGain(float);
 
     OSStatus copyFrom(const AudioSampleBufferList&, size_t count = SIZE_MAX);
-    OSStatus copyFrom(const AudioBufferList&, AudioConverterRef);
-    OSStatus copyFrom(AudioSampleBufferList&, AudioConverterRef);
+    OSStatus copyFrom(const AudioBufferList&, size_t frameCount, AudioConverterRef);
+    OSStatus copyFrom(AudioSampleBufferList&, size_t frameCount, AudioConverterRef);
     OSStatus copyFrom(CARingBuffer&, size_t frameCount, uint64_t startFrame, CARingBuffer::FetchMode);
 
     OSStatus mixFrom(const AudioSampleBufferList&, size_t count = SIZE_MAX);
@@ -78,14 +78,8 @@ public:
 protected:
     AudioSampleBufferList(const CAAudioStreamDescription&, size_t);
 
-    static OSStatus audioConverterCallback(AudioConverterRef, UInt32*, AudioBufferList*, AudioStreamPacketDescription**, void*);
-    OSStatus convertInput(UInt32*, AudioBufferList*);
-
     std::unique_ptr<CAAudioStreamDescription> m_internalFormat;
 
-    const AudioBufferList* m_converterInputBuffer { nullptr };
-    uint32_t m_converterInputBytesPerPacket { 0 };
-
     uint64_t m_timestamp { 0 };
     double m_hostTime { -1 };
     size_t m_sampleCount { 0 };
diff --git a/Source/WebCore/platform/audio/mac/AudioSampleDataSource.h b/Source/WebCore/platform/audio/mac/AudioSampleDataSource.h
index c41d175..ecc7cd7 100644
--- a/Source/WebCore/platform/audio/mac/AudioSampleDataSource.h
+++ b/Source/WebCore/platform/audio/mac/AudioSampleDataSource.h
@@ -82,10 +82,12 @@ protected:
     MediaTime hostTime() const;
 
     uint64_t m_timeStamp { 0 };
+    uint64_t m_lastPushedSampleCount { 0 };
+    MediaTime m_expectedNextPushedSampleTime { MediaTime::invalidTime() };
     double m_hostTime { -1 };
 
     MediaTime m_inputSampleOffset;
-    uint64_t m_outputSampleOffset { 0 };
+    int64_t m_outputSampleOffset { 0 };
 
     AudioConverterRef m_converter;
     RefPtr<AudioSampleBufferList> m_scratchBuffer;
diff --git a/Source/WebCore/platform/audio/mac/AudioSampleDataSource.mm b/Source/WebCore/platform/audio/mac/AudioSampleDataSource.mm
index b7e8cb9..f05c085 100644
--- a/Source/WebCore/platform/audio/mac/AudioSampleDataSource.mm
+++ b/Source/WebCore/platform/audio/mac/AudioSampleDataSource.mm
@@ -141,24 +141,27 @@ MediaTime AudioSampleDataSource::hostTime() const
 
 void AudioSampleDataSource::pushSamplesInternal(const AudioBufferList& bufferList, const MediaTime& presentationTime, size_t sampleCount)
 {
+    MediaTime sampleTime = presentationTime;
+
     const AudioBufferList* sampleBufferList;
     if (m_converter) {
         m_scratchBuffer->reset();
-        OSStatus err = m_scratchBuffer->copyFrom(bufferList, m_converter);
+        OSStatus err = m_scratchBuffer->copyFrom(bufferList, sampleCount, m_converter);
         if (err)
             return;
 
         sampleBufferList = m_scratchBuffer->bufferList().list();
+        sampleCount = m_scratchBuffer->sampleCount();
+        sampleTime = presentationTime.toTimeScale(m_outputDescription->sampleRate(), MediaTime::RoundingFlags::TowardZero);
     } else
         sampleBufferList = &bufferList;
 
-    MediaTime sampleTime = presentationTime;
+    if (m_expectedNextPushedSampleTime.isValid() && abs(m_expectedNextPushedSampleTime - sampleTime).timeValue() == 1)
+        sampleTime = m_expectedNextPushedSampleTime;
+    m_expectedNextPushedSampleTime = sampleTime + MediaTime(sampleCount, sampleTime.timeScale());
+
     if (m_inputSampleOffset == MediaTime::invalidTime()) {
         m_inputSampleOffset = MediaTime(1 - sampleTime.timeValue(), sampleTime.timeScale());
-        if (m_inputSampleOffset.timeScale() != sampleTime.timeScale()) {
-            // FIXME: It should be possible to do this without calling CMTimeConvertScale.
-            m_inputSampleOffset = toMediaTime(CMTimeConvertScale(toCMTime(m_inputSampleOffset), sampleTime.timeScale(), kCMTimeRoundingMethod_Default));
-        }
         LOG(MediaCaptureSamples, "@@ pushSamples: input sample offset is %lld, m_maximumSampleCount = %zu", m_inputSampleOffset.timeValue(), m_maximumSampleCount);
     }
     sampleTime += m_inputSampleOffset;
@@ -267,11 +270,9 @@ bool AudioSampleDataSource::pullSamplesInternal(AudioBufferList& buffer, size_t&
 #endif
 
         if (framesAvailable < sampleCount) {
-            const double twentyMS = .02;
-            double sampleRate = m_outputDescription->sampleRate();
-            auto delta = static_cast<int64_t>(timeStamp) - endFrame;
-            if (delta > 0 && delta < sampleRate * twentyMS)
-                m_outputSampleOffset -= delta;
+            int64_t delta = static_cast<int64_t>(timeStamp) - static_cast<int64_t>(endFrame);
+            if (delta > 0)
+                m_outputSampleOffset -= std::min<int64_t>(delta, sampleCount);
         }
 
         if (!framesAvailable) {
@@ -300,8 +301,15 @@ bool AudioSampleDataSource::pullAvalaibleSamplesAsChunks(AudioBufferList& buffer
     uint64_t startFrame = 0;
     uint64_t endFrame = 0;
     m_ringBuffer->getCurrentFrameBounds(startFrame, endFrame);
+    if (m_transitioningFromPaused) {
+        m_outputSampleOffset = timeStamp + (endFrame - sampleCountPerChunk);
+        m_transitioningFromPaused = false;
+    }
+
+    timeStamp += m_outputSampleOffset;
+
     if (timeStamp < startFrame)
-        return false;
+        timeStamp = startFrame;
 
     startFrame = timeStamp;
     while (endFrame - startFrame >= sampleCountPerChunk) {
diff --git a/Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm b/Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm
index ff59e68..99c0a2c 100644
--- a/Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm
+++ b/Source/WebCore/platform/graphics/avfoundation/objc/MediaPlayerPrivateMediaStreamAVFObjC.mm
@@ -571,7 +571,7 @@ void MediaPlayerPrivateMediaStreamAVFObjC::setVolume(float volume)
 
     m_volume = volume;
     for (const auto& track : m_audioTrackMap.values())
-        track->setVolume(m_volume);
+        track->setVolume(m_muted ? 0 : m_volume);
 }
 
 void MediaPlayerPrivateMediaStreamAVFObjC::setMuted(bool muted)
@@ -582,6 +582,8 @@ void MediaPlayerPrivateMediaStreamAVFObjC::setMuted(bool muted)
         return;
 
     m_muted = muted;
+    for (const auto& track : m_audioTrackMap.values())
+        track->setVolume(m_muted ? 0 : m_volume);
 }
 
 bool MediaPlayerPrivateMediaStreamAVFObjC::hasVideo() const
diff --git a/Source/WebCore/platform/mediastream/libwebrtc/LibWebRTCAudioModule.cpp b/Source/WebCore/platform/mediastream/libwebrtc/LibWebRTCAudioModule.cpp
index 8ba6f43..cfe5cab 100644
--- a/Source/WebCore/platform/mediastream/libwebrtc/LibWebRTCAudioModule.cpp
+++ b/Source/WebCore/platform/mediastream/libwebrtc/LibWebRTCAudioModule.cpp
@@ -28,6 +28,8 @@
 
 #if USE(LIBWEBRTC)
 
+#include <wtf/CurrentTime.h>
+
 namespace WebCore {
 
 LibWebRTCAudioModule::LibWebRTCAudioModule()
@@ -75,9 +77,13 @@ const unsigned bytesPerSample = 2;
 
 void LibWebRTCAudioModule::StartPlayoutOnAudioThread()
 {
+    double startTime = WTF::monotonicallyIncreasingTimeMS();
     while (true) {
         PollFromSource();
-        m_audioTaskRunner->SleepMs(pollInterval);
+
+        double now = WTF::monotonicallyIncreasingTimeMS();
+        double sleepFor = pollInterval - remainder(now - startTime, pollInterval);
+        m_audioTaskRunner->SleepMs(sleepFor);
         if (!m_isPlaying)
             return;
     }
diff --git a/Source/WebCore/platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.cpp b/Source/WebCore/platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.cpp
index b6da0de..58de4db 100644
--- a/Source/WebCore/platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.cpp
+++ b/Source/WebCore/platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.cpp
@@ -102,7 +102,7 @@ void AudioTrackPrivateMediaStreamCocoa::setVolume(float volume)
         m_dataSource->setVolume(m_volume);
 }
 
-AudioComponentInstance AudioTrackPrivateMediaStreamCocoa::createAudioUnit(const CAAudioStreamDescription& inputDescription, CAAudioStreamDescription& outputDescription)
+AudioComponentInstance AudioTrackPrivateMediaStreamCocoa::createAudioUnit(CAAudioStreamDescription& outputDescription)
 {
     AudioComponentInstance remoteIOUnit { nullptr };
 
@@ -142,14 +142,13 @@ AudioComponentInstance AudioTrackPrivateMediaStreamCocoa::createAudioUnit(const
         return nullptr;
     }
 
-    UInt32 size = sizeof(outputDescription);
-    err  = AudioUnitGetProperty(remoteIOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &outputDescription, &size);
+    UInt32 size = sizeof(outputDescription.streamDescription());
+    err  = AudioUnitGetProperty(remoteIOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &outputDescription.streamDescription(), &size);
     if (err) {
         LOG(Media, "AudioTrackPrivateMediaStreamCocoa::createAudioUnits(%p) unable to get input stream format, error %d (%.4s)", this, (int)err, (char*)&err);
         return nullptr;
     }
 
-    outputDescription = inputDescription;
     outputDescription.streamDescription().mSampleRate = AudioSession::sharedSession().sampleRate();
 
     err = AudioUnitSetProperty(remoteIOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &outputDescription.streamDescription(), sizeof(outputDescription.streamDescription()));
@@ -190,7 +189,7 @@ void AudioTrackPrivateMediaStreamCocoa::audioSamplesAvailable(const MediaTime& s
         CAAudioStreamDescription inputDescription = toCAAudioStreamDescription(description);
         CAAudioStreamDescription outputDescription;
 
-        auto remoteIOUnit = createAudioUnit(inputDescription, outputDescription);
+        auto remoteIOUnit = createAudioUnit(outputDescription);
         if (!remoteIOUnit)
             return;
 
diff --git a/Source/WebCore/platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.h b/Source/WebCore/platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.h
index c8fa30a..9326d58 100644
--- a/Source/WebCore/platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.h
+++ b/Source/WebCore/platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.h
@@ -66,7 +66,7 @@ private:
     static OSStatus inputProc(void*, AudioUnitRenderActionFlags*, const AudioTimeStamp*, UInt32 inBusNumber, UInt32 numberOfFrames, AudioBufferList*);
     OSStatus render(UInt32 sampleCount, AudioBufferList&, UInt32 inBusNumber, const AudioTimeStamp&, AudioUnitRenderActionFlags&);
 
-    AudioComponentInstance createAudioUnit(const CAAudioStreamDescription& inputDescription, CAAudioStreamDescription& outputDescription);
+    AudioComponentInstance createAudioUnit(CAAudioStreamDescription&);
     void cleanup();
     void zeroBufferList(AudioBufferList&, size_t);
     void playInternal();
diff --git a/Source/WebCore/platform/mediastream/mac/RealtimeIncomingAudioSource.cpp b/Source/WebCore/platform/mediastream/mac/RealtimeIncomingAudioSource.cpp
index ccd96a0..11a63ff 100644
--- a/Source/WebCore/platform/mediastream/mac/RealtimeIncomingAudioSource.cpp
+++ b/Source/WebCore/platform/mediastream/mac/RealtimeIncomingAudioSource.cpp
@@ -88,7 +88,12 @@ void RealtimeIncomingAudioSource::OnData(const void* audioData, int bitsPerSampl
     auto mediaTime = toMediaTime(startTime);
     m_numberOfFrames += numberOfFrames;
 
-    m_streamFormat = streamDescription(sampleRate, numberOfChannels);
+    AudioStreamBasicDescription newDescription = streamDescription(sampleRate, numberOfChannels);
+    if (newDescription != m_streamFormat) {
+        m_streamFormat = newDescription;
+        if (m_audioSourceProvider)
+            m_audioSourceProvider->prepare(&m_streamFormat);
+    }
 
     WebAudioBufferList audioBufferList { CAAudioStreamDescription(m_streamFormat), WTF::safeCast<uint32_t>(numberOfFrames) };
     audioBufferList.buffer(0)->mDataByteSize = numberOfChannels * numberOfFrames * bitsPerSample / 8;
@@ -138,8 +143,13 @@ RealtimeMediaSourceSupportedConstraints& RealtimeIncomingAudioSource::supportedC
 
 AudioSourceProvider* RealtimeIncomingAudioSource::audioSourceProvider()
 {
-    // FIXME: Create the audioSourceProvider
-    return nullptr;
+    if (!m_audioSourceProvider) {
+        m_audioSourceProvider = WebAudioSourceProviderAVFObjC::create(*this);
+        if (m_numberOfFrames)
+            m_audioSourceProvider->prepare(&m_streamFormat);
+    }
+
+    return m_audioSourceProvider.get();
 }
 
 } // namespace WebCore
diff --git a/Source/WebCore/platform/mediastream/mac/RealtimeIncomingAudioSource.h b/Source/WebCore/platform/mediastream/mac/RealtimeIncomingAudioSource.h
index 2178204..c7aa58b 100644
--- a/Source/WebCore/platform/mediastream/mac/RealtimeIncomingAudioSource.h
+++ b/Source/WebCore/platform/mediastream/mac/RealtimeIncomingAudioSource.h
@@ -77,8 +77,8 @@ private:
     rtc::scoped_refptr<webrtc::AudioTrackInterface> m_audioTrack;
 
     RefPtr<WebAudioSourceProviderAVFObjC> m_audioSourceProvider;
-    AudioStreamBasicDescription m_streamFormat;
-    uint64_t m_numberOfFrames;
+    AudioStreamBasicDescription m_streamFormat { };
+    uint64_t m_numberOfFrames { 0 };
 };
 
 } // namespace WebCore
diff --git a/Source/WebCore/platform/mediastream/mac/RealtimeOutgoingAudioSource.cpp b/Source/WebCore/platform/mediastream/mac/RealtimeOutgoingAudioSource.cpp
index 2f65c4a..4ec9f88 100644
--- a/Source/WebCore/platform/mediastream/mac/RealtimeOutgoingAudioSource.cpp
+++ b/Source/WebCore/platform/mediastream/mac/RealtimeOutgoingAudioSource.cpp
@@ -72,7 +72,8 @@ void RealtimeOutgoingAudioSource::audioSamplesAvailable(const MediaTime& time, c
         auto status  = m_sampleConverter->setInputFormat(m_inputStreamDescription);
         ASSERT_UNUSED(status, !status);
 
-        status = m_sampleConverter->setOutputFormat(libwebrtcAudioFormat(streamDescription.sampleRate(), streamDescription.numberOfChannels()));
+        m_outputStreamDescription = libwebrtcAudioFormat(LibWebRTCAudioFormat::sampleRate, streamDescription.numberOfChannels());
+        status = m_sampleConverter->setOutputFormat(m_outputStreamDescription.streamDescription());
         ASSERT(!status);
     }
     m_sampleConverter->pushSamples(time, audioData, sampleCount);
@@ -85,20 +86,20 @@ void RealtimeOutgoingAudioSource::audioSamplesAvailable(const MediaTime& time, c
 void RealtimeOutgoingAudioSource::pullAudioData()
 {
     // libwebrtc expects 10 ms chunks.
-    size_t chunkSampleCount = m_inputStreamDescription.sampleRate() / 100;
-    size_t bufferSize = chunkSampleCount * LibWebRTCAudioFormat::sampleByteSize * m_inputStreamDescription.numberOfChannels();
+    size_t chunkSampleCount = m_outputStreamDescription.sampleRate() / 100;
+    size_t bufferSize = chunkSampleCount * LibWebRTCAudioFormat::sampleByteSize * m_outputStreamDescription.numberOfChannels();
     m_audioBuffer.reserveCapacity(bufferSize);
 
     AudioBufferList bufferList;
     bufferList.mNumberBuffers = 1;
-    bufferList.mBuffers[0].mNumberChannels = m_inputStreamDescription.numberOfChannels();
+    bufferList.mBuffers[0].mNumberChannels = m_outputStreamDescription.numberOfChannels();
     bufferList.mBuffers[0].mDataByteSize = bufferSize;
     bufferList.mBuffers[0].mData = m_audioBuffer.data();
 
     m_sampleConverter->pullAvalaibleSamplesAsChunks(bufferList, chunkSampleCount, m_startFrame, [this, chunkSampleCount] {
         m_startFrame += chunkSampleCount;
         for (auto sink : m_sinks)
-            sink->OnData(m_audioBuffer.data(), LibWebRTCAudioFormat::sampleSize, m_inputStreamDescription.sampleRate(), m_inputStreamDescription.numberOfChannels(), chunkSampleCount);
+            sink->OnData(m_audioBuffer.data(), LibWebRTCAudioFormat::sampleSize, m_outputStreamDescription.sampleRate(), m_outputStreamDescription.numberOfChannels(), chunkSampleCount);
     });
 }
 
diff --git a/Source/WebCore/platform/mediastream/mac/RealtimeOutgoingAudioSource.h b/Source/WebCore/platform/mediastream/mac/RealtimeOutgoingAudioSource.h
index f24036d..39d0200 100644
--- a/Source/WebCore/platform/mediastream/mac/RealtimeOutgoingAudioSource.h
+++ b/Source/WebCore/platform/mediastream/mac/RealtimeOutgoingAudioSource.h
@@ -74,6 +74,7 @@ private:
     rtc::scoped_refptr<webrtc::AudioTrackInterface> m_track;
     Ref<AudioSampleDataSource> m_sampleConverter;
     CAAudioStreamDescription m_inputStreamDescription;
+    CAAudioStreamDescription m_outputStreamDescription;
 
     Vector<uint16_t> m_audioBuffer;
     uint64_t m_startFrame { 0 };
diff --git a/Source/WebCore/platform/mediastream/mac/WebAudioSourceProviderAVFObjC.h b/Source/WebCore/platform/mediastream/mac/WebAudioSourceProviderAVFObjC.h
index f906662..10ad961 100644
--- a/Source/WebCore/platform/mediastream/mac/WebAudioSourceProviderAVFObjC.h
+++ b/Source/WebCore/platform/mediastream/mac/WebAudioSourceProviderAVFObjC.h
@@ -41,7 +41,8 @@ typedef struct opaqueCMSampleBuffer *CMSampleBufferRef;
 
 namespace WebCore {
 
-class CARingBuffer;
+class AudioSampleDataSource;
+class CAAudioStreamDescription;
 
 class WebAudioSourceProviderAVFObjC : public RefCounted<WebAudioSourceProviderAVFObjC>, public AudioSourceProvider, RealtimeMediaSource::Observer {
 public:
@@ -62,11 +63,9 @@ private:
     void audioSamplesAvailable(const MediaTime&, const PlatformAudioData&, const AudioStreamDescription&, size_t) final;
 
     size_t m_listBufferSize { 0 };
-    std::unique_ptr<AudioBufferList> m_list;
-    AudioConverterRef m_converter;
-    std::unique_ptr<AudioStreamBasicDescription> m_inputDescription;
-    std::unique_ptr<AudioStreamBasicDescription> m_outputDescription;
-    std::unique_ptr<CARingBuffer> m_ringBuffer;
+    std::unique_ptr<CAAudioStreamDescription> m_inputDescription;
+    std::unique_ptr<CAAudioStreamDescription> m_outputDescription;
+    RefPtr<AudioSampleDataSource> m_dataSource;
 
     uint64_t m_writeCount { 0 };
     uint64_t m_readCount { 0 };
diff --git a/Source/WebCore/platform/mediastream/mac/WebAudioSourceProviderAVFObjC.mm b/Source/WebCore/platform/mediastream/mac/WebAudioSourceProviderAVFObjC.mm
index 028befd..bba56af 100644
--- a/Source/WebCore/platform/mediastream/mac/WebAudioSourceProviderAVFObjC.mm
+++ b/Source/WebCore/platform/mediastream/mac/WebAudioSourceProviderAVFObjC.mm
 
 #import "AudioBus.h"
 #import "AudioChannel.h"
+#import "AudioSampleDataSource.h"
 #import "AudioSourceProviderClient.h"
-#import "CARingBuffer.h"
 #import "Logging.h"
 #import "MediaTimeAVFoundation.h"
 #import "WebAudioBufferList.h"
-#import <AudioToolbox/AudioToolbox.h>
 #import <objc/runtime.h>
 #import <wtf/MainThread.h>
 
 
 #import "CoreMediaSoftLink.h"
 
-SOFT_LINK_FRAMEWORK(AudioToolbox)
-
-SOFT_LINK(AudioToolbox, AudioConverterConvertComplexBuffer, OSStatus, (AudioConverterRef inAudioConverter, UInt32 inNumberPCMFrames, const AudioBufferList* inInputData, AudioBufferList* outOutputData), (inAudioConverter, inNumberPCMFrames, inInputData, outOutputData))
-SOFT_LINK(AudioToolbox, AudioConverterNew, OSStatus, (const AudioStreamBasicDescription* inSourceFormat, const AudioStreamBasicDescription* inDestinationFormat, AudioConverterRef* outAudioConverter), (inSourceFormat, inDestinationFormat, outAudioConverter))
-
 namespace WebCore {
 
 static const double kRingBufferDuration = 1;
@@ -68,11 +62,6 @@ WebAudioSourceProviderAVFObjC::~WebAudioSourceProviderAVFObjC()
 {
     std::lock_guard<Lock> lock(m_mutex);
 
-    if (m_converter) {
-        // FIXME: make and use a smart pointer for AudioConverter
-        AudioConverterDispose(m_converter);
-        m_converter = nullptr;
-    }
     if (m_connected && m_captureSource)
         m_captureSource->removeObserver(*this);
 }
@@ -80,45 +69,27 @@ WebAudioSourceProviderAVFObjC::~WebAudioSourceProviderAVFObjC()
 void WebAudioSourceProviderAVFObjC::provideInput(AudioBus* bus, size_t framesToProcess)
 {
     std::unique_lock<Lock> lock(m_mutex, std::try_to_lock);
-    if (!lock.owns_lock() || !m_ringBuffer) {
+    if (!lock.owns_lock() || !m_dataSource) {
         bus->zero();
         return;
     }
 
-    uint64_t startFrame = 0;
-    uint64_t endFrame = 0;
-    m_ringBuffer->getCurrentFrameBounds(startFrame, endFrame);
-
     if (m_writeCount <= m_readCount) {
         bus->zero();
         return;
     }
 
-    uint64_t framesAvailable = endFrame - m_readCount;
-    if (framesAvailable < framesToProcess) {
-        framesToProcess = static_cast<size_t>(framesAvailable);
-        bus->zero();
-    }
-
-    ASSERT(bus->numberOfChannels() == m_ringBuffer->channelCount());
-    if (bus->numberOfChannels() != m_ringBuffer->channelCount()) {
-        bus->zero();
-        return;
-    }
-
-    for (unsigned i = 0; i < m_list->mNumberBuffers; ++i) {
+    WebAudioBufferList list { *m_outputDescription };
+    for (unsigned i = 0; i < list.bufferCount(); ++i) {
         AudioChannel& channel = *bus->channel(i);
-        auto& buffer = m_list->mBuffers[i];
-        buffer.mNumberChannels = 1;
-        buffer.mData = channel.mutableData();
-        buffer.mDataByteSize = channel.length() * sizeof(float);
+        auto* buffer = list.buffer(i);
+        buffer->mNumberChannels = 1;
+        buffer->mData = channel.mutableData();
+        buffer->mDataByteSize = channel.length() * sizeof(float);
     }
 
-    m_ringBuffer->fetch(m_list.get(), framesToProcess, m_readCount);
+    m_dataSource->pullSamples(*list.list(), framesToProcess, m_readCount, 0, AudioSampleDataSource::Copy);
     m_readCount += framesToProcess;
-
-    if (m_converter)
-        AudioConverterConvertComplexBuffer(m_converter, framesToProcess, m_list.get(), m_list.get());
 }
 
 void WebAudioSourceProviderAVFObjC::setClient(AudioSourceProviderClient* client)
@@ -147,58 +118,24 @@ void WebAudioSourceProviderAVFObjC::prepare(const AudioStreamBasicDescription* f
 
     LOG(Media, "WebAudioSourceProviderAVFObjC::prepare(%p)", this);
 
-    m_inputDescription = std::make_unique<AudioStreamBasicDescription>(*format);
+    m_inputDescription = std::make_unique<CAAudioStreamDescription>(*format);
     int numberOfChannels = format->mChannelsPerFrame;
     double sampleRate = format->mSampleRate;
     ASSERT(sampleRate >= 0);
 
-    m_outputDescription = std::make_unique<AudioStreamBasicDescription>();
-    m_outputDescription->mSampleRate = sampleRate;
-    m_outputDescription->mFormatID = kAudioFormatLinearPCM;
-    m_outputDescription->mFormatFlags = kAudioFormatFlagsNativeFloatPacked;
-    m_outputDescription->mBitsPerChannel = 8 * sizeof(Float32);
-    m_outputDescription->mChannelsPerFrame = numberOfChannels;
-    m_outputDescription->mFramesPerPacket = 1;
-    m_outputDescription->mBytesPerPacket = sizeof(Float32);
-    m_outputDescription->mBytesPerFrame = sizeof(Float32);
-    m_outputDescription->mFormatFlags |= kAudioFormatFlagIsNonInterleaved;
-
-    if (m_converter) {
-        // FIXME: make and use a smart pointer for AudioConverter
-        AudioConverterDispose(m_converter);
-        m_converter = nullptr;
-    }
-
-    if (*m_inputDescription != *m_outputDescription) {
-        AudioConverterRef outConverter = nullptr;
-        OSStatus err = AudioConverterNew(m_inputDescription.get(), m_outputDescription.get(), &outConverter);
-        if (err) {
-            LOG(Media, "WebAudioSourceProviderAVFObjC::prepare(%p) - AudioConverterNew returned error %i", this, err);
-            return;
-        }
-        m_converter = outConverter;
-    }
+    const int bytesPerFloat = sizeof(Float32);
+    const int bitsPerByte = 8;
+    const bool isFloat = true;
+    const bool isBigEndian = false;
+    const bool isNonInterleaved = true;
+    AudioStreamBasicDescription outputDescription { };
+    FillOutASBDForLPCM(outputDescription, sampleRate, numberOfChannels, bitsPerByte * bytesPerFloat, bitsPerByte * bytesPerFloat, isFloat, isBigEndian, isNonInterleaved);
+    m_outputDescription = std::make_unique<CAAudioStreamDescription>(outputDescription);
 
-    // Make the ringbuffer large enough to store 1 second.
-    uint64_t capacity = kRingBufferDuration * sampleRate;
-    ASSERT(capacity <= SIZE_MAX);
-    if (capacity > SIZE_MAX)
-        return;
-
-    // AudioBufferList is a variable-length struct, so create on the heap with a generic new() operator
-    // with a custom size, and initialize the struct manually.
-    uint64_t bufferListSize = offsetof(AudioBufferList, mBuffers) + (sizeof(AudioBuffer) * std::max(1, numberOfChannels));
-    ASSERT(bufferListSize <= SIZE_MAX);
-    if (bufferListSize > SIZE_MAX)
-        return;
-
-    m_ringBuffer = std::make_unique<CARingBuffer>();
-    m_ringBuffer->allocate(CAAudioStreamDescription(*format), static_cast<size_t>(capacity));
-
-    m_listBufferSize = static_cast<size_t>(bufferListSize);
-    m_list = std::unique_ptr<AudioBufferList>(static_cast<AudioBufferList*>(::operator new (m_listBufferSize)));
-    memset(m_list.get(), 0, m_listBufferSize);
-    m_list->mNumberBuffers = numberOfChannels;
+    if (!m_dataSource)
+        m_dataSource = AudioSampleDataSource::create(kRingBufferDuration * sampleRate);
+    m_dataSource->setInputFormat(*m_inputDescription);
+    m_dataSource->setOutputFormat(*m_outputDescription);
 
     RefPtr<WebAudioSourceProviderAVFObjC> protectedThis = this;
     callOnMainThread([protectedThis = WTFMove(protectedThis), numberOfChannels, sampleRate] {
@@ -213,29 +150,21 @@ void WebAudioSourceProviderAVFObjC::unprepare()
 
     m_inputDescription = nullptr;
     m_outputDescription = nullptr;
-    m_ringBuffer = nullptr;
-    m_list = nullptr;
+    m_dataSource = nullptr;
     m_listBufferSize = 0;
     if (m_captureSource) {
         m_captureSource->removeObserver(*this);
         m_captureSource = nullptr;
     }
-
-    if (m_converter) {
-        // FIXME: make and use a smart pointer for AudioConverter
-        AudioConverterDispose(m_converter);
-        m_converter = nullptr;
-    }
 }
 
 void WebAudioSourceProviderAVFObjC::audioSamplesAvailable(const MediaTime&, const PlatformAudioData& data, const AudioStreamDescription&, size_t frameCount)
 {
-    if (!m_ringBuffer)
+    if (!m_dataSource)
         return;
 
-    auto& bufferList = downcast<WebAudioBufferList>(data);
+    m_dataSource->pushSamples(MediaTime(m_writeCount, m_outputDescription->sampleRate()), data, frameCount);
 
-    m_ringBuffer->store(bufferList.list(), frameCount, m_writeCount);
     m_writeCount += frameCount;
 }
 
diff --git a/Source/WebCore/testing/MockLibWebRTCPeerConnection.h b/Source/WebCore/testing/MockLibWebRTCPeerConnection.h
index a050897..902f5c1 100644
--- a/Source/WebCore/testing/MockLibWebRTCPeerConnection.h
+++ b/Source/WebCore/testing/MockLibWebRTCPeerConnection.h
@@ -124,8 +124,14 @@ public:
 
 private:
     webrtc::AudioSourceInterface* GetSource() const final { return m_source; }
-    void AddSink(webrtc::AudioTrackSinkInterface*) final { }
-    void RemoveSink(webrtc::AudioTrackSinkInterface*) final { }
+    void AddSink(webrtc::AudioTrackSinkInterface* sink) final {
+        if (m_source)
+            m_source->AddSink(sink);
+    }
+    void RemoveSink(webrtc::AudioTrackSinkInterface* sink) final {
+        if (m_source)
+            m_source->RemoveSink(sink);
+    }
     void RegisterObserver(webrtc::ObserverInterface*) final { }
     void UnregisterObserver(webrtc::ObserverInterface*) final { }
 
@@ -135,7 +141,7 @@ private:
     TrackState state() const final { return kLive; }
     bool set_enabled(bool enabled) final { m_enabled = enabled; return true; }
 
-    bool m_enabled;
+    bool m_enabled { true };
     std::string m_id;
     webrtc::AudioSourceInterface* m_source { nullptr };
 };