Configure MockRealtimeAudioSourceMac to generate stereo audio
author jer.noble@apple.com <jer.noble@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Thu, 9 Feb 2017 16:33:06 +0000 (16:33 +0000)
committer jer.noble@apple.com <jer.noble@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Thu, 9 Feb 2017 16:33:06 +0000 (16:33 +0000)
https://bugs.webkit.org/show_bug.cgi?id=168027

Reviewed by Eric Carlson.

Update MockRealtimeAudioSourceMac to generate stereo audio.

First, because the pattern of creating an AudioBufferList structure (with all its quirks and
weird requirements) was repeated in multiple places, add a new wrapper around ABL called
WebAudioBufferList which takes care of correctly initializing the ABL structure and manages
the lifetime of its data members.
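The allocation trick the wrapper encapsulates can be sketched stand-alone as follows. This is a minimal illustration, not the WebKit class itself: the struct definitions mirror CoreAudio's public AudioBuffer/AudioBufferList layout so the sketch compiles anywhere, and the helper name is hypothetical.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <new>

// Minimal mirrors of CoreAudio's public layout, so this sketch is self-contained.
struct AudioBuffer {
    uint32_t mNumberChannels;
    uint32_t mDataByteSize;
    void* mData;
};

struct AudioBufferList {
    uint32_t mNumberBuffers;
    AudioBuffer mBuffers[1]; // variable-length: a real list holds mNumberBuffers entries
};

// AudioBufferList is a variable-length struct, so it must be allocated on the heap
// with a size computed from the buffer count, then initialized field by field.
AudioBufferList* createAudioBufferList(uint32_t bufferCount, uint32_t channelsPerBuffer)
{
    size_t size = offsetof(AudioBufferList, mBuffers)
        + sizeof(AudioBuffer) * std::max(1u, bufferCount);
    auto* list = static_cast<AudioBufferList*>(::operator new(size));
    std::memset(list, 0, size);
    list->mNumberBuffers = bufferCount;
    for (uint32_t i = 0; i < bufferCount; ++i)
        list->mBuffers[i].mNumberChannels = channelsPerBuffer;
    return list;
}
```

WebAudioBufferList performs this same computation once in its constructor, so callers no longer repeat the offsetof arithmetic by hand.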

* WebCore.xcodeproj/project.pbxproj:
* platform/audio/PlatformAudioData.h: Added.
(WebCore::PlatformAudioData::kind):
* platform/audio/WebAudioBufferList.cpp: Added.
(WebCore::WebAudioBufferList::WebAudioBufferList):
(WebCore::WebAudioBufferList::buffers):
(WebCore::WebAudioBufferList::bufferCount):
(WebCore::WebAudioBufferList::buffer):
* platform/audio/WebAudioBufferList.h: Added.
(WebCore::WebAudioBufferList::list):
(WebCore::WebAudioBufferList::operator AudioBufferList&):
(WebCore::WebAudioBufferList::kind):
(isType):

Then update existing code to work in terms of WebAudioBufferList:

* platform/audio/mac/AudioSampleBufferList.cpp:
(WebCore::AudioSampleBufferList::AudioSampleBufferList):
(WebCore::AudioSampleBufferList::mixFrom):
(WebCore::AudioSampleBufferList::copyFrom):
(WebCore::AudioSampleBufferList::copyTo):
(WebCore::AudioSampleBufferList::reset):
(WebCore::AudioSampleBufferList::configureBufferListForStream): Deleted.
* platform/audio/mac/AudioSampleBufferList.h:
(WebCore::AudioSampleBufferList::bufferList):
* platform/audio/mac/AudioSampleDataSource.cpp:
(WebCore::AudioSampleDataSource::pushSamples):
* platform/audio/mac/AudioSampleDataSource.h:
* platform/mediastream/RealtimeMediaSource.cpp:
(WebCore::RealtimeMediaSource::audioSamplesAvailable):
* platform/mediastream/RealtimeMediaSource.h:
(WebCore::RealtimeMediaSource::Observer::audioSamplesAvailable):
* platform/mediastream/mac/AVAudioCaptureSource.h:
* platform/mediastream/mac/AVAudioCaptureSource.mm:
(WebCore::AVAudioCaptureSource::captureOutputDidOutputSampleBufferFromConnection):
* platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.cpp:
(WebCore::AudioTrackPrivateMediaStreamCocoa::audioSamplesAvailable):
* platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.h:
* platform/mediastream/mac/RealtimeOutgoingAudioSource.cpp:
(WebCore::RealtimeOutgoingAudioSource::audioSamplesAvailable):
* platform/mediastream/mac/RealtimeOutgoingAudioSource.h:

Finally, actually update MockRealtimeAudioSource to emit stereo samples. Importantly, set
the correct values in m_streamFormat; for non-interleaved audio, mBytesPerFrame and
mBytesPerPacket are not multiplied by the channel count. When generating audio, write the
samples to both channels of data.
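The per-channel sizing rule can be sketched as follows. This is an illustrative stand-alone sketch: the struct mirrors the AudioStreamBasicDescription fields involved, and the helper name is hypothetical.

```cpp
#include <cstdint>

// Mirror of the CoreAudio AudioStreamBasicDescription fields used here,
// so the sketch compiles stand-alone.
struct StreamFormat {
    double mSampleRate;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
    uint32_t mBytesPerFrame;
    uint32_t mFramesPerPacket;
    uint32_t mBytesPerPacket;
};

// For non-interleaved (planar) LPCM, each AudioBuffer carries a single channel,
// so mBytesPerFrame and mBytesPerPacket describe one channel only and must NOT
// be multiplied by mChannelsPerFrame.
StreamFormat makeNonInterleavedFloatFormat(double sampleRate, uint32_t channelCount)
{
    StreamFormat format = {};
    format.mSampleRate = sampleRate;
    format.mChannelsPerFrame = channelCount;            // e.g. 2 for stereo
    format.mBitsPerChannel = 32;                        // Float32 samples
    format.mBytesPerFrame = format.mBitsPerChannel / 8; // per channel, not * channelCount
    format.mFramesPerPacket = 1;
    format.mBytesPerPacket = format.mBytesPerFrame * format.mFramesPerPacket;
    return format;
}
```

Multiplying these fields by the channel count is exactly the mistake this patch corrects; with planar buffers the extra factor would overstate every buffer's stride.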

* platform/mediastream/mac/MockRealtimeAudioSourceMac.h:
* platform/mediastream/mac/MockRealtimeAudioSourceMac.mm:
(WebCore::MockRealtimeAudioSourceMac::emitSampleBuffers):
(WebCore::MockRealtimeAudioSourceMac::reconfigure):
(WebCore::MockRealtimeAudioSourceMac::render):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@211959 268f45cc-cd09-0410-ab3c-d52691b4dbfc

19 files changed:
Source/WebCore/ChangeLog
Source/WebCore/WebCore.xcodeproj/project.pbxproj
Source/WebCore/platform/audio/PlatformAudioData.h [new file with mode: 0644]
Source/WebCore/platform/audio/WebAudioBufferList.cpp [new file with mode: 0644]
Source/WebCore/platform/audio/WebAudioBufferList.h [new file with mode: 0644]
Source/WebCore/platform/audio/mac/AudioSampleBufferList.cpp
Source/WebCore/platform/audio/mac/AudioSampleBufferList.h
Source/WebCore/platform/audio/mac/AudioSampleDataSource.cpp
Source/WebCore/platform/audio/mac/AudioSampleDataSource.h
Source/WebCore/platform/mediastream/RealtimeMediaSource.cpp
Source/WebCore/platform/mediastream/RealtimeMediaSource.h
Source/WebCore/platform/mediastream/mac/AVAudioCaptureSource.h
Source/WebCore/platform/mediastream/mac/AVAudioCaptureSource.mm
Source/WebCore/platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.cpp
Source/WebCore/platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.h
Source/WebCore/platform/mediastream/mac/MockRealtimeAudioSourceMac.h
Source/WebCore/platform/mediastream/mac/MockRealtimeAudioSourceMac.mm
Source/WebCore/platform/mediastream/mac/RealtimeOutgoingAudioSource.cpp
Source/WebCore/platform/mediastream/mac/RealtimeOutgoingAudioSource.h

index b994d1a..6c9daf7 100644 (file)
@@ -1,3 +1,69 @@
+2017-02-09  Jer Noble  <jer.noble@apple.com>
+
+        Configure MockRealtimeAudioSourceMac to generate stereo audio
+        https://bugs.webkit.org/show_bug.cgi?id=168027
+
+        Reviewed by Eric Carlson.
+
+        Update MockRealtimeAudioSourceMac to generate stereo audio.
+
+        First, because the pattern of creating an AudioBufferList structure (with all its quirks and
+        weird requirements) was repeated in multiple places, add a new wrapper around ABL called
+        WebAudioBufferList which takes care of correctly initializing the ABL structure and manages
+        the lifetime of its data members.
+
+        * WebCore.xcodeproj/project.pbxproj:
+        * platform/audio/PlatformAudioData.h: Added.
+        (WebCore::PlatformAudioData::kind):
+        * platform/audio/WebAudioBufferList.cpp: Added.
+        (WebCore::WebAudioBufferList::WebAudioBufferList):
+        (WebCore::WebAudioBufferList::buffers):
+        (WebCore::WebAudioBufferList::bufferCount):
+        (WebCore::WebAudioBufferList::buffer):
+        * platform/audio/WebAudioBufferList.h: Added.
+        (WebCore::WebAudioBufferList::list):
+        (WebCore::WebAudioBufferList::operator AudioBufferList&):
+        (WebCore::WebAudioBufferList::kind):
+        (isType):
+
+        Then update existing code to work in terms of WebAudioBufferList:
+
+        * platform/audio/mac/AudioSampleBufferList.cpp:
+        (WebCore::AudioSampleBufferList::AudioSampleBufferList):
+        (WebCore::AudioSampleBufferList::mixFrom):
+        (WebCore::AudioSampleBufferList::copyFrom):
+        (WebCore::AudioSampleBufferList::copyTo):
+        (WebCore::AudioSampleBufferList::reset):
+        (WebCore::AudioSampleBufferList::configureBufferListForStream): Deleted.
+        * platform/audio/mac/AudioSampleBufferList.h:
+        (WebCore::AudioSampleBufferList::bufferList):
+        * platform/audio/mac/AudioSampleDataSource.cpp:
+        (WebCore::AudioSampleDataSource::pushSamples):
+        * platform/audio/mac/AudioSampleDataSource.h:
+        * platform/mediastream/RealtimeMediaSource.cpp:
+        (WebCore::RealtimeMediaSource::audioSamplesAvailable):
+        * platform/mediastream/RealtimeMediaSource.h:
+        (WebCore::RealtimeMediaSource::Observer::audioSamplesAvailable):
+        * platform/mediastream/mac/AVAudioCaptureSource.h:
+        * platform/mediastream/mac/AVAudioCaptureSource.mm:
+        (WebCore::AVAudioCaptureSource::captureOutputDidOutputSampleBufferFromConnection):
+        * platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.cpp:
+        (WebCore::AudioTrackPrivateMediaStreamCocoa::audioSamplesAvailable):
+        * platform/mediastream/mac/AudioTrackPrivateMediaStreamCocoa.h:
+        * platform/mediastream/mac/RealtimeOutgoingAudioSource.cpp:
+        (WebCore::RealtimeOutgoingAudioSource::audioSamplesAvailable):
+        * platform/mediastream/mac/RealtimeOutgoingAudioSource.h:
+
+        Finally, actually update MockRealtimeAudioSource to emit stereo samples. Importantly, set
+        the correct values in m_streamFormat; for non-interleaved audio, mBytesPerFrame and
+        mBytesPerPacket are not multiplied by the channel count. When generating audio, write the
+        samples to both channels of data.
+
+        * platform/mediastream/mac/MockRealtimeAudioSourceMac.h:
+        * platform/mediastream/mac/MockRealtimeAudioSourceMac.mm:
+        (WebCore::MockRealtimeAudioSourceMac::emitSampleBuffers):
+        (WebCore::MockRealtimeAudioSourceMac::reconfigure):
+        (WebCore::MockRealtimeAudioSourceMac::render):
+
 2017-02-09  Antti Koivisto  <antti@apple.com>
 
         Nullptr crash under styleForFirstLetter
index 8e3b06d..7170780 100644 (file)
                CDE595951BF16DF300A1CBE8 /* CDMSessionAVContentKeySession.mm in Sources */ = {isa = PBXBuildFile; fileRef = CDE595931BF166AD00A1CBE8 /* CDMSessionAVContentKeySession.mm */; };
                CDE595971BF26E2100A1CBE8 /* CDMSessionMediaSourceAVFObjC.h in Headers */ = {isa = PBXBuildFile; fileRef = CDE595961BF26E2100A1CBE8 /* CDMSessionMediaSourceAVFObjC.h */; };
                CDE5959D1BF2757100A1CBE8 /* CDMSessionMediaSourceAVFObjC.mm in Sources */ = {isa = PBXBuildFile; fileRef = CDE5959C1BF2757100A1CBE8 /* CDMSessionMediaSourceAVFObjC.mm */; };
+               CDE667A41E4BBF1500E8154A /* WebAudioBufferList.cpp in Sources */ = {isa = PBXBuildFile; fileRef = CDE667A21E4BBF1500E8154A /* WebAudioBufferList.cpp */; };
+               CDE667A51E4BBF1500E8154A /* WebAudioBufferList.h in Headers */ = {isa = PBXBuildFile; fileRef = CDE667A31E4BBF1500E8154A /* WebAudioBufferList.h */; };
                CDE7FC44181904B1002BBB77 /* OrderIterator.cpp in Sources */ = {isa = PBXBuildFile; fileRef = CDE7FC42181904B1002BBB77 /* OrderIterator.cpp */; };
                CDE7FC45181904B1002BBB77 /* OrderIterator.h in Headers */ = {isa = PBXBuildFile; fileRef = CDE7FC43181904B1002BBB77 /* OrderIterator.h */; settings = {ATTRIBUTES = (Private, ); }; };
                CDE83DB1183C44060031EAA3 /* VideoPlaybackQuality.cpp in Sources */ = {isa = PBXBuildFile; fileRef = CDE83DAF183C44060031EAA3 /* VideoPlaybackQuality.cpp */; };
                CDE595961BF26E2100A1CBE8 /* CDMSessionMediaSourceAVFObjC.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = CDMSessionMediaSourceAVFObjC.h; sourceTree = "<group>"; };
                CDE5959C1BF2757100A1CBE8 /* CDMSessionMediaSourceAVFObjC.mm */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.objcpp; path = CDMSessionMediaSourceAVFObjC.mm; sourceTree = "<group>"; };
                CDE6560E17CA6E7600526BA7 /* mediaControlsApple.js */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.javascript; path = mediaControlsApple.js; sourceTree = "<group>"; };
+               CDE667A11E4BBA4D00E8154A /* PlatformAudioData.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = PlatformAudioData.h; sourceTree = "<group>"; };
+               CDE667A21E4BBF1500E8154A /* WebAudioBufferList.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = WebAudioBufferList.cpp; sourceTree = "<group>"; };
+               CDE667A31E4BBF1500E8154A /* WebAudioBufferList.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = WebAudioBufferList.h; sourceTree = "<group>"; };
                CDE7FC42181904B1002BBB77 /* OrderIterator.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = OrderIterator.cpp; sourceTree = "<group>"; };
                CDE7FC43181904B1002BBB77 /* OrderIterator.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = OrderIterator.h; sourceTree = "<group>"; };
                CDE83DAF183C44060031EAA3 /* VideoPlaybackQuality.cpp */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = VideoPlaybackQuality.cpp; sourceTree = "<group>"; };
                        isa = PBXGroup;
                        children = (
                                CD669D661D232DFF004D1866 /* MediaSessionManagerCocoa.cpp */,
+                               CDE667A21E4BBF1500E8154A /* WebAudioBufferList.cpp */,
+                               CDE667A31E4BBF1500E8154A /* WebAudioBufferList.h */,
                        );
                        name = cocoa;
                        sourceTree = "<group>";
                                FDB1700414A2BAB200A2B5D9 /* MultiChannelResampler.h */,
                                FD31606C12B026F700C1A359 /* Panner.cpp */,
                                FD31606D12B026F700C1A359 /* Panner.h */,
+                               CDE667A11E4BBA4D00E8154A /* PlatformAudioData.h */,
                                070E091A1875EF71003A1D3C /* PlatformMediaSession.cpp */,
                                070E09181875ED93003A1D3C /* PlatformMediaSession.h */,
                                CDAE8C071746B95700532D78 /* PlatformMediaSessionManager.cpp */,
                                E17B492116A9B8FF001C8839 /* JSTransitionEvent.h in Headers */,
                                1A750D5D0A90DEE1000FF215 /* JSTreeWalker.h in Headers */,
                                A86629CF09DA2B47009633A5 /* JSUIEvent.h in Headers */,
+                               CDE667A51E4BBF1500E8154A /* WebAudioBufferList.h in Headers */,
                                465307D01DB6EE4800E4137C /* JSUIEventInit.h in Headers */,
                                7C73FB12191EF6F4007DE061 /* JSUserMessageHandler.h in Headers */,
                                7C73FB0D191EF5A8007DE061 /* JSUserMessageHandlersNamespace.h in Headers */,
                                BC6932730D7E293900AE44D1 /* JSDOMWindowBase.cpp in Sources */,
                                BCD9C2620C17AA67005C90A2 /* JSDOMWindowCustom.cpp in Sources */,
                                460CBF351D4BCD0E0092E88E /* JSDOMWindowProperties.cpp in Sources */,
+                               CDE667A41E4BBF1500E8154A /* WebAudioBufferList.cpp in Sources */,
                                BCBFB53C0DCD29CF0019B3E5 /* JSDOMWindowShell.cpp in Sources */,
                                A1CC11641E493D0100EFA69C /* FileSystemMac.mm in Sources */,
                                4170A2EA1D8C0CCA00318452 /* JSDOMWrapper.cpp in Sources */,
diff --git a/Source/WebCore/platform/audio/PlatformAudioData.h b/Source/WebCore/platform/audio/PlatformAudioData.h
new file mode 100644 (file)
index 0000000..e4544f6
--- /dev/null
@@ -0,0 +1,45 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+namespace WebCore {
+
+class PlatformAudioData {
+public:
+    virtual ~PlatformAudioData() = default;
+
+    enum class Kind {
+        None,
+        WebAudioBufferList,
+    };
+
+    virtual Kind kind() const { return Kind::None; }
+
+protected:
+    PlatformAudioData() = default;
+};
+
+}
diff --git a/Source/WebCore/platform/audio/WebAudioBufferList.cpp b/Source/WebCore/platform/audio/WebAudioBufferList.cpp
new file mode 100644 (file)
index 0000000..093698c
--- /dev/null
@@ -0,0 +1,100 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include "config.h"
+#include "WebAudioBufferList.h"
+
+#include "CAAudioStreamDescription.h"
+#include "CoreMediaSoftLink.h"
+
+namespace WebCore {
+
+WebAudioBufferList::WebAudioBufferList(const CAAudioStreamDescription& format)
+{
+    // AudioBufferList is a variable-length struct, so create on the heap with a generic new() operator
+    // with a custom size, and initialize the struct manually.
+    uint32_t bufferCount = format.numberOfChannelStreams();
+    uint32_t channelCount = format.numberOfInterleavedChannels();
+
+    uint64_t bufferListSize = offsetof(AudioBufferList, mBuffers) + (sizeof(AudioBuffer) * std::max(1U, bufferCount));
+    ASSERT(bufferListSize <= SIZE_MAX);
+
+    m_listBufferSize = static_cast<size_t>(bufferListSize);
+    m_list = std::unique_ptr<AudioBufferList>(static_cast<AudioBufferList*>(::operator new (m_listBufferSize)));
+    memset(m_list.get(), 0, m_listBufferSize);
+    m_list->mNumberBuffers = bufferCount;
+    for (uint32_t buffer = 0; buffer < bufferCount; ++buffer)
+        m_list->mBuffers[buffer].mNumberChannels = channelCount;
+}
+
+WebAudioBufferList::WebAudioBufferList(const CAAudioStreamDescription& format, uint32_t sampleCount)
+    : WebAudioBufferList(format)
+{
+    if (!sampleCount)
+        return;
+
+    uint32_t bufferCount = format.numberOfChannelStreams();
+    uint32_t channelCount = format.numberOfInterleavedChannels();
+
+    size_t bytesPerBuffer = sampleCount * channelCount * format.bytesPerFrame();
+    m_flatBuffer.grow(bufferCount * bytesPerBuffer);
+    auto data = m_flatBuffer.data();
+
+    for (uint32_t buffer = 0; buffer < m_list->mNumberBuffers; ++buffer) {
+        m_list->mBuffers[buffer].mData = data;
+        data += bytesPerBuffer;
+    }
+}
+
+WebAudioBufferList::WebAudioBufferList(const CAAudioStreamDescription& format, CMSampleBufferRef sampleBuffer)
+    : WebAudioBufferList(format)
+{
+    if (!sampleBuffer)
+        return;
+
+    CMBlockBufferRef buffer = nullptr;
+    if (noErr == CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, nullptr, m_list.get(), m_listBufferSize, kCFAllocatorSystemDefault, kCFAllocatorSystemDefault, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &buffer))
+        m_blockBuffer = adoptCF(buffer);
+}
+
+WTF::IteratorRange<AudioBuffer*> WebAudioBufferList::buffers() const
+{
+    return WTF::makeIteratorRange(&m_list->mBuffers[0], &m_list->mBuffers[m_list->mNumberBuffers]);
+}
+
+uint32_t WebAudioBufferList::bufferCount() const
+{
+    return m_list->mNumberBuffers;
+}
+
+AudioBuffer* WebAudioBufferList::buffer(uint32_t index) const
+{
+    ASSERT(index < m_list->mNumberBuffers);
+    if (index < m_list->mNumberBuffers)
+        return &m_list->mBuffers[index];
+    return nullptr;
+}
+
+}
diff --git a/Source/WebCore/platform/audio/WebAudioBufferList.h b/Source/WebCore/platform/audio/WebAudioBufferList.h
new file mode 100644 (file)
index 0000000..bbf3c13
--- /dev/null
@@ -0,0 +1,68 @@
+/*
+ * Copyright (C) 2017 Apple Inc. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ *    notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ *    notice, this list of conditions and the following disclaimer in the
+ *    documentation and/or other materials provided with the distribution.
+ *
+ * THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS''
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
+ * THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+ * PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS
+ * BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
+ * THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#pragma once
+
+#include "PlatformAudioData.h"
+#include <wtf/IteratorRange.h>
+#include <wtf/RetainPtr.h>
+#include <wtf/Vector.h>
+
+struct AudioBuffer;
+struct AudioBufferList;
+typedef struct OpaqueCMBlockBuffer *CMBlockBufferRef;
+typedef struct opaqueCMSampleBuffer *CMSampleBufferRef;
+
+namespace WebCore {
+
+class CAAudioStreamDescription;
+
+class WebAudioBufferList : public PlatformAudioData {
+public:
+    WebAudioBufferList(const CAAudioStreamDescription&);
+    WebAudioBufferList(const CAAudioStreamDescription&, uint32_t sampleCount);
+    WebAudioBufferList(const CAAudioStreamDescription&, CMSampleBufferRef);
+
+    AudioBufferList* list() const { return m_list.get(); }
+    operator AudioBufferList&() const { return *m_list; }
+
+    uint32_t bufferCount() const;
+    AudioBuffer* buffer(uint32_t index) const;
+    WTF::IteratorRange<AudioBuffer*> buffers() const;
+
+private:
+    Kind kind() const final { return Kind::WebAudioBufferList; }
+
+    size_t m_listBufferSize { 0 };
+    std::unique_ptr<AudioBufferList> m_list;
+    RetainPtr<CMBlockBufferRef> m_blockBuffer;
+    Vector<uint8_t> m_flatBuffer;
+};
+
+}
+
+SPECIALIZE_TYPE_TRAITS_BEGIN(WebCore::WebAudioBufferList)
+static bool isType(const WebCore::PlatformAudioData& data) { return data.kind() == WebCore::PlatformAudioData::Kind::WebAudioBufferList; }
+SPECIALIZE_TYPE_TRAITS_END()
index afc714d..0b53a33 100644 (file)
@@ -47,24 +47,9 @@ AudioSampleBufferList::AudioSampleBufferList(const CAAudioStreamDescription& for
     m_internalFormat = std::make_unique<CAAudioStreamDescription>(format);
 
     m_sampleCapacity = maximumSampleCount;
-    m_sampleCount = 0;
     m_maxBufferSizePerChannel = maximumSampleCount * format.bytesPerFrame() / format.numberOfChannelStreams();
 
     ASSERT(format.sampleRate() >= 0);
-
-    size_t bufferSize = format.numberOfChannelStreams() * m_maxBufferSizePerChannel;
-    ASSERT(bufferSize <= SIZE_MAX);
-    if (bufferSize > SIZE_MAX)
-        return;
-
-    m_bufferListBaseSize = audioBufferListSizeForStream(format);
-    ASSERT(m_bufferListBaseSize <= SIZE_MAX);
-    if (m_bufferListBaseSize > SIZE_MAX)
-        return;
-
-    size_t allocSize = m_bufferListBaseSize + bufferSize;
-    m_bufferList = std::unique_ptr<AudioBufferList>(static_cast<AudioBufferList*>(::operator new (allocSize)));
-
     reset();
 }
 
@@ -138,29 +123,29 @@ OSStatus AudioSampleBufferList::mixFrom(const AudioSampleBufferList& source, siz
 
     m_sampleCount = frameCount;
 
-    AudioBufferList& sourceBuffer = source.bufferList();
-    for (uint32_t i = 0; i < m_bufferList->mNumberBuffers; i++) {
+    WebAudioBufferList& sourceBuffer = source.bufferList();
+    for (uint32_t i = 0; i < m_bufferList->bufferCount(); i++) {
         switch (m_internalFormat->format()) {
         case AudioStreamDescription::Int16: {
-            int16_t* destination = static_cast<int16_t*>(m_bufferList->mBuffers[i].mData);
-            int16_t* source = static_cast<int16_t*>(sourceBuffer.mBuffers[i].mData);
+            int16_t* destination = static_cast<int16_t*>(m_bufferList->buffer(i)->mData);
+            int16_t* source = static_cast<int16_t*>(sourceBuffer.buffer(i)->mData);
             for (size_t i = 0; i < frameCount; i++)
                 destination[i] += source[i];
             break;
         }
         case AudioStreamDescription::Int32: {
-            int32_t* destination = static_cast<int32_t*>(m_bufferList->mBuffers[i].mData);
-            vDSP_vaddi(destination, 1, reinterpret_cast<int32_t*>(sourceBuffer.mBuffers[i].mData), 1, destination, 1, frameCount);
+            int32_t* destination = static_cast<int32_t*>(m_bufferList->buffer(i)->mData);
+            vDSP_vaddi(destination, 1, reinterpret_cast<int32_t*>(sourceBuffer.buffer(i)->mData), 1, destination, 1, frameCount);
             break;
         }
         case AudioStreamDescription::Float32: {
-            float* destination = static_cast<float*>(m_bufferList->mBuffers[i].mData);
-            vDSP_vadd(destination, 1, reinterpret_cast<float*>(sourceBuffer.mBuffers[i].mData), 1, destination, 1, frameCount);
+            float* destination = static_cast<float*>(m_bufferList->buffer(i)->mData);
+            vDSP_vadd(destination, 1, reinterpret_cast<float*>(sourceBuffer.buffer(i)->mData), 1, destination, 1, frameCount);
             break;
         }
         case AudioStreamDescription::Float64: {
-            double* destination = static_cast<double*>(m_bufferList->mBuffers[i].mData);
-            vDSP_vaddD(destination, 1, reinterpret_cast<double*>(sourceBuffer.mBuffers[i].mData), 1, destination, 1, frameCount);
+            double* destination = static_cast<double*>(m_bufferList->buffer(i)->mData);
+            vDSP_vaddD(destination, 1, reinterpret_cast<double*>(sourceBuffer.buffer(i)->mData), 1, destination, 1, frameCount);
             break;
         }
         case AudioStreamDescription::None:
@@ -187,9 +172,9 @@ OSStatus AudioSampleBufferList::copyFrom(const AudioSampleBufferList& source, si
 
     m_sampleCount = frameCount;
 
-    for (uint32_t i = 0; i < m_bufferList->mNumberBuffers; i++) {
-        uint8_t* sourceData = static_cast<uint8_t*>(source.bufferList().mBuffers[i].mData);
-        uint8_t* destination = static_cast<uint8_t*>(m_bufferList->mBuffers[i].mData);
+    for (uint32_t i = 0; i < m_bufferList->bufferCount(); i++) {
+        uint8_t* sourceData = static_cast<uint8_t*>(source.bufferList().buffer(i)->mData);
+        uint8_t* destination = static_cast<uint8_t*>(m_bufferList->buffer(i)->mData);
         memcpy(destination, sourceData, frameCount * m_internalFormat->bytesPerPacket());
     }
 
@@ -200,11 +185,11 @@ OSStatus AudioSampleBufferList::copyTo(AudioBufferList& buffer, size_t frameCoun
 {
     if (frameCount > m_sampleCount)
         return kAudio_ParamError;
-    if (buffer.mNumberBuffers > m_bufferList->mNumberBuffers)
+    if (buffer.mNumberBuffers > m_bufferList->bufferCount())
         return kAudio_ParamError;
 
     for (uint32_t i = 0; i < buffer.mNumberBuffers; i++) {
-        uint8_t* sourceData = static_cast<uint8_t*>(m_bufferList->mBuffers[i].mData);
+        uint8_t* sourceData = static_cast<uint8_t*>(m_bufferList->buffer(i)->mData);
         uint8_t* destination = static_cast<uint8_t*>(buffer.mBuffers[i].mData);
         memcpy(destination, sourceData, frameCount * m_internalFormat->bytesPerPacket());
     }
@@ -218,15 +203,7 @@ void AudioSampleBufferList::reset()
     m_timestamp = 0;
     m_hostTime = -1;
 
-    uint8_t* data = reinterpret_cast<uint8_t*>(m_bufferList.get()) + m_bufferListBaseSize;
-    m_bufferList->mNumberBuffers = m_internalFormat->numberOfChannelStreams();
-    for (uint32_t i = 0; i < m_bufferList->mNumberBuffers; ++i) {
-        auto& buffer = m_bufferList->mBuffers[i];
-        buffer.mData = data;
-        buffer.mDataByteSize = m_maxBufferSizePerChannel;
-        buffer.mNumberChannels = m_internalFormat->numberOfInterleavedChannels();
-        data = data + m_maxBufferSizePerChannel;
-    }
+    m_bufferList = std::make_unique<WebAudioBufferList>(*m_internalFormat, m_maxBufferSizePerChannel);
 }
 
 void AudioSampleBufferList::zero()
@@ -277,22 +254,20 @@ OSStatus AudioSampleBufferList::copyFrom(AudioBufferList& source, AudioConverter
     m_converterInputBytesPerPacket = inputFormat.mBytesPerPacket;
     m_converterInputBuffer = &source;
 
-    auto* outputData = m_bufferList.get();
-
 #if !LOG_DISABLED
     AudioStreamBasicDescription outputFormat;
     propertyDataSize = sizeof(outputFormat);
     AudioConverterGetProperty(converter, kAudioConverterCurrentOutputStreamDescription, &propertyDataSize, &outputFormat);
 
-    ASSERT(outputFormat.mChannelsPerFrame == outputData->mNumberBuffers);
-    for (uint32_t i = 0; i < outputData->mNumberBuffers; ++i) {
-        ASSERT(outputData->mBuffers[i].mData);
-        ASSERT(outputData->mBuffers[i].mDataByteSize);
+    ASSERT(outputFormat.mChannelsPerFrame == m_bufferList->bufferCount());
+    for (uint32_t i = 0; i < m_bufferList->bufferCount(); ++i) {
+        ASSERT(m_bufferList->buffer(i)->mData);
+        ASSERT(m_bufferList->buffer(i)->mDataByteSize);
     }
 #endif
 
     UInt32 samplesConverted = static_cast<UInt32>(m_sampleCapacity);
-    OSStatus err = AudioConverterFillComplexBuffer(converter, audioConverterCallback, this, &samplesConverted, outputData, nullptr);
+    OSStatus err = AudioConverterFillComplexBuffer(converter, audioConverterCallback, this, &samplesConverted, m_bufferList->list(), nullptr);
     if (err) {
         LOG_ERROR("AudioSampleBufferList::copyFrom(%p) AudioConverterFillComplexBuffer returned error %d (%.4s)", this, (int)err, (char*)&err);
         m_sampleCount = std::min(m_sampleCapacity, static_cast<size_t>(samplesConverted));
@@ -312,29 +287,13 @@ OSStatus AudioSampleBufferList::copyFrom(AudioSampleBufferList& source, AudioCon
 OSStatus AudioSampleBufferList::copyFrom(CARingBuffer& ringBuffer, size_t sampleCount, uint64_t startFrame, CARingBuffer::FetchMode mode)
 {
     reset();
-    if (ringBuffer.fetch(&bufferList(), sampleCount, startFrame, mode) != CARingBuffer::Ok)
+    if (ringBuffer.fetch(bufferList().list(), sampleCount, startFrame, mode) != CARingBuffer::Ok)
         return kAudio_ParamError;
 
     m_sampleCount = sampleCount;
     return 0;
 }
 
-void AudioSampleBufferList::configureBufferListForStream(AudioBufferList& bufferList, const CAAudioStreamDescription& format, uint8_t* bufferData, size_t sampleCount)
-{
-    size_t bufferCount = format.numberOfChannelStreams();
-    size_t channelCount = format.numberOfInterleavedChannels();
-    size_t bytesPerChannel = sampleCount * format.bytesPerFrame();
-
-    bufferList.mNumberBuffers = bufferCount;
-    for (unsigned i = 0; i < bufferCount; ++i) {
-        bufferList.mBuffers[i].mNumberChannels = channelCount;
-        bufferList.mBuffers[i].mDataByteSize = bytesPerChannel;
-        bufferList.mBuffers[i].mData = bufferData;
-        if (bufferData)
-            bufferData = bufferData + bytesPerChannel;
-    }
-}
-
 } // namespace WebCore
 
 #endif // ENABLE(MEDIA_STREAM)
index 0e8e24c..c99ee9a 100644
@@ -28,6 +28,7 @@
 #if ENABLE(MEDIA_STREAM)
 
 #include "CARingBuffer.h"
+#include "WebAudioBufferList.h"
 #include <CoreAudio/CoreAudioTypes.h>
 #include <wtf/Lock.h>
 #include <wtf/RefCounted.h>
@@ -44,7 +45,6 @@ public:
 
     ~AudioSampleBufferList();
 
-    static void configureBufferListForStream(AudioBufferList&, const CAAudioStreamDescription&, uint8_t*, size_t);
     static inline size_t audioBufferListSizeForStream(const CAAudioStreamDescription&);
 
     static void applyGain(AudioBufferList&, float, AudioStreamDescription::PCMFormat);
@@ -60,7 +60,7 @@ public:
     OSStatus copyTo(AudioBufferList&, size_t count = SIZE_MAX);
 
     const AudioStreamBasicDescription& streamDescription() const { return m_internalFormat->streamDescription(); }
-    AudioBufferList& bufferList() const { return *m_bufferList.get(); }
+    WebAudioBufferList& bufferList() const { return *m_bufferList; }
 
     uint32_t sampleCapacity() const { return m_sampleCapacity; }
     uint32_t sampleCount() const { return m_sampleCount; }
@@ -93,7 +93,7 @@ protected:
     size_t m_sampleCapacity { 0 };
     size_t m_maxBufferSizePerChannel { 0 };
     size_t m_bufferListBaseSize { 0 };
-    std::unique_ptr<AudioBufferList> m_bufferList;
+    std::unique_ptr<WebAudioBufferList> m_bufferList;
 };
 
 inline size_t AudioSampleBufferList::audioBufferListSizeForStream(const CAAudioStreamDescription& description)
index 7b6391b..9f16352 100644
@@ -152,7 +152,7 @@ void AudioSampleDataSource::pushSamplesInternal(AudioBufferList& bufferList, con
         if (err)
             return;
 
-        sampleBufferList = &m_scratchBuffer->bufferList();
+        sampleBufferList = m_scratchBuffer->bufferList().list();
     } else
         sampleBufferList = &bufferList;
 
@@ -192,34 +192,16 @@ void AudioSampleDataSource::pushSamples(const AudioStreamBasicDescription& sampl
 
     ASSERT_UNUSED(sampleDescription, *m_inputDescription == sampleDescription);
     ASSERT(m_ringBuffer);
-
-    size_t bufferSize = AudioSampleBufferList::audioBufferListSizeForStream(*m_inputDescription.get());
-    uint8_t bufferData[bufferSize];
-    AudioBufferList* bufferList = reinterpret_cast<AudioBufferList*>(bufferData);
-    bufferList->mNumberBuffers = m_inputDescription->numberOfInterleavedChannels();
-
-    CMBlockBufferRef buffer = nullptr;
-    OSStatus err = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, nullptr, bufferList, bufferSize, kCFAllocatorSystemDefault, kCFAllocatorSystemDefault, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &buffer);
-    if (err) {
-        LOG_ERROR("AudioSampleDataSource::pushSamples(%p) - CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer returned error %d (%.4s)", this, (int)err, (char*)&err);
-        return;
-    }
-
-    pushSamplesInternal(*bufferList, toMediaTime(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)), CMSampleBufferGetNumSamples(sampleBuffer));
+
+    WebAudioBufferList list(*m_inputDescription, sampleBuffer);
+    pushSamplesInternal(list, toMediaTime(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)), CMSampleBufferGetNumSamples(sampleBuffer));
 }
 
-void AudioSampleDataSource::pushSamples(const AudioStreamBasicDescription& sampleDescription, const MediaTime& sampleTime, void* audioData, size_t sampleCount)
+void AudioSampleDataSource::pushSamples(const MediaTime& sampleTime, PlatformAudioData& audioData, size_t sampleCount)
 {
     std::unique_lock<Lock> lock(m_lock, std::try_to_lock);
-    ASSERT(*m_inputDescription == sampleDescription);
-
-    CAAudioStreamDescription description(sampleDescription);
-    size_t bufferSize = AudioSampleBufferList::audioBufferListSizeForStream(description);
-    uint8_t bufferData[bufferSize];
-    AudioBufferList* bufferList = reinterpret_cast<AudioBufferList*>(bufferData);
-
-    AudioSampleBufferList::configureBufferListForStream(*bufferList, description, reinterpret_cast<uint8_t*>(audioData), sampleCount);
-    pushSamplesInternal(*bufferList, sampleTime, sampleCount);
+    ASSERT(is<WebAudioBufferList>(audioData));
+    pushSamplesInternal(downcast<WebAudioBufferList>(audioData), sampleTime, sampleCount);
 }
 
 bool AudioSampleDataSource::pullSamplesInternal(AudioBufferList& buffer, size_t& sampleCount, uint64_t timeStamp, double /*hostTime*/, PullMode mode)
index decdeee..9ac3146 100644
@@ -52,7 +52,7 @@ public:
     OSStatus setInputFormat(const CAAudioStreamDescription&);
     OSStatus setOutputFormat(const CAAudioStreamDescription&);
 
-    void pushSamples(const AudioStreamBasicDescription&, const MediaTime&, void*, size_t);
+    void pushSamples(const MediaTime&, PlatformAudioData&, size_t);
     void pushSamples(const AudioStreamBasicDescription&, CMSampleBufferRef);
 
     enum PullMode { Copy, Mix };
index ebdad25..7a8d740 100644
@@ -119,7 +119,7 @@ void RealtimeMediaSource::videoSampleAvailable(MediaSample& mediaSample)
         observer->videoSampleAvailable(mediaSample);
 }
 
-void RealtimeMediaSource::audioSamplesAvailable(const MediaTime& time, void* audioData, const AudioStreamDescription& description, size_t numberOfFrames)
+void RealtimeMediaSource::audioSamplesAvailable(const MediaTime& time, PlatformAudioData& audioData, const AudioStreamDescription& description, size_t numberOfFrames)
 {
     for (const auto& observer : m_observers)
         observer->audioSamplesAvailable(time, audioData, description, numberOfFrames);
index 8fbe694..154f828 100644
@@ -57,6 +57,7 @@ class AudioStreamDescription;
 class FloatRect;
 class GraphicsContext;
 class MediaStreamPrivate;
+class PlatformAudioData;
 class RealtimeMediaSourceSettings;
 
 class RealtimeMediaSource : public RefCounted<RealtimeMediaSource> {
@@ -77,7 +78,7 @@ public:
         virtual void videoSampleAvailable(MediaSample&) { }
 
         // May be called on a background thread.
-        virtual void audioSamplesAvailable(const MediaTime&, void* /*audioData*/, const AudioStreamDescription&, size_t /*numberOfFrames*/) { }
+        virtual void audioSamplesAvailable(const MediaTime&, PlatformAudioData&, const AudioStreamDescription&, size_t /*numberOfFrames*/) { }
     };
 
     virtual ~RealtimeMediaSource() { }
@@ -109,7 +110,7 @@ public:
     virtual void settingsDidChange();
 
     void videoSampleAvailable(MediaSample&);
-    void audioSamplesAvailable(const MediaTime&, void*, const AudioStreamDescription&, size_t);
+    void audioSamplesAvailable(const MediaTime&, PlatformAudioData&, const AudioStreamDescription&, size_t);
     
     bool stopped() const { return m_stopped; }
 
index 6b9e84e..9e5c856 100644
@@ -39,6 +39,7 @@ typedef const struct opaqueCMFormatDescription *CMFormatDescriptionRef;
 
 namespace WebCore {
 
+class WebAudioBufferList;
 class WebAudioSourceProviderAVFObjC;
 
 class AVAudioCaptureSource : public AVMediaCaptureSource, public AudioCaptureSourceProviderObjC {
@@ -66,8 +67,7 @@ private:
     AudioSourceProvider* audioSourceProvider() override;
 
     RetainPtr<AVCaptureConnection> m_audioConnection;
-    size_t m_listBufferSize { 0 };
-    std::unique_ptr<AudioBufferList> m_list;
+    std::unique_ptr<WebAudioBufferList> m_list;
 
     RefPtr<WebAudioSourceProviderAVFObjC> m_audioSourceProvider;
     std::unique_ptr<CAAudioStreamDescription> m_inputDescription;
index 020f280..3a613b0 100644
@@ -193,10 +193,6 @@ void AVAudioCaptureSource::captureOutputDidOutputSampleBufferFromConnection(AVCa
     const AudioStreamBasicDescription* streamDescription = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription);
     if (!m_inputDescription || *m_inputDescription != *streamDescription) {
         m_inputDescription = std::make_unique<CAAudioStreamDescription>(*streamDescription);
-        m_listBufferSize = AudioSampleBufferList::audioBufferListSizeForStream(*m_inputDescription.get());
-        m_list = std::unique_ptr<AudioBufferList>(static_cast<AudioBufferList*>(::operator new (m_listBufferSize)));
-        memset(m_list.get(), 0, m_listBufferSize);
-        m_list->mNumberBuffers = m_inputDescription->numberOfChannelStreams();
 
         if (!m_observers.isEmpty()) {
             for (auto& observer : m_observers)
@@ -204,13 +200,8 @@ void AVAudioCaptureSource::captureOutputDidOutputSampleBufferFromConnection(AVCa
         }
     }
 
-    CMItemCount frameCount = CMSampleBufferGetNumSamples(sampleBuffer);
-    CMBlockBufferRef buffer = nil;
-    OSStatus err = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, nullptr, m_list.get(), m_listBufferSize, kCFAllocatorSystemDefault, kCFAllocatorSystemDefault, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &buffer);
-    if (!err)
-        audioSamplesAvailable(toMediaTime(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)), m_list->mBuffers[0].mData, CAAudioStreamDescription(*streamDescription), frameCount);
-    else
-        LOG_ERROR("AVAudioCaptureSource::captureOutputDidOutputSampleBufferFromConnection(%p) - CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer returned error %d (%.4s)", this, (int)err, (char*)&err);
+    m_list = std::make_unique<WebAudioBufferList>(*m_inputDescription, sampleBuffer);
+    audioSamplesAvailable(toMediaTime(CMSampleBufferGetPresentationTimeStamp(sampleBuffer)), *m_list, CAAudioStreamDescription(*streamDescription), CMSampleBufferGetNumSamples(sampleBuffer));
 
     if (m_observers.isEmpty())
         return;
index 6d63b7e..c804cf0 100644
@@ -178,7 +178,7 @@ OSStatus AudioTrackPrivateMediaStreamCocoa::setupAudioUnit()
     return err;
 }
 
-void AudioTrackPrivateMediaStreamCocoa::audioSamplesAvailable(const MediaTime& sampleTime, void* audioData, const AudioStreamDescription& description, size_t sampleCount)
+void AudioTrackPrivateMediaStreamCocoa::audioSamplesAvailable(const MediaTime& sampleTime, PlatformAudioData& audioData, const AudioStreamDescription& description, size_t sampleCount)
 {
     ASSERT(description.platformDescription().type == PlatformDescription::CAAudioStreamBasicType);
 
@@ -215,7 +215,7 @@ void AudioTrackPrivateMediaStreamCocoa::audioSamplesAvailable(const MediaTime& s
         m_dataSource->setVolume(m_volume);
     }
 
-    m_dataSource->pushSamples(m_inputDescription->streamDescription(), sampleTime, audioData, sampleCount);
+    m_dataSource->pushSamples(sampleTime, audioData, sampleCount);
 
     if (m_autoPlay)
         playInternal();
index 27f99da..b6e7f41 100644
@@ -66,7 +66,7 @@ private:
     void sourceMutedChanged()  final { }
     void sourceSettingsChanged() final { }
     bool preventSourceFromStopping() final { return false; }
-    void audioSamplesAvailable(const MediaTime&, void*, const AudioStreamDescription&, size_t) final;
+    void audioSamplesAvailable(const MediaTime&, PlatformAudioData&, const AudioStreamDescription&, size_t) final;
 
     static OSStatus inputProc(void*, AudioUnitRenderActionFlags*, const AudioTimeStamp*, UInt32 inBusNumber, UInt32 numberOfFrames, AudioBufferList*);
     OSStatus render(UInt32 sampleCount, AudioBufferList&, UInt32 inBusNumber, const AudioTimeStamp&, AudioUnitRenderActionFlags&);
index 1433d7d..915ce5f 100644
 #include <CoreAudio/CoreAudioTypes.h>
 
 OBJC_CLASS AVAudioPCMBuffer;
-typedef struct AudioBufferList AudioBufferList;
 typedef struct OpaqueCMClock* CMClockRef;
 typedef const struct opaqueCMFormatDescription* CMFormatDescriptionRef;
 
 namespace WebCore {
 
+class WebAudioBufferList;
 class WebAudioSourceProviderAVFObjC;
 
 class MockRealtimeAudioSourceMac final : public MockRealtimeAudioSource, public AudioCaptureSourceProviderObjC {
@@ -68,7 +68,7 @@ private:
     AudioSourceProvider* audioSourceProvider() final;
 
     size_t m_audioBufferListBufferSize { 0 };
-    std::unique_ptr<AudioBufferList> m_audioBufferList;
+    std::unique_ptr<WebAudioBufferList> m_audioBufferList;
 
     uint32_t m_maximiumFrameCount;
     uint32_t m_sampleRate { 44100 };
index b4f8745..09dc203 100644
@@ -38,6 +38,7 @@
 #import "MediaSampleAVFObjC.h"
 #import "NotImplemented.h"
 #import "RealtimeMediaSourceSettings.h"
+#import "WebAudioBufferList.h"
 #import "WebAudioSourceProviderAVFObjC.h"
 #import <AVFoundation/AVAudioBuffer.h>
 #import <AudioToolbox/AudioConverter.h>
@@ -102,7 +103,7 @@ void MockRealtimeAudioSourceMac::emitSampleBuffers(uint32_t frameCount)
     CMTime startTime = CMTimeMake(m_bytesEmitted, m_sampleRate);
     m_bytesEmitted += frameCount;
 
-    audioSamplesAvailable(toMediaTime(startTime), m_audioBufferList->mBuffers[0].mData, CAAudioStreamDescription(m_streamFormat), frameCount);
+    audioSamplesAvailable(toMediaTime(startTime), *m_audioBufferList, CAAudioStreamDescription(m_streamFormat), frameCount);
 
     CMSampleBufferRef sampleBuffer;
     OSStatus result = CMAudioSampleBufferCreateWithPacketDescriptions(nullptr, nullptr, true, nullptr, nullptr, m_formatDescription.get(), frameCount, startTime, nullptr, &sampleBuffer);
@@ -113,7 +114,7 @@ void MockRealtimeAudioSourceMac::emitSampleBuffers(uint32_t frameCount)
         return;
 
     auto buffer = adoptCF(sampleBuffer);
-    result = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer, kCFAllocatorDefault, kCFAllocatorDefault, 0, m_audioBufferList.get());
+    result = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer, kCFAllocatorDefault, kCFAllocatorDefault, 0, m_audioBufferList->list());
     ASSERT(!result);
 
     result = CMSampleBufferSetDataReady(sampleBuffer);
@@ -130,33 +131,13 @@ void MockRealtimeAudioSourceMac::reconfigure()
 
     const int bytesPerFloat = sizeof(Float32);
     const int bitsPerByte = 8;
-    int channelCount = 1;
-    m_streamFormat = { };
-    m_streamFormat.mSampleRate = m_sampleRate;
-    m_streamFormat.mFormatID = kAudioFormatLinearPCM;
-    m_streamFormat.mFormatFlags = kAudioFormatFlagsNativeFloatPacked;
-    m_streamFormat.mBytesPerPacket = bytesPerFloat * channelCount;
-    m_streamFormat.mFramesPerPacket = 1;
-    m_streamFormat.mBytesPerFrame = bytesPerFloat * channelCount;
-    m_streamFormat.mChannelsPerFrame = channelCount;
-    m_streamFormat.mBitsPerChannel = bitsPerByte * bytesPerFloat;
-
-    // AudioBufferList is a variable-length struct, so create on the heap with a generic new() operator
-    // with a custom size, and initialize the struct manually.
-    uint32_t bufferDataSize = m_streamFormat.mBytesPerFrame * m_maximiumFrameCount;
-    uint32_t baseSize = AudioSampleBufferList::audioBufferListSizeForStream(m_streamFormat);
-
-    uint64_t bufferListSize = baseSize + bufferDataSize;
-    ASSERT(bufferListSize <= SIZE_MAX);
-    if (bufferListSize > SIZE_MAX)
-        return;
-
-    m_audioBufferListBufferSize = static_cast<size_t>(bufferListSize);
-    m_audioBufferList = std::unique_ptr<AudioBufferList>(static_cast<AudioBufferList*>(::operator new (m_audioBufferListBufferSize)));
-    memset(m_audioBufferList.get(), 0, m_audioBufferListBufferSize);
+    const int channelCount = 2;
+    const bool isFloat = true;
+    const bool isBigEndian = false;
+    const bool isNonInterleaved = true;
+    FillOutASBDForLPCM(m_streamFormat, m_sampleRate, channelCount, bitsPerByte * bytesPerFloat, bitsPerByte * bytesPerFloat, isFloat, isBigEndian, isNonInterleaved);
 
-    uint8_t* bufferData = reinterpret_cast<uint8_t*>(m_audioBufferList.get()) + baseSize;
-    AudioSampleBufferList::configureBufferListForStream(*m_audioBufferList.get(), m_streamFormat, bufferData, bufferDataSize);
+    m_audioBufferList = std::make_unique<WebAudioBufferList>(m_streamFormat, m_streamFormat.mBytesPerFrame * m_maximiumFrameCount);
 
     CMFormatDescriptionRef formatDescription;
     CMAudioFormatDescriptionCreate(NULL, &m_streamFormat, 0, NULL, 0, NULL, NULL, &formatDescription);
@@ -178,40 +159,42 @@ void MockRealtimeAudioSourceMac::render(double delta)
     uint32_t totalFrameCount = alignTo16Bytes(delta * m_sampleRate);
     uint32_t frameCount = std::min(totalFrameCount, m_maximiumFrameCount);
     double elapsed = elapsedTime();
-    while (frameCount) {
-        float *buffer = static_cast<float *>(m_audioBufferList->mBuffers[0].mData);
-        for (uint32_t frame = 0; frame < frameCount; ++frame) {
-            int phase = fmod(elapsed, 2) * 15;
-            double increment = 0;
-            bool silent = true;
-
-            switch (phase) {
-            case 0:
-            case 14: {
-                int index = fmod(elapsed, 1) * 2;
-                increment = tau * frequencies[index] / m_sampleRate;
-                silent = false;
-                break;
-            }
-            default:
-                break;
-            }
 
-            if (silent) {
-                buffer[frame] = 0;
-                continue;
-            }
-
-            float tone = sin(theta) * 0.25;
-            buffer[frame] = tone;
+    while (frameCount) {
+        for (auto& audioBuffer : m_audioBufferList->buffers()) {
+            audioBuffer.mDataByteSize = frameCount * m_streamFormat.mBytesPerFrame;
+            float *buffer = static_cast<float *>(audioBuffer.mData);
+            for (uint32_t frame = 0; frame < frameCount; ++frame) {
+                int phase = fmod(elapsed, 2) * 15;
+                double increment = 0;
+                bool silent = true;
+
+                switch (phase) {
+                case 0:
+                case 14: {
+                    int index = fmod(elapsed, 1) * 2;
+                    increment = tau * frequencies[index] / m_sampleRate;
+                    silent = false;
+                    break;
+                }
+                default:
+                    break;
+                }
+
+                if (silent) {
+                    buffer[frame] = 0;
+                    continue;
+                }
+
+                float tone = sin(theta) * 0.25;
+                buffer[frame] = tone;
 
                 theta += increment;
-            if (theta > tau)
-                theta -= tau;
-            elapsed += 1 / m_sampleRate;
+                if (theta > tau)
+                    theta -= tau;
+                elapsed += 1 / m_sampleRate;
+            }
         }
-
-        m_audioBufferList->mBuffers[0].mDataByteSize = frameCount * sizeof(float);
         emitSampleBuffers(frameCount);
         totalFrameCount -= frameCount;
         frameCount = std::min(totalFrameCount, m_maximiumFrameCount);
index 3ebac2d..f1e9e99 100644
@@ -35,7 +35,7 @@
 
 namespace WebCore {
 
-void RealtimeOutgoingAudioSource::audioSamplesAvailable(const MediaTime&, void*, const AudioStreamDescription&, size_t)
+void RealtimeOutgoingAudioSource::audioSamplesAvailable(const MediaTime&, PlatformAudioData&, const AudioStreamDescription&, size_t)
 {
     notImplemented();
 }
index 4c4e918..fe1410c 100644
@@ -66,7 +66,7 @@ private:
     void sourceMutedChanged() final { }
     void sourceSettingsChanged() final { }
     bool preventSourceFromStopping() final { return false; }
-    void audioSamplesAvailable(const MediaTime&, void*, const AudioStreamDescription&, size_t) final;
+    void audioSamplesAvailable(const MediaTime&, PlatformAudioData&, const AudioStreamDescription&, size_t) final;
 
     void convertAndSendMonoSamples();
     void convertAndSendStereoSamples();