[WebRTC] Prevent capturing at unconventional resolutions when using the SW encoder on Mac
author     commit-queue@webkit.org <commit-queue@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Thu, 22 Jun 2017 15:29:14 +0000 (15:29 +0000)
committer  commit-queue@webkit.org <commit-queue@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Thu, 22 Jun 2017 15:29:14 +0000 (15:29 +0000)
https://bugs.webkit.org/show_bug.cgi?id=172602
<rdar://problem/32407693>

Patch by Youenn Fablet <youenn@apple.com> on 2017-06-22
Reviewed by Eric Carlson.

Source/ThirdParty/libwebrtc:

Add a parameter to disable the hardware encoder.
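
For reference, the changed virtual hook boils down to the following signature (a condensed sketch of the encoder.h change shown in the diff below; the defaulted flag keeps existing callers building unchanged):

    virtual int CreateCompressionSession(VTCompressionSessionRef&, VTCompressionOutputCallback,
                                         int32_t width, int32_t height,
                                         bool useHardwareEncoder = true);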

* Source/webrtc/sdk/objc/Framework/Classes/VideoToolbox/encoder.h:
* Source/webrtc/sdk/objc/Framework/Classes/VideoToolbox/encoder.mm:
(webrtc::H264VideoToolboxEncoder::CreateCompressionSession):

Source/WebCore:

Test: platform/mac/webrtc/captureCanvas-webrtc-software-encoder.html

Add an internal API to switch the hardware H264 encoder on and off.
Add checks for standard frame sizes. If the software encoder is used and the frame size
is not standard, the session is destroyed and no frame is sent at all.

Added tests based on captureStream.
Fixed the case of capturing a canvas whose size is changing.
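
The WebCore-side override now gates the software path on frame size. Roughly (a condensed sketch of the H264VideoToolBoxEncoder.mm change in the diff below; createHardwareSession and createSoftwareSession are illustrative stand-ins for the inline VTCompressionSessionCreate calls, and the soft-link availability check is omitted):

    int H264VideoToolboxEncoder::CreateCompressionSession(VTCompressionSessionRef& session,
        VTCompressionOutputCallback callback, int32_t width, int32_t height, bool useHardwareEncoder)
    {
        // Try the hardware encoder first when allowed.
        if (useHardwareEncoder && createHardwareSession(session, callback, width, height) == noErr)
            return noErr;

        // The software encoder only handles a fixed set of standard resolutions
        // (1280x720, 960x540, 640x480, 352x288, 320x240 and their rotations).
        if (!isStandardFrameSize(width, height)) {
            DestroyCompressionSession();
            return -1; // No session, so no frame is sent for this configuration.
        }

        return createSoftwareSession(session, callback, width, height);
    }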

* Modules/mediastream/CanvasCaptureMediaStreamTrack.cpp:
(WebCore::CanvasCaptureMediaStreamTrack::Source::canvasResized):
* platform/mediastream/libwebrtc/H264VideoToolBoxEncoder.h:
* platform/mediastream/libwebrtc/H264VideoToolBoxEncoder.mm:
(WebCore::H264VideoToolboxEncoder::setHardwareEncoderForWebRTCAllowed):
(WebCore::H264VideoToolboxEncoder::hardwareEncoderForWebRTCAllowed):
(WebCore::isUsingSoftwareEncoder):
(WebCore::H264VideoToolboxEncoder::CreateCompressionSession):
(isStandardFrameSize): Added.
(isUsingSoftwareEncoder): Added.
* testing/Internals.cpp:
(WebCore::Internals::setH264HardwareEncoderAllowed):
* testing/Internals.h:
* testing/Internals.idl:

LayoutTests:

* platform/mac-wk1/TestExpectations: Mark captureCanvas as flaky due to "AVDCreateGPUAccelerator: Error loading GPU renderer" appearing on some bots.
* platform/mac/webrtc/captureCanvas-webrtc-software-encoder-expected.txt: Copied from LayoutTests/webrtc/captureCanvas-webrtc-expected.txt.
* platform/mac/webrtc/captureCanvas-webrtc-software-encoder.html: Added.
* webrtc/captureCanvas-webrtc-expected.txt:
* webrtc/captureCanvas-webrtc.html:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@218699 268f45cc-cd09-0410-ab3c-d52691b4dbfc

18 files changed:
LayoutTests/ChangeLog
LayoutTests/platform/mac-wk1/TestExpectations
LayoutTests/platform/mac/webrtc/captureCanvas-webrtc-software-encoder-expected.txt [new file with mode: 0644]
LayoutTests/platform/mac/webrtc/captureCanvas-webrtc-software-encoder.html [new file with mode: 0644]
LayoutTests/webrtc/captureCanvas-webrtc-expected.txt
LayoutTests/webrtc/captureCanvas-webrtc.html
LayoutTests/webrtc/routines.js
LayoutTests/webrtc/video.html
Source/ThirdParty/libwebrtc/ChangeLog
Source/ThirdParty/libwebrtc/Source/webrtc/sdk/objc/Framework/Classes/VideoToolbox/encoder.h
Source/ThirdParty/libwebrtc/Source/webrtc/sdk/objc/Framework/Classes/VideoToolbox/encoder.mm
Source/WebCore/ChangeLog
Source/WebCore/Modules/mediastream/CanvasCaptureMediaStreamTrack.cpp
Source/WebCore/platform/mediastream/libwebrtc/H264VideoToolBoxEncoder.h
Source/WebCore/platform/mediastream/libwebrtc/H264VideoToolBoxEncoder.mm
Source/WebCore/testing/Internals.cpp
Source/WebCore/testing/Internals.h
Source/WebCore/testing/Internals.idl

index 64bc72e..b42661b 100644 (file)
@@ -1,3 +1,17 @@
+2017-06-22  Youenn Fablet  <youenn@apple.com>
+
+        [WebRTC] Prevent capturing at unconventional resolutions when using the SW encoder on Mac
+        https://bugs.webkit.org/show_bug.cgi?id=172602
+        <rdar://problem/32407693>
+
+        Reviewed by Eric Carlson.
+
+        * platform/mac-wk1/TestExpectations: Mark captureCanvas as flaky due to "AVDCreateGPUAccelerator: Error loading GPU renderer" appearing on some bots.
+        * platform/mac/webrtc/captureCanvas-webrtc-software-encoder-expected.txt: Copied from LayoutTests/webrtc/captureCanvas-webrtc-expected.txt.
+        * platform/mac/webrtc/captureCanvas-webrtc-software-encoder.html: Added.
+        * webrtc/captureCanvas-webrtc-expected.txt:
+        * webrtc/captureCanvas-webrtc.html:
+
 2017-06-22  Joseph Pecoraro  <pecoraro@apple.com>
 
         LayoutTests/inspector/indexeddb/requestDatabaseNames.html: Sort database names to prevent flakiness
index 8ea6886..4704a23 100644 (file)
@@ -98,7 +98,7 @@ http/tests/ssl/media-stream
 imported/w3c/web-platform-tests/webrtc [ Skip ]
 webrtc [ Skip ]
 webrtc/datachannel [ Pass ]
-webrtc/captureCanvas-webrtc.html [ Pass ]
+webrtc/captureCanvas-webrtc.html [ Failure Pass ]
 
 # These tests test the Shadow DOM based HTML form validation UI but Mac WK1 is using native dialogs instead.
 fast/forms/validation-message-on-listbox.html
diff --git a/LayoutTests/platform/mac/webrtc/captureCanvas-webrtc-software-encoder-expected.txt b/LayoutTests/platform/mac/webrtc/captureCanvas-webrtc-software-encoder-expected.txt
new file mode 100644 (file)
index 0000000..9334707
--- /dev/null
@@ -0,0 +1,4 @@
+   
+
+PASS captureStream with webrtc 
+
diff --git a/LayoutTests/platform/mac/webrtc/captureCanvas-webrtc-software-encoder.html b/LayoutTests/platform/mac/webrtc/captureCanvas-webrtc-software-encoder.html
new file mode 100644 (file)
index 0000000..0a5ce25
--- /dev/null
@@ -0,0 +1,99 @@
+<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
+<html>
+    <head>
+        <canvas id="canvas0" width=320px height=240px></canvas>
+        <canvas id="canvas1" width=100px height=100px></canvas>
+        <video id="video" autoplay width=320px height=240px></video>
+        <canvas id="canvas2" width=320px height=240px></canvas>
+        <script src="../../../resources/testharness.js"></script>
+        <script src="../../../resources/testharnessreport.js"></script>
+        <script src="../../../webrtc/routines.js"></script>
+        <script>
+
+function printRectangle(canvas)
+{
+    var context = canvas.getContext("2d");
+    context.fillStyle = canvas.color;
+    context.fillRect(0, 0, 100, 100);
+    setTimeout(() => printRectangle(canvas), 50);
+}
+
+if (window.internals)
+    internals.setH264HardwareEncoderAllowed(false);
+
+function testCanvas(testName, array1, isSame, count)
+{
+    if (count === undefined)
+        count = 0;
+    canvas2.getContext("2d").drawImage(video, 0 ,0);
+    array2 = canvas2.getContext("2d").getImageData(20, 20, 60, 60).data;
+    var isEqual = true;
+    for (var index = 0; index < array1.length; ++index) {
+        // Rough comparison since we are compressing data.
+        // This test still catches errors since we are going from green to blue to red.
+        if (Math.abs(array1[index] - array2[index]) > 40) {
+            isEqual = false;
+            continue;
+        }
+    }
+    if (isEqual === isSame)
+        return;
+
+    if (count === 20)
+        return Promise.reject(testName + " failed");
+
+    return waitFor(50).then(() => {
+        return testCanvas(testName, array1, isSame, ++count);
+    });
+}
+
+var canvas0Track;
+var sender;
+promise_test((test) => {
+    canvas0.color = "green";
+    printRectangle(canvas0);
+    return new Promise((resolve, reject) => {
+        createConnections((firstConnection) => {
+            var stream = canvas0.captureStream();
+            canvas0Track = stream.getVideoTracks()[0];
+            sender = firstConnection.addTrack(canvas0Track, stream);
+        }, (secondConnection) => {
+            secondConnection.ontrack = (trackEvent) => {
+                resolve(trackEvent.streams[0]);
+            };
+        });
+        setTimeout(() => reject("Test timed out"), 5000);
+    }).then((stream) => {
+        video.srcObject = stream;
+        return video.play();
+    }).then(() => {
+        return testCanvas("test1", canvas0.getContext("2d").getImageData(20 ,20, 60, 60).data, true);
+    }).then(() => {
+        canvas1.color = "blue";
+        printRectangle(canvas1);
+        var stream = canvas1.captureStream();
+        return sender.replaceTrack(stream.getVideoTracks()[0]);
+    }).then(() => {
+        return waitFor(200);
+    }).then(() => {
+        return testCanvas("test2", canvas1.getContext("2d").getImageData(20 ,20, 60, 60).data, false);
+    }).then(() => {
+        return testCanvas("test3", canvas0.getContext("2d").getImageData(20 ,20, 60, 60).data, true);
+    }).then(() => {
+        return sender.replaceTrack(canvas0Track);
+    }).then(() => {
+        canvas0.color = "red";
+        // Let's wait for red color to be printed on canvas0
+        return waitFor(200);
+    }).then(() => {
+        return testCanvas("test4", canvas0.getContext("2d").getImageData(20 ,20, 60, 60).data, true);
+    }).catch((error) => {
+        if (window.internals)
+            internals.setH264HardwareEncoderAllowed(true);
+       return Promise.reject(error);
+    });
+}, "captureStream with webrtc");
+
+        </script>
+    </head>
+</html>
index 36e7e58..7616ec7 100644 (file)
@@ -1,7 +1,8 @@
-   
 
 PASS Setting up the connection 
 PASS Checking canvas is green 
 PASS Checking canvas is red 
 PASS Checking canvas is green again 
+PASS Checking canvas size change 
 
index 50ea4b9..298c6c1 100644 (file)
@@ -1,27 +1,50 @@
 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML//EN">
 <html>
     <head>
-        <canvas id="canvas1" width=100px height=100px></canvas>
-        <video id="video" autoplay width=100px height=100px></video>
-        <canvas id="canvas2" width=100px height=100px></canvas>
+        <canvas id="canvas1" width=320px height=240px></canvas>
+        <video id="video" autoplay></video>
+        <canvas id="canvas2" width=320px height=240px></canvas>
         <script src="../resources/testharness.js"></script>
         <script src="../resources/testharnessreport.js"></script>
         <script src ="routines.js"></script>
         <script>
 
-var canvas1 = document.getElementById("canvas1");
-var canvas2 = document.getElementById("canvas2");
-var video = document.getElementById("video");
-
 var color = "green";
 function printRectangle()
 {
     var context = canvas1.getContext("2d");
     context.fillStyle = color;
-    context.fillRect(0, 0, 100, 100);
+    context.fillRect(0, 0, 320, 240);
     setTimeout(printRectangle, 50);
 }
 
+function testCanvas(testName, array1, isSame, count)
+{
+    if (count === undefined)
+        count = 0;
+    canvas2.getContext("2d").drawImage(video, 0 ,0);
+    array2 = canvas2.getContext("2d").getImageData(20, 20, 60, 60).data;
+    var isEqual = true;
+    var index = 0;
+    for (index = 0; index < array1.length; ++index) {
+        // Rough comparison since we are compressing data.
+        // This test still catches errors since we are going from green to blue to red.
+        if (Math.abs(array1[index] - array2[index]) > 40) {
+            isEqual = false;
+            continue;
+        }
+    }
+    if (isEqual === isSame)
+        return;
+
+    if (count === 20)
+        return Promise.reject(testName + " failed, expected " + JSON.stringify(array1) + " but got " + JSON.stringify(array2));
+
+    return waitFor(50).then(() => {
+        return testCanvas(testName, array1, isSame, ++count);
+    });
+}
+
 promise_test((test) => {
     printRectangle();
     return new Promise((resolve, reject) => {
@@ -46,16 +69,14 @@ promise_test((test) => {
 
 promise_test((test) => {
     return waitFor(100).then(() => {
-        canvas2.getContext("2d").drawImage(video, 0 ,0);
-        assert_array_equals(canvas2.getContext("2d").getImageData(20 ,20, 60, 60), canvas1.getContext("2d").getImageData(20, 20, 60, 60));
+        return testCanvas("test 1", canvas1.getContext("2d").getImageData(20, 20, 60, 60).data, true);
     });
 }, "Checking canvas is green");
 
 promise_test((test) => {
     color = "red";
     return waitFor(300).then(() => {
-        canvas2.getContext("2d").drawImage(video, 0 ,0);
-        assert_array_equals(canvas2.getContext("2d").getImageData(20 ,20, 60, 60), canvas1.getContext("2d").getImageData(20, 20, 60, 60));
+        return testCanvas("test 2", canvas1.getContext("2d").getImageData(20, 20, 60, 60).data, true);
     });
 }, "Checking canvas is red");
 
@@ -63,10 +84,15 @@ promise_test((test) => {
 promise_test((test) => {
     color = "green";
     return waitFor(300).then(() => {
-        canvas2.getContext("2d").drawImage(video, 0 ,0);
-        assert_array_equals(canvas2.getContext("2d").getImageData(20 ,20, 60, 60), canvas1.getContext("2d").getImageData(20, 20, 60, 60));
+        return testCanvas("test 3", canvas1.getContext("2d").getImageData(20, 20, 60, 60).data, true);
     });
 }, "Checking canvas is green again");
+
+promise_test((test) => {
+        canvas1.width = 640;
+        canvas1.height = 480;
+        return waitForVideoSize(video, 640, 480);
+}, "Checking canvas size change");
         </script>
     </head>
 </html>
index 9567b38..abe3920 100644 (file)
@@ -141,7 +141,7 @@ function waitForVideoSize(video, width, height, count)
     if (count === undefined)
         count = 0;
     if (++count > 20)
-        return Promise.reject("waitForVideoSize timed out");
+        return Promise.reject("waitForVideoSize timed out, expected " + width + "x"+ height + " but got " + video.videoWidth + "x" + video.videoHeight);
 
     return waitFor(50).then(() => {
         return waitForVideoSize(video, width, height, count);
index 766a6db..6d060e6 100644 (file)
@@ -43,7 +43,7 @@ promise_test((test) => {
     if (window.testRunner)
         testRunner.setUserMediaPermission(true);
 
-    return navigator.mediaDevices.getUserMedia({ video: true}).then((stream) => {
+    return navigator.mediaDevices.getUserMedia({video: {advanced: [{width:{min:1280}}, {height:{min:720} } ]}}).then((stream) => {
         return new Promise((resolve, reject) => {
             createConnections((firstConnection) => {
                 var track = stream.getVideoTracks()[0];
index c803998..cfc2916 100644 (file)
@@ -1,3 +1,17 @@
+2017-06-22  Youenn Fablet  <youenn@apple.com>
+
+        [WebRTC] Prevent capturing at unconventional resolutions when using the SW encoder on Mac
+        https://bugs.webkit.org/show_bug.cgi?id=172602
+        <rdar://problem/32407693>
+
+        Reviewed by Eric Carlson.
+
+        Add a parameter to disable the hardware encoder.
+
+        * Source/webrtc/sdk/objc/Framework/Classes/VideoToolbox/encoder.h:
+        * Source/webrtc/sdk/objc/Framework/Classes/VideoToolbox/encoder.mm:
+        (webrtc::H264VideoToolboxEncoder::CreateCompressionSession):
+
 2017-06-21  Youenn Fablet  <youenn@apple.com>
 
         Update libyuv to 8cab2e31d76246263206318f3568d452e7f3ff3e
index 15aaa6d..bf29364 100644 (file)
@@ -70,12 +70,12 @@ class WEBRTC_DYLIB_EXPORT H264VideoToolboxEncoder : public H264Encoder {
   ScalingSettings GetScalingSettings() const override;
 
  protected:
-  virtual int CreateCompressionSession(VTCompressionSessionRef&, VTCompressionOutputCallback, int32_t width, int32_t height);
+  virtual int CreateCompressionSession(VTCompressionSessionRef&, VTCompressionOutputCallback, int32_t width, int32_t height, bool useHardwareEncoder = true);
+  void DestroyCompressionSession();
 
  private:
   int ResetCompressionSession();
   void ConfigureCompressionSession();
-  void DestroyCompressionSession();
   rtc::scoped_refptr<VideoFrameBuffer> GetScaledBufferOnEncode(
       const rtc::scoped_refptr<VideoFrameBuffer>& frame);
   void SetBitrateBps(uint32_t bitrate_bps);
index de69405..15ce9b3 100644 (file)
@@ -533,7 +533,7 @@ int H264VideoToolboxEncoder::ResetCompressionSession() {
   return WEBRTC_VIDEO_CODEC_OK;
 }
 
-int H264VideoToolboxEncoder::CreateCompressionSession(VTCompressionSessionRef& compressionSession, VTCompressionOutputCallback outputCallback, int32_t width, int32_t height) {
+int H264VideoToolboxEncoder::CreateCompressionSession(VTCompressionSessionRef& compressionSession, VTCompressionOutputCallback outputCallback, int32_t width, int32_t height, bool useHardwareEncoder) {
 
   // Set source image buffer attributes. These attributes will be present on
   // buffers retrieved from the encoder's pixel buffer pool.
@@ -567,7 +567,7 @@ int H264VideoToolboxEncoder::CreateCompressionSession(VTCompressionSessionRef& c
 
 #if defined(WEBRTC_USE_VTB_HARDWARE_ENCODER)
   CFTypeRef sessionKeys[] = {kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder};
-  CFTypeRef sessionValues[] = { kCFBooleanTrue };
+  CFTypeRef sessionValues[] = { useHardwareEncoder ? kCFBooleanTrue : kCFBooleanFalse };
   CFDictionaryRef encoderSpecification = internal::CreateCFDictionary(sessionKeys, sessionValues, 1);
 #else
   CFDictionaryRef encoderSpecification = nullptr;
index 2e02966..63e0e6e 100644 (file)
@@ -1,3 +1,35 @@
+2017-06-22  Youenn Fablet  <youenn@apple.com>
+
+        [WebRTC] Prevent capturing at unconventional resolutions when using the SW encoder on Mac
+        https://bugs.webkit.org/show_bug.cgi?id=172602
+        <rdar://problem/32407693>
+
+        Reviewed by Eric Carlson.
+
+        Test: platform/mac/webrtc/captureCanvas-webrtc-software-encoder.html
+
+        Add an internal API to switch the hardware H264 encoder on and off.
+        Add checks for standard frame sizes. If the software encoder is used and the frame size
+        is not standard, the session is destroyed and no frame is sent at all.
+
+        Added tests based on captureStream.
+        Fixed the case of capturing a canvas whose size is changing.
+
+        * Modules/mediastream/CanvasCaptureMediaStreamTrack.cpp:
+        (WebCore::CanvasCaptureMediaStreamTrack::Source::canvasResized):
+        * platform/mediastream/libwebrtc/H264VideoToolBoxEncoder.h:
+        * platform/mediastream/libwebrtc/H264VideoToolBoxEncoder.mm:
+        (WebCore::H264VideoToolboxEncoder::setHardwareEncoderForWebRTCAllowed):
+        (WebCore::H264VideoToolboxEncoder::hardwareEncoderForWebRTCAllowed):
+        (WebCore::isUsingSoftwareEncoder):
+        (WebCore::H264VideoToolboxEncoder::CreateCompressionSession):
+        (isStandardFrameSize): Added.
+        (isUsingSoftwareEncoder): Added.
+        * testing/Internals.cpp:
+        (WebCore::Internals::setH264HardwareEncoderAllowed):
+        * testing/Internals.h:
+        * testing/Internals.idl:
+
 2017-06-21  Youenn Fablet  <youenn@apple.com>
 
         [Fetch API] TypeError when called with body === {}
index 5e05c3e..2441ce3 100644 (file)
@@ -115,6 +115,8 @@ void CanvasCaptureMediaStreamTrack::Source::canvasResized(HTMLCanvasElement& can
 
     m_settings.setWidth(m_canvas->width());
     m_settings.setHeight(m_canvas->height());
+
+    settingsDidChange();
 }
 
 void CanvasCaptureMediaStreamTrack::Source::canvasChanged(HTMLCanvasElement& canvas, const FloatRect&)
index d95237a..de13a4b 100644 (file)
@@ -35,9 +35,11 @@ namespace WebCore {
 class H264VideoToolboxEncoder final : public webrtc::H264VideoToolboxEncoder {
 public:
     explicit H264VideoToolboxEncoder(const cricket::VideoCodec& codec) : webrtc::H264VideoToolboxEncoder(codec) { }
+    WEBCORE_EXPORT static void setHardwareEncoderForWebRTCAllowed(bool);
+    static bool hardwareEncoderForWebRTCAllowed();
 
 private:
-    int CreateCompressionSession(VTCompressionSessionRef&, VTCompressionOutputCallback, int32_t width, int32_t height) final;
+    int CreateCompressionSession(VTCompressionSessionRef&, VTCompressionOutputCallback, int32_t width, int32_t height, bool useHardwareAcceleratedVideoEncoder) final;
 };
 
 }
index bfd4b09..44f3b49 100644 (file)
 #include "config.h"
 #include "H264VideoToolBoxEncoder.h"
 
-#if USE(LIBWEBRTC) && PLATFORM(COCOA)
+#include "Logging.h"
+#include "SoftLinking.h"
+#include <wtf/RetainPtr.h>
 
-#if ENABLE(MAC_VIDEO_TOOLBOX) && USE(APPLE_INTERNAL_SDK) && __has_include(<WebKitAdditions/VideoToolBoxEncoderMac.mm>)
-#import <WebKitAdditions/VideoToolBoxEncoderMac.mm>
-#else
+SOFT_LINK_FRAMEWORK_OPTIONAL(VideoToolBox)
+SOFT_LINK_POINTER_OPTIONAL(VideoToolBox, kVTVideoEncoderSpecification_Usage, NSString *)
+
+#if USE(LIBWEBRTC) && PLATFORM(COCOA)
 
 namespace WebCore {
 
-int H264VideoToolboxEncoder::CreateCompressionSession(VTCompressionSessionRef& compressionSession, VTCompressionOutputCallback outputCallback, int32_t width, int32_t height)
+static bool isHardwareEncoderForWebRTCAllowed = true;
+
+void H264VideoToolboxEncoder::setHardwareEncoderForWebRTCAllowed(bool allowed)
+{
+    isHardwareEncoderForWebRTCAllowed = allowed;
+}
+
+bool H264VideoToolboxEncoder::hardwareEncoderForWebRTCAllowed()
 {
-    return webrtc::H264VideoToolboxEncoder::CreateCompressionSession(compressionSession, outputCallback, width, height);
+    return isHardwareEncoderForWebRTCAllowed;
 }
-    
+
+#if PLATFORM(MAC) && ENABLE(MAC_VIDEO_TOOLBOX)
+static inline bool isStandardFrameSize(int32_t width, int32_t height)
+{
+    // FIXME: Envision relaxing this rule, something like width and height dividable by 4 or 8 should be good enough.
+    if (width == 1280)
+        return height == 720;
+    if (width == 720)
+        return height == 1280;
+    if (width == 960)
+        return height == 540;
+    if (width == 540)
+        return height == 960;
+    if (width == 640)
+        return height == 480;
+    if (width == 480)
+        return height == 640;
+    if (width == 288)
+        return height == 352;
+    if (width == 352)
+        return height == 288;
+    if (width == 320)
+        return height == 240;
+    if (width == 240)
+        return height == 320;
+    return false;
 }
 #endif
 
+int H264VideoToolboxEncoder::CreateCompressionSession(VTCompressionSessionRef& compressionSession, VTCompressionOutputCallback outputCallback, int32_t width, int32_t height, bool useHardwareEncoder)
+{
+#if PLATFORM(MAC) && ENABLE(MAC_VIDEO_TOOLBOX)
+    // This code is deriving from libwebrtc h264_video_toolbox_encoder.h
+    // Set source image buffer attributes. These attributes will be present on
+    // buffers retrieved from the encoder's pixel buffer pool.
+    const size_t attributesSize = 3;
+    CFTypeRef keys[attributesSize] = {
+        kCVPixelBufferOpenGLCompatibilityKey,
+        kCVPixelBufferIOSurfacePropertiesKey,
+        kCVPixelBufferPixelFormatTypeKey
+    };
+    auto ioSurfaceValue = adoptCF(CFDictionaryCreate(kCFAllocatorDefault, nullptr, nullptr, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks));
+    int64_t nv12type = kCVPixelFormatType_420YpCbCr8BiPlanarFullRange;
+    auto pixelFormat = adoptCF(CFNumberCreate(kCFAllocatorDefault, kCFNumberLongType, &nv12type));
+    CFTypeRef values[attributesSize] = {kCFBooleanTrue, ioSurfaceValue.get(), pixelFormat.get()};
+    auto sourceAttributes = adoptCF(CFDictionaryCreate(kCFAllocatorDefault, keys, values, attributesSize, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks));
+
+    OSStatus status = -1;
+    if (useHardwareEncoder) {
+        NSDictionary* encoderSpecification = @{ (NSString*)kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder : @true };
+        status = VTCompressionSessionCreate(kCFAllocatorDefault, width, height, kCMVideoCodecType_H264,
+            (__bridge CFDictionaryRef)encoderSpecification, sourceAttributes.get(), nullptr, outputCallback, this, &compressionSession);
+    }
+
+    if (status == noErr && compressionSession) {
+        RELEASE_LOG(WebRTC, "H264VideoToolboxEncoder: Using H264 hardware encoder");
+        return noErr;
+    }
+
+    if (!isStandardFrameSize(width, height)) {
+        RELEASE_LOG(WebRTC, "Using H264 software encoder with non standard size is not supported");
+        DestroyCompressionSession();
+        return -1;
+    }
+
+    if (!getkVTVideoEncoderSpecification_Usage()) {
+        RELEASE_LOG(WebRTC, "H264VideoToolboxEncoder: Cannot create a H264 software encoder");
+        return -1;
+    }
+
+    RELEASE_LOG(WebRTC, "H264VideoToolboxEncoder: Using H264 software encoder");
+
+    NSDictionary* encoderSpecification = @{ (NSString*)kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder : @false, (NSString*)getkVTVideoEncoderSpecification_Usage() : @1 };
+    status = VTCompressionSessionCreate(kCFAllocatorDefault, width, height, kCMVideoCodecType_H264,
+        (__bridge CFDictionaryRef)encoderSpecification, sourceAttributes.get(), nullptr, outputCallback, this, &compressionSession);
+
+    return status;
+#else
+    UNUSED_PARAM(useHardwareEncoder);
+    return webrtc::H264VideoToolboxEncoder::CreateCompressionSession(compressionSession, outputCallback, width, height, hardwareEncoderForWebRTCAllowed() ? useHardwareEncoder : false);
+#endif
+}
+
+}
+
 #endif
index bd6ede6..051d35b 100644 (file)
 #include "MockMediaPlayerMediaSource.h"
 #endif
 
+#if USE(LIBWEBRTC) && PLATFORM(COCOA)
+#include "H264VideoToolboxEncoder.h"
+#endif
+
 #if PLATFORM(MAC)
 #include "DictionaryLookup.h"
 #endif
@@ -4039,6 +4043,17 @@ void Internals::setPageVisibility(bool isVisible)
     page.setActivityState(state);
 }
 
+#if ENABLE(WEB_RTC)
+void Internals::setH264HardwareEncoderAllowed(bool allowed)
+{
+#if PLATFORM(MAC)
+    H264VideoToolboxEncoder::setHardwareEncoderForWebRTCAllowed(allowed);
+#else
+    UNUSED_PARAM(allowed);
+#endif
+}
+#endif
+
 #if ENABLE(MEDIA_STREAM)
 
 void Internals::setCameraMediaStreamTrackOrientation(MediaStreamTrack& track, int orientation)
index 6f2e3da..3943678 100644 (file)
@@ -579,6 +579,10 @@ public:
 
     void setPageVisibility(bool isVisible);
 
+#if ENABLE(WEB_RTC)
+    void setH264HardwareEncoderAllowed(bool allowed);
+#endif
+
 #if ENABLE(MEDIA_STREAM)
     void setCameraMediaStreamTrackOrientation(MediaStreamTrack&, int orientation);
     ExceptionOr<void> setMediaDeviceState(const String& id, const String& property, bool value);
index 2bb2772..a0ff944 100644 (file)
@@ -537,6 +537,7 @@ enum EventThrottlingBehavior {
 
     void setPageVisibility(boolean isVisible);
 
+    [Conditional=WEB_RTC] void setH264HardwareEncoderAllowed(boolean allowed);
     [Conditional=WEB_RTC] void applyRotationForOutgoingVideoSources(RTCPeerConnection connection);
 
     [Conditional=MEDIA_STREAM] void setCameraMediaStreamTrackOrientation(MediaStreamTrack track, short orientation);