Add public page loading performance tests using web-page-replay
author rniwa@webkit.org <rniwa@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Fri, 1 Jun 2012 03:19:31 +0000 (03:19 +0000)
committer rniwa@webkit.org <rniwa@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Fri, 1 Jun 2012 03:19:31 +0000 (03:19 +0000)
https://bugs.webkit.org/show_bug.cgi?id=84008

Reviewed by Dirk Pranke.

PerformanceTests:

Add replay tests for google.com and youtube.com as examples.

* Replay: Added.
* Replay/www.google.com.replay: Added.
* Replay/www.youtube.com.replay: Added.

Tools:

Add a primitive implementation of replay performance tests. We use web-page-replay (http://code.google.com/p/web-page-replay/)
to cache page data locally. Each replay test is represented by a text file with a .replay extension containing a single URL.
So that bugs can be worked out without affecting the rest of the performance tests, replay tests are hidden behind the --replay flag.

Run "run-perf-tests --replay PerformanceTests/Replay" after changing the system network preferences to forward HTTP and HTTPS requests
to localhost:8080 and localhost:8443 respectively (i.e. configure the system as if HTTP proxies were running on ports 8080 and 8443),
excluding *.webkit.org, *.googlecode.com, *.sourceforge.net, pypi.python.org, and www.adambarth.com so that third-party Python dependencies can still be downloaded.
run-perf-tests then starts web-page-replay, which provides HTTP proxies on ports 8080 and 8443 to replay the cached pages.
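
For orientation, here is a condensed, illustrative sketch of the flow a replay test follows, pieced together from the
ReplayServer and ReplayPerfTest code added to perftest.py below. The helper name run_replay_test is hypothetical and not
part of the patch; "port" and "driver" are the usual webkitpy port and driver objects.

    from webkitpy.layout_tests.port.driver import DriverInput
    from webkitpy.performance_tests.perftest import ReplayServer

    def run_replay_test(port, driver, test_path, time_out_ms):
        # Condensed sketch; the patch splits this between prepare() and run_single().
        filesystem = port.host.filesystem
        url = filesystem.read_text_file(test_path).split('\n')[0]   # first line of the .replay file
        archive_path = filesystem.splitext(test_path)[0] + '.wpr'   # locally cached page data
        record = not filesystem.isfile(archive_path)                # record on the first run, replay afterwards
        server = ReplayServer(archive_path, record)                 # runs replay.py as a local HTTP(S) proxy
        try:
            if not server.wait_until_ready():
                return None
            # Load about:blank first, then the archived URL, as ReplayPerfTest.run_single does.
            driver.run_test(DriverInput('about:blank', time_out_ms, None, False))
            return driver.run_test(DriverInput(url, time_out_ms, None, True))
        finally:
            server.stop()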

* Scripts/webkitpy/layout_tests/port/driver.py:
(Driver.is_external_http_test): Added.
* Scripts/webkitpy/layout_tests/port/webkit.py:
(WebKitDriver._command_from_driver_input): Allow test names that start with http:// or https://.
* Scripts/webkitpy/performance_tests/perftest.py:
(PerfTest.__init__): Takes port.
(PerfTest.prepare): Added. Overridden by ReplayPerfTest.
(PerfTest):
(PerfTest.run): Calls run_single.
(PerfTest.run_single): Extracted from PageLoadingPerfTest.run.
(ChromiumStylePerfTest.__init__):
(PageLoadingPerfTest.__init__):
(PageLoadingPerfTest.run):
(ReplayServer): Added. Responsible for starting and stopping replay.py in web-page-replay.
(ReplayServer.__init__):
(ReplayServer.wait_until_ready): Waits until port 8080 is ready. I tried reading the piped output from web-page-replay
instead, but that caused a deadlock on some web pages, so we poll the port (a condensed sketch of this check appears after this annotation list).
(ReplayServer.stop):
(ReplayServer.__del__):
(ReplayPerfTest):
(ReplayPerfTest.__init__):
(ReplayPerfTest._start_replay_server):
(ReplayPerfTest.prepare): Creates test.wpr and test-expected.png to cache the page when a replay test is run for the first time.
Subsequent runs of the same test just use test.wpr.
(ReplayPerfTest.run_single):
(PerfTestFactory):
(PerfTestFactory.create_perf_test):
* Scripts/webkitpy/performance_tests/perftest_unittest.py:
(MainTest.test_parse_output):
(MainTest.test_parse_output_with_failing_line):
(TestPageLoadingPerfTest.test_run):
(TestPageLoadingPerfTest.test_run_with_bad_output):
(TestReplayPerfTest):
(TestReplayPerfTest.ReplayTestPort):
(TestReplayPerfTest.ReplayTestPort.__init__):
(TestReplayPerfTest.ReplayTestPort.__init__.ReplayTestDriver):
(TestReplayPerfTest.ReplayTestPort.__init__.ReplayTestDriver.run_test):
(TestReplayPerfTest.ReplayTestPort._driver_class):
(TestReplayPerfTest.MockReplayServer):
(TestReplayPerfTest.MockReplayServer.__init__):
(TestReplayPerfTest.MockReplayServer.stop):
(TestReplayPerfTest._add_file):
(TestReplayPerfTest._setup_test):
(TestReplayPerfTest.test_run_single):
(TestReplayPerfTest.test_run_single.run_test):
(TestReplayPerfTest.test_run_single_fails_without_webpagereplay):
(TestReplayPerfTest.test_prepare_fails_when_wait_until_ready_fails):
(TestReplayPerfTest.test_run_single_fails_when_output_has_error):
(TestReplayPerfTest.test_run_single_fails_when_output_has_error.run_test):
(TestReplayPerfTest.test_prepare):
(TestReplayPerfTest.test_prepare.run_test):
(TestReplayPerfTest.test_prepare_calls_run_single):
(TestReplayPerfTest.test_prepare_calls_run_single.run_single):
(TestPerfTestFactory.test_regular_test):
(TestPerfTestFactory.test_inspector_test):
(TestPerfTestFactory.test_page_loading_test):
* Scripts/webkitpy/performance_tests/perftestsrunner.py:
(PerfTestsRunner):
(PerfTestsRunner._parse_args): Added the --replay flag to enable replay tests.
(PerfTestsRunner._collect_tests): Collect .replay files when replay tests are enabled.
(PerfTestsRunner._collect_tests._is_test_file):
(PerfTestsRunner.run): Exit early if one of the calls to prepare() fails.
* Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py:
(create_runner):
(run_test):
(_tests_for_runner):
(test_run_test_set):
(test_run_test_set_kills_drt_per_run):
(test_run_test_pause_before_testing):
(test_run_test_set_for_parser_tests):
(test_run_test_set_with_json_output):
(test_run_test_set_with_json_source):
(test_run_test_set_with_multiple_repositories):
(test_run_with_upload_json):
(test_upload_json):
(test_upload_json.MockFileUploader.upload_single_text_file):
(_add_file):
(test_collect_tests):
(test_collect_tests_with_multile_files):
(test_collect_tests_with_multile_files.add_file):
(test_collect_tests_with_skipped_list):
(test_collect_tests_with_page_load_svg):
(test_collect_tests_should_ignore_replay_tests_by_default):
(test_collect_tests_with_replay_tests):
(test_parse_args):
* Scripts/webkitpy/thirdparty/__init__.py: Added the dependency on web-page-replay version 1.1.1.
(AutoinstallImportHook.find_module):
(AutoinstallImportHook._install_webpagereplay):
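
As noted above for ReplayServer.wait_until_ready, a minimal, self-contained sketch of the port-polling readiness check
(the patch hard-codes localhost:8080 and 10 attempts; the parameters here are illustrative):

    import socket
    import time

    def wait_until_ready(host='localhost', port=8080, attempts=10):
        # Poll the proxy port instead of reading web-page-replay's piped output,
        # which deadlocked on some pages.
        for _ in range(attempts):
            try:
                socket.create_connection((host, port), timeout=1).close()
                return True
            except socket.error:
                time.sleep(1)
        return False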

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@119188 268f45cc-cd09-0410-ab3c-d52691b4dbfc

PerformanceTests/ChangeLog
PerformanceTests/Replay/www.google.com.replay [new file with mode: 0644]
PerformanceTests/Replay/www.youtube.com.replay [new file with mode: 0644]
Tools/ChangeLog
Tools/Scripts/webkitpy/layout_tests/port/webkit.py
Tools/Scripts/webkitpy/performance_tests/perftest.py
Tools/Scripts/webkitpy/performance_tests/perftest_unittest.py
Tools/Scripts/webkitpy/performance_tests/perftestsrunner.py
Tools/Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py
Tools/Scripts/webkitpy/thirdparty/__init__.py

index 3fcdfb72e5cae5b87163b6d5c5593532549cfac5..1fcc56a369fac104c51f8a5fd4194e6e43c60719 100644 (file)
@@ -1,3 +1,16 @@
+2012-06-01  Ryosuke Niwa  <rniwa@webkit.org>
+
+        Add public page loading performance tests using web-page-replay
+        https://bugs.webkit.org/show_bug.cgi?id=84008
+
+        Reviewed by Dirk Pranke.
+
+        Add replay tests for google.com and youtube.com as examples.
+
+        * Replay: Added.
+        * Replay/www.google.com.replay: Added.
+        * Replay/www.youtube.com.replay: Added.
+
 2012-05-30  Kentaro Hara  <haraken@chromium.org>
 
         [perf-test] Add a benchmark for querySelector()
diff --git a/PerformanceTests/Replay/www.google.com.replay b/PerformanceTests/Replay/www.google.com.replay
new file mode 100644 (file)
index 0000000..a4f25b6
--- /dev/null
@@ -0,0 +1 @@
+http://web.archive.org/web/20110729013436/http://www.google.com/
diff --git a/PerformanceTests/Replay/www.youtube.com.replay b/PerformanceTests/Replay/www.youtube.com.replay
new file mode 100644 (file)
index 0000000..988ac98
--- /dev/null
@@ -0,0 +1 @@
+http://web.archive.org/web/20110101045027/http://www.youtube.com/
index 0962b9883f3e861e089c8732567c83503a57ffa4..63af81316f3111d49ed84e8d0539e6b723cd668b 100644 (file)
@@ -1,3 +1,108 @@
+2012-06-01  Ryosuke Niwa  <rniwa@webkit.org>
+
+        Add public page loading performance tests using web-page-replay
+        https://bugs.webkit.org/show_bug.cgi?id=84008
+
+        Reviewed by Dirk Pranke.
+
+        Add a primitive implementation of replay performance tests. We use web-page-replay (http://code.google.com/p/web-page-replay/)
+        to cache page data locally. Each replay test is represented by a text file with a .replay extension containing a single URL.
+        So that bugs can be worked out without affecting the rest of the performance tests, replay tests are hidden behind the --replay flag.
+
+        Run "run-perf-tests --replay PerformanceTests/Replay" after changing the system network preferences to forward HTTP and HTTPS requests
+        to localhost:8080 and localhost:8443 respectively (i.e. configure the system as if HTTP proxies were running on ports 8080 and 8443),
+        excluding *.webkit.org, *.googlecode.com, *.sourceforge.net, pypi.python.org, and www.adambarth.com so that third-party Python dependencies can still be downloaded.
+        run-perf-tests then starts web-page-replay, which provides HTTP proxies on ports 8080 and 8443 to replay the cached pages.
+
+        * Scripts/webkitpy/layout_tests/port/driver.py:
+        (Driver.is_external_http_test): Added.
+        * Scripts/webkitpy/layout_tests/port/webkit.py:
+        (WebKitDriver._command_from_driver_input): Allow test names that start with http:// or https://.
+        * Scripts/webkitpy/performance_tests/perftest.py:
+        (PerfTest.__init__): Takes port.
+        (PerfTest.prepare): Added. Overridden by ReplayPerfTest.
+        (PerfTest):
+        (PerfTest.run): Calls run_single.
+        (PerfTest.run_single): Extracted from PageLoadingPerfTest.run.
+        (ChromiumStylePerfTest.__init__):
+        (PageLoadingPerfTest.__init__):
+        (PageLoadingPerfTest.run):
+        (ReplayServer): Added. Responsible for starting and stopping replay.py in web-page-replay.
+        (ReplayServer.__init__):
+        (ReplayServer.wait_until_ready): Waits until port 8080 is ready. I tried reading the piped output from web-page-replay
+        instead, but that caused a deadlock on some web pages, so we poll the port.
+        (ReplayServer.stop):
+        (ReplayServer.__del__):
+        (ReplayPerfTest):
+        (ReplayPerfTest.__init__):
+        (ReplayPerfTest._start_replay_server):
+        (ReplayPerfTest.prepare): Creates test.wpr and test-expected.png to cache the page when a replay test is run for the first time.
+        Subsequent runs of the same test just use test.wpr.
+        (ReplayPerfTest.run_single):
+        (PerfTestFactory):
+        (PerfTestFactory.create_perf_test):
+        * Scripts/webkitpy/performance_tests/perftest_unittest.py:
+        (MainTest.test_parse_output):
+        (MainTest.test_parse_output_with_failing_line):
+        (TestPageLoadingPerfTest.test_run):
+        (TestPageLoadingPerfTest.test_run_with_bad_output):
+        (TestReplayPerfTest):
+        (TestReplayPerfTest.ReplayTestPort):
+        (TestReplayPerfTest.ReplayTestPort.__init__):
+        (TestReplayPerfTest.ReplayTestPort.__init__.ReplayTestDriver):
+        (TestReplayPerfTest.ReplayTestPort.__init__.ReplayTestDriver.run_test):
+        (TestReplayPerfTest.ReplayTestPort._driver_class):
+        (TestReplayPerfTest.MockReplayServer):
+        (TestReplayPerfTest.MockReplayServer.__init__):
+        (TestReplayPerfTest.MockReplayServer.stop):
+        (TestReplayPerfTest._add_file):
+        (TestReplayPerfTest._setup_test):
+        (TestReplayPerfTest.test_run_single):
+        (TestReplayPerfTest.test_run_single.run_test):
+        (TestReplayPerfTest.test_run_single_fails_without_webpagereplay):
+        (TestReplayPerfTest.test_prepare_fails_when_wait_until_ready_fails):
+        (TestReplayPerfTest.test_run_single_fails_when_output_has_error):
+        (TestReplayPerfTest.test_run_single_fails_when_output_has_error.run_test):
+        (TestReplayPerfTest.test_prepare):
+        (TestReplayPerfTest.test_prepare.run_test):
+        (TestReplayPerfTest.test_prepare_calls_run_single):
+        (TestReplayPerfTest.test_prepare_calls_run_single.run_single):
+        (TestPerfTestFactory.test_regular_test):
+        (TestPerfTestFactory.test_inspector_test):
+        (TestPerfTestFactory.test_page_loading_test):
+        * Scripts/webkitpy/performance_tests/perftestsrunner.py:
+        (PerfTestsRunner):
+        (PerfTestsRunner._parse_args): Added the --replay flag to enable replay tests.
+        (PerfTestsRunner._collect_tests): Collect .replay files when replay tests are enabled.
+        (PerfTestsRunner._collect_tests._is_test_file):
+        (PerfTestsRunner.run): Exit early if one of the calls to prepare() fails.
+        * Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py:
+        (create_runner):
+        (run_test):
+        (_tests_for_runner):
+        (test_run_test_set):
+        (test_run_test_set_kills_drt_per_run):
+        (test_run_test_pause_before_testing):
+        (test_run_test_set_for_parser_tests):
+        (test_run_test_set_with_json_output):
+        (test_run_test_set_with_json_source):
+        (test_run_test_set_with_multiple_repositories):
+        (test_run_with_upload_json):
+        (test_upload_json):
+        (test_upload_json.MockFileUploader.upload_single_text_file):
+        (_add_file):
+        (test_collect_tests):
+        (test_collect_tests_with_multile_files):
+        (test_collect_tests_with_multile_files.add_file):
+        (test_collect_tests_with_skipped_list):
+        (test_collect_tests_with_page_load_svg):
+        (test_collect_tests_should_ignore_replay_tests_by_default):
+        (test_collect_tests_with_replay_tests):
+        (test_parse_args):
+        * Scripts/webkitpy/thirdparty/__init__.py: Added the dependency on web-page-replay version 1.1.1.
+        (AutoinstallImportHook.find_module):
+        (AutoinstallImportHook._install_webpagereplay):
+
 2012-05-31  Yaron Friedman  <yfriedman@chromium.org>
 
         Support building the Android port of chromium with Ninja
index 31dbbcbca39dbf2f992a9d5ada0fac81b0493ec2..879c79abf2b72dd0358e1bc8731c25037e0d4c36 100644 (file)
@@ -534,7 +534,10 @@ class WebKitDriver(Driver):
         return self.has_crashed()
 
     def _command_from_driver_input(self, driver_input):
-        if self.is_http_test(driver_input.test_name):
+        # FIXME: performance tests pass in full URLs instead of test names.
+        if driver_input.test_name.startswith('http://') or driver_input.test_name.startswith('https://'):
+            command = driver_input.test_name
+        elif self.is_http_test(driver_input.test_name):
             command = self.test_to_uri(driver_input.test_name)
         else:
             command = self._port.abspath_for_test(driver_input.test_name)
index a81c4788089ad91025f6d6cc1496ffc6e8a09852..20d3d5838d4c8bf9e7da73bde4eb4aaaf8258f9d 100644 (file)
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 
+import errno
 import logging
 import math
 import re
+import os
+import signal
+import socket
+import subprocess
+import time
 
+# Import for auto-install
+import webkitpy.thirdparty.autoinstalled.webpagereplay.replay
+
+from webkitpy.layout_tests.controllers.test_result_writer import TestResultWriter
 from webkitpy.layout_tests.port.driver import DriverInput
+from webkitpy.layout_tests.port.driver import DriverOutput
 
 
 _log = logging.getLogger(__name__)
 
 
 class PerfTest(object):
-    def __init__(self, test_name, path_or_url):
+    def __init__(self, port, test_name, path_or_url):
+        self._port = port
         self._test_name = test_name
         self._path_or_url = path_or_url
 
@@ -49,12 +61,18 @@ class PerfTest(object):
     def path_or_url(self):
         return self._path_or_url
 
-    def run(self, driver, timeout_ms):
-        output = driver.run_test(DriverInput(self.path_or_url(), timeout_ms, None, False))
+    def prepare(self, time_out_ms):
+        return True
+
+    def run(self, driver, time_out_ms):
+        output = self.run_single(driver, self.path_or_url(), time_out_ms)
         if self.run_failed(output):
             return None
         return self.parse_output(output)
 
+    def run_single(self, driver, path_or_url, time_out_ms, should_run_pixel_test=False):
+        return driver.run_test(DriverInput(path_or_url, time_out_ms, image_hash=None, should_run_pixel_test=should_run_pixel_test))
+
     def run_failed(self, output):
         if output.text == None or output.error:
             pass
@@ -137,8 +155,8 @@ class PerfTest(object):
 class ChromiumStylePerfTest(PerfTest):
     _chromium_style_result_regex = re.compile(r'^RESULT\s+(?P<name>[^=]+)\s*=\s+(?P<value>\d+(\.\d+)?)\s*(?P<unit>\w+)$')
 
-    def __init__(self, test_name, path_or_url):
-        super(ChromiumStylePerfTest, self).__init__(test_name, path_or_url)
+    def __init__(self, port, test_name, path_or_url):
+        super(ChromiumStylePerfTest, self).__init__(port, test_name, path_or_url)
 
     def parse_output(self, output):
         test_failed = False
@@ -157,15 +175,15 @@ class ChromiumStylePerfTest(PerfTest):
 
 
 class PageLoadingPerfTest(PerfTest):
-    def __init__(self, test_name, path_or_url):
-        super(PageLoadingPerfTest, self).__init__(test_name, path_or_url)
+    def __init__(self, port, test_name, path_or_url):
+        super(PageLoadingPerfTest, self).__init__(port, test_name, path_or_url)
 
-    def run(self, driver, timeout_ms):
+    def run(self, driver, time_out_ms):
         test_times = []
 
         for i in range(0, 20):
-            output = driver.run_test(DriverInput(self.path_or_url(), timeout_ms, None, False))
-            if self.run_failed(output):
+            output = self.run_single(driver, self.path_or_url(), time_out_ms)
+            if not output or self.run_failed(output):
                 return None
             if i == 0:
                 continue
@@ -194,16 +212,130 @@ class PageLoadingPerfTest(PerfTest):
         return {self.test_name(): results}
 
 
+class ReplayServer(object):
+    def __init__(self, archive, record):
+        self._process = None
+
+        # FIXME: Should error if local proxy isn't set to forward requests to localhost:8080 and localhost:8413
+
+        replay_path = webkitpy.thirdparty.autoinstalled.webpagereplay.replay.__file__
+        args = ['python', replay_path, '--no-dns_forwarding', '--port', '8080', '--ssl_port', '8413', '--use_closest_match', '--log_level', 'warning']
+        if record:
+            args.append('--record')
+        args.append(archive)
+
+        self._process = subprocess.Popen(args)
+
+    def wait_until_ready(self):
+        for i in range(0, 10):
+            try:
+                connection = socket.create_connection(('localhost', '8080'), timeout=1)
+                connection.close()
+                return True
+            except socket.error:
+                time.sleep(1)
+                continue
+        return False
+
+    def stop(self):
+        if self._process:
+            self._process.send_signal(signal.SIGINT)
+            self._process.wait()
+        self._process = None
+
+    def __del__(self):
+        self.stop()
+
+
+class ReplayPerfTest(PageLoadingPerfTest):
+    def __init__(self, port, test_name, path_or_url):
+        super(ReplayPerfTest, self).__init__(port, test_name, path_or_url)
+
+    def _start_replay_server(self, archive, record):
+        try:
+            return ReplayServer(archive, record)
+        except OSError as error:
+            if error.errno == errno.ENOENT:
+                _log.error("Replay tests require web-page-replay.")
+            else:
+                raise error
+
+    def prepare(self, time_out_ms):
+        filesystem = self._port.host.filesystem
+        path_without_ext = filesystem.splitext(self.path_or_url())[0]
+
+        self._archive_path = filesystem.join(path_without_ext + '.wpr')
+        self._expected_image_path = filesystem.join(path_without_ext + '-expected.png')
+        self._url = filesystem.read_text_file(self.path_or_url()).split('\n')[0]
+
+        if filesystem.isfile(self._archive_path) and filesystem.isfile(self._expected_image_path):
+            _log.info("Replay ready for %s" % self._archive_path)
+            return True
+
+        _log.info("Preparing replay for %s" % self.test_name())
+
+        driver = self._port.create_driver(worker_number=1, no_timeout=True)
+        try:
+            output = self.run_single(driver, self._url, time_out_ms, record=True)
+        finally:
+            driver.stop()
+
+        if not output or not filesystem.isfile(self._archive_path):
+            _log.error("Failed to prepare a replay for %s" % self.test_name())
+            return False
+
+        _log.info("Prepared replay for %s" % self.test_name())
+
+        return True
+
+    def run_single(self, driver, url, time_out_ms, record=False):
+        server = self._start_replay_server(self._archive_path, record)
+        if not server:
+            _log.error("Web page replay didn't start.")
+            return None
+
+        try:
+            if not server.wait_until_ready():
+                _log.error("Web page replay didn't start.")
+                return None
+
+            super(ReplayPerfTest, self).run_single(driver, "about:blank", time_out_ms)
+            _log.debug("Loading the page")
+
+            output = super(ReplayPerfTest, self).run_single(driver, self._url, time_out_ms, should_run_pixel_test=True)
+            if self.run_failed(output):
+                return None
+
+            if not output.image:
+                _log.error("Loading the page did not generate image results")
+                _log.error(output.text)
+                return None
+
+            filesystem = self._port.host.filesystem
+            dirname = filesystem.dirname(url)
+            filename = filesystem.split(url)[1]
+            writer = TestResultWriter(filesystem, self._port, dirname, filename)
+            if record:
+                writer.write_image_files(actual_image=None, expected_image=output.image)
+            else:
+                writer.write_image_files(actual_image=output.image, expected_image=None)
+
+            return output
+        finally:
+            server.stop()
+
+
 class PerfTestFactory(object):
 
     _pattern_map = [
-        (re.compile('^inspector/'), ChromiumStylePerfTest),
-        (re.compile('^PageLoad/'), PageLoadingPerfTest),
+        (re.compile(r'^inspector/'), ChromiumStylePerfTest),
+        (re.compile(r'^PageLoad/'), PageLoadingPerfTest),
+        (re.compile(r'(.+)\.replay$'), ReplayPerfTest),
     ]
 
     @classmethod
-    def create_perf_test(cls, test_name, path):
+    def create_perf_test(cls, port, test_name, path):
         for (pattern, test_class) in cls._pattern_map:
             if pattern.match(test_name):
-                return test_class(test_name, path)
-        return PerfTest(test_name, path)
+                return test_class(port, test_name, path)
+        return PerfTest(port, test_name, path)
index 21efd2c3ccf55598a847b770102f5087da9f2951..078f08a466896f52b4a0bea87facd11633a7b9e4 100755 (executable)
@@ -31,12 +31,16 @@ import StringIO
 import math
 import unittest
 
+from webkitpy.common.host_mock import MockHost
 from webkitpy.common.system.outputcapture import OutputCapture
 from webkitpy.layout_tests.port.driver import DriverOutput
+from webkitpy.layout_tests.port.test import TestDriver
+from webkitpy.layout_tests.port.test import TestPort
 from webkitpy.performance_tests.perftest import ChromiumStylePerfTest
 from webkitpy.performance_tests.perftest import PageLoadingPerfTest
 from webkitpy.performance_tests.perftest import PerfTest
 from webkitpy.performance_tests.perftest import PerfTestFactory
+from webkitpy.performance_tests.perftest import ReplayPerfTest
 
 
 class MainTest(unittest.TestCase):
@@ -53,7 +57,7 @@ class MainTest(unittest.TestCase):
         output_capture = OutputCapture()
         output_capture.capture_output()
         try:
-            test = PerfTest('some-test', '/path/some-dir/some-test')
+            test = PerfTest(None, 'some-test', '/path/some-dir/some-test')
             self.assertEqual(test.parse_output(output),
                 {'some-test': {'avg': 1100.0, 'median': 1101.0, 'min': 1080.0, 'max': 1120.0, 'stdev': 11.0, 'unit': 'ms'}})
         finally:
@@ -77,7 +81,7 @@ class MainTest(unittest.TestCase):
         output_capture = OutputCapture()
         output_capture.capture_output()
         try:
-            test = PerfTest('some-test', '/path/some-dir/some-test')
+            test = PerfTest(None, 'some-test', '/path/some-dir/some-test')
             self.assertEqual(test.parse_output(output), None)
         finally:
             actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
@@ -101,7 +105,7 @@ class TestPageLoadingPerfTest(unittest.TestCase):
                 return DriverOutput('some output', image=None, image_hash=None, audio=None, test_time=self._values[self._index - 1])
 
     def test_run(self):
-        test = PageLoadingPerfTest('some-test', '/path/some-dir/some-test')
+        test = PageLoadingPerfTest(None, 'some-test', '/path/some-dir/some-test')
         driver = TestPageLoadingPerfTest.MockDriver([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20])
         output_capture = OutputCapture()
         output_capture.capture_output()
@@ -118,7 +122,7 @@ class TestPageLoadingPerfTest(unittest.TestCase):
         output_capture = OutputCapture()
         output_capture.capture_output()
         try:
-            test = PageLoadingPerfTest('some-test', '/path/some-dir/some-test')
+            test = PageLoadingPerfTest(None, 'some-test', '/path/some-dir/some-test')
             driver = TestPageLoadingPerfTest.MockDriver([1, 2, 3, 4, 5, 6, 7, 'some error', 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20])
             self.assertEqual(test.run(driver, None), None)
         finally:
@@ -128,17 +132,192 @@ class TestPageLoadingPerfTest(unittest.TestCase):
         self.assertEqual(actual_logs, 'error: some-test\nsome error\n')
 
 
+class TestReplayPerfTest(unittest.TestCase):
+
+    class ReplayTestPort(TestPort):
+        def __init__(self, custom_run_test=None):
+
+            class ReplayTestDriver(TestDriver):
+                def run_test(self, text_input):
+                    return custom_run_test(text_input) if custom_run_test else None
+
+            self._custom_driver_class = ReplayTestDriver
+            super(self.__class__, self).__init__(host=MockHost())
+
+        def _driver_class(self):
+            return self._custom_driver_class
+
+    class MockReplayServer(object):
+        def __init__(self, wait_until_ready=True):
+            self.wait_until_ready = lambda: wait_until_ready
+
+        def stop(self):
+            pass
+
+    def _add_file(self, port, dirname, filename, content=True):
+        port.host.filesystem.maybe_make_directory(dirname)
+        port.host.filesystem.files[port.host.filesystem.join(dirname, filename)] = content
+
+    def _setup_test(self, run_test=None):
+        test_port = self.ReplayTestPort(run_test)
+        self._add_file(test_port, '/path/some-dir', 'some-test.replay', 'http://some-test/')
+        test = ReplayPerfTest(test_port, 'some-test.replay', '/path/some-dir/some-test.replay')
+        test._start_replay_server = lambda archive, record: self.__class__.MockReplayServer()
+        return test, test_port
+
+    def test_run_single(self):
+        output_capture = OutputCapture()
+        output_capture.capture_output()
+
+        loaded_pages = []
+
+        def run_test(test_input):
+            if test_input.test_name != "about:blank":
+                self.assertEqual(test_input.test_name, 'http://some-test/')
+            loaded_pages.append(test_input)
+            self._add_file(port, '/path/some-dir', 'some-test.wpr', 'wpr content')
+            return DriverOutput('actual text', 'actual image', 'actual checksum',
+                audio=None, crash=False, timeout=False, error=False)
+
+        test, port = self._setup_test(run_test)
+        test._archive_path = '/path/some-dir/some-test.wpr'
+        test._url = 'http://some-test/'
+
+        try:
+            driver = port.create_driver(worker_number=1, no_timeout=True)
+            self.assertTrue(test.run_single(driver, '/path/some-dir/some-test.replay', time_out_ms=100))
+        finally:
+            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
+
+        self.assertEqual(len(loaded_pages), 2)
+        self.assertEqual(loaded_pages[0].test_name, 'about:blank')
+        self.assertEqual(loaded_pages[1].test_name, 'http://some-test/')
+        self.assertEqual(actual_stdout, '')
+        self.assertEqual(actual_stderr, '')
+        self.assertEqual(actual_logs, '')
+
+    def test_run_single_fails_without_webpagereplay(self):
+        output_capture = OutputCapture()
+        output_capture.capture_output()
+
+        test, port = self._setup_test()
+        test._start_replay_server = lambda archive, record: None
+        test._archive_path = '/path/some-dir.wpr'
+        test._url = 'http://some-test/'
+
+        try:
+            driver = port.create_driver(worker_number=1, no_timeout=True)
+            self.assertEqual(test.run_single(driver, '/path/some-dir/some-test.replay', time_out_ms=100), None)
+        finally:
+            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
+        self.assertEqual(actual_stdout, '')
+        self.assertEqual(actual_stderr, '')
+        self.assertEqual(actual_logs, "Web page replay didn't start.\n")
+
+    def test_prepare_fails_when_wait_until_ready_fails(self):
+        output_capture = OutputCapture()
+        output_capture.capture_output()
+
+        test, port = self._setup_test()
+        test._start_replay_server = lambda archive, record: self.__class__.MockReplayServer(wait_until_ready=False)
+        test._archive_path = '/path/some-dir.wpr'
+        test._url = 'http://some-test/'
+
+        try:
+            driver = port.create_driver(worker_number=1, no_timeout=True)
+            self.assertEqual(test.run_single(driver, '/path/some-dir/some-test.replay', time_out_ms=100), None)
+        finally:
+            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
+
+        self.assertEqual(actual_stdout, '')
+        self.assertEqual(actual_stderr, '')
+        self.assertEqual(actual_logs, "Web page replay didn't start.\n")
+
+    def test_run_single_fails_when_output_has_error(self):
+        output_capture = OutputCapture()
+        output_capture.capture_output()
+
+        loaded_pages = []
+
+        def run_test(test_input):
+            loaded_pages.append(test_input)
+            self._add_file(port, '/path/some-dir', 'some-test.wpr', 'wpr content')
+            return DriverOutput('actual text', 'actual image', 'actual checksum',
+                audio=None, crash=False, timeout=False, error='some error')
+
+        test, port = self._setup_test(run_test)
+        test._archive_path = '/path/some-dir.wpr'
+        test._url = 'http://some-test/'
+
+        try:
+            driver = port.create_driver(worker_number=1, no_timeout=True)
+            self.assertEqual(test.run_single(driver, '/path/some-dir/some-test.replay', time_out_ms=100), None)
+        finally:
+            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
+
+        self.assertEqual(len(loaded_pages), 2)
+        self.assertEqual(loaded_pages[0].test_name, 'about:blank')
+        self.assertEqual(loaded_pages[1].test_name, 'http://some-test/')
+        self.assertEqual(actual_stdout, '')
+        self.assertEqual(actual_stderr, '')
+        self.assertEqual(actual_logs, 'error: some-test.replay\nsome error\n')
+
+    def test_prepare(self):
+        output_capture = OutputCapture()
+        output_capture.capture_output()
+
+        def run_test(test_input):
+            self._add_file(port, '/path/some-dir', 'some-test.wpr', 'wpr content')
+            return DriverOutput('actual text', 'actual image', 'actual checksum',
+                audio=None, crash=False, timeout=False, error=False)
+
+        test, port = self._setup_test(run_test)
+
+        try:
+            self.assertEqual(test.prepare(time_out_ms=100), True)
+        finally:
+            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
+
+        self.assertEqual(actual_stdout, '')
+        self.assertEqual(actual_stderr, '')
+        self.assertEqual(actual_logs, 'Preparing replay for some-test.replay\nPrepared replay for some-test.replay\n')
+
+    def test_prepare_calls_run_single(self):
+        output_capture = OutputCapture()
+        output_capture.capture_output()
+        called = [False]
+
+        def run_single(driver, url, time_out_ms, record):
+            self.assertTrue(record)
+            self.assertEqual(url, 'http://some-test/')
+            called[0] = True
+            return False
+
+        test, port = self._setup_test()
+        test.run_single = run_single
+
+        try:
+            self.assertEqual(test.prepare(time_out_ms=100), False)
+        finally:
+            actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
+        self.assertTrue(called[0])
+        self.assertEqual(test._archive_path, '/path/some-dir/some-test.wpr')
+        self.assertEqual(test._url, 'http://some-test/')
+        self.assertEqual(actual_stdout, '')
+        self.assertEqual(actual_stderr, '')
+        self.assertEqual(actual_logs, "Preparing replay for some-test.replay\nFailed to prepare a replay for some-test.replay\n")
+
 class TestPerfTestFactory(unittest.TestCase):
     def test_regular_test(self):
-        test = PerfTestFactory.create_perf_test('some-dir/some-test', '/path/some-dir/some-test')
+        test = PerfTestFactory.create_perf_test(None, 'some-dir/some-test', '/path/some-dir/some-test')
         self.assertEqual(test.__class__, PerfTest)
 
     def test_inspector_test(self):
-        test = PerfTestFactory.create_perf_test('inspector/some-test', '/path/inspector/some-test')
+        test = PerfTestFactory.create_perf_test(None, 'inspector/some-test', '/path/inspector/some-test')
         self.assertEqual(test.__class__, ChromiumStylePerfTest)
 
     def test_page_loading_test(self):
-        test = PerfTestFactory.create_perf_test('PageLoad/some-test', '/path/PageLoad/some-test')
+        test = PerfTestFactory.create_perf_test(None, 'PageLoad/some-test', '/path/PageLoad/some-test')
         self.assertEqual(test.__class__, PageLoadingPerfTest)
 
 
index b4c29490a4503b4ff60cccea58b4a8953af54456..9a3757128d0ce41d1c178a5ce7615b9a0a1714aa 100644 (file)
@@ -41,6 +41,7 @@ from webkitpy.common.host import Host
 from webkitpy.common.net.file_uploader import FileUploader
 from webkitpy.layout_tests.views import printing
 from webkitpy.performance_tests.perftest import PerfTestFactory
+from webkitpy.performance_tests.perftest import ReplayPerfTest
 
 
 _log = logging.getLogger(__name__)
@@ -51,6 +52,7 @@ class PerfTestsRunner(object):
     _EXIT_CODE_BAD_BUILD = -1
     _EXIT_CODE_BAD_JSON = -2
     _EXIT_CODE_FAILED_UPLOADING = -3
+    _EXIT_CODE_BAD_PREPARATION = -4
 
     def __init__(self, args=None, port=None):
         self._options, self._args = PerfTestsRunner._parse_args(args)
@@ -90,21 +92,27 @@ class PerfTestsRunner(object):
             optparse.make_option("--pause-before-testing", dest="pause_before_testing", action="store_true", default=False,
                 help="Pause before running the tests to let user attach a performance monitor."),
             optparse.make_option("--output-json-path",
-                help="Filename of the JSON file that summaries the results"),
+                help="Filename of the JSON file that summaries the results."),
             optparse.make_option("--source-json-path",
-                help="Path to a JSON file to be merged into the JSON file when --output-json-path is present"),
+                help="Path to a JSON file to be merged into the JSON file when --output-json-path is present."),
             optparse.make_option("--test-results-server",
-                help="Upload the generated JSON file to the specified server when --output-json-path is present"),
+                help="Upload the generated JSON file to the specified server when --output-json-path is present."),
             optparse.make_option("--webkit-test-runner", "-2", action="store_true",
                 help="Use WebKitTestRunner rather than DumpRenderTree."),
+            optparse.make_option("--replay", dest="replay", action="store_true", default=False,
+                help="Run replay tests."),
             ]
         return optparse.OptionParser(option_list=(perf_option_list)).parse_args(args)
 
     def _collect_tests(self):
         """Return the list of tests found."""
 
+        test_extensions = ['.html', '.svg']
+        if self._options.replay:
+            test_extensions.append('.replay')
+
         def _is_test_file(filesystem, dirname, filename):
-            return filesystem.splitext(filename)[1] in ['.html', '.svg']
+            return filesystem.splitext(filename)[1] in test_extensions
 
         filesystem = self._host.filesystem
 
@@ -122,7 +130,8 @@ class PerfTestsRunner(object):
             relative_path = self._port.relative_perf_test_filename(path).replace('\\', '/')
             if self._port.skips_perf_test(relative_path):
                 continue
-            tests.append(PerfTestFactory.create_perf_test(relative_path, path))
+            test = PerfTestFactory.create_perf_test(self._port, relative_path, path)
+            tests.append(test)
 
         return tests
 
@@ -131,10 +140,13 @@ class PerfTestsRunner(object):
             _log.error("Build not up to date for %s" % self._port._path_to_driver())
             return self._EXIT_CODE_BAD_BUILD
 
-        # We wrap any parts of the run that are slow or likely to raise exceptions
-        # in a try/finally to ensure that we clean up the logging configuration.
-        unexpected = -1
         tests = self._collect_tests()
+        _log.info("Running %d tests" % len(tests))
+
+        for test in tests:
+            if not test.prepare(self._options.time_out_ms):
+                return self._EXIT_CODE_BAD_PREPARATION
+
         unexpected = self._run_tests_set(sorted(list(tests), key=lambda test: test.test_name()), self._port)
 
         options = self._options
index be925c953763b7389b984c7ca8ffbfc8cefaeaaa..8e1eb57ff768faa4ee11f9c4f128afb97f83cf2f 100755 (executable)
@@ -120,12 +120,12 @@ max 1120
         runner._host.filesystem.maybe_make_directory(runner._base_path, 'inspector')
         runner._host.filesystem.maybe_make_directory(runner._base_path, 'Bindings')
         runner._host.filesystem.maybe_make_directory(runner._base_path, 'Parser')
-        return runner
+        return runner, test_port
 
     def run_test(self, test_name):
-        runner = self.create_runner()
+        runner, port = self.create_runner()
         driver = MainTest.TestDriver()
-        return runner._run_single_test(ChromiumStylePerfTest(test_name, runner._host.filesystem.join('some-dir', test_name)), driver)
+        return runner._run_single_test(ChromiumStylePerfTest(port, test_name, runner._host.filesystem.join('some-dir', test_name)), driver)
 
     def test_run_passing_test(self):
         self.assertTrue(self.run_test('pass.html'))
@@ -152,19 +152,19 @@ max 1120
             path = filesystem.join(runner._base_path, test)
             dirname = filesystem.dirname(path)
             if test.startswith('inspector/'):
-                tests.append(ChromiumStylePerfTest(test, path))
+                tests.append(ChromiumStylePerfTest(runner._port, test, path))
             else:
-                tests.append(PerfTest(test, path))
+                tests.append(PerfTest(runner._port, test, path))
         return tests
 
     def test_run_test_set(self):
-        runner = self.create_runner()
+        runner, port = self.create_runner()
         tests = self._tests_for_runner(runner, ['inspector/pass.html', 'inspector/silent.html', 'inspector/failed.html',
             'inspector/tonguey.html', 'inspector/timeout.html', 'inspector/crash.html'])
         output = OutputCapture()
         output.capture_output()
         try:
-            unexpected_result_count = runner._run_tests_set(tests, runner._port)
+            unexpected_result_count = runner._run_tests_set(tests, port)
         finally:
             stdout, stderr, log = output.restore_output()
         self.assertEqual(unexpected_result_count, len(tests) - 1)
@@ -178,11 +178,11 @@ max 1120
             def stop(self):
                 TestDriverWithStopCount.stop_count += 1
 
-        runner = self.create_runner(driver_class=TestDriverWithStopCount)
+        runner, port = self.create_runner(driver_class=TestDriverWithStopCount)
 
         tests = self._tests_for_runner(runner, ['inspector/pass.html', 'inspector/silent.html', 'inspector/failed.html',
             'inspector/tonguey.html', 'inspector/timeout.html', 'inspector/crash.html'])
-        unexpected_result_count = runner._run_tests_set(tests, runner._port)
+        unexpected_result_count = runner._run_tests_set(tests, port)
 
         self.assertEqual(TestDriverWithStopCount.stop_count, 6)
 
@@ -193,13 +193,13 @@ max 1120
             def start(self):
                 TestDriverWithStartCount.start_count += 1
 
-        runner = self.create_runner(args=["--pause-before-testing"], driver_class=TestDriverWithStartCount)
+        runner, port = self.create_runner(args=["--pause-before-testing"], driver_class=TestDriverWithStartCount)
         tests = self._tests_for_runner(runner, ['inspector/pass.html'])
 
         output = OutputCapture()
         output.capture_output()
         try:
-            unexpected_result_count = runner._run_tests_set(tests, runner._port)
+            unexpected_result_count = runner._run_tests_set(tests, port)
             self.assertEqual(TestDriverWithStartCount.start_count, 1)
         finally:
             stdout, stderr, log = output.restore_output()
@@ -207,12 +207,12 @@ max 1120
         self.assertEqual(log, "Running inspector/pass.html (1 of 1)\nRESULT group_name: test_name= 42 ms\n\n")
 
     def test_run_test_set_for_parser_tests(self):
-        runner = self.create_runner()
+        runner, port = self.create_runner()
         tests = self._tests_for_runner(runner, ['Bindings/event-target-wrapper.html', 'Parser/some-parser.html'])
         output = OutputCapture()
         output.capture_output()
         try:
-            unexpected_result_count = runner._run_tests_set(tests, runner._port)
+            unexpected_result_count = runner._run_tests_set(tests, port)
         finally:
             stdout, stderr, log = output.restore_output()
         self.assertEqual(unexpected_result_count, 0)
@@ -226,9 +226,9 @@ max 1120
         '', '']))
 
     def test_run_test_set_with_json_output(self):
-        runner = self.create_runner(args=['--output-json-path=/mock-checkout/output.json'])
-        runner._host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
-        runner._host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
+        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json'])
+        port.host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
+        port.host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
         runner._timestamp = 123456789
         output_capture = OutputCapture()
         output_capture.capture_output()
@@ -238,7 +238,8 @@ max 1120
             stdout, stderr, logs = output_capture.restore_output()
 
         self.assertEqual(logs,
-            '\n'.join(['Running Bindings/event-target-wrapper.html (1 of 2)',
+            '\n'.join(['Running 2 tests',
+                       'Running Bindings/event-target-wrapper.html (1 of 2)',
                        'RESULT Bindings: event-target-wrapper= 1489.05 ms',
                        'median= 1487.0 ms, stdev= 14.46 ms, min= 1471.0 ms, max= 1510.0 ms',
                        '',
@@ -246,17 +247,17 @@ max 1120
                        'RESULT group_name: test_name= 42 ms',
                        '', '']))
 
-        self.assertEqual(json.loads(runner._host.filesystem.files['/mock-checkout/output.json']), {
+        self.assertEqual(json.loads(port.host.filesystem.files['/mock-checkout/output.json']), {
             "timestamp": 123456789, "results":
             {"Bindings/event-target-wrapper": {"max": 1510, "avg": 1489.05, "median": 1487, "min": 1471, "stdev": 14.46, "unit": "ms"},
             "inspector/pass.html:group_name:test_name": 42},
             "webkit-revision": 5678})
 
     def test_run_test_set_with_json_source(self):
-        runner = self.create_runner(args=['--output-json-path=/mock-checkout/output.json', '--source-json-path=/mock-checkout/source.json'])
-        runner._host.filesystem.files['/mock-checkout/source.json'] = '{"key": "value"}'
-        runner._host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
-        runner._host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
+        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json', '--source-json-path=/mock-checkout/source.json'])
+        port.host.filesystem.files['/mock-checkout/source.json'] = '{"key": "value"}'
+        port.host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
+        port.host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
         runner._timestamp = 123456789
         output_capture = OutputCapture()
         output_capture.capture_output()
@@ -265,7 +266,8 @@ max 1120
         finally:
             stdout, stderr, logs = output_capture.restore_output()
 
-        self.assertEqual(logs, '\n'.join(['Running Bindings/event-target-wrapper.html (1 of 2)',
+        self.assertEqual(logs, '\n'.join(['Running 2 tests',
+            'Running Bindings/event-target-wrapper.html (1 of 2)',
             'RESULT Bindings: event-target-wrapper= 1489.05 ms',
             'median= 1487.0 ms, stdev= 14.46 ms, min= 1471.0 ms, max= 1510.0 ms',
             '',
@@ -273,7 +275,7 @@ max 1120
             'RESULT group_name: test_name= 42 ms',
             '', '']))
 
-        self.assertEqual(json.loads(runner._host.filesystem.files['/mock-checkout/output.json']), {
+        self.assertEqual(json.loads(port.host.filesystem.files['/mock-checkout/output.json']), {
             "timestamp": 123456789, "results":
             {"Bindings/event-target-wrapper": {"max": 1510, "avg": 1489.05, "median": 1487, "min": 1471, "stdev": 14.46, "unit": "ms"},
             "inspector/pass.html:group_name:test_name": 42},
@@ -281,16 +283,16 @@ max 1120
             "key": "value"})
 
     def test_run_test_set_with_multiple_repositories(self):
-        runner = self.create_runner(args=['--output-json-path=/mock-checkout/output.json'])
-        runner._host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
+        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json'])
+        port.host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
         runner._timestamp = 123456789
-        runner._port.repository_paths = lambda: [('webkit', '/mock-checkout'), ('some', '/mock-checkout/some')]
+        port.repository_paths = lambda: [('webkit', '/mock-checkout'), ('some', '/mock-checkout/some')]
         self.assertEqual(runner.run(), 0)
-        self.assertEqual(json.loads(runner._host.filesystem.files['/mock-checkout/output.json']), {
+        self.assertEqual(json.loads(port.host.filesystem.files['/mock-checkout/output.json']), {
             "timestamp": 123456789, "results": {"inspector/pass.html:group_name:test_name": 42.0}, "webkit-revision": 5678, "some-revision": 5678})
 
     def test_run_with_upload_json(self):
-        runner = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
+        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
             '--test-results-server', 'some.host', '--platform', 'platform1', '--builder-name', 'builder1', '--build-number', '123'])
         upload_json_is_called = [False]
         upload_json_returns_true = True
@@ -302,26 +304,26 @@ max 1120
             return upload_json_returns_true
 
         runner._upload_json = mock_upload_json
-        runner._host.filesystem.files['/mock-checkout/source.json'] = '{"key": "value"}'
-        runner._host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
-        runner._host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
+        port.host.filesystem.files['/mock-checkout/source.json'] = '{"key": "value"}'
+        port.host.filesystem.files[runner._base_path + '/inspector/pass.html'] = True
+        port.host.filesystem.files[runner._base_path + '/Bindings/event-target-wrapper.html'] = True
         runner._timestamp = 123456789
         self.assertEqual(runner.run(), 0)
         self.assertEqual(upload_json_is_called[0], True)
-        generated_json = json.loads(runner._host.filesystem.files['/mock-checkout/output.json'])
+        generated_json = json.loads(port.host.filesystem.files['/mock-checkout/output.json'])
         self.assertEqual(generated_json['platform'], 'platform1')
         self.assertEqual(generated_json['builder-name'], 'builder1')
         self.assertEqual(generated_json['build-number'], 123)
         upload_json_returns_true = False
 
-        runner = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
+        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
             '--test-results-server', 'some.host', '--platform', 'platform1', '--builder-name', 'builder1', '--build-number', '123'])
         runner._upload_json = mock_upload_json
         self.assertEqual(runner.run(), -3)
 
     def test_upload_json(self):
-        runner = self.create_runner()
-        runner._host.filesystem.files['/mock-checkout/some.json'] = 'some content'
+        runner, port = self.create_runner()
+        port.host.filesystem.files['/mock-checkout/some.json'] = 'some content'
 
         called = []
         upload_single_text_file_throws = False
@@ -334,7 +336,7 @@ max 1120
                 called.append('FileUploader')
 
             def upload_single_text_file(mock, filesystem, content_type, filename):
-                self.assertEqual(filesystem, runner._host.filesystem)
+                self.assertEqual(filesystem, port.host.filesystem)
                 self.assertEqual(content_type, 'application/json')
                 self.assertEqual(filename, 'some.json')
                 called.append('upload_single_text_file')
@@ -358,59 +360,64 @@ max 1120
         runner._upload_json('some.host', 'some.json', MockFileUploader)
         self.assertEqual(called, ['FileUploader', 'upload_single_text_file'])
 
+    def _add_file(self, runner, dirname, filename, content=True):
+        dirname = runner._host.filesystem.join(runner._base_path, dirname) if dirname else runner._base_path
+        runner._host.filesystem.maybe_make_directory(dirname)
+        runner._host.filesystem.files[runner._host.filesystem.join(dirname, filename)] = content
+
     def test_collect_tests(self):
-        runner = self.create_runner()
-        filename = runner._host.filesystem.join(runner._base_path, 'inspector', 'a_file.html')
-        runner._host.filesystem.files[filename] = 'a content'
+        runner, port = self.create_runner()
+        self._add_file(runner, 'inspector', 'a_file.html', 'a content')
         tests = runner._collect_tests()
         self.assertEqual(len(tests), 1)
 
     def _collect_tests_and_sort_test_name(self, runner):
         return sorted([test.test_name() for test in runner._collect_tests()])
 
-    def test_collect_tests(self):
-        runner = self.create_runner(args=['PerformanceTests/test1.html', 'test2.html'])
+    def test_collect_tests_with_multile_files(self):
+        runner, port = self.create_runner(args=['PerformanceTests/test1.html', 'test2.html'])
 
         def add_file(filename):
-            runner._host.filesystem.files[runner._host.filesystem.join(runner._base_path, filename)] = 'some content'
+            port.host.filesystem.files[runner._host.filesystem.join(runner._base_path, filename)] = 'some content'
 
         add_file('test1.html')
         add_file('test2.html')
         add_file('test3.html')
-        runner._host.filesystem.chdir(runner._port.perf_tests_dir()[:runner._port.perf_tests_dir().rfind(runner._host.filesystem.sep)])
+        port.host.filesystem.chdir(runner._port.perf_tests_dir()[:runner._port.perf_tests_dir().rfind(runner._host.filesystem.sep)])
         self.assertEqual(self._collect_tests_and_sort_test_name(runner), ['test1.html', 'test2.html'])
 
     def test_collect_tests_with_skipped_list(self):
-        runner = self.create_runner()
-
-        def add_file(dirname, filename, content=True):
-            dirname = runner._host.filesystem.join(runner._base_path, dirname) if dirname else runner._base_path
-            runner._host.filesystem.maybe_make_directory(dirname)
-            runner._host.filesystem.files[runner._host.filesystem.join(dirname, filename)] = content
-
-        add_file('inspector', 'test1.html')
-        add_file('inspector', 'unsupported_test1.html')
-        add_file('inspector', 'test2.html')
-        add_file('inspector/resources', 'resource_file.html')
-        add_file('unsupported', 'unsupported_test2.html')
-        runner._port.skipped_perf_tests = lambda: ['inspector/unsupported_test1.html', 'unsupported']
+        runner, port = self.create_runner()
+
+        self._add_file(runner, 'inspector', 'test1.html')
+        self._add_file(runner, 'inspector', 'unsupported_test1.html')
+        self._add_file(runner, 'inspector', 'test2.html')
+        self._add_file(runner, 'inspector/resources', 'resource_file.html')
+        self._add_file(runner, 'unsupported', 'unsupported_test2.html')
+        port.skipped_perf_tests = lambda: ['inspector/unsupported_test1.html', 'unsupported']
         self.assertEqual(self._collect_tests_and_sort_test_name(runner), ['inspector/test1.html', 'inspector/test2.html'])
 
     def test_collect_tests_with_page_load_svg(self):
-        runner = self.create_runner()
+        runner, port = self.create_runner()
+        self._add_file(runner, 'PageLoad', 'some-svg-test.svg')
+        tests = runner._collect_tests()
+        self.assertEqual(len(tests), 1)
+        self.assertEqual(tests[0].__class__.__name__, 'PageLoadingPerfTest')
 
-        def add_file(dirname, filename, content=True):
-            dirname = runner._host.filesystem.join(runner._base_path, dirname) if dirname else runner._base_path
-            runner._host.filesystem.maybe_make_directory(dirname)
-            runner._host.filesystem.files[runner._host.filesystem.join(dirname, filename)] = content
+    def test_collect_tests_should_ignore_replay_tests_by_default(self):
+        runner, port = self.create_runner()
+        self._add_file(runner, 'Replay', 'www.webkit.org.replay')
+        self.assertEqual(runner._collect_tests(), [])
 
-        add_file('PageLoad', 'some-svg-test.svg')
+    def test_collect_tests_with_replay_tests(self):
+        runner, port = self.create_runner(args=['--replay'])
+        self._add_file(runner, 'Replay', 'www.webkit.org.replay')
         tests = runner._collect_tests()
         self.assertEqual(len(tests), 1)
-        self.assertEqual(tests[0].__class__.__name__, 'PageLoadingPerfTest')
+        self.assertEqual(tests[0].__class__.__name__, 'ReplayPerfTest')
 
     def test_parse_args(self):
-        runner = self.create_runner()
+        runner, port = self.create_runner()
         options, args = PerfTestsRunner._parse_args([
                 '--build-directory=folder42',
                 '--platform=platform42',
index 0df0cf7b6875fcef82344efe99269339f04ef1fe..26245baccf14ed7015eae6ebdb54f999e89f115b 100644 (file)
@@ -82,6 +82,8 @@ class AutoinstallImportHook(object):
             self._install_irc()
         elif '.buildbot' in fullname:
             self._install_buildbot()
+        elif '.webpagereplay' in fullname:
+            self._install_webpagereplay()
 
     def _install_mechanize(self):
         self._install("http://pypi.python.org/packages/source/m/mechanize/mechanize-0.2.5.tar.gz",
@@ -126,6 +128,15 @@ class AutoinstallImportHook(object):
         installer.install(url="http://downloads.sourceforge.net/project/python-irclib/python-irclib/0.4.8/python-irclib-0.4.8.zip",
                           url_subpath="ircbot.py")
 
+    def _install_webpagereplay(self):
+        if not self._fs.exists(self._fs.join(_AUTOINSTALLED_DIR, "webpagereplay")):
+            self._install("http://web-page-replay.googlecode.com/files/webpagereplay-1.1.1.tar.gz", "webpagereplay-1.1.1")
+            self._fs.move(self._fs.join(_AUTOINSTALLED_DIR, "webpagereplay-1.1.1"), self._fs.join(_AUTOINSTALLED_DIR, "webpagereplay"))
+
+        init_path = self._fs.join(_AUTOINSTALLED_DIR, "webpagereplay", "__init__.py")
+        if not self._fs.exists(init_path):
+            self._fs.write_text_file(init_path, "")
+
     def _install(self, url, url_subpath):
         installer = AutoInstaller(target_dir=_AUTOINSTALLED_DIR)
         installer.install(url=url, url_subpath=url_subpath)