Cleanup perftest* tests and add a test for computing statistics
author: rniwa@webkit.org <rniwa@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Fri, 4 Jan 2013 03:46:06 +0000 (03:46 +0000)
committer: rniwa@webkit.org <rniwa@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Fri, 4 Jan 2013 03:46:06 +0000 (03:46 +0000)
https://bugs.webkit.org/show_bug.cgi?id=105685

Reviewed by Eric Seidel.

Add a test for PerfTest.compute_statistics (moved and renamed from PageLoadingPerfTest.calculate_statistics) and
extract perftestsrunner_integrationtest.py from perftestsrunner_unittest.py.

Also fix a bug in compute_statistics where the mean ('avg') value could have large rounding errors in some cases.

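To illustrate the fix, here is a minimal Python sketch (not the committed code; the helper name
online_mean_and_variance is invented for this example) contrasting the incrementally updated mean from Knuth's
online algorithm with the direct sum that compute_statistics now reports as 'avg':

import math

# Illustrative helper, not webkitpy code. Knuth's online update stays
# numerically stable for the variance, but its running mean can pick up
# small rounding errors; summing the values directly does not.
def online_mean_and_variance(values):
    mean = 0.0
    square_sum = 0.0
    for i, x in enumerate(values):
        delta = x - mean
        mean += delta / (i + 1.0)
        square_sum += delta * (x - mean)
    return mean, square_sum / (len(values) - 1)

values = [8.0, 9.0, 10.0, 11.0, 12.0] * 4
online_mean, variance = online_mean_and_variance(values)
direct_mean = sum(values) / len(values)  # what compute_statistics now uses for 'avg'
print("online=%r direct=%r stdev=%r" % (online_mean, direct_mean, math.sqrt(variance)))
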
* Scripts/webkitpy/performance_tests/perftest.py:
(PerfTest.compute_statistics): Moved from PageLoadingPerfTest to prepare for bug 97510. Also compute the mean
directly from sorted_values instead of using the one from Knuth's online algorithm. This approach gives a more
accurate result for the mean.
(PageLoadingPerfTest.run_single):
* Scripts/webkitpy/performance_tests/perftest_unittest.py:
(MainTest.test_compute_statistics):
(MainTest.test_compute_statistics.compute_statistics): Added.
(TestPageLoadingPerfTest.test_run): Floatify values.
(TestPageLoadingPerfTest.test_run_with_memory_output): Ditto. Also got rid of the '.0' suffix from mean values now that
Python correctly recognizes them as integers.

* Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py: Copied from
Tools/Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py.
(TestDriver): Moved out of MainTest.
(MainTest): Got rid of assertWritten and all unit tests.
(MainTest._normalize_output): Renamed from normalizeFinishedTime to match the PEP8 naming convention.
(MainTest.test_run_test_set_kills_drt_per_run.TestDriverWithStopCount):
(MainTest.test_run_test_set_for_parser_tests):
(MainTest.test_run_memory_test):
(MainTest._test_run_with_json_output):
(MainTest.test_run_generates_json_by_default):
(MainTest.test_run_merges_output_by_default):
(MainTest.test_run_respects_reset_results):
(MainTest.test_run_generates_and_show_results_page): Use runner.load_output_json() instead of manually loading and
parsing output JSON files. Just verify that the output path is correct instead (see the sketch after this list).
* Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py:
(MainTest): Removed all integration tests.
(MainTest.create_runner): Simplified.
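
A rough sketch of the load_output_json() helper pattern mentioned above; the FakeFilesystem and FakeRunner classes
below are simplified stand-ins for illustration only, not the webkitpy mocks the real tests use:

import json

class FakeFilesystem(object):
    def __init__(self):
        self.files = {}

    def read_text_file(self, path):
        return self.files[path]

class FakeRunner(object):
    def __init__(self, filesystem):
        self._filesystem = filesystem

    def _output_json_path(self):
        return '/mock-checkout/output.json'

    def load_output_json(self):
        # Single place that reads and parses the output JSON, so tests
        # assert on parsed structures instead of raw file contents.
        return json.loads(self._filesystem.read_text_file(self._output_json_path()))

filesystem = FakeFilesystem()
filesystem.files['/mock-checkout/output.json'] = '[{"timestamp": 123456789, "results": {}}]'
runner = FakeRunner(filesystem)
assert runner.load_output_json() == [{'timestamp': 123456789, 'results': {}}]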

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@138774 268f45cc-cd09-0410-ab3c-d52691b4dbfc

Tools/ChangeLog
Tools/Scripts/webkitpy/performance_tests/perftest.py
Tools/Scripts/webkitpy/performance_tests/perftest_unittest.py
Tools/Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py [new file with mode: 0644]
Tools/Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py

diff --git a/Tools/ChangeLog b/Tools/ChangeLog
index bac0ec9dc4a337e67ceea7dcba96cb543b8faf50..9aa50d566bd704fb98f18194fd4a830f43c1aaf4 100644 (file)
@@ -1,3 +1,45 @@
+2013-01-03  Ryosuke Niwa  <rniwa@webkit.org>
+
+        Cleanup perftest* tests and add a test for computing statistics
+        https://bugs.webkit.org/show_bug.cgi?id=105685
+
+        Reviewed by Eric Seidel.
+
+        Add a test for PerfTest.compute_statistics (moved and renamed from PageLoadingPerfTest.calculate_statistics) and
+        extract perftestsrunner_integrationtest.py from perftestsrunner_unittest.py.
+
+        Also fix a bug in compute_statistics where the mean ('avg') value could have large rounding errors in some cases.
+
+        * Scripts/webkitpy/performance_tests/perftest.py:
+        (PerfTest.compute_statistics): Moved from PageLoadingPerfTest to prepare for bug 97510. Also compute the mean
+        directly from sorted_values instead of using the one from Knuth's online algorithm. This approach gives a more
+        accurate result for the mean.
+        (PageLoadingPerfTest.run_single):
+        * Scripts/webkitpy/performance_tests/perftest_unittest.py:
+        (MainTest.test_compute_statistics):
+        (MainTest.test_compute_statistics.compute_statistics): Added.
+        (TestPageLoadingPerfTest.test_run): Floatify values.
+        (TestPageLoadingPerfTest.test_run_with_memory_output): Ditto. Also got rid of the '.0' suffix from mean values now
+        that Python correctly recognizes them as integers.
+
+        * Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py: Copied from
+        Tools/Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py.
+        (TestDriver): Moved out of MainTest.
+        (MainTest): Got rid of assertWritten and all unit tests.
+        (MainTest._normalize_output): Renamed from normalizeFinishedTime to match the PEP8 naming convention.
+        (MainTest.test_run_test_set_kills_drt_per_run.TestDriverWithStopCount):
+        (MainTest.test_run_test_set_for_parser_tests):
+        (MainTest.test_run_memory_test):
+        (MainTest._test_run_with_json_output):
+        (MainTest.test_run_generates_json_by_default):
+        (MainTest.test_run_merges_output_by_default):
+        (MainTest.test_run_respects_reset_results):
+        (MainTest.test_run_generates_and_show_results_page): Use runner.load_output_json() instead of manually loading and
+        parsing output JSON files. Just verify that the output path is correct instead.
+        * Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py:
+        (MainTest): Removed all integration tests.
+        (MainTest.create_runner): Simplified.
+
 2013-01-03  Julie Parent  <jparent@chromium.org>
 
         Dashboard cleanup: remove usage of global g_defaultBuilderName
diff --git a/Tools/Scripts/webkitpy/performance_tests/perftest.py b/Tools/Scripts/webkitpy/performance_tests/perftest.py
index c6b845d86ee26f19e3076cc7f74677126c6b0932..254baa74c83ef784053de45ae3634df5f355bff0 100644 (file)
@@ -211,6 +211,27 @@ class PerfTest(object):
 
         return results
 
+    @staticmethod
+    def compute_statistics(values):
+        sorted_values = sorted(values)
+
+        # Compute the mean and variance using Knuth's online algorithm (has good numerical stability).
+        squareSum = 0
+        mean = 0
+        for i, time in enumerate(sorted_values):
+            delta = time - mean
+            sweep = i + 1.0
+            mean += delta / sweep
+            squareSum += delta * (time - mean)
+
+        middle = int(len(sorted_values) / 2)
+        result = {'avg': sum(sorted_values) / len(values),
+            'min': sorted_values[0],
+            'max': sorted_values[-1],
+            'median': sorted_values[middle] if len(sorted_values) % 2 else (sorted_values[middle - 1] + sorted_values[middle]) / 2,
+            'stdev': math.sqrt(squareSum / (len(sorted_values) - 1))}
+        return result
+
     def output_statistics(self, test_name, results):
         unit = results['unit']
         _log.info('RESULT %s= %s %s' % (test_name.replace(':', ': ').replace('/', ': '), results['avg'], unit))
@@ -258,26 +279,6 @@ class PageLoadingPerfTest(PerfTest):
         super(PageLoadingPerfTest, self).run_single(driver, self.force_gc_test, time_out_ms, False)
         return super(PageLoadingPerfTest, self).run_single(driver, test_path, time_out_ms, should_run_pixel_test)
 
-    def calculate_statistics(self, values):
-        sorted_values = sorted(values)
-
-        # Compute the mean and variance using Knuth's online algorithm (has good numerical stability).
-        squareSum = 0
-        mean = 0
-        for i, time in enumerate(sorted_values):
-            delta = time - mean
-            sweep = i + 1.0
-            mean += delta / sweep
-            squareSum += delta * (time - mean)
-
-        middle = int(len(sorted_values) / 2)
-        result = {'avg': mean,
-            'min': sorted_values[0],
-            'max': sorted_values[-1],
-            'median': sorted_values[middle] if len(sorted_values) % 2 else (sorted_values[middle - 1] + sorted_values[middle]) / 2,
-            'stdev': math.sqrt(squareSum / (len(sorted_values) - 1))}
-        return result
-
     def _run_with_driver(self, driver, time_out_ms):
         results = {}
         results.setdefault(self.test_name(), {'unit': 'ms', 'values': []})
@@ -303,7 +304,7 @@ class PageLoadingPerfTest(PerfTest):
                     results[name]['unit'] = 'bytes'
 
         for result_class in results.keys():
-            results[result_class].update(self.calculate_statistics(results[result_class]['values']))
+            results[result_class].update(self.compute_statistics(results[result_class]['values']))
             self.output_statistics(result_class, results[result_class])
 
         return results
diff --git a/Tools/Scripts/webkitpy/performance_tests/perftest_unittest.py b/Tools/Scripts/webkitpy/performance_tests/perftest_unittest.py
index 1499972084524e85db2f02f9d391337c2678bebe..3e9d7bd0bb5b36481e24f674a04cdf89b285bff6 100644 (file)
@@ -27,6 +27,7 @@
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
 import StringIO
+import json
 import math
 import unittest
 
@@ -47,6 +48,26 @@ class MockPort(TestPort):
         super(MockPort, self).__init__(host=MockHost(), custom_run_test=custom_run_test)
 
 class MainTest(unittest.TestCase):
+    def test_compute_statistics(self):
+        def compute_statistics(values):
+            statistics = PerfTest.compute_statistics(map(lambda x: float(x), values))
+            return json.loads(json.dumps(statistics))
+
+        statistics = compute_statistics([10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11])
+        self.assertEqual(sorted(statistics.keys()), ['avg', 'max', 'median', 'min', 'stdev'])
+        self.assertEqual(statistics['avg'], 10.5)
+        self.assertEqual(statistics['min'], 1)
+        self.assertEqual(statistics['max'], 20)
+        self.assertEqual(statistics['median'], 10.5)
+        self.assertEqual(compute_statistics([8, 9, 10, 11, 12])['avg'], 10)
+        self.assertEqual(compute_statistics([8, 9, 10, 11, 12] * 4)['avg'], 10)
+        self.assertEqual(compute_statistics([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19])['avg'], 10)
+        self.assertEqual(PerfTest.compute_statistics([1, 5, 2, 8, 7])['median'], 5)
+        self.assertEqual(PerfTest.compute_statistics([1, 6, 2, 8, 7, 2])['median'], 4)
+        self.assertAlmostEqual(statistics['stdev'], math.sqrt(35))
+        self.assertAlmostEqual(compute_statistics([1, 2, 3, 4, 5, 6])['stdev'], math.sqrt(3.5))
+        self.assertAlmostEqual(compute_statistics([4, 2, 5, 8, 6])['stdev'], math.sqrt(5))
+
     def test_parse_output(self):
         output = DriverOutput('\n'.join([
             'Running 20 times',
@@ -188,12 +209,12 @@ class TestPageLoadingPerfTest(unittest.TestCase):
         try:
             self.assertEqual(test._run_with_driver(driver, None),
                 {'some-test': {'max': 20000, 'avg': 11000.0, 'median': 11000, 'stdev': 5627.314338711378, 'min': 2000, 'unit': 'ms',
-                    'values': [i * 1000 for i in range(2, 21)]}})
+                    'values': [float(i * 1000) for i in range(2, 21)]}})
         finally:
             actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
         self.assertEqual(actual_stdout, '')
         self.assertEqual(actual_stderr, '')
-        self.assertEqual(actual_logs, 'RESULT some-test= 11000.0 ms\nmedian= 11000 ms, stdev= 5627.31433871 ms, min= 2000 ms, max= 20000 ms\n')
+        self.assertEqual(actual_logs, 'RESULT some-test= 11000 ms\nmedian= 11000 ms, stdev= 5627.31433871 ms, min= 2000 ms, max= 20000 ms\n')
 
     def test_run_with_memory_output(self):
         port = MockPort()
@@ -206,18 +227,18 @@ class TestPageLoadingPerfTest(unittest.TestCase):
         try:
             self.assertEqual(test._run_with_driver(driver, None),
                 {'some-test': {'max': 20000, 'avg': 11000.0, 'median': 11000, 'stdev': 5627.314338711378, 'min': 2000, 'unit': 'ms',
-                    'values': [i * 1000 for i in range(2, 21)]},
+                    'values': [float(i * 1000) for i in range(2, 21)]},
                  'some-test:Malloc': {'max': 10, 'avg': 10.0, 'median': 10, 'min': 10, 'stdev': 0.0, 'unit': 'bytes',
-                    'values': [10] * 19},
+                    'values': [float(10)] * 19},
                  'some-test:JSHeap': {'max': 5, 'avg': 5.0, 'median': 5, 'min': 5, 'stdev': 0.0, 'unit': 'bytes',
-                    'values': [5] * 19}})
+                    'values': [float(5)] * 19}})
         finally:
             actual_stdout, actual_stderr, actual_logs = output_capture.restore_output()
         self.assertEqual(actual_stdout, '')
         self.assertEqual(actual_stderr, '')
-        self.assertEqual(actual_logs, 'RESULT some-test= 11000.0 ms\nmedian= 11000 ms, stdev= 5627.31433871 ms, min= 2000 ms, max= 20000 ms\n'
-            + 'RESULT some-test: Malloc= 10.0 bytes\nmedian= 10 bytes, stdev= 0.0 bytes, min= 10 bytes, max= 10 bytes\n'
-            + 'RESULT some-test: JSHeap= 5.0 bytes\nmedian= 5 bytes, stdev= 0.0 bytes, min= 5 bytes, max= 5 bytes\n')
+        self.assertEqual(actual_logs, 'RESULT some-test= 11000 ms\nmedian= 11000 ms, stdev= 5627.31433871 ms, min= 2000 ms, max= 20000 ms\n'
+            + 'RESULT some-test: Malloc= 10 bytes\nmedian= 10 bytes, stdev= 0.0 bytes, min= 10 bytes, max= 10 bytes\n'
+            + 'RESULT some-test: JSHeap= 5 bytes\nmedian= 5 bytes, stdev= 0.0 bytes, min= 5 bytes, max= 5 bytes\n')
 
     def test_run_with_bad_output(self):
         output_capture = OutputCapture()
diff --git a/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py b/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py
new file mode 100644 (file)
index 0000000..500b090
--- /dev/null
@@ -0,0 +1,526 @@
+# Copyright (C) 2012 Google Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions are
+# met:
+#
+#     * Redistributions of source code must retain the above copyright
+# notice, this list of conditions and the following disclaimer.
+#     * Redistributions in binary form must reproduce the above
+# copyright notice, this list of conditions and the following disclaimer
+# in the documentation and/or other materials provided with the
+# distribution.
+#     * Neither the name of Google Inc. nor the names of its
+# contributors may be used to endorse or promote products derived from
+# this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+"""Integration tests for run_perf_tests."""
+
+import StringIO
+import json
+import re
+import unittest
+
+from webkitpy.common.host_mock import MockHost
+from webkitpy.common.system.outputcapture import OutputCapture
+from webkitpy.layout_tests.port.driver import DriverOutput
+from webkitpy.layout_tests.port.test import TestPort
+from webkitpy.performance_tests.perftest import ChromiumStylePerfTest
+from webkitpy.performance_tests.perftest import PerfTest
+from webkitpy.performance_tests.perftestsrunner import PerfTestsRunner
+
+
+class TestDriver:
+    def run_test(self, driver_input, stop_when_done):
+        text = ''
+        timeout = False
+        crash = False
+        if driver_input.test_name.endswith('pass.html'):
+            text = 'RESULT group_name: test_name= 42 ms'
+        elif driver_input.test_name.endswith('timeout.html'):
+            timeout = True
+        elif driver_input.test_name.endswith('failed.html'):
+            text = None
+        elif driver_input.test_name.endswith('tonguey.html'):
+            text = 'we are not expecting an output from perf tests but RESULT blablabla'
+        elif driver_input.test_name.endswith('crash.html'):
+            crash = True
+        elif driver_input.test_name.endswith('event-target-wrapper.html'):
+            text = """Running 20 times
+Ignoring warm-up run (1502)
+1504
+1505
+1510
+1504
+1507
+1509
+1510
+1487
+1488
+1472
+1472
+1488
+1473
+1472
+1475
+1487
+1486
+1486
+1475
+1471
+
+Time:
+values 1504, 1505, 1510, 1504, 1507, 1509, 1510, 1487, 1488, 1472, 1472, 1488, 1473, 1472, 1475, 1487, 1486, 1486, 1475, 1471 ms
+avg 1489.05 ms
+median 1487 ms
+stdev 14.46 ms
+min 1471 ms
+max 1510 ms
+"""
+        elif driver_input.test_name.endswith('some-parser.html'):
+            text = """Running 20 times
+Ignoring warm-up run (1115)
+
+Time:
+values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 ms
+avg 1100 ms
+median 1101 ms
+stdev 11 ms
+min 1080 ms
+max 1120 ms
+"""
+        elif driver_input.test_name.endswith('memory-test.html'):
+            text = """Running 20 times
+Ignoring warm-up run (1115)
+
+Time:
+values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 ms
+avg 1100 ms
+median 1101 ms
+stdev 11 ms
+min 1080 ms
+max 1120 ms
+
+JS Heap:
+values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 bytes
+avg 832000 bytes
+median 829000 bytes
+stdev 15000 bytes
+min 811000 bytes
+max 848000 bytes
+
+Malloc:
+values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 bytes
+avg 532000 bytes
+median 529000 bytes
+stdev 13000 bytes
+min 511000 bytes
+max 548000 bytes
+"""
+        return DriverOutput(text, '', '', '', crash=crash, timeout=timeout)
+
+    def start(self):
+        """do nothing"""
+
+    def stop(self):
+        """do nothing"""
+
+
+class MainTest(unittest.TestCase):
+    def _normalize_output(self, log):
+        return re.sub(r'Finished: [0-9\.]+ s', 'Finished: 0.1 s', log)
+
+    def create_runner(self, args=[], driver_class=TestDriver):
+        options, parsed_args = PerfTestsRunner._parse_args(args)
+        test_port = TestPort(host=MockHost(), options=options)
+        test_port.create_driver = lambda worker_number=None, no_timeout=False: driver_class()
+
+        runner = PerfTestsRunner(args=args, port=test_port)
+        runner._host.filesystem.maybe_make_directory(runner._base_path, 'inspector')
+        runner._host.filesystem.maybe_make_directory(runner._base_path, 'Bindings')
+        runner._host.filesystem.maybe_make_directory(runner._base_path, 'Parser')
+
+        filesystem = runner._host.filesystem
+        runner.load_output_json = lambda: json.loads(filesystem.read_text_file(runner._output_json_path()))
+        return runner, test_port
+
+    def run_test(self, test_name):
+        runner, port = self.create_runner()
+        return runner._run_single_test(ChromiumStylePerfTest(port, test_name, runner._host.filesystem.join('some-dir', test_name)))
+
+    def test_run_passing_test(self):
+        self.assertTrue(self.run_test('pass.html'))
+
+    def test_run_silent_test(self):
+        self.assertFalse(self.run_test('silent.html'))
+
+    def test_run_failed_test(self):
+        self.assertFalse(self.run_test('failed.html'))
+
+    def test_run_tonguey_test(self):
+        self.assertFalse(self.run_test('tonguey.html'))
+
+    def test_run_timeout_test(self):
+        self.assertFalse(self.run_test('timeout.html'))
+
+    def test_run_crash_test(self):
+        self.assertFalse(self.run_test('crash.html'))
+
+    def _tests_for_runner(self, runner, test_names):
+        filesystem = runner._host.filesystem
+        tests = []
+        for test in test_names:
+            path = filesystem.join(runner._base_path, test)
+            dirname = filesystem.dirname(path)
+            if test.startswith('inspector/'):
+                tests.append(ChromiumStylePerfTest(runner._port, test, path))
+            else:
+                tests.append(PerfTest(runner._port, test, path))
+        return tests
+
+    def test_run_test_set(self):
+        runner, port = self.create_runner()
+        tests = self._tests_for_runner(runner, ['inspector/pass.html', 'inspector/silent.html', 'inspector/failed.html',
+            'inspector/tonguey.html', 'inspector/timeout.html', 'inspector/crash.html'])
+        output = OutputCapture()
+        output.capture_output()
+        try:
+            unexpected_result_count = runner._run_tests_set(tests, port)
+        finally:
+            stdout, stderr, log = output.restore_output()
+        self.assertEqual(unexpected_result_count, len(tests) - 1)
+        self.assertTrue('\nRESULT group_name: test_name= 42 ms\n' in log)
+
+    def test_run_test_set_kills_drt_per_run(self):
+
+        class TestDriverWithStopCount(TestDriver):
+            stop_count = 0
+            def stop(self):
+                TestDriverWithStopCount.stop_count += 1
+
+        runner, port = self.create_runner(driver_class=TestDriverWithStopCount)
+
+        tests = self._tests_for_runner(runner, ['inspector/pass.html', 'inspector/silent.html', 'inspector/failed.html',
+            'inspector/tonguey.html', 'inspector/timeout.html', 'inspector/crash.html'])
+        unexpected_result_count = runner._run_tests_set(tests, port)
+
+        self.assertEqual(TestDriverWithStopCount.stop_count, 6)
+
+    def test_run_test_set_for_parser_tests(self):
+        runner, port = self.create_runner()
+        tests = self._tests_for_runner(runner, ['Bindings/event-target-wrapper.html', 'Parser/some-parser.html'])
+        output = OutputCapture()
+        output.capture_output()
+        try:
+            unexpected_result_count = runner._run_tests_set(tests, port)
+        finally:
+            stdout, stderr, log = output.restore_output()
+        self.assertEqual(unexpected_result_count, 0)
+        self.assertEqual(self._normalize_output(log), '\n'.join(['Running Bindings/event-target-wrapper.html (1 of 2)',
+        'RESULT Bindings: event-target-wrapper= 1489.05 ms',
+        'median= 1487.0 ms, stdev= 14.46 ms, min= 1471.0 ms, max= 1510.0 ms',
+        'Finished: 0.1 s',
+        '',
+        'Running Parser/some-parser.html (2 of 2)',
+        'RESULT Parser: some-parser= 1100.0 ms',
+        'median= 1101.0 ms, stdev= 11.0 ms, min= 1080.0 ms, max= 1120.0 ms',
+        'Finished: 0.1 s',
+        '', '']))
+
+    def test_run_memory_test(self):
+        runner, port = self.create_runner_and_setup_results_template()
+        runner._timestamp = 123456789
+        port.host.filesystem.write_text_file(runner._base_path + '/Parser/memory-test.html', 'some content')
+
+        output = OutputCapture()
+        output.capture_output()
+        try:
+            unexpected_result_count = runner.run()
+        finally:
+            stdout, stderr, log = output.restore_output()
+        self.assertEqual(unexpected_result_count, 0)
+        self.assertEqual(self._normalize_output(log), '\n'.join([
+            'Running 1 tests',
+            'Running Parser/memory-test.html (1 of 1)',
+            'RESULT Parser: memory-test= 1100.0 ms',
+            'median= 1101.0 ms, stdev= 11.0 ms, min= 1080.0 ms, max= 1120.0 ms',
+            'RESULT Parser: memory-test: JSHeap= 832000.0 bytes',
+            'median= 829000.0 bytes, stdev= 15000.0 bytes, min= 811000.0 bytes, max= 848000.0 bytes',
+            'RESULT Parser: memory-test: Malloc= 532000.0 bytes',
+            'median= 529000.0 bytes, stdev= 13000.0 bytes, min= 511000.0 bytes, max= 548000.0 bytes',
+            'Finished: 0.1 s',
+            '',
+            'MOCK: user.open_url: file://...',
+            '']))
+        results = runner.load_output_json()[0]['results']
+        values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
+        self.assertEqual(results['Parser/memory-test'], {'min': 1080.0, 'max': 1120.0, 'median': 1101.0, 'stdev': 11.0, 'avg': 1100.0, 'unit': 'ms', 'values': values})
+        self.assertEqual(results['Parser/memory-test:JSHeap'], {'min': 811000.0, 'max': 848000.0, 'median': 829000.0, 'stdev': 15000.0, 'avg': 832000.0, 'unit': 'bytes', 'values': values})
+        self.assertEqual(results['Parser/memory-test:Malloc'], {'min': 511000.0, 'max': 548000.0, 'median': 529000.0, 'stdev': 13000.0, 'avg': 532000.0, 'unit': 'bytes', 'values': values})
+
+    def _test_run_with_json_output(self, runner, filesystem, upload_suceeds=False, results_shown=True, expected_exit_code=0):
+        filesystem.write_text_file(runner._base_path + '/inspector/pass.html', 'some content')
+        filesystem.write_text_file(runner._base_path + '/Bindings/event-target-wrapper.html', 'some content')
+
+        uploaded = [False]
+
+        def mock_upload_json(hostname, json_path):
+            self.assertEqual(hostname, 'some.host')
+            self.assertEqual(json_path, '/mock-checkout/output.json')
+            uploaded[0] = upload_suceeds
+            return upload_suceeds
+
+        runner._upload_json = mock_upload_json
+        runner._timestamp = 123456789
+        output_capture = OutputCapture()
+        output_capture.capture_output()
+        try:
+            self.assertEqual(runner.run(), expected_exit_code)
+        finally:
+            stdout, stderr, logs = output_capture.restore_output()
+
+        if not expected_exit_code:
+            expected_logs = '\n'.join(['Running 2 tests',
+                                       'Running Bindings/event-target-wrapper.html (1 of 2)',
+                                       'RESULT Bindings: event-target-wrapper= 1489.05 ms',
+                                       'median= 1487.0 ms, stdev= 14.46 ms, min= 1471.0 ms, max= 1510.0 ms',
+                                       'Finished: 0.1 s',
+                                       '',
+                                       'Running inspector/pass.html (2 of 2)',
+                                       'RESULT group_name: test_name= 42 ms',
+                                       'Finished: 0.1 s',
+                                       '', ''])
+            if results_shown:
+                expected_logs += 'MOCK: user.open_url: file://...\n'
+            self.assertEqual(self._normalize_output(logs), expected_logs)
+
+        self.assertEqual(uploaded[0], upload_suceeds)
+
+        return logs
+
+    _event_target_wrapper_and_inspector_results = {
+        "Bindings/event-target-wrapper": {"max": 1510, "avg": 1489.05, "median": 1487, "min": 1471, "stdev": 14.46, "unit": "ms",
+           "values": [1504, 1505, 1510, 1504, 1507, 1509, 1510, 1487, 1488, 1472, 1472, 1488, 1473, 1472, 1475, 1487, 1486, 1486, 1475, 1471]},
+        "inspector/pass.html:group_name:test_name": 42}
+
+    def test_run_with_json_output(self):
+        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
+            '--test-results-server=some.host'])
+        self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=True)
+        self.assertEqual(runner.load_output_json(), [{
+            "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
+            "webkit-revision": "5678", "branch": "webkit-trunk"}])
+
+        filesystem = port.host.filesystem
+        self.assertTrue(filesystem.isfile(runner._output_json_path()))
+        self.assertTrue(filesystem.isfile(filesystem.splitext(runner._output_json_path())[0] + '.html'))
+
+    def test_run_with_description(self):
+        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
+            '--test-results-server=some.host', '--description', 'some description'])
+        self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=True)
+        self.assertEqual(runner.load_output_json(), [{
+            "timestamp": 123456789, "description": "some description",
+            "results": self._event_target_wrapper_and_inspector_results,
+            "webkit-revision": "5678", "branch": "webkit-trunk"}])
+
+    def create_runner_and_setup_results_template(self, args=[]):
+        runner, port = self.create_runner(args)
+        filesystem = port.host.filesystem
+        filesystem.write_text_file(runner._base_path + '/resources/results-template.html',
+            'BEGIN<script src="%AbsolutePathToWebKitTrunk%/some.js"></script>'
+            '<script src="%AbsolutePathToWebKitTrunk%/other.js"></script><script>%PeformanceTestsResultsJSON%</script>END')
+        filesystem.write_text_file(runner._base_path + '/Dromaeo/resources/dromaeo/web/lib/jquery-1.6.4.js', 'jquery content')
+        return runner, port
+
+    def test_run_respects_no_results(self):
+        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
+            '--test-results-server=some.host', '--no-results'])
+        self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=False, results_shown=False)
+        self.assertFalse(port.host.filesystem.isfile('/mock-checkout/output.json'))
+
+    def test_run_generates_json_by_default(self):
+        runner, port = self.create_runner_and_setup_results_template()
+        filesystem = port.host.filesystem
+        output_json_path = runner._output_json_path()
+        results_page_path = filesystem.splitext(output_json_path)[0] + '.html'
+
+        self.assertFalse(filesystem.isfile(output_json_path))
+        self.assertFalse(filesystem.isfile(results_page_path))
+
+        self._test_run_with_json_output(runner, port.host.filesystem)
+
+        self.assertEqual(runner.load_output_json(), [{
+            "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
+            "webkit-revision": "5678", "branch": "webkit-trunk"}])
+
+        self.assertTrue(filesystem.isfile(output_json_path))
+        self.assertTrue(filesystem.isfile(results_page_path))
+
+    def test_run_merges_output_by_default(self):
+        runner, port = self.create_runner_and_setup_results_template()
+        filesystem = port.host.filesystem
+        output_json_path = runner._output_json_path()
+
+        filesystem.write_text_file(output_json_path, '[{"previous": "results"}]')
+
+        self._test_run_with_json_output(runner, port.host.filesystem)
+
+        self.assertEqual(runner.load_output_json(), [{"previous": "results"}, {
+            "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
+            "webkit-revision": "5678", "branch": "webkit-trunk"}])
+        self.assertTrue(filesystem.isfile(filesystem.splitext(output_json_path)[0] + '.html'))
+
+    def test_run_respects_reset_results(self):
+        runner, port = self.create_runner_and_setup_results_template(args=["--reset-results"])
+        filesystem = port.host.filesystem
+        output_json_path = runner._output_json_path()
+
+        filesystem.write_text_file(output_json_path, '[{"previous": "results"}]')
+
+        self._test_run_with_json_output(runner, port.host.filesystem)
+
+        self.assertEqual(runner.load_output_json(), [{
+            "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
+            "webkit-revision": "5678", "branch": "webkit-trunk"}])
+        self.assertTrue(filesystem.isfile(filesystem.splitext(output_json_path)[0] + '.html'))
+        pass
+
+    def test_run_generates_and_show_results_page(self):
+        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json'])
+        page_shown = []
+        port.show_results_html_file = lambda path: page_shown.append(path)
+        filesystem = port.host.filesystem
+        self._test_run_with_json_output(runner, filesystem, results_shown=False)
+
+        expected_entry = {"timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
+            "webkit-revision": "5678", "branch": "webkit-trunk"}
+
+        self.maxDiff = None
+        self.assertEqual(runner._output_json_path(), '/mock-checkout/output.json')
+        self.assertEqual(runner.load_output_json(), [expected_entry])
+        self.assertEqual(filesystem.read_text_file('/mock-checkout/output.html'),
+            'BEGIN<script src="/test.checkout/some.js"></script><script src="/test.checkout/other.js"></script>'
+            '<script>%s</script>END' % port.host.filesystem.read_text_file(runner._output_json_path()))
+        self.assertEqual(page_shown[0], '/mock-checkout/output.html')
+
+        self._test_run_with_json_output(runner, filesystem, results_shown=False)
+        self.assertEqual(runner._output_json_path(), '/mock-checkout/output.json')
+        self.assertEqual(runner.load_output_json(), [expected_entry, expected_entry])
+        self.assertEqual(filesystem.read_text_file('/mock-checkout/output.html'),
+            'BEGIN<script src="/test.checkout/some.js"></script><script src="/test.checkout/other.js"></script>'
+            '<script>%s</script>END' % port.host.filesystem.read_text_file(runner._output_json_path()))
+
+    def test_run_respects_no_show_results(self):
+        show_results_html_file = lambda path: page_shown.append(path)
+
+        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json'])
+        page_shown = []
+        port.show_results_html_file = show_results_html_file
+        self._test_run_with_json_output(runner, port.host.filesystem, results_shown=False)
+        self.assertEqual(page_shown[0], '/mock-checkout/output.html')
+
+        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
+            '--no-show-results'])
+        page_shown = []
+        port.show_results_html_file = show_results_html_file
+        self._test_run_with_json_output(runner, port.host.filesystem, results_shown=False)
+        self.assertEqual(page_shown, [])
+
+    def test_run_with_bad_output_json(self):
+        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json'])
+        port.host.filesystem.write_text_file('/mock-checkout/output.json', 'bad json')
+        self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_MERGE)
+        port.host.filesystem.write_text_file('/mock-checkout/output.json', '{"another bad json": "1"}')
+        self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_MERGE)
+
+    def test_run_with_slave_config_json(self):
+        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
+            '--slave-config-json-path=/mock-checkout/slave-config.json', '--test-results-server=some.host'])
+        port.host.filesystem.write_text_file('/mock-checkout/slave-config.json', '{"key": "value"}')
+        self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=True)
+        self.assertEqual(runner.load_output_json(), [{
+            "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
+            "webkit-revision": "5678", "branch": "webkit-trunk", "key": "value"}])
+
+    def test_run_with_bad_slave_config_json(self):
+        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
+            '--slave-config-json-path=/mock-checkout/slave-config.json', '--test-results-server=some.host'])
+        logs = self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_SOURCE_JSON)
+        self.assertTrue('Missing slave configuration JSON file: /mock-checkout/slave-config.json' in logs)
+        port.host.filesystem.write_text_file('/mock-checkout/slave-config.json', 'bad json')
+        self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_SOURCE_JSON)
+        port.host.filesystem.write_text_file('/mock-checkout/slave-config.json', '["another bad json"]')
+        self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_SOURCE_JSON)
+
+    def test_run_with_multiple_repositories(self):
+        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
+            '--test-results-server=some.host'])
+        port.repository_paths = lambda: [('webkit', '/mock-checkout'), ('some', '/mock-checkout/some')]
+        self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=True)
+        self.assertEqual(runner.load_output_json(), [{
+            "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
+            "webkit-revision": "5678", "some-revision": "5678", "branch": "webkit-trunk"}])
+
+    def test_run_with_upload_json(self):
+        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
+            '--test-results-server', 'some.host', '--platform', 'platform1', '--builder-name', 'builder1', '--build-number', '123'])
+
+        self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=True)
+        generated_json = json.loads(port.host.filesystem.files['/mock-checkout/output.json'])
+        self.assertEqual(generated_json[0]['platform'], 'platform1')
+        self.assertEqual(generated_json[0]['builder-name'], 'builder1')
+        self.assertEqual(generated_json[0]['build-number'], 123)
+
+        self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=False, expected_exit_code=PerfTestsRunner.EXIT_CODE_FAILED_UPLOADING)
+
+    def test_upload_json(self):
+        runner, port = self.create_runner()
+        port.host.filesystem.files['/mock-checkout/some.json'] = 'some content'
+
+        called = []
+        upload_single_text_file_throws = False
+        upload_single_text_file_return_value = StringIO.StringIO('OK')
+
+        class MockFileUploader:
+            def __init__(mock, url, timeout):
+                self.assertEqual(url, 'https://some.host/api/test/report')
+                self.assertTrue(isinstance(timeout, int) and timeout)
+                called.append('FileUploader')
+
+            def upload_single_text_file(mock, filesystem, content_type, filename):
+                self.assertEqual(filesystem, port.host.filesystem)
+                self.assertEqual(content_type, 'application/json')
+                self.assertEqual(filename, 'some.json')
+                called.append('upload_single_text_file')
+                if upload_single_text_file_throws:
+                    raise "Some exception"
+                return upload_single_text_file_return_value
+
+        runner._upload_json('some.host', 'some.json', MockFileUploader)
+        self.assertEqual(called, ['FileUploader', 'upload_single_text_file'])
+
+        output = OutputCapture()
+        output.capture_output()
+        upload_single_text_file_return_value = StringIO.StringIO('Some error')
+        runner._upload_json('some.host', 'some.json', MockFileUploader)
+        _, _, logs = output.restore_output()
+        self.assertEqual(logs, 'Uploaded JSON but got a bad response:\nSome error\n')
+
+        # Throwing an exception upload_single_text_file shouldn't blow up _upload_json
+        called = []
+        upload_single_text_file_throws = True
+        runner._upload_json('some.host', 'some.json', MockFileUploader)
+        self.assertEqual(called, ['FileUploader', 'upload_single_text_file'])
diff --git a/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py b/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py
index 8ccfc241e722e8bf83e2035a3ff0bdbcbcbc4b53..bf8c7045ad90608ec1973ae95a2bab867fb9137c 100644 (file)
@@ -34,502 +34,20 @@ import re
 import unittest
 
 from webkitpy.common.host_mock import MockHost
-from webkitpy.common.system.filesystem_mock import MockFileSystem
-from webkitpy.common.system.outputcapture import OutputCapture
-from webkitpy.layout_tests.port.driver import DriverInput, DriverOutput
 from webkitpy.layout_tests.port.test import TestPort
-from webkitpy.layout_tests.views import printing
-from webkitpy.performance_tests.perftest import ChromiumStylePerfTest
-from webkitpy.performance_tests.perftest import PerfTest
 from webkitpy.performance_tests.perftestsrunner import PerfTestsRunner
 
 
 class MainTest(unittest.TestCase):
-    def assertWritten(self, stream, contents):
-        self.assertEqual(stream.buflist, contents)
-
-    def normalizeFinishedTime(self, log):
-        return re.sub(r'Finished: [0-9\.]+ s', 'Finished: 0.1 s', log)
-
-    class TestDriver:
-        def run_test(self, driver_input, stop_when_done):
-            text = ''
-            timeout = False
-            crash = False
-            if driver_input.test_name.endswith('pass.html'):
-                text = 'RESULT group_name: test_name= 42 ms'
-            elif driver_input.test_name.endswith('timeout.html'):
-                timeout = True
-            elif driver_input.test_name.endswith('failed.html'):
-                text = None
-            elif driver_input.test_name.endswith('tonguey.html'):
-                text = 'we are not expecting an output from perf tests but RESULT blablabla'
-            elif driver_input.test_name.endswith('crash.html'):
-                crash = True
-            elif driver_input.test_name.endswith('event-target-wrapper.html'):
-                text = """Running 20 times
-Ignoring warm-up run (1502)
-1504
-1505
-1510
-1504
-1507
-1509
-1510
-1487
-1488
-1472
-1472
-1488
-1473
-1472
-1475
-1487
-1486
-1486
-1475
-1471
-
-Time:
-values 1504, 1505, 1510, 1504, 1507, 1509, 1510, 1487, 1488, 1472, 1472, 1488, 1473, 1472, 1475, 1487, 1486, 1486, 1475, 1471 ms
-avg 1489.05 ms
-median 1487 ms
-stdev 14.46 ms
-min 1471 ms
-max 1510 ms
-"""
-            elif driver_input.test_name.endswith('some-parser.html'):
-                text = """Running 20 times
-Ignoring warm-up run (1115)
-
-Time:
-values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 ms
-avg 1100 ms
-median 1101 ms
-stdev 11 ms
-min 1080 ms
-max 1120 ms
-"""
-            elif driver_input.test_name.endswith('memory-test.html'):
-                text = """Running 20 times
-Ignoring warm-up run (1115)
-
-Time:
-values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 ms
-avg 1100 ms
-median 1101 ms
-stdev 11 ms
-min 1080 ms
-max 1120 ms
-
-JS Heap:
-values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 bytes
-avg 832000 bytes
-median 829000 bytes
-stdev 15000 bytes
-min 811000 bytes
-max 848000 bytes
-
-Malloc:
-values 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 bytes
-avg 532000 bytes
-median 529000 bytes
-stdev 13000 bytes
-min 511000 bytes
-max 548000 bytes
-"""
-            return DriverOutput(text, '', '', '', crash=crash, timeout=timeout)
-
-        def start(self):
-            """do nothing"""
-
-        def stop(self):
-            """do nothing"""
-
-    def create_runner(self, args=[], driver_class=TestDriver):
+    def create_runner(self, args=[]):
         options, parsed_args = PerfTestsRunner._parse_args(args)
         test_port = TestPort(host=MockHost(), options=options)
-        test_port.create_driver = lambda worker_number=None, no_timeout=False: driver_class()
-
         runner = PerfTestsRunner(args=args, port=test_port)
         runner._host.filesystem.maybe_make_directory(runner._base_path, 'inspector')
         runner._host.filesystem.maybe_make_directory(runner._base_path, 'Bindings')
         runner._host.filesystem.maybe_make_directory(runner._base_path, 'Parser')
-
-        filesystem = runner._host.filesystem
-        runner.load_output_json = lambda: json.loads(filesystem.read_text_file(runner._output_json_path()))
         return runner, test_port
 
-    def run_test(self, test_name):
-        runner, port = self.create_runner()
-        return runner._run_single_test(ChromiumStylePerfTest(port, test_name, runner._host.filesystem.join('some-dir', test_name)))
-
-    def test_run_passing_test(self):
-        self.assertTrue(self.run_test('pass.html'))
-
-    def test_run_silent_test(self):
-        self.assertFalse(self.run_test('silent.html'))
-
-    def test_run_failed_test(self):
-        self.assertFalse(self.run_test('failed.html'))
-
-    def test_run_tonguey_test(self):
-        self.assertFalse(self.run_test('tonguey.html'))
-
-    def test_run_timeout_test(self):
-        self.assertFalse(self.run_test('timeout.html'))
-
-    def test_run_crash_test(self):
-        self.assertFalse(self.run_test('crash.html'))
-
-    def _tests_for_runner(self, runner, test_names):
-        filesystem = runner._host.filesystem
-        tests = []
-        for test in test_names:
-            path = filesystem.join(runner._base_path, test)
-            dirname = filesystem.dirname(path)
-            if test.startswith('inspector/'):
-                tests.append(ChromiumStylePerfTest(runner._port, test, path))
-            else:
-                tests.append(PerfTest(runner._port, test, path))
-        return tests
-
-    def test_run_test_set(self):
-        runner, port = self.create_runner()
-        tests = self._tests_for_runner(runner, ['inspector/pass.html', 'inspector/silent.html', 'inspector/failed.html',
-            'inspector/tonguey.html', 'inspector/timeout.html', 'inspector/crash.html'])
-        output = OutputCapture()
-        output.capture_output()
-        try:
-            unexpected_result_count = runner._run_tests_set(tests, port)
-        finally:
-            stdout, stderr, log = output.restore_output()
-        self.assertEqual(unexpected_result_count, len(tests) - 1)
-        self.assertTrue('\nRESULT group_name: test_name= 42 ms\n' in log)
-
-    def test_run_test_set_kills_drt_per_run(self):
-
-        class TestDriverWithStopCount(MainTest.TestDriver):
-            stop_count = 0
-
-            def stop(self):
-                TestDriverWithStopCount.stop_count += 1
-
-        runner, port = self.create_runner(driver_class=TestDriverWithStopCount)
-
-        tests = self._tests_for_runner(runner, ['inspector/pass.html', 'inspector/silent.html', 'inspector/failed.html',
-            'inspector/tonguey.html', 'inspector/timeout.html', 'inspector/crash.html'])
-        unexpected_result_count = runner._run_tests_set(tests, port)
-
-        self.assertEqual(TestDriverWithStopCount.stop_count, 6)
-
-    def test_run_test_set_for_parser_tests(self):
-        runner, port = self.create_runner()
-        tests = self._tests_for_runner(runner, ['Bindings/event-target-wrapper.html', 'Parser/some-parser.html'])
-        output = OutputCapture()
-        output.capture_output()
-        try:
-            unexpected_result_count = runner._run_tests_set(tests, port)
-        finally:
-            stdout, stderr, log = output.restore_output()
-        self.assertEqual(unexpected_result_count, 0)
-        self.assertEqual(self.normalizeFinishedTime(log), '\n'.join(['Running Bindings/event-target-wrapper.html (1 of 2)',
-        'RESULT Bindings: event-target-wrapper= 1489.05 ms',
-        'median= 1487.0 ms, stdev= 14.46 ms, min= 1471.0 ms, max= 1510.0 ms',
-        'Finished: 0.1 s',
-        '',
-        'Running Parser/some-parser.html (2 of 2)',
-        'RESULT Parser: some-parser= 1100.0 ms',
-        'median= 1101.0 ms, stdev= 11.0 ms, min= 1080.0 ms, max= 1120.0 ms',
-        'Finished: 0.1 s',
-        '', '']))
-
-    def test_run_memory_test(self):
-        runner, port = self.create_runner_and_setup_results_template()
-        runner._timestamp = 123456789
-        port.host.filesystem.write_text_file(runner._base_path + '/Parser/memory-test.html', 'some content')
-
-        output = OutputCapture()
-        output.capture_output()
-        try:
-            unexpected_result_count = runner.run()
-        finally:
-            stdout, stderr, log = output.restore_output()
-        self.assertEqual(unexpected_result_count, 0)
-        self.assertEqual(self.normalizeFinishedTime(log), '\n'.join([
-            'Running 1 tests',
-            'Running Parser/memory-test.html (1 of 1)',
-            'RESULT Parser: memory-test= 1100.0 ms',
-            'median= 1101.0 ms, stdev= 11.0 ms, min= 1080.0 ms, max= 1120.0 ms',
-            'RESULT Parser: memory-test: JSHeap= 832000.0 bytes',
-            'median= 829000.0 bytes, stdev= 15000.0 bytes, min= 811000.0 bytes, max= 848000.0 bytes',
-            'RESULT Parser: memory-test: Malloc= 532000.0 bytes',
-            'median= 529000.0 bytes, stdev= 13000.0 bytes, min= 511000.0 bytes, max= 548000.0 bytes',
-            'Finished: 0.1 s',
-            '',
-            'MOCK: user.open_url: file://...',
-            '']))
-        results = runner.load_output_json()[0]['results']
-        values = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
-        self.assertEqual(results['Parser/memory-test'], {'min': 1080.0, 'max': 1120.0, 'median': 1101.0, 'stdev': 11.0, 'avg': 1100.0, 'unit': 'ms', 'values': values})
-        self.assertEqual(results['Parser/memory-test:JSHeap'], {'min': 811000.0, 'max': 848000.0, 'median': 829000.0, 'stdev': 15000.0, 'avg': 832000.0, 'unit': 'bytes', 'values': values})
-        self.assertEqual(results['Parser/memory-test:Malloc'], {'min': 511000.0, 'max': 548000.0, 'median': 529000.0, 'stdev': 13000.0, 'avg': 532000.0, 'unit': 'bytes', 'values': values})
-
-    def _test_run_with_json_output(self, runner, filesystem, upload_suceeds=False, results_shown=True, expected_exit_code=0):
-        filesystem.write_text_file(runner._base_path + '/inspector/pass.html', 'some content')
-        filesystem.write_text_file(runner._base_path + '/Bindings/event-target-wrapper.html', 'some content')
-
-        uploaded = [False]
-
-        def mock_upload_json(hostname, json_path):
-            self.assertEqual(hostname, 'some.host')
-            self.assertEqual(json_path, '/mock-checkout/output.json')
-            uploaded[0] = upload_suceeds
-            return upload_suceeds
-
-        runner._upload_json = mock_upload_json
-        runner._timestamp = 123456789
-        output_capture = OutputCapture()
-        output_capture.capture_output()
-        try:
-            self.assertEqual(runner.run(), expected_exit_code)
-        finally:
-            stdout, stderr, logs = output_capture.restore_output()
-
-        if not expected_exit_code:
-            expected_logs = '\n'.join(['Running 2 tests',
-                                       'Running Bindings/event-target-wrapper.html (1 of 2)',
-                                       'RESULT Bindings: event-target-wrapper= 1489.05 ms',
-                                       'median= 1487.0 ms, stdev= 14.46 ms, min= 1471.0 ms, max= 1510.0 ms',
-                                       'Finished: 0.1 s',
-                                       '',
-                                       'Running inspector/pass.html (2 of 2)',
-                                       'RESULT group_name: test_name= 42 ms',
-                                       'Finished: 0.1 s',
-                                       '', ''])
-            if results_shown:
-                expected_logs += 'MOCK: user.open_url: file://...\n'
-            self.assertEqual(self.normalizeFinishedTime(logs), expected_logs)
-
-        self.assertEqual(uploaded[0], upload_suceeds)
-
-        return logs
-
-    _event_target_wrapper_and_inspector_results = {
-        "Bindings/event-target-wrapper": {"max": 1510, "avg": 1489.05, "median": 1487, "min": 1471, "stdev": 14.46, "unit": "ms",
-           "values": [1504, 1505, 1510, 1504, 1507, 1509, 1510, 1487, 1488, 1472, 1472, 1488, 1473, 1472, 1475, 1487, 1486, 1486, 1475, 1471]},
-        "inspector/pass.html:group_name:test_name": 42}
-
-    def test_run_with_json_output(self):
-        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
-            '--test-results-server=some.host'])
-        self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=True)
-        self.assertEqual(runner.load_output_json(), [{
-            "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
-            "webkit-revision": "5678", "branch": "webkit-trunk"}])
-
-        filesystem = port.host.filesystem
-        self.assertTrue(filesystem.isfile(runner._output_json_path()))
-        self.assertTrue(filesystem.isfile(filesystem.splitext(runner._output_json_path())[0] + '.html'))
-
-    def test_run_with_description(self):
-        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
-            '--test-results-server=some.host', '--description', 'some description'])
-        self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=True)
-        self.assertEqual(runner.load_output_json(), [{
-            "timestamp": 123456789, "description": "some description",
-            "results": self._event_target_wrapper_and_inspector_results,
-            "webkit-revision": "5678", "branch": "webkit-trunk"}])
-
-    def create_runner_and_setup_results_template(self, args=[]):
-        runner, port = self.create_runner(args)
-        filesystem = port.host.filesystem
-        filesystem.write_text_file(runner._base_path + '/resources/results-template.html',
-            'BEGIN<script src="%AbsolutePathToWebKitTrunk%/some.js"></script>'
-            '<script src="%AbsolutePathToWebKitTrunk%/other.js"></script><script>%PeformanceTestsResultsJSON%</script>END')
-        filesystem.write_text_file(runner._base_path + '/Dromaeo/resources/dromaeo/web/lib/jquery-1.6.4.js', 'jquery content')
-        return runner, port
-
-    def test_run_respects_no_results(self):
-        runner, port = self.create_runner(args=['--output-json-path=/mock-checkout/output.json',
-            '--test-results-server=some.host', '--no-results'])
-        self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=False, results_shown=False)
-        self.assertFalse(port.host.filesystem.isfile('/mock-checkout/output.json'))
-
-    def test_run_generates_json_by_default(self):
-        runner, port = self.create_runner_and_setup_results_template()
-        filesystem = port.host.filesystem
-        output_json_path = runner._output_json_path()
-        results_page_path = filesystem.splitext(output_json_path)[0] + '.html'
-
-        self.assertFalse(filesystem.isfile(output_json_path))
-        self.assertFalse(filesystem.isfile(results_page_path))
-
-        self._test_run_with_json_output(runner, port.host.filesystem)
-
-        self.assertEqual(json.loads(port.host.filesystem.read_text_file(output_json_path)), [{
-            "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
-            "webkit-revision": "5678", "branch": "webkit-trunk"}])
-
-        self.assertTrue(filesystem.isfile(output_json_path))
-        self.assertTrue(filesystem.isfile(results_page_path))
-
-    def test_run_merges_output_by_default(self):
-        runner, port = self.create_runner_and_setup_results_template()
-        filesystem = port.host.filesystem
-        output_json_path = runner._output_json_path()
-
-        filesystem.write_text_file(output_json_path, '[{"previous": "results"}]')
-
-        self._test_run_with_json_output(runner, port.host.filesystem)
-
-        self.assertEqual(json.loads(port.host.filesystem.read_text_file(output_json_path)), [{"previous": "results"}, {
-            "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
-            "webkit-revision": "5678", "branch": "webkit-trunk"}])
-        self.assertTrue(filesystem.isfile(filesystem.splitext(output_json_path)[0] + '.html'))
-
-    def test_run_respects_reset_results(self):
-        runner, port = self.create_runner_and_setup_results_template(args=["--reset-results"])
-        filesystem = port.host.filesystem
-        output_json_path = runner._output_json_path()
-
-        filesystem.write_text_file(output_json_path, '[{"previous": "results"}]')
-
-        self._test_run_with_json_output(runner, port.host.filesystem)
-
-        self.assertEqual(json.loads(port.host.filesystem.read_text_file(output_json_path)), [{
-            "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
-            "webkit-revision": "5678", "branch": "webkit-trunk"}])
-        self.assertTrue(filesystem.isfile(filesystem.splitext(output_json_path)[0] + '.html'))
-
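(The two tests above pin down the default merge and the --reset-results override. A minimal sketch of that behavior, assuming a hypothetical merge_output_json helper and the same filesystem object the tests use:

    import json

    def merge_output_json(filesystem, output_json_path, new_entry, reset_results=False):
        existing = []
        if not reset_results and filesystem.isfile(output_json_path):
            existing = json.loads(filesystem.read_text_file(output_json_path))
            if not isinstance(existing, list):
                # The runner reports this case as EXIT_CODE_BAD_MERGE.
                raise ValueError('Existing output JSON is not a list')
        existing.append(new_entry)
        filesystem.write_text_file(output_json_path, json.dumps(existing))
)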
-    def test_run_generates_and_show_results_page(self):
-        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json'])
-        page_shown = []
-        port.show_results_html_file = lambda path: page_shown.append(path)
-        filesystem = port.host.filesystem
-        self._test_run_with_json_output(runner, filesystem, results_shown=False)
-
-        expected_entry = {"timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
-            "webkit-revision": "5678", "branch": "webkit-trunk"}
-
-        self.maxDiff = None
-        json_output = port.host.filesystem.read_text_file('/mock-checkout/output.json')
-        self.assertEqual(json.loads(json_output), [expected_entry])
-        self.assertEqual(filesystem.read_text_file('/mock-checkout/output.html'),
-            'BEGIN<script src="/test.checkout/some.js"></script><script src="/test.checkout/other.js"></script>'
-            '<script>%s</script>END' % json_output)
-        self.assertEqual(page_shown[0], '/mock-checkout/output.html')
-
-        self._test_run_with_json_output(runner, filesystem, results_shown=False)
-        json_output = port.host.filesystem.read_text_file('/mock-checkout/output.json')
-        self.assertEqual(json.loads(json_output), [expected_entry, expected_entry])
-        self.assertEqual(filesystem.read_text_file('/mock-checkout/output.html'),
-            'BEGIN<script src="/test.checkout/some.js"></script><script src="/test.checkout/other.js"></script>'
-            '<script>%s</script>END' % json_output)
-
-    def test_run_respects_no_show_results(self):
-        show_results_html_file = lambda path: page_shown.append(path)
-
-        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json'])
-        page_shown = []
-        port.show_results_html_file = show_results_html_file
-        self._test_run_with_json_output(runner, port.host.filesystem, results_shown=False)
-        self.assertEqual(page_shown[0], '/mock-checkout/output.html')
-
-        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
-            '--no-show-results'])
-        page_shown = []
-        port.show_results_html_file = show_results_html_file
-        self._test_run_with_json_output(runner, port.host.filesystem, results_shown=False)
-        self.assertEqual(page_shown, [])
-
-    def test_run_with_bad_output_json(self):
-        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json'])
-        port.host.filesystem.write_text_file('/mock-checkout/output.json', 'bad json')
-        self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_MERGE)
-        port.host.filesystem.write_text_file('/mock-checkout/output.json', '{"another bad json": "1"}')
-        self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_MERGE)
-
-    def test_run_with_slave_config_json(self):
-        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
-            '--slave-config-json-path=/mock-checkout/slave-config.json', '--test-results-server=some.host'])
-        port.host.filesystem.write_text_file('/mock-checkout/slave-config.json', '{"key": "value"}')
-        self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=True)
-        self.assertEqual(runner.load_output_json(), [{
-            "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
-            "webkit-revision": "5678", "branch": "webkit-trunk", "key": "value"}])
-
-    def test_run_with_bad_slave_config_json(self):
-        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
-            '--slave-config-json-path=/mock-checkout/slave-config.json', '--test-results-server=some.host'])
-        logs = self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_SOURCE_JSON)
-        self.assertTrue('Missing slave configuration JSON file: /mock-checkout/slave-config.json' in logs)
-        port.host.filesystem.write_text_file('/mock-checkout/slave-config.json', 'bad json')
-        self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_SOURCE_JSON)
-        port.host.filesystem.write_text_file('/mock-checkout/slave-config.json', '["another bad json"]')
-        self._test_run_with_json_output(runner, port.host.filesystem, expected_exit_code=PerfTestsRunner.EXIT_CODE_BAD_SOURCE_JSON)
-
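(The two slave-config tests above check that the file must exist and must contain a JSON dictionary whose keys get folded into each uploaded entry. A sketch of that handling — load_slave_config and apply_slave_config are illustrative names only:

    import json

    def load_slave_config(filesystem, slave_config_json_path):
        if not filesystem.isfile(slave_config_json_path):
            # The runner logs this and exits with EXIT_CODE_BAD_SOURCE_JSON.
            raise IOError('Missing slave configuration JSON file: %s' % slave_config_json_path)
        config = json.loads(filesystem.read_text_file(slave_config_json_path))
        if not isinstance(config, dict):
            raise ValueError('Slave configuration JSON must be a dictionary')
        return config

    def apply_slave_config(entry, config):
        merged = dict(entry)
        merged.update(config)
        return merged
)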
-    def test_run_with_multiple_repositories(self):
-        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
-            '--test-results-server=some.host'])
-        port.repository_paths = lambda: [('webkit', '/mock-checkout'), ('some', '/mock-checkout/some')]
-        self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=True)
-        self.assertEqual(runner.load_output_json(), [{
-            "timestamp": 123456789, "results": self._event_target_wrapper_and_inspector_results,
-            "webkit-revision": "5678", "some-revision": "5678", "branch": "webkit-trunk"}])
-
-    def test_run_with_upload_json(self):
-        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
-            '--test-results-server', 'some.host', '--platform', 'platform1', '--builder-name', 'builder1', '--build-number', '123'])
-
-        self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=True)
-        generated_json = json.loads(port.host.filesystem.files['/mock-checkout/output.json'])
-        self.assertEqual(generated_json[0]['platform'], 'platform1')
-        self.assertEqual(generated_json[0]['builder-name'], 'builder1')
-        self.assertEqual(generated_json[0]['build-number'], 123)
-
-        self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=False, expected_exit_code=PerfTestsRunner.EXIT_CODE_FAILED_UPLOADING)
-
-    def test_upload_json(self):
-        runner, port = self.create_runner()
-        port.host.filesystem.files['/mock-checkout/some.json'] = 'some content'
-
-        called = []
-        upload_single_text_file_throws = False
-        upload_single_text_file_return_value = StringIO.StringIO('OK')
-
-        class MockFileUploader:
-            def __init__(mock, url, timeout):
-                self.assertEqual(url, 'https://some.host/api/test/report')
-                self.assertTrue(isinstance(timeout, int) and timeout)
-                called.append('FileUploader')
-
-            def upload_single_text_file(mock, filesystem, content_type, filename):
-                self.assertEqual(filesystem, port.host.filesystem)
-                self.assertEqual(content_type, 'application/json')
-                self.assertEqual(filename, 'some.json')
-                called.append('upload_single_text_file')
-                if upload_single_text_file_throws:
-                    raise "Some exception"
-                return upload_single_text_file_return_value
-
-        runner._upload_json('some.host', 'some.json', MockFileUploader)
-        self.assertEqual(called, ['FileUploader', 'upload_single_text_file'])
-
-        output = OutputCapture()
-        output.capture_output()
-        upload_single_text_file_return_value = StringIO.StringIO('Some error')
-        runner._upload_json('some.host', 'some.json', MockFileUploader)
-        _, _, logs = output.restore_output()
-        self.assertEqual(logs, 'Uploaded JSON but got a bad response:\nSome error\n')
-
-        # Throwing an exception from upload_single_text_file shouldn't blow up _upload_json.
-        called = []
-        upload_single_text_file_throws = True
-        runner._upload_json('some.host', 'some.json', MockFileUploader)
-        self.assertEqual(called, ['FileUploader', 'upload_single_text_file'])
-
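(MockFileUploader above stands in for webkitpy's FileUploader so that _upload_json can be exercised without network access. The flow those assertions pin down is roughly the following; this is a reconstruction from the test, not the code in perftestsrunner.py, and the timeout value and the log wording in the exception branch are assumptions beyond what the assertions require:

    import logging

    _log = logging.getLogger(__name__)

    def upload_json(host, json_path, filesystem, file_uploader_class, timeout=120):
        # The test only requires timeout to be a non-zero int; 120 is an arbitrary value.
        uploader = file_uploader_class('https://%s/api/test/report' % host, timeout)
        try:
            response = uploader.upload_single_text_file(filesystem, 'application/json', json_path)
        except Exception:
            # An exception from the uploader must not propagate out of the upload step.
            _log.error('Failed to upload %s to %s' % (json_path, host))
            return False
        response_body = response.read()
        if response_body != 'OK':
            _log.error('Uploaded JSON but got a bad response:\n%s' % response_body)
            return False
        return True
)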
     def _add_file(self, runner, dirname, filename, content=True):
         dirname = runner._host.filesystem.join(runner._base_path, dirname) if dirname else runner._base_path
         runner._host.filesystem.maybe_make_directory(dirname)