Upload results to perf.webkit.org in addition to the one specified by --test-results-server
Author:    rniwa@webkit.org <rniwa@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Date:      Sat, 23 Feb 2013 04:28:47 +0000 (04:28 +0000)
Committer: rniwa@webkit.org <rniwa@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Date:      Sat, 23 Feb 2013 04:28:47 +0000 (04:28 +0000)
https://bugs.webkit.org/show_bug.cgi?id=108577

Reviewed by Dirk Pranke.

Upload results to perf.webkit.org using the new JSON format in addition to the host specified
by --test-results-server. The new format is needed to provide extra information perf.webkit.org
needs, such as the Subversion commit time and test URLs. This is a temporary measure until
we complete the transition, at which point the old JSON format and the code to upload results to
webkit-perf.appspot.com can be deleted.
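
For reference, the new payload is a JSON array holding a single report object. Sketched below in
Python (the values are the ones used by the new integration test, abridged, not real results):

    perf_webkit_output = [{
        'builderName': 'builder1',
        'buildNumber': '123',
        'buildTime': '2013-02-08T15:19:37.460000',
        'platform': 'platform1',
        'revisions': {'WebKit': {'revision': '5678', 'timestamp': '2013-02-01 08:48:05 +0000'}},
        'tests': {'Bindings': {
            'url': 'http://trac.webkit.org/browser/trunk/PerformanceTests/Bindings',
            'tests': {'event-target-wrapper': {
                'url': 'http://trac.webkit.org/browser/trunk/PerformanceTests/Bindings/event-target-wrapper.html',
                'metrics': {'Time': {'current': [1486.0, 1471.0, 1510.0]}}}}}}}]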

This patch adds scm.timestamp_of_latest_commit to obtain the timestamp of the latest commit present
in an SVN checkout or a Git clone. This information is embedded in the JSON submitted to perf.webkit.org
so that the app can sort performance test results by the timestamp of the last commit.
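
A minimal, self-contained sketch of the Git-to-UTC conversion (the helper name is invented; the
logic mirrors what is added to git.py below, and the sample date comes from the new unit test):

    import datetime
    import re

    def _git_date_to_utc(git_log):
        # Matches git's --date=iso output, e.g. "Date: 2013-02-08 01:55:21 -0800".
        match = re.search(r"^Date:\s*(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2}) ([+-])(\d{2})(\d{2})$",
                          git_log, re.MULTILINE)
        if not match:
            return ""
        local_time = datetime.datetime(*[int(match.group(i)) for i in range(1, 7)])
        # Subtract the UTC offset to normalize the local timestamp to UTC.
        sign = 1 if match.group(7) == '+' else -1
        offset = datetime.timedelta(hours=sign * int(match.group(8)), minutes=int(match.group(9)))
        return (local_time - offset).strftime('%Y-%m-%dT%H:%M:%SZ')

    assert _git_date_to_utc('Date: 2013-02-08 01:55:21 -0800') == '2013-02-08T09:55:21Z'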

It also changes the repository names returned by port objects to properly capitalized,
human-readable names such as WebKit instead of lowercased names such as webkit, since these names
are displayed on perf.webkit.org for humans. Several users of this feature have been updated
to lowercase the names explicitly.
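
Concretely, under the new scheme (the path and revision here are placeholders):

    repository_paths = [('WebKit', '/checkout/LayoutTests')]   # display names, not 'webkit'
    contents = {}
    for name, path in repository_paths:
        # In the real code the value comes from scm.svn_revision(path).
        contents[name.lower() + '-revision'] = '5678'          # old-style lowercase key
    assert contents == {'webkit-revision': '5678'}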

* Scripts/webkitpy/common/checkout/scm/git.py:
(Git.timestamp_of_latest_commit): Added. Obtains the timestamp of the last commit. Unfortunately,
Git's timestamp granularity is seconds, so we lose some precision compared to a regular
Subversion client. To make matters worse, git has no option to show an ISO-format timestamp in
UTC, so we adjust the timezone manually.

* Scripts/webkitpy/common/checkout/scm/scm.py:
(SCM.timestamp_of_latest_commit): Added.

* Scripts/webkitpy/common/checkout/scm/scm_mock.py:
(MockSCM.timestamp_of_latest_commit): Added.

* Scripts/webkitpy/common/checkout/scm/scm_unittest.py:
(test_timestamp_of_latest_commit): Added a test for Git.timestamp_of_latest_commit.

* Scripts/webkitpy/common/checkout/scm/svn.py:
(SVN.timestamp_of_latest_commit): Added. With SVN, all we need to do is use the --xml option and
parse the timestamp, which is always in UTC.
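
A sketch of that parsing, assuming info_output holds the output of 'svn info --xml':

    import re

    info_output = '<info>\n<date>2013-02-08T08:18:04.964409Z</date>\n</info>'  # placeholder output
    match = re.search(r"^<date>(?P<value>.+)</date>$", info_output, re.MULTILINE)
    timestamp = match.group('value') if match else ''
    assert timestamp == '2013-02-08T08:18:04.964409Z'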

* Scripts/webkitpy/layout_tests/layout_package/json_results_generator.py:
(JSONResultsGeneratorBase._insert_generic_metadata): Lowercase the name. Note that the name
'chromium' needs to be substituted by 'chrome' for historical reasons.
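
The substitution itself, wrapped in a hypothetical helper for illustration:

    def _legacy_revision_key_prefix(name):
        # 'Chromium' -> 'chrome' for the JSON file's backward compatibility.
        lowercase_name = name.lower()
        return 'chrome' if lowercase_name == 'chromium' else lowercase_name

    assert _legacy_revision_key_prefix('Chromium') == 'chrome'
    assert _legacy_revision_key_prefix('WebKit') == 'webkit'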

* Scripts/webkitpy/layout_tests/port/base.py:
(Port.repository_paths): Return WebKit instead of webkit as noted above.

* Scripts/webkitpy/layout_tests/port/chromium.py:
(ChromiumPort.repository_paths): Return Chromium instead of chromium as noted above.

* Scripts/webkitpy/performance_tests/perftestsrunner.py:
(PerfTestsRunner.__init__): Store the current time in UTC as well as in local time.
(PerfTestsRunner._collect_tests):

(PerfTestsRunner._generate_and_show_results): Retrieve both the regular output and the one for
perf.webkit.org, and upload each appropriately.

(PerfTestsRunner._generate_results_dict): Store the WebKit and Chromium revisions at which tests were run
in revisions_for_perf_webkit and construct an output for perf.webkit.org.

(PerfTestsRunner._datetime_in_ES5_compatible_iso_format): Added.
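
The helper simply formats a datetime so that an ES5-compliant Date.parse can consume it:

    import datetime

    def _datetime_in_ES5_compatible_iso_format(dt):
        return dt.strftime('%Y-%m-%dT%H:%M:%S.%f')

    assert (_datetime_in_ES5_compatible_iso_format(datetime.datetime(2013, 2, 8, 15, 19, 37, 460000))
            == '2013-02-08T15:19:37.460000')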

(PerfTestsRunner._merge_slave_config_json): Merge the slave configuration file into both the regular
output and the one for perf.webkit.org. For perf.webkit.org, each key is prefixed with "builder";
e.g. "processor" becomes "builderProcessor".
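
The prefixing is a straight key rewrite; a sketch with a hypothetical slave configuration:

    slave_config = {'processor': 'x86', 'key': 'value1'}   # hypothetical config
    contents_for_perf_webkit = {}
    for key in slave_config:
        contents_for_perf_webkit['builder' + key.capitalize()] = slave_config[key]
    assert contents_for_perf_webkit == {'builderProcessor': 'x86', 'builderKey': 'value1'}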

(PerfTestsRunner._generate_output_files):

(PerfTestsRunner._upload_json): Added a remote path argument since we upload JSON to /api/report
on perf.webkit.org whereas we upload to /api/test/report on webkit-perf.appspot.com. Also added code
to parse the response as JSON when possible, since perf.webkit.org returns a JSON response as opposed to
webkit-perf.appspot.com, which returns a plaintext response.
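
A condensed sketch of the new response handling (the helper name is invented; response_body is the
list of stripped response lines, as in _upload_json):

    import json

    def _response_is_ok(response_body):
        if response_body == ['OK']:
            return True   # webkit-perf.appspot.com answers with plaintext 'OK'
        try:
            parsed_response = json.loads('\n'.join(response_body))
        except ValueError:
            return False  # neither plaintext 'OK' nor JSON: a bad response
        # perf.webkit.org answers with JSON such as {"status": "OK"}.
        return parsed_response.get('status') == 'OK'

    assert _response_is_ok(['OK'])
    assert _response_is_ok(['{"status": "OK"}'])
    assert not _response_is_ok(['{"status": "SomethingHasFailed", "failureStored": false}'])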

* Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py:
(MainTest._test_run_with_json_output.mock_upload_json): Tolerate perf.webkit.org/api/report for now.
(MainTest._test_run_with_json_output): Store a UTC time as perftestsrunner would.
(MainTest.test_run_with_upload_json_should_generate_perf_webkit_json): Added.

* Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py:
(MainTest.test_upload_json): Moved from integrationtest.py since it really is a unit test. Also added
test cases for parsing JSON responses.
(MainTest.test_upload_json.MockFileUploader): Refactored.
(MainTest.test_upload_json.MockFileUploader.reset): Added.
(MainTest.test_upload_json.MockFileUploader.__init__):
(MainTest.test_upload_json.MockFileUploader.upload_single_text_file):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@143833 268f45cc-cd09-0410-ab3c-d52691b4dbfc

12 files changed:
Tools/ChangeLog
Tools/Scripts/webkitpy/common/checkout/scm/git.py
Tools/Scripts/webkitpy/common/checkout/scm/scm.py
Tools/Scripts/webkitpy/common/checkout/scm/scm_mock.py
Tools/Scripts/webkitpy/common/checkout/scm/scm_unittest.py
Tools/Scripts/webkitpy/common/checkout/scm/svn.py
Tools/Scripts/webkitpy/layout_tests/layout_package/json_results_generator.py
Tools/Scripts/webkitpy/layout_tests/port/base.py
Tools/Scripts/webkitpy/layout_tests/port/chromium.py
Tools/Scripts/webkitpy/performance_tests/perftestsrunner.py
Tools/Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py
Tools/Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py

diff --git a/Tools/ChangeLog b/Tools/ChangeLog
index 1f89662..b2b1242 100644
@@ -1,3 +1,91 @@
+2013-02-22  Ryosuke Niwa  <rniwa@webkit.org>
+
+        Upload results to perf.webkit.org in addition to the one specified by --test-results-server
+        https://bugs.webkit.org/show_bug.cgi?id=108577
+
+        Reviewed by Dirk Pranke.
+
+        Upload results to perf.webkit.org using the new JSON format in addition to the host specified
+        by --test-results-server. The new format is needed to provide extra information perf.webkit.org
+        needs, such as the Subversion commit time and test URLs. This is a temporary measure until
+        we complete the transition, at which point the old JSON format and the code to upload results to
+        webkit-perf.appspot.com can be deleted.
+
+        This patch adds scm.timestamp_of_latest_commit to obtain the timestamp of the latest commit present
+        in an SVN checkout or a Git clone. This information is embedded in the JSON submitted to perf.webkit.org
+        so that the app can sort performance test results by the timestamp of the last commit.
+
+        It also changes the repository names returned by port objects to properly capitalized,
+        human-readable names such as WebKit instead of lowercased names such as webkit, since these names
+        are displayed on perf.webkit.org for humans. Several users of this feature have been updated
+        to lowercase the names explicitly.
+
+
+        * Scripts/webkitpy/common/checkout/scm/git.py:
+        (Git.timestamp_of_latest_commit): Added. Obtains the timestamp of the last commit. Unfortunately,
+        Git's timestamp granularity is seconds, so we lose some precision compared to a regular
+        Subversion client. To make matters worse, git has no option to show an ISO-format timestamp in
+        UTC, so we adjust the timezone manually.
+
+        * Scripts/webkitpy/common/checkout/scm/scm.py:
+        (SCM.timestamp_of_latest_commit): Added.
+
+        * Scripts/webkitpy/common/checkout/scm/scm_mock.py:
+        (MockSCM.timestamp_of_latest_commit): Added.
+
+        * Scripts/webkitpy/common/checkout/scm/scm_unittest.py:
+        (test_timestamp_of_latest_commit): Added a test for Git.timestamp_of_latest_commit.
+
+        * Scripts/webkitpy/common/checkout/scm/svn.py:
+        (SVN.timestamp_of_latest_commit): Added. With SVN, all we need to do is use the --xml option and
+        parse the timestamp, which is always in UTC.
+
+        * Scripts/webkitpy/layout_tests/layout_package/json_results_generator.py:
+        (JSONResultsGeneratorBase._insert_generic_metadata): Lowercase the name. Note that the name
+        'chromium' needs to be substituted by 'chrome' for historical reasons.
+
+        * Scripts/webkitpy/layout_tests/port/base.py:
+        (Port.repository_paths): Return WebKit instead of webkit as noted above.
+
+        * Scripts/webkitpy/layout_tests/port/chromium.py:
+        (ChromiumPort.repository_paths): Return Chromium instead of chromium as noted above.
+
+        * Scripts/webkitpy/performance_tests/perftestsrunner.py:
+        (PerfTestsRunner.__init__): Store the current time in UTC as well as in local time.
+        (PerfTestsRunner._collect_tests):
+
+        (PerfTestsRunner._generate_and_show_results): Retrieve both the regular output and the one for
+        perf.webkit.org, and upload each appropriately.
+
+        (PerfTestsRunner._generate_results_dict): Store the WebKit and Chromium revisions at which tests were run
+        in revisions_for_perf_webkit and construct an output for perf.webkit.org.
+
+        (PerfTestsRunner._datetime_in_ES5_compatible_iso_format): Added.
+
+        (PerfTestsRunner._merge_slave_config_json): Merge the slave configuration file into both the regular
+        output and the one for perf.webkit.org. For perf.webkit.org, each key is prefixed with "builder";
+        e.g. "processor" becomes "builderProcessor".
+
+        (PerfTestsRunner._generate_output_files):
+
+        (PerfTestsRunner._upload_json): Added a remote path argument since we upload JSON to /api/report
+        on perf.webkit.org whereas we upload to /api/test/report on webkit-perf.appspot.com. Also added code
+        to parse the response as JSON when possible, since perf.webkit.org returns a JSON response as opposed to
+        webkit-perf.appspot.com, which returns a plaintext response.
+
+        * Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py:
+        (MainTest._test_run_with_json_output.mock_upload_json): Tolerate perf.webkit.org/api/report for now.
+        (MainTest._test_run_with_json_output): Store a UTC time as perftestsrunner would.
+        (MainTest.test_run_with_upload_json_should_generate_perf_webkit_json): Added.
+
+        * Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py:
+        (MainTest.test_upload_json): Moved from integrationtest.py since it really is a unit test. Also added
+        test cases for parsing JSON responses.
+        (MainTest.test_upload_json.MockFileUploader): Refactored.
+        (MainTest.test_upload_json.MockFileUploader.reset): Added.
+        (MainTest.test_upload_json.MockFileUploader.__init__):
+        (MainTest.test_upload_json.MockFileUploader.upload_single_text_file):
+
 2013-02-22  Roger Fong  <roger_fong@apple.com>
 
         Unreviewed. Update bot config for OpenSource bots to add two new Win7 Debug testers and get rid of WinXP Debug testers.
diff --git a/Tools/Scripts/webkitpy/common/checkout/scm/git.py b/Tools/Scripts/webkitpy/common/checkout/scm/git.py
index bb83b9d..8d49926 100644
@@ -27,6 +27,7 @@
 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
+import datetime
 import logging
 import os
 import re
@@ -253,6 +254,21 @@ class Git(SCM, SVNRepository):
             return ""
         return str(match.group('svn_revision'))
 
+    def timestamp_of_latest_commit(self, path):
+        git_log = self._run_git(['log', '-1', '--date=iso', self.find_checkout_root(path)])
+        match = re.search("^Date:\s*(\d{4})-(\d{2})-(\d{2}) (\d{2}):(\d{2}):(\d{2}) ([+-])(\d{2})(\d{2})$", git_log, re.MULTILINE)
+        if not match:
+            return ""
+
+        # Manually modify the timezone since Git doesn't have an option to show it in UTC.
+        # Git also truncates milliseconds but we're going to ignore that for now.
+        time_with_timezone = datetime.datetime(int(match.group(1)), int(match.group(2)), int(match.group(3)),
+            int(match.group(4)), int(match.group(5)), int(match.group(6)), 0)
+
+        sign = 1 if match.group(7) == '+' else -1
+        time_without_timezone = time_with_timezone - datetime.timedelta(hours=sign * int(match.group(8)), minutes=int(match.group(9)))
+        return time_without_timezone.strftime('%Y-%m-%dT%H:%M:%SZ')
+
     def prepend_svn_revision(self, diff):
         revision = self.head_svn_revision()
         if not revision:
diff --git a/Tools/Scripts/webkitpy/common/checkout/scm/scm.py b/Tools/Scripts/webkitpy/common/checkout/scm/scm.py
index 9da4d3f..619d59b 100644
@@ -169,6 +169,9 @@ class SCM:
     def svn_revision(self, path):
         self._subclass_must_implement()
 
+    def timestamp_of_latest_commit(self, path):
+        self._subclass_must_implement()
+
     def create_patch(self, git_commit=None, changed_files=None):
         self._subclass_must_implement()
 
diff --git a/Tools/Scripts/webkitpy/common/checkout/scm/scm_mock.py b/Tools/Scripts/webkitpy/common/checkout/scm/scm_mock.py
index 2414562..2e28a6a 100644
@@ -84,6 +84,9 @@ class MockSCM(object):
     def svn_revision(self, path):
         return '5678'
 
+    def timestamp_of_latest_commit(self, path):
+        return '2013-02-01 08:48:05 +0000'
+
     def create_patch(self, git_commit, changed_files=None):
         return "Patch1"
 
diff --git a/Tools/Scripts/webkitpy/common/checkout/scm/scm_unittest.py b/Tools/Scripts/webkitpy/common/checkout/scm/scm_unittest.py
index 7bebc28..29f6c67 100644
@@ -1577,3 +1577,15 @@ MOCK run_command: ['git', 'log', '-25', './MOCK output of child process'], cwd=%
 
     def test_push_local_commits_to_server_without_username_and_with_password(self):
         self.assertRaises(AuthenticationError, self.make_scm().push_local_commits_to_server, {'password': 'blah'})
+
+    def test_timestamp_of_latest_commit(self):
+        scm = self.make_scm()
+        scm.find_checkout_root = lambda path: ''
+        scm._run_git = lambda args: 'Date: 2013-02-08 08:05:49 +0000'
+        self.assertEqual(scm.timestamp_of_latest_commit('some-path'), '2013-02-08T08:05:49Z')
+
+        scm._run_git = lambda args: 'Date: 2013-02-08 01:02:03 +0130'
+        self.assertEqual(scm.timestamp_of_latest_commit('some-path'), '2013-02-07T23:32:03Z')
+
+        scm._run_git = lambda args: 'Date: 2013-02-08 01:55:21 -0800'
+        self.assertEqual(scm.timestamp_of_latest_commit('some-path'), '2013-02-08T09:55:21Z')
diff --git a/Tools/Scripts/webkitpy/common/checkout/scm/svn.py b/Tools/Scripts/webkitpy/common/checkout/scm/svn.py
index d2b0f36..c4f98c8 100644
@@ -246,6 +246,12 @@ class SVN(SCM, SVNRepository):
     def svn_revision(self, path):
         return self.value_from_svn_info(path, 'Revision')
 
+    def timestamp_of_latest_commit(self, path):
+        # We use --xml to get timestamps like 2013-02-08T08:18:04.964409Z
+        info_output = Executive().run_command([self.executable_name, 'info', '--xml'], cwd=path).rstrip()
+        match = re.search(r"^<date>(?P<value>.+)</date>$", info_output, re.MULTILINE)
+        return match.group('value')
+
     # FIXME: This method should be on Checkout.
     def create_patch(self, git_commit=None, changed_files=None):
         """Returns a byte array (str()) representing the patch file.
diff --git a/Tools/Scripts/webkitpy/layout_tests/layout_package/json_results_generator.py b/Tools/Scripts/webkitpy/layout_tests/layout_package/json_results_generator.py
index 73834f0..a18bc99 100644
@@ -526,11 +526,12 @@ class JSONResultsGeneratorBase(object):
         for (name, path) in self._svn_repositories:
             # Note: for JSON file's backward-compatibility we use 'chrome' rather
             # than 'chromium' here.
-            if name == 'chromium':
-                name = 'chrome'
+            lowercase_name = name.lower()
+            if lowercase_name == 'chromium':
+                lowercase_name = 'chrome'
             self._insert_item_into_raw_list(results_for_builder,
                 self._get_svn_revision(path),
-                name + 'Revision')
+                lowercase_name + 'Revision')
 
         self._insert_item_into_raw_list(results_for_builder,
             int(time.time()),
diff --git a/Tools/Scripts/webkitpy/layout_tests/port/base.py b/Tools/Scripts/webkitpy/layout_tests/port/base.py
index eb26c6b..75aef8b 100644
@@ -1097,11 +1097,11 @@ class Port(object):
 
     def repository_paths(self):
         """Returns a list of (repository_name, repository_path) tuples of its depending code base.
-        By default it returns a list that only contains a ('webkit', <webkitRepossitoryPath>) tuple."""
+        By default it returns a list that only contains a ('WebKit', <webkitRepositoryPath>) tuple."""
 
-        # We use LayoutTest directory here because webkit_base isn't a part webkit repository in Chromium port
+        # We use LayoutTest directory here because webkit_base isn't a part of WebKit repository in Chromium port
         # where trunk isn't checked out as a whole.
-        return [('webkit', self.layout_tests_dir())]
+        return [('WebKit', self.layout_tests_dir())]
 
     _WDIFF_DEL = '##WDIFF_DEL##'
     _WDIFF_ADD = '##WDIFF_ADD##'
diff --git a/Tools/Scripts/webkitpy/layout_tests/port/chromium.py b/Tools/Scripts/webkitpy/layout_tests/port/chromium.py
index 7fbabbc..83e1795 100644
@@ -364,7 +364,7 @@ class ChromiumPort(Port):
 
     def repository_paths(self):
         repos = super(ChromiumPort, self).repository_paths()
-        repos.append(('chromium', self.path_from_chromium_base('build')))
+        repos.append(('Chromium', self.path_from_chromium_base('build')))
         return repos
 
     def _get_crash_log(self, name, pid, stdout, stderr, newer_than):
diff --git a/Tools/Scripts/webkitpy/performance_tests/perftestsrunner.py b/Tools/Scripts/webkitpy/performance_tests/perftestsrunner.py
index 8b524d4..60da95d 100644
@@ -33,6 +33,7 @@ import json
 import logging
 import optparse
 import time
+import datetime
 
 from webkitpy.common import find_files
 from webkitpy.common.checkout.scm.detection import SCMDetector
@@ -67,6 +68,7 @@ class PerfTestsRunner(object):
         self._base_path = self._port.perf_tests_dir()
         self._results = {}
         self._timestamp = time.time()
+        self._utc_timestamp = datetime.datetime.utcnow()
         self._needs_http = None
         self._has_http_lock = False
 
@@ -126,8 +128,6 @@ class PerfTestsRunner(object):
         return optparse.OptionParser(option_list=(perf_option_list)).parse_args(args)
 
     def _collect_tests(self):
-        """Return the list of tests found."""
-
         test_extensions = ['.html', '.svg']
         if self._options.replay:
             test_extensions.append('.replay')
@@ -208,24 +208,30 @@ class PerfTestsRunner(object):
     def _generate_and_show_results(self):
         options = self._options
         output_json_path = self._output_json_path()
-        output = self._generate_results_dict(self._timestamp, options.description, options.platform, options.builder_name, options.build_number)
+        output, perf_webkit_output = self._generate_results_dict(self._timestamp, options.description, options.platform, options.builder_name, options.build_number)
 
         if options.slave_config_json_path:
-            output = self._merge_slave_config_json(options.slave_config_json_path, output)
+            output, perf_webkit_output = self._merge_slave_config_json(options.slave_config_json_path, output, perf_webkit_output)
             if not output:
                 return self.EXIT_CODE_BAD_SOURCE_JSON
 
         output = self._merge_outputs_if_needed(output_json_path, output)
         if not output:
             return self.EXIT_CODE_BAD_MERGE
+        perf_webkit_output = [perf_webkit_output]
 
         results_page_path = self._host.filesystem.splitext(output_json_path)[0] + '.html'
-        self._generate_output_files(output_json_path, results_page_path, output)
+        perf_webkit_json_path = self._host.filesystem.splitext(output_json_path)[0] + '-perf-webkit.json' if options.test_results_server else None
+        self._generate_output_files(output_json_path, perf_webkit_json_path, results_page_path, output, perf_webkit_output)
 
         if options.test_results_server:
             if not self._upload_json(options.test_results_server, output_json_path):
                 return self.EXIT_CODE_FAILED_UPLOADING
 
+            # FIXME: Remove this code once we've made the transition to perf.webkit.org
+            if not self._upload_json('perf.webkit.org', perf_webkit_json_path, "/api/report"):
+                return self.EXIT_CODE_FAILED_UPLOADING
+
         if options.show_results:
             self._port.show_results_html_file(results_page_path)
 
@@ -233,9 +239,13 @@ class PerfTestsRunner(object):
         contents = {'results': self._results}
         if description:
             contents['description'] = description
+
+        revisions_for_perf_webkit = {}
         for (name, path) in self._port.repository_paths():
             scm = SCMDetector(self._host.filesystem, self._host.executive).detect_scm_system(path) or self._host.scm()
-            contents[name + '-revision'] = scm.svn_revision(path)
+            revision = scm.svn_revision(path)
+            contents[name.lower() + '-revision'] = revision
+            revisions_for_perf_webkit[name] = {'revision': str(revision), 'timestamp': scm.timestamp_of_latest_commit(path)}
 
         # FIXME: Add --branch or auto-detect the branch we're in
         for key, value in {'timestamp': int(timestamp), 'branch': self._default_branch, 'platform': platform,
@@ -243,20 +253,64 @@ class PerfTestsRunner(object):
             if value:
                 contents[key] = value
 
-        return contents
+        contents_for_perf_webkit = {
+            'builderName': builder_name,
+            'buildNumber': str(build_number),
+            'buildTime': self._datetime_in_ES5_compatible_iso_format(self._utc_timestamp),
+            'platform': platform,
+            'revisions': revisions_for_perf_webkit,
+            'tests': {}}
+
+        # FIXME: Make this function shorter once we've transitioned to use perf.webkit.org.
+        for metric_full_name, result in self._results.iteritems():
+            if not isinstance(result, dict):  # We can't report results without individual measurements.
+                continue
+
+            assert metric_full_name.count(':') <= 1
+            test_full_name, _, metric = metric_full_name.partition(':')
+            if not metric:
+                metric = {'fps': 'FrameRate', 'runs/s': 'Runs', 'ms': 'Time'}[result['unit']]
+
+            tests = contents_for_perf_webkit['tests']
+            path = test_full_name.split('/')
+            for i in range(0, len(path)):
+                # FIXME: We shouldn't assume HTML extension.
+                is_last_token = i + 1 == len(path)
+                url = 'http://trac.webkit.org/browser/trunk/PerformanceTests/' + '/'.join(path[0:i + 1])
+                if is_last_token:
+                    url += '.html'
+
+                tests.setdefault(path[i], {'url': url})
+                current_test = tests[path[i]]
+                if is_last_token:
+                    current_test.setdefault('metrics', {})
+                    assert metric not in current_test['metrics']
+                    current_test['metrics'][metric] = {'current': result['values']}
+                else:
+                    current_test.setdefault('tests', {})
+                    tests = current_test['tests']
+
+        return contents, contents_for_perf_webkit
+
+    @staticmethod
+    def _datetime_in_ES5_compatible_iso_format(datetime):
+        return datetime.strftime('%Y-%m-%dT%H:%M:%S.%f')
 
-    def _merge_slave_config_json(self, slave_config_json_path, output):
+    def _merge_slave_config_json(self, slave_config_json_path, contents, contents_for_perf_webkit):
         if not self._host.filesystem.isfile(slave_config_json_path):
             _log.error("Missing slave configuration JSON file: %s" % slave_config_json_path)
-            return None
+            return None, None
 
         try:
             slave_config_json = self._host.filesystem.open_text_file_for_reading(slave_config_json_path)
             slave_config = json.load(slave_config_json)
-            return dict(slave_config.items() + output.items())
+            contents = dict(slave_config.items() + contents.items())
+            for key in slave_config:
+                contents_for_perf_webkit['builder' + key.capitalize()] = slave_config[key]
+            return contents, contents_for_perf_webkit
         except Exception, error:
             _log.error("Failed to merge slave configuration JSON file %s: %s" % (slave_config_json_path, error))
-        return None
+        return None, None
 
     def _merge_outputs_if_needed(self, output_json_path, output):
         if self._options.reset_results or not self._host.filesystem.isfile(output_json_path):
@@ -268,12 +322,15 @@ class PerfTestsRunner(object):
             _log.error("Failed to merge output JSON file %s: %s" % (output_json_path, error))
         return None
 
-    def _generate_output_files(self, output_json_path, results_page_path, output):
+    def _generate_output_files(self, output_json_path, perf_webkit_json_path, results_page_path, output, perf_webkit_output):
         filesystem = self._host.filesystem
 
         json_output = json.dumps(output)
         filesystem.write_text_file(output_json_path, json_output)
 
+        if perf_webkit_json_path:
+            filesystem.write_text_file(perf_webkit_json_path, json.dumps(perf_webkit_output))
+
         if results_page_path:
             template_path = filesystem.join(self._port.perf_tests_dir(), 'resources/results-template.html')
             template = filesystem.read_text_file(template_path)
@@ -284,22 +341,30 @@ class PerfTestsRunner(object):
 
             filesystem.write_text_file(results_page_path, results_page)
 
-    def _upload_json(self, test_results_server, json_path, file_uploader=FileUploader):
-        uploader = file_uploader("https://%s/api/test/report" % test_results_server, 120)
+    def _upload_json(self, test_results_server, json_path, host_path="/api/test/report", file_uploader=FileUploader):
+        url = "https://%s%s" % (test_results_server, host_path)
+        uploader = file_uploader(url, 120)
         try:
             response = uploader.upload_single_text_file(self._host.filesystem, 'application/json', json_path)
         except Exception, error:
-            _log.error("Failed to upload JSON file in 120s: %s" % error)
+            _log.error("Failed to upload JSON file to %s in 120s: %s" % (url, error))
             return False
 
         response_body = [line.strip('\n') for line in response]
         if response_body != ['OK']:
-            _log.error("Uploaded JSON but got a bad response:")
-            for line in response_body:
-                _log.error(line)
-            return False
-
-        _log.info("JSON file uploaded.")
+            try:
+                parsed_response = json.loads('\n'.join(response_body))
+            except:
+                _log.error("Uploaded JSON to %s but got a bad response:" % url)
+                for line in response_body:
+                    _log.error(line)
+                return False
+            if parsed_response.get('status') != 'OK':
+                _log.error("Uploaded JSON to %s but got an error:" % url)
+                _log.error(json.dumps(parsed_response, indent=4))
+                return False
+
+        _log.info("JSON file uploaded to %s." % url)
         return True
 
     def _print_status(self, tests, expected, unexpected):
diff --git a/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py b/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_integrationtest.py
index 534104c..9ddd6f1 100644
@@ -29,6 +29,7 @@
 """Integration tests for run_perf_tests."""
 
 import StringIO
+import datetime
 import json
 import re
 import unittest2 as unittest
@@ -321,14 +322,17 @@ class MainTest(unittest.TestCase):
 
         uploaded = [False]
 
-        def mock_upload_json(hostname, json_path):
-            self.assertEqual(hostname, 'some.host')
-            self.assertEqual(json_path, '/mock-checkout/output.json')
+        def mock_upload_json(hostname, json_path, host_path=None):
+            # FIXME: Get rid of the hard-coded perf.webkit.org once we've completed the transition.
+            self.assertIn(hostname, ['some.host', 'perf.webkit.org'])
+            self.assertIn(json_path, ['/mock-checkout/output.json', '/mock-checkout/output-perf-webkit.json'])
+            self.assertIn(host_path, [None, '/api/report'])
             uploaded[0] = upload_suceeds
             return upload_suceeds
 
         runner._upload_json = mock_upload_json
         runner._timestamp = 123456789
+        runner._utc_timestamp = datetime.datetime(2013, 2, 8, 15, 19, 37, 460000)
         output_capture = OutputCapture()
         output_capture.capture_output()
         try:
@@ -521,41 +525,29 @@ class MainTest(unittest.TestCase):
 
         self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=False, expected_exit_code=PerfTestsRunner.EXIT_CODE_FAILED_UPLOADING)
 
-    def test_upload_json(self):
-        runner, port = self.create_runner()
-        port.host.filesystem.files['/mock-checkout/some.json'] = 'some content'
-
-        called = []
-        upload_single_text_file_throws = False
-        upload_single_text_file_return_value = StringIO.StringIO('OK')
-
-        class MockFileUploader:
-            def __init__(mock, url, timeout):
-                self.assertEqual(url, 'https://some.host/api/test/report')
-                self.assertTrue(isinstance(timeout, int) and timeout)
-                called.append('FileUploader')
-
-            def upload_single_text_file(mock, filesystem, content_type, filename):
-                self.assertEqual(filesystem, port.host.filesystem)
-                self.assertEqual(content_type, 'application/json')
-                self.assertEqual(filename, 'some.json')
-                called.append('upload_single_text_file')
-                if upload_single_text_file_throws:
-                    raise "Some exception"
-                return upload_single_text_file_return_value
-
-        runner._upload_json('some.host', 'some.json', MockFileUploader)
-        self.assertEqual(called, ['FileUploader', 'upload_single_text_file'])
+    def test_run_with_upload_json_should_generate_perf_webkit_json(self):
+        runner, port = self.create_runner_and_setup_results_template(args=['--output-json-path=/mock-checkout/output.json',
+            '--test-results-server', 'some.host', '--platform', 'platform1', '--builder-name', 'builder1', '--build-number', '123',
+            '--slave-config-json-path=/mock-checkout/slave-config.json'])
+        port.host.filesystem.write_text_file('/mock-checkout/slave-config.json', '{"key": "value1"}')
 
-        output = OutputCapture()
-        output.capture_output()
-        upload_single_text_file_return_value = StringIO.StringIO('Some error')
-        runner._upload_json('some.host', 'some.json', MockFileUploader)
-        _, _, logs = output.restore_output()
-        self.assertEqual(logs, 'Uploaded JSON but got a bad response:\nSome error\n')
-
-        # Throwing an exception upload_single_text_file shouldn't blow up _upload_json
-        called = []
-        upload_single_text_file_throws = True
-        runner._upload_json('some.host', 'some.json', MockFileUploader)
-        self.assertEqual(called, ['FileUploader', 'upload_single_text_file'])
+        self._test_run_with_json_output(runner, port.host.filesystem, upload_suceeds=True)
+        generated_json = json.loads(port.host.filesystem.files['/mock-checkout/output-perf-webkit.json'])
+        self.assertTrue(isinstance(generated_json, list))
+        self.assertEqual(len(generated_json), 1)
+
+        output = generated_json[0]
+        self.maxDiff = None
+        self.assertEqual(output['platform'], 'platform1')
+        self.assertEqual(output['buildNumber'], '123')
+        self.assertEqual(output['buildTime'], '2013-02-08T15:19:37.460000')
+        self.assertEqual(output['builderName'], 'builder1')
+        self.assertEqual(output['builderKey'], 'value1')
+        self.assertEqual(output['revisions'], {'WebKit': {'revision': '5678', 'timestamp': '2013-02-01 08:48:05 +0000'}})
+        self.assertEqual(output['tests'].keys(), ['Bindings'])
+        self.assertEqual(sorted(output['tests']['Bindings'].keys()), ['tests', 'url'])
+        self.assertEqual(output['tests']['Bindings']['url'], 'http://trac.webkit.org/browser/trunk/PerformanceTests/Bindings')
+        self.assertEqual(output['tests']['Bindings']['tests'].keys(), ['event-target-wrapper'])
+        self.assertEqual(output['tests']['Bindings']['tests']['event-target-wrapper'], {
+            'url': 'http://trac.webkit.org/browser/trunk/PerformanceTests/Bindings/event-target-wrapper.html',
+            'metrics': {'Time': {'current': [1486.0, 1471.0, 1510.0, 1505.0, 1478.0, 1490.0]}}})
diff --git a/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py b/Tools/Scripts/webkitpy/performance_tests/perftestsrunner_unittest.py
index fcc34a3..2973f77 100644
@@ -34,6 +34,7 @@ import re
 import unittest2 as unittest
 
 from webkitpy.common.host_mock import MockHost
+from webkitpy.common.system.outputcapture import OutputCapture
 from webkitpy.layout_tests.port.test import TestPort
 from webkitpy.performance_tests.perftestsrunner import PerfTestsRunner
 
@@ -147,3 +148,64 @@ class MainTest(unittest.TestCase):
         self.assertEqual(options.output_json_path, 'a/output.json')
         self.assertEqual(options.slave_config_json_path, 'a/source.json')
         self.assertEqual(options.test_results_server, 'somehost')
+
+    def test_upload_json(self):
+        runner, port = self.create_runner()
+        port.host.filesystem.files['/mock-checkout/some.json'] = 'some content'
+
+        class MockFileUploader:
+            called = []
+            upload_single_text_file_throws = False
+            upload_single_text_file_return_value = None
+
+            @classmethod
+            def reset(cls):
+                cls.called = []
+                cls.upload_single_text_file_throws = False
+                cls.upload_single_text_file_return_value = None
+
+            def __init__(mock, url, timeout):
+                self.assertEqual(url, 'https://some.host/some/path')
+                self.assertTrue(isinstance(timeout, int) and timeout)
+                mock.called.append('FileUploader')
+
+            def upload_single_text_file(mock, filesystem, content_type, filename):
+                self.assertEqual(filesystem, port.host.filesystem)
+                self.assertEqual(content_type, 'application/json')
+                self.assertEqual(filename, 'some.json')
+                mock.called.append('upload_single_text_file')
+                if mock.upload_single_text_file_throws:
+                    raise Exception
+                return mock.upload_single_text_file_return_value
+
+        MockFileUploader.upload_single_text_file_return_value = StringIO.StringIO('OK')
+        self.assertTrue(runner._upload_json('some.host', 'some.json', '/some/path', MockFileUploader))
+        self.assertEqual(MockFileUploader.called, ['FileUploader', 'upload_single_text_file'])
+
+        MockFileUploader.reset()
+        MockFileUploader.upload_single_text_file_return_value = StringIO.StringIO('Some error')
+        output = OutputCapture()
+        output.capture_output()
+        self.assertFalse(runner._upload_json('some.host', 'some.json', '/some/path', MockFileUploader))
+        _, _, logs = output.restore_output()
+        self.assertEqual(logs, 'Uploaded JSON to https://some.host/some/path but got a bad response:\nSome error\n')
+
+        # Throwing an exception in upload_single_text_file shouldn't blow up _upload_json
+        MockFileUploader.reset()
+        MockFileUploader.upload_single_text_file_throws = True
+        self.assertFalse(runner._upload_json('some.host', 'some.json', '/some/path', MockFileUploader))
+        self.assertEqual(MockFileUploader.called, ['FileUploader', 'upload_single_text_file'])
+
+        MockFileUploader.reset()
+        MockFileUploader.upload_single_text_file_return_value = StringIO.StringIO('{"status": "OK"}')
+        self.assertTrue(runner._upload_json('some.host', 'some.json', '/some/path', MockFileUploader))
+        self.assertEqual(MockFileUploader.called, ['FileUploader', 'upload_single_text_file'])
+
+        MockFileUploader.reset()
+        MockFileUploader.upload_single_text_file_return_value = StringIO.StringIO('{"status": "SomethingHasFailed", "failureStored": false}')
+        output = OutputCapture()
+        output.capture_output()
+        self.assertFalse(runner._upload_json('some.host', 'some.json', '/some/path', MockFileUploader))
+        _, _, logs = output.restore_output()
+        serialized_json = json.dumps({'status': 'SomethingHasFailed', 'failureStored': False}, indent=4)
+        self.assertEqual(logs, 'Uploaded JSON to https://some.host/some/path but got an error:\n%s\n' % serialized_json)