https://bugs.webkit.org/show_bug.cgi?id=144038
author     commit-queue@webkit.org <commit-queue@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Sat, 25 Apr 2015 08:13:03 +0000 (08:13 +0000)
committer  commit-queue@webkit.org <commit-queue@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Sat, 25 Apr 2015 08:13:03 +0000 (08:13 +0000)
Patch by Dewei Zhu <dewei_zhu@apple.com> on 2015-04-25
Reviewed by Ryosuke Niwa.

Add a script to run Speedometer and JetStream on a browser.

* Scripts/run-benchmark: Wrapper script to run benchmark.
(main):
* Scripts/webkitpy/benchmark_runner/README.md: Introduction of this script.
* Scripts/webkitpy/benchmark_runner/__init__.py: Added.
* Scripts/webkitpy/benchmark_runner/benchmark_builder/__init__.py: Added.
* Scripts/webkitpy/benchmark_runner/benchmark_builder/benchmark_builder_factory.py: Added.
(BenchmarkBuilderFactory):
* Scripts/webkitpy/benchmark_runner/benchmark_builder/benchmark_builders.json: Added.
* Scripts/webkitpy/benchmark_runner/benchmark_builder/generic_benchmark_builder.py: Added.
(GenericBenchmarkBuilder):
(GenericBenchmarkBuilder.prepare):
(GenericBenchmarkBuilder._copyBenchmarkToTempDir):
(GenericBenchmarkBuilder._applyPatch):
(GenericBenchmarkBuilder.clean):
* Scripts/webkitpy/benchmark_runner/benchmark_builder/jetstream_benchmark_builder.py: Added.
(JetStreamBenchmarkBuilder):
(JetStreamBenchmarkBuilder.prepare):
(JetStreamBenchmarkBuilder._runCreateScript):
* Scripts/webkitpy/benchmark_runner/benchmark_runner.py: Main module that manages the whole benchmark-running process.
(BenchmarkRunner):
(BenchmarkRunner.__init__):
(BenchmarkRunner.execute):
(BenchmarkRunner.dump):
(BenchmarkRunner.wrap):
(BenchmarkRunner.merge):
* Scripts/webkitpy/benchmark_runner/browser_driver/__init__.py: Added.
* Scripts/webkitpy/benchmark_runner/browser_driver/browser_driver.py: Added.
(BrowserDriver):
(BrowserDriver.prepareEnv):
(BrowserDriver.launchUrl):
(BrowserDriver.closeBrowser):
* Scripts/webkitpy/benchmark_runner/browser_driver/browser_driver_factory.py: Added.
(BrowserDriverFactory):
* Scripts/webkitpy/benchmark_runner/browser_driver/browser_drivers.json: Added.
* Scripts/webkitpy/benchmark_runner/browser_driver/osx_chrome_driver.py: Added.
(OSXChromeDriver):
(OSXChromeDriver.prepareEnv):
(OSXChromeDriver.launchUrl):
(OSXChromeDriver.closeBrowsers):
* Scripts/webkitpy/benchmark_runner/browser_driver/osx_safari_driver.py: Added.
(OSXSafariDriver):
(OSXSafariDriver.prepareEnv):
(OSXSafariDriver.launchUrl):
(OSXSafariDriver.closeBrowsers):
* Scripts/webkitpy/benchmark_runner/data/patches/JetStream.patch: Patch that makes JetStream compatible with this script.
* Scripts/webkitpy/benchmark_runner/data/patches/Speedometer.patch: Patch that makes Speedometer compatible with this script.
* Scripts/webkitpy/benchmark_runner/data/plans/jetstream.plan: Added.
* Scripts/webkitpy/benchmark_runner/data/plans/speedometer.plan: Added.
* Scripts/webkitpy/benchmark_runner/generic_factory.py: Factory for generic purpose.
(GenericFactory):
(GenericFactory.iterateGetItem):
(GenericFactory.create):
* Scripts/webkitpy/benchmark_runner/http_server_driver/__init__.py: Added.
* Scripts/webkitpy/benchmark_runner/http_server_driver/http_server/twisted_http_server.py: Added.
(ServerControl):
(ServerControl.render_GET):
(ServerControl.render_POST):
* Scripts/webkitpy/benchmark_runner/http_server_driver/http_server_driver.py: Added.
(HTTPServerDriver):
(HTTPServerDriver.serve):
(HTTPServerDriver.fetchResult):
* Scripts/webkitpy/benchmark_runner/http_server_driver/http_server_driver_factory.py: Added.
(HTTPServerDriverFactory):
* Scripts/webkitpy/benchmark_runner/http_server_driver/http_server_drivers.json: Added.
* Scripts/webkitpy/benchmark_runner/http_server_driver/simple_http_server_driver.py: Added.
(SimpleHTTPServerDriver):
(SimpleHTTPServerDriver.depends):
(SimpleHTTPServerDriver.__init__):
(SimpleHTTPServerDriver.serve):
(SimpleHTTPServerDriver.baseUrl):
(SimpleHTTPServerDriver.fetchResult):
* Scripts/webkitpy/benchmark_runner/utils.py: Utility module.
(ModuleNotFoundError):
(loadModule):
(getPathFromProjectRoot):
(loadJSONFromFile):
(TimeoutError):
(timeout):
(timeout.__init__):
(timeout.handle_timeout):
(timeout.__enter__):
(timeout.__exit__):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@183309 268f45cc-cd09-0410-ab3c-d52691b4dbfc

24 files changed:
Tools/ChangeLog
Tools/Scripts/run-benchmark [new file with mode: 0755]
Tools/Scripts/webkitpy/benchmark_runner/README.md [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/benchmark_builder/benchmark_builder_factory.py [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/benchmark_builder/benchmark_builders.json [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/benchmark_builder/generic_benchmark_builder.py [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/benchmark_builder/jetstream_benchmark_builder.py [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/benchmark_runner.py [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/browser_driver/browser_driver.py [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/browser_driver/browser_driver_factory.py [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/browser_driver/browser_drivers.json [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/browser_driver/osx_chrome_driver.py [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/browser_driver/osx_safari_driver.py [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/data/patches/JetStream.patch [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/data/patches/Speedometer.patch [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/data/plans/jetstream.plan [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/data/plans/speedometer.plan [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/generic_factory.py [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/http_server_driver/http_server/twisted_http_server.py [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/http_server_driver/http_server_driver.py [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/http_server_driver/http_server_driver_factory.py [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/http_server_driver/http_server_drivers.json [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/http_server_driver/simple_http_server_driver.py [new file with mode: 0644]
Tools/Scripts/webkitpy/benchmark_runner/utils.py [new file with mode: 0644]

index 894d4c2..b75c803 100644 (file)
@@ -1,3 +1,94 @@
+2015-04-25  Dewei Zhu  <dewei_zhu@apple.com>
+
+        https://bugs.webkit.org/show_bug.cgi?id=144038
+
+        Reviewed by Ryosuke Niwa.
+
+        Add a script to run Speedometer and JetStream on a browser.
+
+        * Scripts/run-benchmark: Wrapper script to run benchmark.
+        (main):
+        * Scripts/webkitpy/benchmark_runner/README.md: Introduction of this script.
+        * Scripts/webkitpy/benchmark_runner/__init__.py: Added.
+        * Scripts/webkitpy/benchmark_runner/benchmark_builder/__init__.py: Added.
+        * Scripts/webkitpy/benchmark_runner/benchmark_builder/benchmark_builder_factory.py: Added.
+        (BenchmarkBuilderFactory):
+        * Scripts/webkitpy/benchmark_runner/benchmark_builder/benchmark_builders.json: Added.
+        * Scripts/webkitpy/benchmark_runner/benchmark_builder/generic_benchmark_builder.py: Added.
+        (GenericBenchmarkBuilder):
+        (GenericBenchmarkBuilder.prepare):
+        (GenericBenchmarkBuilder._copyBenchmarkToTempDir):
+        (GenericBenchmarkBuilder._applyPatch):
+        (GenericBenchmarkBuilder.clean):
+        * Scripts/webkitpy/benchmark_runner/benchmark_builder/jetstream_benchmark_builder.py: Added.
+        (JetStreamBenchmarkBuilder):
+        (JetStreamBenchmarkBuilder.prepare):
+        (JetStreamBenchmarkBuilder._runCreateScript):
+        * Scripts/webkitpy/benchmark_runner/benchmark_runner.py: Main module that manages the whole benchmark-running process.
+        (BenchmarkRunner):
+        (BenchmarkRunner.__init__):
+        (BenchmarkRunner.execute):
+        (BenchmarkRunner.dump):
+        (BenchmarkRunner.wrap):
+        (BenchmarkRunner.merge):
+        * Scripts/webkitpy/benchmark_runner/browser_driver/__init__.py: Added.
+        * Scripts/webkitpy/benchmark_runner/browser_driver/browser_driver.py: Added.
+        (BrowserDriver):
+        (BrowserDriver.prepareEnv):
+        (BrowserDriver.launchUrl):
+        (BrowserDriver.closeBrowser):
+        * Scripts/webkitpy/benchmark_runner/browser_driver/browser_driver_factory.py: Added.
+        (BrowserDriverFactory):
+        * Scripts/webkitpy/benchmark_runner/browser_driver/browser_drivers.json: Added.
+        * Scripts/webkitpy/benchmark_runner/browser_driver/osx_chrome_driver.py: Added.
+        (OSXChromeDriver):
+        (OSXChromeDriver.prepareEnv):
+        (OSXChromeDriver.launchUrl):
+        (OSXChromeDriver.closeBrowsers):
+        * Scripts/webkitpy/benchmark_runner/browser_driver/osx_safari_driver.py: Added.
+        (OSXSafariDriver):
+        (OSXSafariDriver.prepareEnv):
+        (OSXSafariDriver.launchUrl):
+        (OSXSafariDriver.closeBrowsers):
+        * Scripts/webkitpy/benchmark_runner/data/patches/JetStream.patch: Patch that makes JetStream compatible with this script.
+        * Scripts/webkitpy/benchmark_runner/data/patches/Speedometer.patch: Patch that makes Speedometer compatible with this script.
+        * Scripts/webkitpy/benchmark_runner/data/plans/jetstream.plan: Added.
+        * Scripts/webkitpy/benchmark_runner/data/plans/speedometer.plan: Added.
+        * Scripts/webkitpy/benchmark_runner/generic_factory.py: Factory for generic purpose.
+        (GenericFactory):
+        (GenericFactory.iterateGetItem):
+        (GenericFactory.create):
+        * Scripts/webkitpy/benchmark_runner/http_server_driver/__init__.py: Added.
+        * Scripts/webkitpy/benchmark_runner/http_server_driver/http_server/twisted_http_server.py: Added.
+        (ServerControl):
+        (ServerControl.render_GET):
+        (ServerControl.render_POST):
+        * Scripts/webkitpy/benchmark_runner/http_server_driver/http_server_driver.py: Added.
+        (HTTPServerDriver):
+        (HTTPServerDriver.serve):
+        (HTTPServerDriver.fetchResult):
+        * Scripts/webkitpy/benchmark_runner/http_server_driver/http_server_driver_factory.py: Added.
+        (HTTPServerDriverFactory):
+        * Scripts/webkitpy/benchmark_runner/http_server_driver/http_server_drivers.json: Added.
+        * Scripts/webkitpy/benchmark_runner/http_server_driver/simple_http_server_driver.py: Added.
+        (SimpleHTTPServerDriver):
+        (SimpleHTTPServerDriver.depends):
+        (SimpleHTTPServerDriver.__init__):
+        (SimpleHTTPServerDriver.serve):
+        (SimpleHTTPServerDriver.baseUrl):
+        (SimpleHTTPServerDriver.fetchResult):
+        * Scripts/webkitpy/benchmark_runner/utils.py: Utility module.
+        (ModuleNotFoundError):
+        (loadModule):
+        (getPathFromProjectRoot):
+        (loadJSONFromFile):
+        (TimeoutError):
+        (timeout):
+        (timeout.__init__):
+        (timeout.handle_timeout):
+        (timeout.__enter__):
+        (timeout.__exit__):
+
 2015-04-24  Commit Queue  <commit-queue@webkit.org>
 
         Unreviewed, rolling out r183303.
diff --git a/Tools/Scripts/run-benchmark b/Tools/Scripts/run-benchmark
new file mode 100755 (executable)
index 0000000..5a57537
--- /dev/null
@@ -0,0 +1,37 @@
+#!/usr/bin/env python
+
+import argparse
+import logging
+from webkitpy.benchmark_runner.benchmark_runner import BenchmarkRunner
+
+
+_log = logging.getLogger()
+_log.setLevel(logging.INFO)
+ch = logging.StreamHandler()
+formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s')
+ch.setFormatter(formatter)
+_log.addHandler(ch)
+
+
+def main():
+    parser = argparse.ArgumentParser(description='Automate the browser based performance benchmarks')
+    parser.add_argument('--output-file', dest='output', default=None)
+    parser.add_argument('--build-directory', dest='buildDir', required=True)
+    parser.add_argument('--plan', dest='plan', required=True)
+    parser.add_argument('--platform', dest='platform', default='osx', choices=['osx', 'ios', 'windows'], required=True)
+    # FIXME: Should we add chrome as an option? Well, chrome uses webkit in iOS.
+    parser.add_argument('--browser', dest='browser', default='safari', choices=['safari', 'chrome'], required=True)
+    parser.add_argument('--debug', action='store_true')
+    args = parser.parse_args()
+    
+    if args.debug:
+        _log.setLevel(logging.DEBUG)
+    _log.debug('Initializing program with the following parameters')
+    _log.debug('\toutput file name\t: %s' % args.output)
+    _log.debug('\tbuild directory\t: %s' % args.buildDir)
+    _log.debug('\tplan name\t: %s', args.plan)
+    runner = BenchmarkRunner(args.plan, args.buildDir, args.output, args.platform, args.browser)
+    runner.execute()
+
+if __name__ == '__main__':
+    main()
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/README.md b/Tools/Scripts/webkitpy/benchmark_runner/README.md
new file mode 100644 (file)
index 0000000..52350df
--- /dev/null
@@ -0,0 +1,111 @@
+# Benchmark Runner 
+This is a script for automating browser-based benchmarks (e.g. Speedometer, JetStream).
+## Project Structure
+```
+benchmark_runner
+├── README.md
+├── __init__.py
+├── benchmark_builder
+│   ├── __init__.py
+│   ├── benchmark_builder_factory.py
+│   ├── benchmark_builders.json
+│   ├── generic_benchmark_builder.py
+│   └── jetstream_benchmark_builder.py
+├── benchmark_runner.py
+├── browser_driver
+│   ├── __init__.py
+│   ├── browser_driver.py
+│   ├── browser_driver_factory.py
+│   ├── browser_drivers.json
+│   ├── osx_chrome_driver.py
+│   └── osx_safari_driver.py
+├── data
+│   ├── patches
+│   │   ├── JetStream.patch
+│   │   └── Speedometer.patch
+│   └── plans
+│       ├── jetstream.plan
+│       └── speedometer.plan
+├── generic_factory.py
+├── http_server_driver
+│   ├── __init__.py
+│   ├── http_server
+│   │   └── twisted_http_server.py
+│   ├── http_server_driver.py
+│   ├── http_server_driver_factory.py
+│   ├── http_server_drivers.json
+│   └── simple_http_server_driver.py
+└── utils.py
+```
+## Requirements
+* Python modules:
+    * PyObjC
+    * psutil (optional)
+
+## HOW TOs
+### How to run
+```shell
+    python path/to/run-benchmark --build-directory path/to/browser/directory --plan json_format_plan --platform target_platform --browser target_browser
+```
+* **path/to/browser/directory**: should be the folder containing the browser executable (e.g. /Applications/ on OS X, which contains Safari.app)
+* **json_format_plan**: the benchmark plan you want to execute  
+
+### How to create a plan
+To create a plan, you may refer to data/plans/jetstream.plan:
+```json 
+{
+    "http_server_driver": "SimpleHTTPServerDriver", 
+    "benchmarks": [
+        {
+            "timeout" : 600,
+            "count": 5,
+            "benchmark_builder": "JetStreamBenchmarkBuilder",
+            "original_benchmark": "../../../../PerformanceTests/JetStream",
+            "benchmark_patch": "data/patches/JetStream.patch",
+            "entry_point": "JetStream/JetStream-1.0.1/index.html",
+            "output_file": "jetstream.result"
+        }
+    ]
+}
+```
+A plan is a JSON-formatted dictionary that contains the following keys (see the sketch after this list for how they are consumed):
+* **http_server_driver**: (**case-sensitive**) the HTTP server module used to host the resources. The currently available option is "SimpleHTTPServerDriver", which is based on the Python Twisted framework.
+* **benchmarks**: a list of benchmarks to run; each benchmark is a dictionary containing the following keys:
+    * **timeout**: time limit for **EACH RUN** of the benchmark, so the program cannot get stuck under extreme circumstances. A limit of 1.5-2x the time spent in a normal run is suggested.
+    * **count**: the number of times to run the benchmark
+    * **benchmark_builder**: the builder responsible for preparing the benchmark before the web server serves the directory. In most cases, 'GenericBenchmarkBuilder' is sufficient; it copies the benchmark to a temporary directory and applies the patch to it. If you have special requirements, you can design your own builder, like the 'JetStreamBenchmarkBuilder' used in this example.
+    * **original_benchmark**: path of the benchmark, relative to the root of this project (the 'benchmark_runner' directory)
+    * **benchmark_patch**: (**OPTIONAL**) path of the patch, relative to the root of this project (the 'benchmark_runner' directory)
+    * **entry_point**: the relative URL the browser should launch (relative to the web root)
+    * **output_file**: the output file for the results
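+
+For illustration only, the following Python sketch (not part of the runner; it assumes it is executed from the 'benchmark_runner' directory) shows roughly how these keys are consumed:
+```python
+# Illustrative sketch of how a plan entry is interpreted.
+import json
+import os
+
+PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))  # the 'benchmark_runner' directory
+
+with open(os.path.join(PROJECT_ROOT, 'data/plans/jetstream.plan'), 'r') as fp:
+    plan = json.load(fp)
+
+for benchmark in plan['benchmarks']:
+    # 'original_benchmark' and 'benchmark_patch' are resolved against the project root.
+    benchmark_path = os.path.abspath(os.path.join(PROJECT_ROOT, benchmark['original_benchmark']))
+    patch_path = benchmark.get('benchmark_patch')  # optional key
+    for iteration in range(int(benchmark['count'])):
+        # Each of the 'count' runs is bounded by benchmark['timeout'] seconds.
+        pass
+```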
+
+### How to import a benchmark
+* Modify the benchmark HTML file, making sure the page does the following:
+    * starts the test automatically when the page ('entry_point' in the plan) is launched
+    * organizes the results
+    * 'POST's the results to '/report', and 'GET's '/shutdown' once the post succeeds. Example:
+    * ```js
+      var xhr = new XMLHttpRequest();
+      xhr.open("POST", "/report");
+      xhr.setRequestHeader("Content-type", "application/json");
+      xhr.setRequestHeader("Content-length", results.length);
+      xhr.setRequestHeader("Connection", "close");
+      xhr.onreadystatechange = function() {
+          if(xhr.readyState == 4 && xhr.status == 200) {
+              var closeRequest = new XMLHttpRequest();
+              closeRequest.open("GET", '/shutdown');
+              closeRequest.send()
+          }
+      }
+      xhr.send(results);
+      ```
+* Create a patch file against the original benchmark
+    * Go to the directory that contains the benchmark directory (e.g. for JetStream, go to the PerformanceTests folder)
+    * Use ```git diff --relative HEAD > your.patch``` to create your patch
+    * (**Suggested**) move the patch to the 'data/patches' directory under the project directory
+* Create a plan for the benchmark (refer to **"How to create a plan"** for more details)
+* Follow the instructions below **ONLY IF NEEDED**. In most cases, you do not have to.
+    * If you want to customize the BrowserDriver for a specific browser/platform, extend browser_driver/browser_driver.py and register your module in browser_driver/browser_drivers.json (see the sketch after this list).
+    * If you want to customize the HTTPServerDriver, extend http_server_driver/http_server_driver.py and register your module in http_server_driver/http_server_drivers.json.
+    * If you want to customize the ResultWrapper, extend result_wrapper/base_result_wrapper.py and register your module in result_wrapper/result_wrappers.json.
+    * If you want to customize the BenchmarkBuilder, extend benchmark_builder/generic_benchmark_builder.py and register your module in benchmark_builder/benchmark_builders.json.
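+
+As an illustration of the first point above, here is a minimal sketch of a custom browser driver; the class, file, and application names are hypothetical and not part of this patch:
+```python
+# browser_driver/osx_firefox_driver.py -- hypothetical example
+import logging
+import os
+import subprocess
+
+from browser_driver import BrowserDriver
+
+_log = logging.getLogger(__name__)
+
+
+class OSXFirefoxDriver(BrowserDriver):
+
+    def prepareEnv(self):
+        self.closeBrowsers()
+
+    def launchUrl(self, url, browserBuildPath=None):
+        _log.info('Launching Firefox with url: %s' % url)
+        subprocess.Popen(['open', '-a', os.path.join(browserBuildPath, 'Firefox.app'), url]).communicate()
+
+    def closeBrowsers(self):
+        _log.info('Closing all existing Firefox processes')
+        subprocess.call(['osascript', '-e', 'tell application "Firefox" to quit'])
+```
+The new class would then be registered in browser_driver/browser_drivers.json under the appropriate platform and browser keys, using the same 'moduleName' and 'filePath' fields as the existing entries.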
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/benchmark_builder/benchmark_builder_factory.py b/Tools/Scripts/webkitpy/benchmark_runner/benchmark_builder/benchmark_builder_factory.py
new file mode 100644 (file)
index 0000000..9601d3f
--- /dev/null
@@ -0,0 +1,16 @@
+#!/usr/bin/env python
+
+import logging
+import json
+import os
+
+from webkitpy.benchmark_runner.generic_factory import GenericFactory
+from webkitpy.benchmark_runner.utils import loadJSONFromFile
+
+
+builderFileName = 'benchmark_builders.json'
+
+
+class BenchmarkBuilderFactory(GenericFactory):
+
+    products = loadJSONFromFile(os.path.join(os.path.dirname(__file__), builderFileName))
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/benchmark_builder/benchmark_builders.json b/Tools/Scripts/webkitpy/benchmark_runner/benchmark_builder/benchmark_builders.json
new file mode 100644 (file)
index 0000000..b8fc932
--- /dev/null
@@ -0,0 +1,10 @@
+{
+    "GenericBenchmarkBuilder": {
+        "filePath": "benchmark_builder.generic_benchmark_builder",
+        "moduleName": "GenericBenchmarkBuilder"
+    },
+    "JetStreamBenchmarkBuilder": {
+        "filePath": "benchmark_builder.jetstream_benchmark_builder",
+        "moduleName": "JetStreamBenchmarkBuilder"
+    }
+}
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/benchmark_builder/generic_benchmark_builder.py b/Tools/Scripts/webkitpy/benchmark_runner/benchmark_builder/generic_benchmark_builder.py
new file mode 100644 (file)
index 0000000..be54879
--- /dev/null
@@ -0,0 +1,43 @@
+#!/usr/bin/env python
+
+import logging
+import tempfile
+import os
+import shutil
+import subprocess
+
+from webkitpy.benchmark_runner.utils import getPathFromProjectRoot
+
+
+_log = logging.getLogger(__name__)
+
+
+class GenericBenchmarkBuilder(object):
+
+    def prepare(self, benchmarkPath, patch):
+        self._copyBenchmarkToTempDir(benchmarkPath)
+        return self._applyPatch(patch)
+
+    def _copyBenchmarkToTempDir(self, benchmarkPath):
+        self.webRoot = tempfile.mkdtemp()
+        _log.debug('Serving at webRoot: %s' % self.webRoot)
+        self.dest = os.path.join(self.webRoot, os.path.split(benchmarkPath)[1])
+        shutil.copytree(getPathFromProjectRoot(benchmarkPath), self.dest)
+
+    def _applyPatch(self, patch):
+        if not patch:
+            return self.webRoot
+        oldWorkingDirectory = os.getcwd()
+        os.chdir(self.webRoot)
+        errorCode = subprocess.call(['patch', '-p1', '-f', '-i', getPathFromProjectRoot(patch)])
+        os.chdir(oldWorkingDirectory)
+        if errorCode:
+            _log.error('Cannot apply patch, will skip the current benchmark')
+            self.clean()
+            return None
+        return self.webRoot
+
+    def clean(self):
+        _log.info('Cleaning benchmark')
+        if self.webRoot:
+            shutil.rmtree(self.webRoot)
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/benchmark_builder/jetstream_benchmark_builder.py b/Tools/Scripts/webkitpy/benchmark_runner/benchmark_builder/jetstream_benchmark_builder.py
new file mode 100644 (file)
index 0000000..3d958e3
--- /dev/null
@@ -0,0 +1,27 @@
+#!/usr/bin/env python
+
+import logging
+import os
+import subprocess
+
+from generic_benchmark_builder import GenericBenchmarkBuilder
+
+
+_log = logging.getLogger(__name__)
+
+
+class JetStreamBenchmarkBuilder(GenericBenchmarkBuilder):
+
+    def prepare(self, benchmarkPath, patch):
+        super(self.__class__, self)._copyBenchmarkToTempDir(benchmarkPath)
+        self._runCreateScript()
+        return super(self.__class__, self)._applyPatch(patch)
+
+    def _runCreateScript(self):
+        oldWorkingDirectory = os.getcwd()
+        os.chdir(self.dest)
+        _log.debug(self.dest)
+        errorCode = subprocess.call(['ruby', 'create.rb'])
+        os.chdir(oldWorkingDirectory)
+        if errorCode:
+            _log.error('Cannot create JetStream Benchmark')
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/benchmark_runner.py b/Tools/Scripts/webkitpy/benchmark_runner/benchmark_runner.py
new file mode 100644 (file)
index 0000000..8f75d45
--- /dev/null
@@ -0,0 +1,110 @@
+#!/usr/bin/env python
+
+import json
+import logging
+import shutil
+import signal
+import subprocess
+import tempfile
+import time
+import types
+import os
+import urlparse
+
+from benchmark_builder.benchmark_builder_factory import BenchmarkBuilderFactory
+from browser_driver.browser_driver_factory import BrowserDriverFactory
+from http_server_driver.http_server_driver_factory import HTTPServerDriverFactory
+from utils import loadModule, getPathFromProjectRoot
+from utils import timeout
+
+
+_log = logging.getLogger(__name__)
+
+
+class BenchmarkRunner(object):
+
+    def __init__(self, planFile, buildDir, outputFile, platform, browser):
+        _log.info('Initializing benchmark runner')
+        try:
+            with open(planFile, 'r') as fp:
+                self.plan = json.load(fp)
+                self.browserDriver = BrowserDriverFactory.create([platform, browser])
+                self.httpServerDriver = HTTPServerDriverFactory.create([self.plan['http_server_driver']])
+                self.benchmarks = self.plan['benchmarks']
+                self.buildDir = os.path.abspath(buildDir)
+                self.outputFile = outputFile if outputFile else 'benchmark.result'
+        except IOError:
+            _log.error('Cannot open plan file: %s' % planFile)
+        except ValueError:
+            _log.error('Plan file: %s may not follow JSON format' % planFile)
+        except:
+            raise
+
+    def execute(self):
+        _log.info('Starting to execute the plan')
+        for benchmark in self.benchmarks:
+            _log.info('Starting a new benchmark')
+            results = []
+            benchmarkBuilder = BenchmarkBuilderFactory.create([benchmark['benchmark_builder']])
+            webRoot = benchmarkBuilder.prepare(benchmark['original_benchmark'], benchmark['benchmark_patch'] if 'benchmark_patch' in benchmark else None)
+            for x in xrange(int(benchmark['count'])):
+                _log.info('Starting iteration %d of the current benchmark' % (x + 1))
+                self.httpServerDriver.serve(webRoot)
+                self.browserDriver.prepareEnv()
+                self.browserDriver.launchUrl(urlparse.urljoin(self.httpServerDriver.baseUrl(), benchmark['entry_point']), self.buildDir)
+                try:
+                    with timeout(benchmark['timeout']):
+                        result = json.loads(self.httpServerDriver.fetchResult())
+                        assert(result)
+                        results.append(result)
+                except:
+                    _log.error('No result. Something went wrong. Will skip current benchmark.')
+                    self.browserDriver.closeBrowsers()
+                    break
+                finally:
+                    self.browserDriver.closeBrowsers()
+                    _log.info('End of iteration %d of the current benchmark' % (x + 1))
+            results = self.wrap(results)
+            self.dump(results, benchmark['output_file'] if benchmark['output_file'] else self.outputFile)
+            benchmarkBuilder.clean()
+
+    @classmethod
+    def dump(cls, results, outputFile):
+        _log.info('Dumping the results to a file')
+        try:
+            with open(outputFile, 'w') as fp:
+                json.dump(results, fp)
+        except IOError:
+            _log.error('Cannot open output file: %s' % outputFile)
+            _log.error('Results are:\n %s', json.dumps(results))
+
+    @classmethod
+    def wrap(cls, dicts):
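+        """Merge a list of per-iteration result dictionaries into a single dictionary."""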
+        if not dicts:
+            return None
+        ret = {}
+        for dic in dicts:
+            ret = cls.merge(ret, dic)
+        return ret
+
+    @classmethod
+    def merge(cls, a, b):
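+        """Recursively merge two values of the same type: lists are concatenated,
+        equal values collapse into one, dictionaries are merged key by key, and
+        any other values are combined with '+'."""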
+        assert(isinstance(a, type(b)))
+        argType = type(a)
+        # Special handling for the list type; this must happen before the equality check.
+        if argType == types.ListType:
+            return a + b
+        if a == b:
+            return a
+        if argType == types.DictType:
+            result = {}
+            for key, value in a.items():
+                if key in b:
+                    result[key] = cls.merge(value, b[key])
+                else:
+                    result[key] = value
+            for key, value in b.items():
+                if key not in result:
+                    result[key] = value
+            return result
+        # for other types
+        return a + b
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/browser_driver/browser_driver.py b/Tools/Scripts/webkitpy/benchmark_runner/browser_driver/browser_driver.py
new file mode 100644 (file)
index 0000000..6bef150
--- /dev/null
@@ -0,0 +1,18 @@
+#!/usr/bin/env python
+
+import abc
+
+
+class BrowserDriver(object):
+
+    @abc.abstractmethod
+    def prepareEnv(self):
+        pass
+
+    @abc.abstractmethod
+    def launchUrl(self, url, browserBuildPath=None):
+        pass
+
+    @abc.abstractmethod
+    def closeBrowser(self):
+        pass
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/browser_driver/browser_driver_factory.py b/Tools/Scripts/webkitpy/benchmark_runner/browser_driver/browser_driver_factory.py
new file mode 100644 (file)
index 0000000..b998b4f
--- /dev/null
@@ -0,0 +1,16 @@
+#!/usr/bin/env python
+
+import logging
+import json
+import os
+
+from webkitpy.benchmark_runner.generic_factory import GenericFactory
+from webkitpy.benchmark_runner.utils import loadJSONFromFile
+
+
+driverFileName = 'browser_drivers.json'
+
+
+class BrowserDriverFactory(GenericFactory):
+
+    products = loadJSONFromFile(os.path.join(os.path.dirname(__file__), driverFileName))
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/browser_driver/browser_drivers.json b/Tools/Scripts/webkitpy/benchmark_runner/browser_driver/browser_drivers.json
new file mode 100644 (file)
index 0000000..c80b961
--- /dev/null
@@ -0,0 +1,12 @@
+{
+    "osx": {
+        "chrome": {
+            "moduleName": "OSXChromeDriver", 
+            "filePath": "browser_driver.osx_chrome_driver"
+        }, 
+        "safari": {
+            "moduleName": "OSXSafariDriver", 
+            "filePath": "browser_driver.osx_safari_driver"
+        }
+    } 
+}
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/browser_driver/osx_chrome_driver.py b/Tools/Scripts/webkitpy/benchmark_runner/browser_driver/osx_chrome_driver.py
new file mode 100644 (file)
index 0000000..266319a
--- /dev/null
@@ -0,0 +1,31 @@
+#!/usr/bin/env python
+
+import logging
+import os
+import subprocess
+import time
+
+# We assume that this driver can only be used when the platform is OS X.
+from AppKit import NSRunningApplication
+from browser_driver import BrowserDriver
+
+
+_log = logging.getLogger(__name__)
+
+
+class OSXChromeDriver(BrowserDriver):
+
+    def prepareEnv(self):
+        self.closeBrowsers()
+        self.chromePreferences = []
+
+    def launchUrl(self, url, browserBuildPath=None):
+        _log.info('Launching chrome: %s with url: %s' % (os.path.join(browserBuildPath, 'Google Chrome.app'), url))
+        # FIXME: May need to be modified for develop build, such as setting up libraries
+        subprocess.Popen(['open', '-a', os.path.join(browserBuildPath, 'Google Chrome.app'), '--args', '--homepage', url] + self.chromePreferences).communicate()
+
+    def closeBrowsers(self):
+        _log.info('Closing all existing chrome processes')
+        chromes = NSRunningApplication.runningApplicationsWithBundleIdentifier_('com.google.Chrome')
+        for chrome in chromes:
+            chrome.terminate()
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/browser_driver/osx_safari_driver.py b/Tools/Scripts/webkitpy/benchmark_runner/browser_driver/osx_safari_driver.py
new file mode 100644 (file)
index 0000000..fbf491d
--- /dev/null
@@ -0,0 +1,36 @@
+#!/usr/bin/env python
+
+import logging
+import os
+import subprocess
+import time
+
+# We assume that this driver can only be used when the platform is OS X.
+from AppKit import NSRunningApplication
+from browser_driver import BrowserDriver
+
+
+_log = logging.getLogger(__name__)
+
+
+class OSXSafariDriver(BrowserDriver):
+
+    def prepareEnv(self):
+        self.closeBrowsers()
+        self.safariPreferences = ["-HomePage", "about:blank", "-WarnAboutFraudulentWebsites", "0", "-ExtensionsEnabled", "0", "-ShowStatusBar", "0", "-NewWindowBehavior", "1", "-NewTabBehavior", "1"]
+
+    def launchUrl(self, url, browserBuildPath=None):
+        args = [os.path.join(browserBuildPath, 'Safari.app/Contents/MacOS/Safari')]
+        args.extend(self.safariPreferences)
+        _log.info('Launching safari: %s with url: %s' % (args[0], url))
+        subprocess.Popen(args, env={'DYLD_FRAMEWORK_PATH': browserBuildPath, 'DYLD_LIBRARY_PATH': browserBuildPath}, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+        # Wait for the Safari process to initialize; otherwise, the 'open'
+        # command may launch the system Safari.
+        time.sleep(3)
+        subprocess.Popen(['open', url])
+
+    def closeBrowsers(self):
+        _log.info('Closing all existing safari processes')
+        safariInstances = NSRunningApplication.runningApplicationsWithBundleIdentifier_('com.apple.Safari')
+        for safariInstance in safariInstances:
+            safariInstance.terminate()
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/data/patches/JetStream.patch b/Tools/Scripts/webkitpy/benchmark_runner/data/patches/JetStream.patch
new file mode 100644 (file)
index 0000000..7e0e575
--- /dev/null
@@ -0,0 +1,59 @@
+diff --git a/JetStream/JetStream-1.0.1/JetStreamDriver.js b/JetStream/JetStream-1.0.1/JetStreamDriver.js
+index 73ee420..60f587c 100644
+--- a/JetStream/JetStream-1.0.1/JetStreamDriver.js
++++ b/JetStream/JetStream-1.0.1/JetStreamDriver.js
+@@ -448,6 +448,14 @@ var JetStream = (function() {
+         return rawResults;
+     }
++    
++    function computeRefinedResults(){
++        var results = {};
++        for (var i = 0; i < benchmarks.length; ++i) {
++            results[benchmarks[i].name] = {"metrics" : {"Score" : {"current" : [benchmarks[i].times]}}};
++        }
++        return {"JetStream": {"metrics" : {"Score" : ["Geometric"]}, "tests" : results}};
++    }
+     function end()
+     {
+@@ -458,6 +466,23 @@ var JetStream = (function() {
+         isRunning = false;
+         hasAlreadyRun = true;
+         prepareToStart();
++        // submit result to server
++        results = JSON.stringify(computeRefinedResults());
++        var xhr = new XMLHttpRequest();
++        xhr.open("POST", "/report");
++       
++        xhr.setRequestHeader("Content-type", "application/json");
++        xhr.setRequestHeader("Content-length", results.length);
++        xhr.setRequestHeader("Connection", "close");
++ 
++        xhr.onreadystatechange = function() {
++        if(xhr.readyState == XMLHttpRequest.DONE && xhr.status == 200) {
++                closeRequest = new XMLHttpRequest();
++                closeRequest.open("GET", "/shutdown");
++                closeRequest.send()
++            }
++        }
++        xhr.send(results);
+     }
+     function iterate()
+diff --git a/JetStream/JetStream-1.0.1/index.html b/JetStream/JetStream-1.0.1/index.html
+index e27535c..001087d 100644
+--- a/JetStream/JetStream-1.0.1/index.html
++++ b/JetStream/JetStream-1.0.1/index.html
+@@ -34,8 +34,10 @@
+     window.onerror = function() { allIsGood = false; }
+     function initialize() {
+-        if (allIsGood)
++        if (allIsGood) {
+             JetStream.initialize();
++            setTimeout(function(){window.location = "javascript:JetStream.start()"}, 3000);
++        }
+         else
+             document.getElementById("body").innerHTML = "<h1>ERROR</h1><p>Encountered errors during page load. Refusing to run a partial benchmark suite.</p>";
+     }
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/data/patches/Speedometer.patch b/Tools/Scripts/webkitpy/benchmark_runner/data/patches/Speedometer.patch
new file mode 100644 (file)
index 0000000..33b0f32
--- /dev/null
@@ -0,0 +1,101 @@
+diff --git a/Speedometer/Full.html b/Speedometer/Full.html
+index 419b9f2..0b04c69 100644
+--- a/Speedometer/Full.html
++++ b/Speedometer/Full.html
+@@ -7,7 +7,6 @@
+     <script src="resources/main.js" defer></script>
+     <script src="resources/benchmark-runner.js" defer></script>
+     <script src="resources/benchmark-report.js" defer></script>
+-    <script src="../resources/statistics.js" defer></script>
+     <script src="resources/tests.js" defer></script>
+ </head>
+ <body>
+diff --git a/Speedometer/resources/benchmark-report.js b/Speedometer/resources/benchmark-report.js
+index c4b4c64..c4453cd 100644
+--- a/Speedometer/resources/benchmark-report.js
++++ b/Speedometer/resources/benchmark-report.js
+@@ -1,16 +1,9 @@
+ // This file can be customized to report results as needed.
+ (function () {
+-    if (!window.testRunner && location.search != '?webkit' && location.hash != '#webkit')
+-        return;
+-
+     if (window.testRunner)
+         testRunner.waitUntilDone();
+-    var scriptElement = document.createElement('script');
+-    scriptElement.src = '../resources/runner.js';
+-    document.head.appendChild(scriptElement);
+-
+     var styleElement = document.createElement('style');
+     styleElement.textContent = 'pre { padding-top: 600px; }';
+     document.head.appendChild(styleElement);
+@@ -36,10 +29,8 @@
+                     name: name,
+                     aggregator: aggregator};
+             }
+-            PerfTestRunner.prepareToMeasureValuesAsync(createTest(null, 'Total'));
+         },
+         didRunSuites: function (measuredValues) {
+-            PerfTestRunner.measureValueAsync(measuredValues.total);
+             valuesByIteration.push(measuredValues.tests);
+         },
+         didFinishLastIteration: function () {
+@@ -52,19 +43,30 @@
+                 values.push(value);
+                 values.aggregator = aggregator;
+             }
+-
++            var dict = {}
++            function addToDictionaryValue(value, suiteName, testName, subtestName) {
++                dict["Speedometer"] = dict["Speedometer"] || { "metrics" : { "Time" : [ "Total", ] }, "tests" : {}};
++                dict["Speedometer"]["tests"][suiteName] = dict["Speedometer"]["tests"][suiteName] || {"metrics" : { "Time" : [ "Total", ] }, "tests" : {}};
++                dict["Speedometer"]["tests"][suiteName]["tests"][testName] = dict["Speedometer"]["tests"][suiteName]["tests"][testName] || { "metrics" : { "Time" : [ "Total", ] }, "tests" : {}};
++                dict["Speedometer"]["tests"][suiteName]["tests"][testName]["tests"][subtestName] = dict["Speedometer"]["tests"][suiteName]["tests"][testName]["tests"][subtestName] || { "metrics" : { "Time" : { "current" : [[]] } }};
++                dict["Speedometer"]["tests"][suiteName]["tests"][testName]["tests"][subtestName]["metrics"]["Time"]["current"][0].push(value);
++            }
++            
+             valuesByIteration.forEach(function (measuredValues) {
+                 for (var suiteName in measuredValues) {
+                     var suite = measuredValues[suiteName];
+                     for (var testName in suite.tests) {
+                         var test = suite.tests[testName];
+-                        for (var subtestName in test.tests)
++                        for (var subtestName in test.tests) {
+                             addToMeasuredValue(test.tests[subtestName], suiteName + '/' + testName + '/' + subtestName);
++                            addToDictionaryValue(test.tests[subtestName], suiteName, testName, subtestName);
++                        }
+                         addToMeasuredValue(test.total, suiteName + '/' + testName, 'Total');
+                     }
+                     addToMeasuredValue(suite.total, suiteName, 'Total');
+                 }
+             });
++            var results = JSON.stringify(dict);
+             var fullNames = new Array;
+             for (var fullName in measuredValuesByFullName)
+@@ -72,8 +74,22 @@
+             for (var i = 0; i < fullNames.length; i++) {
+                 var values = measuredValuesByFullName[fullNames[i]];
+-                PerfTestRunner.reportValues(createTest(fullNames[i], values.aggregator, i + 1 == fullNames.length), values);
+             }
++            var xhr = new XMLHttpRequest();
++            xhr.open("POST", "/report");
++ 
++            xhr.setRequestHeader("Content-type", "application/json");
++            xhr.setRequestHeader("Content-length", results.length);
++            xhr.setRequestHeader("Connection", "close");
++ 
++            xhr.onreadystatechange = function() {
++                if(xhr.readyState == XMLHttpRequest.DONE && xhr.status == 200) {
++                    var closeRequest = new XMLHttpRequest();
++                    closeRequest.open("GET", "/shutdown");
++                    closeRequest.send()
++                }
++            }
++            xhr.send(results);
+         }
+     };
+ })();
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/data/plans/jetstream.plan b/Tools/Scripts/webkitpy/benchmark_runner/data/plans/jetstream.plan
new file mode 100644 (file)
index 0000000..9a89883
--- /dev/null
@@ -0,0 +1,14 @@
+{
+    "http_server_driver": "SimpleHTTPServerDriver", 
+    "benchmarks": [
+        {
+            "timeout" : 600,
+            "count": 5,
+            "benchmark_builder": "JetStreamBenchmarkBuilder",
+            "original_benchmark": "../../../../PerformanceTests/JetStream",
+            "benchmark_patch": "data/patches/JetStream.patch",
+            "entry_point": "JetStream/JetStream-1.0.1/index.html",
+            "output_file": "jetstream.result"
+        }
+    ]
+}
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/data/plans/speedometer.plan b/Tools/Scripts/webkitpy/benchmark_runner/data/plans/speedometer.plan
new file mode 100644 (file)
index 0000000..cff10a6
--- /dev/null
@@ -0,0 +1,14 @@
+{
+    "benchmarks": [
+        {
+            "benchmark_builder": "GenericBenchmarkBuilder",
+            "benchmark_patch": "data/patches/Speedometer.patch",
+            "original_benchmark": "../../../../PerformanceTests/Speedometer",
+            "count": 5,
+            "entry_point": "Speedometer/Full.html",
+            "output_file": "speedometer.result",
+            "timeout": 600
+        }
+    ],
+    "http_server_driver": "SimpleHTTPServerDriver"
+}
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/generic_factory.py b/Tools/Scripts/webkitpy/benchmark_runner/generic_factory.py
new file mode 100644 (file)
index 0000000..d710d4a
--- /dev/null
@@ -0,0 +1,31 @@
+#!/usr/bin/env python
+
+import logging
+import os
+
+from utils import loadModule, ModuleNotFoundError
+
+
+_log = logging.getLogger(__name__)
+
+
+class GenericFactory(object):
+
+    products = None
+
+    @classmethod
+    def iterateGetItem(cls, options, keys):
+        ret = options
+        for key in keys:
+            try:
+                ret = ret.__getitem__(key)
+            except KeyError:
+                raise
+        return ret
+
+    @classmethod
+    def create(cls, descriptions):
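+        """Instantiate the product found by walking 'products' with the keys in
+        'descriptions' (e.g. ['osx', 'safari'] for BrowserDriverFactory)."""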
+        try:
+            return loadModule(cls.iterateGetItem(cls.products, descriptions))()
+        except:
+            raise
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/http_server_driver/http_server/twisted_http_server.py b/Tools/Scripts/webkitpy/benchmark_runner/http_server_driver/http_server/twisted_http_server.py
new file mode 100644 (file)
index 0000000..172fcf2
--- /dev/null
@@ -0,0 +1,31 @@
+#!/usr/bin/env python
+
+from twisted.web import static, server
+from twisted.web.resource import Resource
+from twisted.internet import reactor
+import argparse
+import sys
+
+
+class ServerControl(Resource):
+    isLeaf = True
+
+    def render_GET(self, request):
+        reactor.stop()
+        return ""
+
+    def render_POST(self, request):
+        sys.stdout.write(request.content.getvalue())
+        return 'OK'
+
+
+if __name__ == '__main__':
+    parser = argparse.ArgumentParser(description='python TwistedHTTPServer.py webRoot')
+    parser.add_argument('webRoot')
+    args = parser.parse_args()
+    webRoot = static.File(args.webRoot)
+    serverControl = ServerControl()
+    webRoot.putChild('shutdown', serverControl)
+    webRoot.putChild('report', serverControl)
+    reactor.listenTCP(0, server.Site(webRoot))
+    reactor.run()
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/http_server_driver/http_server_driver.py b/Tools/Scripts/webkitpy/benchmark_runner/http_server_driver/http_server_driver.py
new file mode 100644 (file)
index 0000000..02094b8
--- /dev/null
@@ -0,0 +1,13 @@
+#!/usr/bin/env python
+
+from abc import abstractmethod
+
+
+class HTTPServerDriver(object):
+    @abstractmethod
+    def serve(self, webRoot):
+        pass
+
+    @abstractmethod
+    def fetchResult(self):
+        pass
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/http_server_driver/http_server_driver_factory.py b/Tools/Scripts/webkitpy/benchmark_runner/http_server_driver/http_server_driver_factory.py
new file mode 100644 (file)
index 0000000..f8e8506
--- /dev/null
@@ -0,0 +1,16 @@
+#!/usr/bin/env python
+
+import logging
+import json
+import os
+
+from webkitpy.benchmark_runner.generic_factory import GenericFactory
+from webkitpy.benchmark_runner.utils import loadJSONFromFile
+
+
+driverFileName = 'http_server_drivers.json'
+
+
+class HTTPServerDriverFactory(GenericFactory):
+
+    products = loadJSONFromFile(os.path.join(os.path.dirname(__file__), driverFileName))
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/http_server_driver/http_server_drivers.json b/Tools/Scripts/webkitpy/benchmark_runner/http_server_driver/http_server_drivers.json
new file mode 100644 (file)
index 0000000..6bb3bcd
--- /dev/null
@@ -0,0 +1,6 @@
+{
+  "SimpleHTTPServerDriver": {
+    "moduleName": "SimpleHTTPServerDriver", 
+    "filePath": "http_server_driver.simple_http_server_driver"
+  }
+}
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/http_server_driver/simple_http_server_driver.py b/Tools/Scripts/webkitpy/benchmark_runner/http_server_driver/simple_http_server_driver.py
new file mode 100644 (file)
index 0000000..a4292a6
--- /dev/null
@@ -0,0 +1,79 @@
+#!/usr/bin/env python
+
+import logging
+import os
+import re
+import socket
+import subprocess
+import time
+
+from http_server_driver import HTTPServerDriver
+
+
+_log = logging.getLogger(__name__)
+
+
+class SimpleHTTPServerDriver(HTTPServerDriver):
+
+    """This class depends on unix environment, need to be modified to achieve crossplatform compability
+    """
+
+    def __init__(self):
+        self.serverProcess = None
+        self.serverPort = 0
+        # FIXME: This may not be reliable.
+        _log.info('Finding the IP address of the current machine')
+        try:
+            self.ip = [ip for ip in socket.gethostbyname_ex(socket.gethostname())[2] if not ip.startswith("127.")][0]
+            _log.info('IP of current machine is: %s' % self.ip)
+        except:
+            _log.error('Cannot get the IP address of the current machine')
+            raise
+
+    def serve(self, webroot):
+        oldWorkingDirectory = os.getcwd()
+        os.chdir(os.path.dirname(os.path.abspath(__file__)))
+        _log.info('Launching an HTTP server')
+        self.serverProcess = subprocess.Popen(['/usr/bin/python', 'http_server/twisted_http_server.py', webroot], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+        os.chdir(oldWorkingDirectory)
+        maxAttempt = 5
+        interval = 0.5
+        _log.info('Fetching the port number of the HTTP server')
+        try:
+            import psutil
+            for attempt in xrange(maxAttempt):
+                try:
+                    self.serverPort = psutil.Process(self.serverProcess.pid).connections()[0][3][1]
+                    if self.serverPort:
+                        _log.info('HTTP Server is serving at port: %d', self.serverPort)
+                        break
+                except IndexError:
+                    pass
+                _log.info('Server port not found yet, retrying after %f seconds' % interval)
+                time.sleep(interval)
+                interval *= 2
+        except ImportError:
+            try:
+                for attempt in xrange(maxAttempt):
+                    try:
+                        p = subprocess.Popen(' '.join(['/usr/sbin/lsof', '-a', '-iTCP', '-sTCP:LISTEN', '-p', str(self.serverProcess.pid)]), shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+                        self.serverPort = int(re.findall('TCP \*:(\d+) \(LISTEN\)', p.communicate()[0])[0])
+                        if self.serverPort:
+                            _log.info('HTTP Server is serving at port: %d', self.serverPort)
+                            break
+                    # An exception means the server is not ready to serve yet; try again later.
+                    except ValueError:
+                        pass
+                    except IndexError:
+                        pass
+                    _log.info('Server port not found yet, retrying after %f seconds' % interval)
+                    time.sleep(interval)
+                    interval *= 2
+            except:
+                raise Exception("Server may not be serving")
+
+    def baseUrl(self):
+        return "http://%s:%d" % (self.ip, self.serverPort)
+
+    def fetchResult(self):
+        return self.serverProcess.communicate()[0]
diff --git a/Tools/Scripts/webkitpy/benchmark_runner/utils.py b/Tools/Scripts/webkitpy/benchmark_runner/utils.py
new file mode 100644 (file)
index 0000000..6bc4b68
--- /dev/null
@@ -0,0 +1,60 @@
+#!/usr/bin/env python
+
+import json
+import logging
+import os
+import signal
+
+
+_log = logging.getLogger(__name__)
+
+
+class ModuleNotFoundError(Exception):
+    pass
+
+
+def loadModule(moduleDesc):
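+    """Import and return the class described by moduleDesc, a dict with
+    'filePath' (dotted module path) and 'moduleName' (class name) keys,
+    as used by the *.json registration files."""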
+    try:
+        ret = getattr(__import__(moduleDesc['filePath'], globals(), locals(), moduleDesc['moduleName'], -1), moduleDesc['moduleName'])
+        return ret
+    except Exception as e:
+        raise ModuleNotFoundError('Module (%s) with path(%s) is not found' % (moduleDesc['moduleName'], moduleDesc['filePath']))
+
+
+def getPathFromProjectRoot(relativePathToProjectRoot):
+    # Starting from the directory containing this file, resolve the relative
+    # path given as the parameter and return an absolute path.
+    return os.path.abspath(os.path.join(os.path.dirname(os.path.abspath(__file__)), relativePathToProjectRoot))
+
+
+def loadJSONFromFile(filePath):
+    try:
+        jsonObject = json.load(open(filePath, 'r'))
+        assert(jsonObject)
+        return jsonObject
+    except:
+        raise Exception("Invalid json format or empty json was found in %s" % (filePath))
+
+
+# Borrowed from
+# 'http://stackoverflow.com/questions/2281850/timeout-function-if-it-takes-too-long-to-finish'
+class TimeoutError(Exception):
+    pass
+
+
+class timeout:
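+    """Context manager that raises TimeoutError if the wrapped block runs
+    longer than 'seconds'. Relies on signal.SIGALRM, so it only works on
+    Unix-like systems.
+
+    Usage:
+        with timeout(seconds=600):
+            do_something_that_may_hang()
+    """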
+
+    def __init__(self, seconds=1, error_message='Timeout'):
+        self.seconds = seconds
+        self.error_message = error_message
+
+    def handle_timeout(self, signum, frame):
+        raise TimeoutError(self.error_message)
+
+    def __enter__(self):
+        signal.signal(signal.SIGALRM, self.handle_timeout)
+        signal.alarm(self.seconds)
+
+    def __exit__(self, type, value, traceback):
+        signal.alarm(0)