diff --git a/Tools/Scripts/webkitpy/benchmark_runner/README.md b/Tools/Scripts/webkitpy/benchmark_runner/README.md
index 0cad4a198e3e..7810c980877c 100644
--- a/Tools/Scripts/webkitpy/benchmark_runner/README.md
+++ b/Tools/Scripts/webkitpy/benchmark_runner/README.md
@@ -5,27 +5,37 @@ This is a script for automating the browser based benchmarks(e.g. Speedometer, J
 benchmark_runner
 ├── README.md
 ├── __init__.py
-├── benchmark_builder
-│   ├── __init__.py
-│   ├── benchmark_builder_factory.py
-│   ├── benchmark_builders.json
-│   ├── generic_benchmark_builder.py
-│   └── jetstream_benchmark_builder.py
+├── benchmark_builder.py
+├── benchmark_results.py
+├── benchmark_results_unittest.py
 ├── benchmark_runner.py
 ├── browser_driver
 │   ├── __init__.py
 │   ├── browser_driver.py
 │   ├── browser_driver_factory.py
-│   ├── browser_drivers.json
+│   ├── osx_browser_driver.py
 │   ├── osx_chrome_driver.py
+│   ├── osx_firefox_driver.py
 │   └── osx_safari_driver.py
 ├── data
 │   ├── patches
+│   │   ├── Dromaeo.patch
+│   │   ├── JSBench.patch
 │   │   ├── JetStream.patch
-│   │   └── Speedometer.patch
+│   │   ├── Kraken.patch
+│   │   ├── Octane.patch
+│   │   ├── Speedometer.patch
+│   │   └── SunSpider.patch
 │   └── plans
+│       ├── dromaeo-cssquery.plan
+│       ├── dromaeo-dom.plan
+│       ├── dromaeo-jslib.plan
 │       ├── jetstream.plan
-│       └── speedometer.plan
+│       ├── jsbench.plan
+│       ├── kraken.plan
+│       ├── octane.plan
+│       ├── speedometer.plan
+│       └── sunspider.plan
 ├── generic_factory.py
 ├── http_server_driver
 │   ├── __init__.py
@@ -33,8 +43,8 @@ benchmark_runner
 │   │   └── twisted_http_server.py
 │   ├── http_server_driver.py
 │   ├── http_server_driver_factory.py
-│   ├── http_server_drivers.json
 │   └── simple_http_server_driver.py
+├── run_benchmark.py
 └── utils.py
 ```
 ## Requirements
@@ -54,24 +64,25 @@ benchmark_runner
 To create a plan, you may refer to data/plans/jetstream.plan.
 ```json
 {
-    "http_server_driver": "SimpleHTTPServerDriver",
     "timeout" : 600,
     "count": 5,
-    "benchmark_builder": "JetStreamBenchmarkBuilder",
-    "original_benchmark": "../../../../PerformanceTests/JetStream",
+    "local_copy": "../../../../PerformanceTests/JetStream",
     "benchmark_patch": "data/patches/JetStream.patch",
     "entry_point": "JetStream/JetStream-1.0.1/index.html",
+    "config": {
+        "orientation": "portrait"
+    },
     "output_file": "jetstream.result"
 }
 ```
 A plan is a JSON-formatted dictionary which contains the following keys:
-* **http_server_driver**: (**case-sensitive**) the http server module you want to host the resources. Current available option is "SimpleHTTPServerHandle" which is based on python twisted framework.
 * **timeout**: time limit for **EACH RUN** of the benchmark. This prevents a run from getting stuck under extreme circumstances. The suggested time limit is 1.5-2x the time a normal run takes.
 * **count**: the number of times you want to run the benchmark
-* **benchmark_builder**: builder of the benchmark which is responsible for arranging benchmark before the web server serving the directory. In most case, 'GenericBenchmarkHandler' is sufficient. It copies the benchmark to a temporary directory and applies patch to benchmark. If you have special requirement, you could design your own benchmark handle, just like the 'JetStreamBenchmarkHandle' in this example.
-* **original_benchmark**: path of benchmark, a relative path to the root of this project ('benchmark_runner' directory)
+* **local_copy**: (**OPTIONAL**) path to a local copy of the benchmark, relative to the root of this project (the 'benchmark_runner' directory)
+* **remote_archive**: (**OPTIONAL**) URL of the remote (http/https) ZIP file that contains the benchmark.
 * **benchmark_patch**: (**OPTIONAL**) path of the patch to apply to the benchmark, relative to the root of this project (the 'benchmark_runner' directory)
-* **entry_point**: the relative url you want browser to launch (a relative path to the webRoot)
+* **entry_point**: the URL you want the browser to launch, relative to the benchmark directory
+* **config**: a dictionary that specifies environment configuration for the test (e.g. device orientation while the test is running)
 * **output_file**: the output file; this can be overridden by passing '--output-file' when invoking the run-benchmark script
 
 ### How to import a benchmark
@@ -102,4 +113,3 @@ Plan is a json-formatted dictionary which contains following keys
     * Follow the instructions below **ONLY IF NEEDED**. In most cases, you do not have to.
         * If you want to customize BrowserDriver for a specific browser/platform, you need to extend browser_driver/browser_driver.py and register your module in browser_driver/browser_drivers.json (see the sketch below).
         * If you want to customize HTTPServerDriver, you need to extend http_server_driver/http_server_driver.py and register your module in http_server_driver/http_server_drivers.json.
-    * If you want to customize BenchmarkBuilder, you need to extend benchmark_builder/generic_benchmark_builder register you module in benchmark_builder/benchmark_builders.json
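The customization bullets at the end of the README say to extend `browser_driver/browser_driver.py` when adding support for a new browser or platform, but give no example of what such a subclass looks like. The sketch below is a minimal, hypothetical starting point: the import path follows the directory layout shown in the tree, while the method names (`prepare_env`, `launch_url`, `close_browsers`) and the browser binary path are illustrative assumptions, not the actual interface, so check them against the real `BrowserDriver` base class before writing a driver.

```python
# Hypothetical sketch of a custom BrowserDriver subclass.
# The hook names used here (prepare_env, launch_url, close_browsers) are
# assumptions for illustration only; the real interface is defined in
# browser_driver/browser_driver.py.
import subprocess

from webkitpy.benchmark_runner.browser_driver.browser_driver import BrowserDriver


class OSXMyBrowserDriver(BrowserDriver):
    def prepare_env(self):
        # Reset any state (e.g. stale browser instances) before a run.
        self._process = None

    def launch_url(self, url, browser_build_path):
        # Launch the browser under test and point it at the benchmark URL
        # served by the HTTP server driver.
        browser_binary = '/Applications/MyBrowser.app/Contents/MacOS/MyBrowser'  # hypothetical path
        self._process = subprocess.Popen([browser_binary, url])

    def close_browsers(self):
        # Tear the browser down once the run (or its timeout) completes.
        if self._process:
            self._process.terminate()
```

Registering the new class is a separate step: the README mentions `browser_driver/browser_drivers.json`, although that file no longer appears in the updated tree above, so the registration mechanism should be confirmed against `browser_driver/browser_driver_factory.py` in the same directory.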