Add benchmark for WebKit process launch times
author: commit-queue@webkit.org <commit-queue@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Thu, 19 Jul 2018 21:54:48 +0000 (21:54 +0000)
committer: commit-queue@webkit.org <commit-queue@webkit.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Thu, 19 Jul 2018 21:54:48 +0000 (21:54 +0000)
https://bugs.webkit.org/show_bug.cgi?id=186414

Patch by Ben Richards <benton_richards@apple.com> on 2018-07-19
Reviewed by Ryosuke Niwa.

Added two benchmarks, one for measuring browser new tab launch time and one for browser startup time.

* LaunchTime/.gitignore: Added.
* LaunchTime/feedback_client.html: Added.
Displays benchmark progress in browser
* LaunchTime/feedback_server.py: Added.
(FeedbackServer): Sends data to feedback_client via websocket
(FeedbackServer.__init__):
(FeedbackServer._create_app):
(FeedbackServer._start_server):
(FeedbackServer._send_all_messages):
(FeedbackServer.start):
(FeedbackServer.stop):
(FeedbackServer.send_message): Send a message to the feedback_client
(FeedbackServer.wait_until_client_has_loaded): Wait until the feedback_client has opened a websocket connection to the feedback_server
(FeedbackServer.MainHandler): Handler factory to create handler that serves feedback_client.html
(FeedbackServer.MainHandler.Handler):
(FeedbackServer.MainHandler.Handler.get):
(FeedbackServer.WSHandler): Handler factory to create handler that sends data to feedback client
(FeedbackServer.WSHandler.Handler):
(FeedbackServer.WSHandler.Handler.open): On websocket connection opened
(FeedbackServer.WSHandler.Handler.on_close): On websocket connection closed
* LaunchTime/launch_time.py: Added.
(DefaultLaunchTimeHandler): Abstract HTTP request handler for launch time benchmarks
(DefaultLaunchTimeHandler.get_test_page): Default test page to be overridden by benchmarks
(DefaultLaunchTimeHandler.get_blank_page):
(DefaultLaunchTimeHandler.on_receive_stop_time):
(DefaultLaunchTimeHandler.do_HEAD):
(DefaultLaunchTimeHandler.do_GET):
(DefaultLaunchTimeHandler.do_POST):
(DefaultLaunchTimeHandler.log_message): Suppresses HTTP logs from SimpleHTTPRequestHandler
(LaunchTimeBenchmark): Abstract class which launch time benchmarks inherit from and override methods desired to customize
(LaunchTimeBenchmark.__init__):
(LaunchTimeBenchmark._parse_browser_bundle_path): Parser for bundle path option
(LaunchTimeBenchmark._parse_args):
(LaunchTimeBenchmark._run_server): Target function for main server thread
(LaunchTimeBenchmark._setup_servers):
(LaunchTimeBenchmark._clean_up):
(LaunchTimeBenchmark._exit_due_to_exception):
(LaunchTimeBenchmark._geometric_mean):
(LaunchTimeBenchmark._standard_deviation):
(LaunchTimeBenchmark._compute_results): Returns the mean and standard deviation of a list of results, pretty-printing them when should_print=True is specified
(LaunchTimeBenchmark._wait_times): Mimics numpy.linspace
(LaunchTimeBenchmark.open_tab): Open a browser tab with the html given by self.response_handler.get_test_page
(LaunchTimeBenchmark.launch_browser): Open a browser to either the feedback client (if option is set) or a blank page
(LaunchTimeBenchmark.quit_browser):
(LaunchTimeBenchmark.quit_browser.quit_app):
(LaunchTimeBenchmark.quit_browser.is_app_closed):
(LaunchTimeBenchmark.close_tab):
(LaunchTimeBenchmark.wait):
(LaunchTimeBenchmark.log): Print to console and send to feedback client if --feedback-in-browser flag is used
(LaunchTimeBenchmark.log_verbose): Only logs if --verbose flag is used
(LaunchTimeBenchmark.run):
(LaunchTimeBenchmark.group_init): Initialization done before each round of iterations
(LaunchTimeBenchmark.run_iteration):
(LaunchTimeBenchmark.initialize): Convenience method to be overridden by subclasses which is called at the end of __init__
(LaunchTimeBenchmark.will_parse_arguments): Called before argparse.parse_args to let subclasses add new command line arguments
(LaunchTimeBenchmark.did_parse_arguments): Called after argparse.parse_args to let subclass initialize based on command line arguments
* LaunchTime/new_tab.py: Added.
(NewTabBenchmark):
(NewTabBenchmark._parse_wait_time): Parser for wait time option
(NewTabBenchmark.initialize):
(NewTabBenchmark.run_iteration):
(NewTabBenchmark.group_init):
(NewTabBenchmark.will_parse_arguments):
(NewTabBenchmark.did_parse_arguments):
(NewTabBenchmark.ResponseHandler):
(NewTabBenchmark.ResponseHandler.Handler):
(NewTabBenchmark.ResponseHandler.Handler.get_test_page):
* LaunchTime/startup.py: Added.
(StartupBenchmark): This benchmark measures browser startup time and initial page load time
(StartupBenchmark.initialize):
(StartupBenchmark.run_iteration):
(StartupBenchmark.ResponseHandler):
(StartupBenchmark.ResponseHandler.Handler):
(StartupBenchmark.ResponseHandler.Handler.get_test_page):
* LaunchTime/thirdparty/__init__.py: Added.
(AutoinstallImportHook): Auto installs tornado package for feedback server
(AutoinstallImportHook.__init__):
(AutoinstallImportHook._ensure_autoinstalled_dir_is_in_sys_path):
(AutoinstallImportHook.find_module):
(AutoinstallImportHook._install_tornado):
(AutoinstallImportHook.greater_than_equal_to_version):
(AutoinstallImportHook._install):
(AutoinstallImportHook.get_latest_pypi_url):
(AutoinstallImportHook.install_binary):
(autoinstall_everything):
(get_os_info):
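The `_wait_times` helper noted above mimics numpy.linspace so the harness can scan a range of inter-iteration wait durations without a numpy dependency. A minimal standalone sketch of that generator (the free-function name and parameters here are illustrative; in the patch it is a method reading `self.wait_time_low`, `self.wait_time_high`, and `self.iteration_groups`):

```python
def wait_times(low, high, groups):
    """Yield `groups` evenly spaced wait durations from `low` to `high`,
    inclusive, mimicking numpy.linspace."""
    if groups == 1:
        # A single group uses only the high end of the range.
        yield high
        return
    step = float(high - low) / (groups - 1)
    for i in range(groups):
        yield low + step * i
```

For example, `wait_times(0.1, 1.0, 4)` yields values stepping by 0.3 from 0.1 up to 1.0, one per iteration group.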

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@234006 268f45cc-cd09-0410-ab3c-d52691b4dbfc

PerformanceTests/ChangeLog
PerformanceTests/LaunchTime/.gitignore [new file with mode: 0644]
PerformanceTests/LaunchTime/feedback_client.html [new file with mode: 0644]
PerformanceTests/LaunchTime/feedback_server.py [new file with mode: 0644]
PerformanceTests/LaunchTime/launch_time.py [new file with mode: 0644]
PerformanceTests/LaunchTime/new_tab.py [new file with mode: 0755]
PerformanceTests/LaunchTime/startup.py [new file with mode: 0755]
PerformanceTests/LaunchTime/thirdparty/__init__.py [new file with mode: 0644]

index c053eea..b4fb22d 100644 (file)
@@ -1,3 +1,99 @@
+2018-07-19  Ben Richards  <benton_richards@apple.com>
+
+        Add benchmark for WebKit process launch times
+        https://bugs.webkit.org/show_bug.cgi?id=186414
+
+        Reviewed by Ryosuke Niwa.
+
+        Added two benchmarks, one for measuring browser new tab launch time and one for browser startup time.
+
+        * LaunchTime/.gitignore: Added.
+        * LaunchTime/feedback_client.html: Added.
+        Displays benchmark progress in browser
+        * LaunchTime/feedback_server.py: Added.
+        (FeedbackServer): Sends data to feedback_client via websocket
+        (FeedbackServer.__init__):
+        (FeedbackServer._create_app):
+        (FeedbackServer._start_server):
+        (FeedbackServer._send_all_messages):
+        (FeedbackServer.start):
+        (FeedbackServer.stop):
+        (FeedbackServer.send_message): Send a message to the feedback_client
+        (FeedbackServer.wait_until_client_has_loaded): Wait until the feedback_client has opened a websocket connection to the feedback_server
+        (FeedbackServer.MainHandler): Handler factory to create handler that serves feedback_client.html
+        (FeedbackServer.MainHandler.Handler):
+        (FeedbackServer.MainHandler.Handler.get):
+        (FeedbackServer.WSHandler): Handler factory to create handler that sends data to feedback client
+        (FeedbackServer.WSHandler.Handler):
+        (FeedbackServer.WSHandler.Handler.open): On websocket connection opened
+        (FeedbackServer.WSHandler.Handler.on_close): On websocket connection closed
+        * LaunchTime/launch_time.py: Added.
+        (DefaultLaunchTimeHandler): Abstract HTTP request handler for launch time benchmarks
+        (DefaultLaunchTimeHandler.get_test_page): Default test page to be overridden by benchmarks
+        (DefaultLaunchTimeHandler.get_blank_page):
+        (DefaultLaunchTimeHandler.on_receive_stop_time):
+        (DefaultLaunchTimeHandler.do_HEAD):
+        (DefaultLaunchTimeHandler.do_GET):
+        (DefaultLaunchTimeHandler.do_POST):
+        (DefaultLaunchTimeHandler.log_message): Suppresses HTTP logs from SimpleHTTPRequestHandler
+        (LaunchTimeBenchmark): Abstract class which launch time benchmarks inherit from and override methods desired to customize
+        (LaunchTimeBenchmark.__init__):
+        (LaunchTimeBenchmark._parse_browser_bundle_path): Parser for bundle path option
+        (LaunchTimeBenchmark._parse_args):
+        (LaunchTimeBenchmark._run_server): Target function for main server thread
+        (LaunchTimeBenchmark._setup_servers):
+        (LaunchTimeBenchmark._clean_up):
+        (LaunchTimeBenchmark._exit_due_to_exception):
+        (LaunchTimeBenchmark._geometric_mean):
+        (LaunchTimeBenchmark._standard_deviation):
+        (LaunchTimeBenchmark._compute_results): Returns the mean and standard deviation of a list of results, pretty-printing them when should_print=True is specified
+        (LaunchTimeBenchmark._wait_times): Mimics numpy.linspace
+        (LaunchTimeBenchmark.open_tab): Open a browser tab with the html given by self.response_handler.get_test_page
+        (LaunchTimeBenchmark.launch_browser): Open a browser to either the feedback client (if option is set) or a blank page
+        (LaunchTimeBenchmark.quit_browser):
+        (LaunchTimeBenchmark.quit_browser.quit_app):
+        (LaunchTimeBenchmark.quit_browser.is_app_closed):
+        (LaunchTimeBenchmark.close_tab):
+        (LaunchTimeBenchmark.wait):
+        (LaunchTimeBenchmark.log): Print to console and send to feedback client if --feedback-in-browser flag is used
+        (LaunchTimeBenchmark.log_verbose): Only logs if --verbose flag is used
+        (LaunchTimeBenchmark.run):
+        (LaunchTimeBenchmark.group_init): Initialization done before each round of iterations
+        (LaunchTimeBenchmark.run_iteration):
+        (LaunchTimeBenchmark.initialize): Convenience method to be overridden by subclasses which is called at the end of __init__
+        (LaunchTimeBenchmark.will_parse_arguments): Called before argparse.parse_args to let subclasses add new command line arguments
+        (LaunchTimeBenchmark.did_parse_arguments): Called after argparse.parse_args to let subclass initialize based on command line arguments
+        * LaunchTime/new_tab.py: Added.
+        (NewTabBenchmark):
+        (NewTabBenchmark._parse_wait_time): Parser for wait time option
+        (NewTabBenchmark.initialize):
+        (NewTabBenchmark.run_iteration):
+        (NewTabBenchmark.group_init):
+        (NewTabBenchmark.will_parse_arguments):
+        (NewTabBenchmark.did_parse_arguments):
+        (NewTabBenchmark.ResponseHandler):
+        (NewTabBenchmark.ResponseHandler.Handler):
+        (NewTabBenchmark.ResponseHandler.Handler.get_test_page):
+        * LaunchTime/startup.py: Added.
+        (StartupBenchmark): This benchmark measures browser startup time and initial page load time
+        (StartupBenchmark.initialize):
+        (StartupBenchmark.run_iteration):
+        (StartupBenchmark.ResponseHandler):
+        (StartupBenchmark.ResponseHandler.Handler):
+        (StartupBenchmark.ResponseHandler.Handler.get_test_page):
+        * LaunchTime/thirdparty/__init__.py: Added.
+        (AutoinstallImportHook): Auto installs tornado package for feedback server
+        (AutoinstallImportHook.__init__):
+        (AutoinstallImportHook._ensure_autoinstalled_dir_is_in_sys_path):
+        (AutoinstallImportHook.find_module):
+        (AutoinstallImportHook._install_tornado):
+        (AutoinstallImportHook.greater_than_equal_to_version):
+        (AutoinstallImportHook._install):
+        (AutoinstallImportHook.get_latest_pypi_url):
+        (AutoinstallImportHook.install_binary):
+        (autoinstall_everything):
+        (get_os_info):
+
 2018-06-25  Jon Lee  <jonlee@apple.com>
 
         [MotionMark] Add support for version numbers
diff --git a/PerformanceTests/LaunchTime/.gitignore b/PerformanceTests/LaunchTime/.gitignore
new file mode 100644 (file)
index 0000000..2136d14
--- /dev/null
@@ -0,0 +1 @@
+autoinstalled
diff --git a/PerformanceTests/LaunchTime/feedback_client.html b/PerformanceTests/LaunchTime/feedback_client.html
new file mode 100644 (file)
index 0000000..3d9779b
--- /dev/null
@@ -0,0 +1,37 @@
+<!DOCTYPE html>
+<html>
+  <head>
+    <title>Launch Time Benchmark</title>
+    <script src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
+  </head>
+  <body style="font-family: -apple-system; font-size:1.5em;">
+    <h2>Benchmark Progress</h2>
+    <p id="output" style="white-space: pre-line;"></p>
+    <strong id="done"></strong>
+  </body>
+</html>
+<script>
+  if (!("WebSocket" in window)) {
+    alert("Your browser does not support web sockets");
+  } else {
+    setup();
+  }
+
+  function setup() {
+    const host = "ws://localhost:{{ port }}/ws";
+    const socket = new WebSocket(host);
+
+    if (socket) {
+      socket.onmessage = message => showServerMessage(message.data);
+      socket.onclose = () => document.getElementById('done').textContent = 'DONE';
+    } else {
+      console.log("invalid socket");
+    }
+
+    function showServerMessage(text) {
+      text = text.replace(/\n/g, "\r\n");
+      const output = document.getElementById('output');
+      output.textContent = output.textContent.concat(text)
+    }
+  }
+</script>
diff --git a/PerformanceTests/LaunchTime/feedback_server.py b/PerformanceTests/LaunchTime/feedback_server.py
new file mode 100644 (file)
index 0000000..193a5cf
--- /dev/null
@@ -0,0 +1,97 @@
+from Queue import Queue
+import logging
+import os
+import socket
+import threading
+import time
+
+# This line makes sure that tornado is installed and in sys.path
+import thirdparty.autoinstalled.tornado
+import tornado.ioloop
+import tornado.web
+import tornado.websocket
+import tornado.template
+from tornado.httpserver import HTTPServer
+
+
+class FeedbackServer:
+    def __init__(self):
+        self._client = None
+        self._feedback_server_thread = None
+        self._port = 9090
+        self._application = None
+        self._server_is_ready = None
+        self._io_loop = None
+        self._messages = Queue()
+        self._client_loaded = threading.Semaphore(0)
+
+    def _create_app(self):
+        return HTTPServer(tornado.web.Application([
+            (r'/ws', FeedbackServer.WSHandler(self)),
+            (r'/', FeedbackServer.MainHandler(self)),
+        ]))
+
+    def _start_server(self):
+        self._io_loop = tornado.ioloop.IOLoop()
+        self._io_loop.make_current()
+        self._application = self._create_app()
+        while True:
+            try:
+                self._application.listen(self._port)
+                print 'Running feedback server at http://localhost:{}'.format(self._port)
+                break
+            except socket.error:
+                self._port += 1
+            except:
+                print 'Feedback server failed to start'
+                break
+        self._server_is_ready.release()
+        self._io_loop.start()
+
+    def _send_all_messages(self):
+        if self._client:
+            while not self._messages.empty():
+                message = self._messages.get()
+                self._client.write_message(message)
+
+    def start(self):
+        self._server_is_ready = threading.Semaphore(0)
+        self._feedback_server_thread = threading.Thread(target=self._start_server)
+        self._feedback_server_thread.start()
+        self._server_is_ready.acquire()
+        return self._port
+
+    def stop(self):
+        self._client_loaded = threading.Semaphore(0)
+        self._application.stop()
+        self._io_loop.add_callback(lambda x: x.stop(), self._io_loop)
+        self._feedback_server_thread.join()
+
+    def send_message(self, new_message):
+        self._messages.put(new_message)
+        self._io_loop.add_callback(self._send_all_messages)
+
+    def wait_until_client_has_loaded(self):
+        self._client_loaded.acquire()
+
+    @staticmethod
+    def MainHandler(feedback_server):
+        class Handler(tornado.web.RequestHandler):
+            def get(self):
+                loader = tornado.template.Loader('.')
+                client_path = os.path.join(os.path.abspath(os.path.dirname(__file__)), 'feedback_client.html')
+                self.write(loader.load(client_path).generate(port=feedback_server._port))
+
+        return Handler
+
+    @staticmethod
+    def WSHandler(feedback_server):
+        class Handler(tornado.websocket.WebSocketHandler):
+            def open(self):
+                feedback_server._client = self
+                feedback_server._client_loaded.release()
+
+            def on_close(self):
+                feedback_server._client = None
+
+        return Handler
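FeedbackServer above buffers messages in a Queue and flushes them on the IO loop via `send_message`/`_send_all_messages`, so messages logged before the browser connects are delivered once the websocket opens. A minimal sketch of just that buffer-and-flush pattern, stripped of tornado and threading (class and attribute names here are illustrative, not from the patch):

```python
from collections import deque

class MessageBuffer(object):
    """Buffer messages until a client attaches, then deliver in order
    (the core of FeedbackServer.send_message/_send_all_messages)."""

    def __init__(self):
        self._pending = deque()
        self.client = None  # set when a websocket connection opens

    def send_message(self, message):
        # Always enqueue first, then attempt delivery.
        self._pending.append(message)
        self._flush()

    def _flush(self):
        # Deliver only while a client exists; otherwise keep buffering.
        while self.client is not None and self._pending:
            self.client.write_message(self._pending.popleft())
```

In the real server `_flush` runs as an IOLoop callback and `write_message` is tornado's websocket API; here a client is anything with a `write_message` method.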
diff --git a/PerformanceTests/LaunchTime/launch_time.py b/PerformanceTests/LaunchTime/launch_time.py
new file mode 100644 (file)
index 0000000..c637103
--- /dev/null
@@ -0,0 +1,326 @@
+import SimpleHTTPServer
+import SocketServer
+import argparse
+import logging
+from math import sqrt
+from operator import mul
+import os
+from subprocess import call, check_output
+import sys
+import threading
+import time
+
+from feedback_server import FeedbackServer
+
+
+# Suppress logs from feedback server
+logging.getLogger().setLevel(logging.FATAL)
+
+
+class DefaultLaunchTimeHandler(SimpleHTTPServer.SimpleHTTPRequestHandler):
+    def get_test_page(self):
+        return '''<!DOCTYPE html>
+        <html>
+          <head>
+            <title>Launch Time Benchmark</title>
+            <meta http-equiv="Content-Type" content="text/html" />
+            <script>
+                function sendDone() {
+                    const time = performance.timing.navigationStart
+                    const request = new XMLHttpRequest();
+                    request.open("POST", "done", false);
+                    request.setRequestHeader('Content-Type', 'application/json');
+                    request.send(JSON.stringify(time));
+                }
+                window.onload = sendDone;
+            </script>
+          </head>
+          <body>
+            <h1>Launch Time Benchmark</h1>
+          </body>
+        </html>
+        '''
+
+    def get_blank_page(self):
+        return '''<!DOCTYPE html>
+        <html>
+          <head>
+            <title>Launch Time Benchmark</title>
+            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+          </head>
+        </html>'''
+
+    def on_receive_stop_time(self, time):
+        pass
+
+    def do_HEAD(self):
+        self.send_response(200)
+        self.send_header('Content-type', 'text/html')
+        self.end_headers()
+
+    def do_GET(self):
+        self.send_response(200)
+        self.send_header('Content-type', 'text/html')
+        self.end_headers()
+        self.wfile.write(self.get_blank_page() if self.path == '/blank' else self.get_test_page())
+        self.wfile.close()
+
+    def do_POST(self):
+        self.send_response(200)
+        self.send_header('Content-type', 'text/html')
+        self.end_headers()
+        self.wfile.write('done')
+        self.wfile.close()
+
+        data_string = self.rfile.read(int(self.headers['Content-Length']))
+        time = float(data_string)
+
+        self.on_receive_stop_time(time)
+
+    def log_message(self, format, *args):
+        pass
+
+
+class LaunchTimeBenchmark:
+    def __init__(self):
+        self._server_ready = threading.Semaphore(0)
+        self._server = None
+        self._server_thread = None
+        self._port = 8080
+        self._feedback_port = None
+        self._feedback_server = None
+        self._open_count = 0
+        self._app_name = None
+        self._verbose = False
+        self._feedback_in_browser = False
+        self._do_not_ignore_first_result = False
+        self._iterations = 5
+        self._browser_bundle_path = '/Applications/Safari.app'
+        self.response_handler = None
+        self.benchmark_description = None
+        self.use_geometric_mean = False
+        self.wait_time_high = 1
+        self.wait_time_low = 0.1
+        self.iteration_groups = 1
+        self.initialize()
+
+    def _parse_browser_bundle_path(self, path):
+        if not os.path.isdir(path) or not path.endswith('.app'):
+            raise argparse.ArgumentTypeError(
+                'Invalid app bundle path: "{}"'.format(path))
+        return path
+
+    def _parse_args(self):
+        self.argument_parser = argparse.ArgumentParser(description=self.benchmark_description)
+        self.argument_parser.add_argument('-p', '--path', type=self._parse_browser_bundle_path,
+            help='path for browser application bundle (default: {})'.format(self._browser_bundle_path))
+        self.argument_parser.add_argument('-n', '--iterations', type=int,
+            help='number of iterations of test (default: {})'.format(self._iterations))
+        self.argument_parser.add_argument('-v', '--verbose', action='store_true',
+            help="print each iteration's time")
+        self.argument_parser.add_argument('-f', '--feedback-in-browser', action='store_true',
+            help="show benchmark results in browser (default: {})".format(self._feedback_in_browser))
+        self.will_parse_arguments()
+
+        args = self.argument_parser.parse_args()
+        if args.iterations:
+            self._iterations = args.iterations
+        if args.path:
+            self._browser_bundle_path = args.path
+        if args.verbose is not None:
+            self._verbose = args.verbose
+        if args.feedback_in_browser is not None:
+            self._feedback_in_browser = args.feedback_in_browser
+        path_len = len(self._browser_bundle_path)
+        start_index = self._browser_bundle_path.rfind('/', 0, path_len)
+        end_index = self._browser_bundle_path.rfind('.', 0, path_len)
+        self._app_name = self._browser_bundle_path[start_index + 1:end_index]
+        self.did_parse_arguments(args)
+
+    def _run_server(self):
+        self._server_ready.release()
+        self._server.serve_forever()
+
+    def _setup_servers(self):
+        while True:
+            try:
+                self._server = SocketServer.TCPServer(
+                    ('0.0.0.0', self._port), self.response_handler)
+                break
+            except:
+                self._port += 1
+        print 'Running test server at http://localhost:{}'.format(self._port)
+
+        self._server_thread = threading.Thread(target=self._run_server)
+        self._server_thread.start()
+        self._server_ready.acquire()
+
+        if self._feedback_in_browser:
+            self._feedback_server = FeedbackServer()
+            self._feedback_port = self._feedback_server.start()
+
+    def _clean_up(self):
+        self._server.shutdown()
+        self._server_thread.join()
+        if self._feedback_in_browser:
+            self._feedback_server.stop()
+
+    def _exit_due_to_exception(self, reason):
+        self.log(reason)
+        self._clean_up()
+        sys.exit(1)
+
+    def _geometric_mean(self, values):
+        product = reduce(mul, values)
+        return product ** (1.0 / len(values))
+
+    def _standard_deviation(self, results, mean=None):
+        if mean is None:
+            mean = sum(results) / float(len(results))
+        variance = sum((x - mean) ** 2 for x in results) / float(len(results) - 1)
+        return sqrt(variance)
+
+    def _compute_results(self, results):
+        if not results:
+            self._exit_due_to_exception('No results to compute.\n')
+        if len(results) > 1 and not self._do_not_ignore_first_result:
+            results = results[1:]
+        mean = sum(results) / float(len(results))
+        stdev = self._standard_deviation(results, mean)
+        return mean, stdev
+
+    def _wait_times(self):
+        if self.iteration_groups == 1:
+            yield self.wait_time_high
+            return
+        increment_per_group = float(self.wait_time_high - self.wait_time_low) / (self.iteration_groups - 1)
+        for i in range(self.iteration_groups):
+            yield self.wait_time_low + increment_per_group * i
+
+    def open_tab(self):
+        call(['open', '-a', self._browser_bundle_path,
+            'http://localhost:{}/{}'.format(self._port, self._open_count)])
+        self._open_count += 1
+
+    def launch_browser(self):
+        if self._feedback_in_browser:
+            call(['open', '-a', self._browser_bundle_path,
+                'http://localhost:{}'.format(self._feedback_port), '-F'])
+            self._feedback_server.wait_until_client_has_loaded()
+        else:
+            call(['open', '-a', self._browser_bundle_path,
+                'http://localhost:{}/blank'.format(self._port), '-F'])
+        self.wait(2)
+
+    def quit_browser(self):
+        def quit_app():
+            call(['osascript', '-e', 'quit app "{}"'.format(self._browser_bundle_path)])
+
+        def is_app_closed():
+            out = check_output(['osascript', '-e', 'tell application "System Events"',
+                '-e', 'copy (get name of every process whose name is "{}") to stdout'.format(self._app_name),
+                '-e', 'end tell'])
+            return len(out.strip()) == 0
+
+        while not is_app_closed():
+            quit_app()
+        self.wait(1)
+
+    def close_tab(self):
+        call(['osascript', '-e',
+            'tell application "System Events" to keystroke "w" using command down'])
+
+    def wait(self, duration):
+        wait_start = time.time()
+        while time.time() - wait_start < duration:
+            pass
+
+    def log(self, message):
+        if self._feedback_in_browser:
+            self._feedback_server.send_message(message)
+        sys.stdout.write(message)
+        sys.stdout.flush()
+
+    def log_verbose(self, message):
+        if self._verbose:
+            self.log(message)
+
+    def run(self):
+        self._parse_args()
+        self._setup_servers()
+        self.quit_browser()
+        print ''
+
+        try:
+            group_means = []
+            results_by_iteration_number = [[] for _ in range(self._iterations)]
+
+            group = 1
+            for wait_duration in self._wait_times():
+                self.group_init()
+                if self.iteration_groups > 1:
+                    self.log('Running group {}{}'.format(group, ':\n' if self._verbose else '...'))
+
+                results = []
+                for i in range(self._iterations):
+                    try:
+                        if not self._verbose:
+                            self.log('.')
+                        result_in_ms = self.run_iteration()
+                        self.log_verbose('({}) {} ms\n'.format(i + 1, result_in_ms))
+                        self.wait(wait_duration)
+                        results.append(result_in_ms)
+                        results_by_iteration_number[i].append(result_in_ms)
+                    except KeyboardInterrupt:
+                        raise KeyboardInterrupt
+                    except:
+                        self._exit_due_to_exception('(Test {} failed)\n'.format(i + 1))
+                if not self._verbose:
+                    print ''
+
+                mean, stdev = self._compute_results(results)
+                self.log_verbose('RESULTS:\n')
+                self.log_verbose('mean: {} ms\n'.format(mean))
+                self.log_verbose('std dev: {} ms ({}%)\n\n'.format(stdev, (stdev / mean) * 100))
+                if self._verbose:
+                    self.wait(1)
+                group_means.append(mean)
+                group += 1
+                self.quit_browser()
+
+            if not self._verbose:
+                print '\n'
+
+            if self._feedback_in_browser:
+                self.launch_browser()
+
+            means_by_iteration_number = []
+            if len(results_by_iteration_number) > 1 and not self._do_not_ignore_first_result:
+                results_by_iteration_number = results_by_iteration_number[1:]
+            for iteration_results in results_by_iteration_number:
+                means_by_iteration_number.append(self._geometric_mean(iteration_results))
+            final_mean = self._geometric_mean(group_means)
+            final_stdev = self._standard_deviation(means_by_iteration_number)
+            self.log('FINAL RESULTS\n')
+            self.log('Mean:\n-> {} ms\n'.format(final_mean))
+            self.log('Standard Deviation:\n-> {} ms ({}%)\n'.format(final_stdev, (final_stdev / final_mean) * 100))
+        except KeyboardInterrupt:
+            self._clean_up()
+            sys.exit(1)
+        finally:
+            self._clean_up()
+
+    def group_init(self):
+        pass
+
+    def run_iteration(self):
+        pass
+
+    def initialize(self):
+        pass
+
+    def will_parse_arguments(self):
+        pass
+
+    def did_parse_arguments(self, args):
+        pass
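LaunchTimeBenchmark aggregates results with a Bessel-corrected (n - 1) standard deviation per group and a geometric mean across group means. A standalone sketch of those two helpers, matching `_geometric_mean` and `_standard_deviation` above (pulled out as free functions for illustration; `functools.reduce` is imported explicitly since `reduce` is a builtin only in Python 2):

```python
from functools import reduce
from math import sqrt
from operator import mul

def geometric_mean(values):
    # nth root of the product; used to combine the per-group means.
    return reduce(mul, values) ** (1.0 / len(values))

def sample_standard_deviation(results, mean=None):
    # Sample (n - 1 denominator) standard deviation of the results.
    if mean is None:
        mean = sum(results) / float(len(results))
    variance = sum((x - mean) ** 2 for x in results) / float(len(results) - 1)
    return sqrt(variance)
```

For instance, the geometric mean of 2 ms and 8 ms is 4 ms, and the sample standard deviation of [1, 2, 3] ms is 1 ms.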
diff --git a/PerformanceTests/LaunchTime/new_tab.py b/PerformanceTests/LaunchTime/new_tab.py
new file mode 100755 (executable)
index 0000000..f385ed0
--- /dev/null
@@ -0,0 +1,100 @@
+#! /usr/bin/env python
+import argparse
+import time
+from threading import Event
+
+from launch_time import LaunchTimeBenchmark, DefaultLaunchTimeHandler
+import feedback_server
+
+
+class NewTabBenchmark(LaunchTimeBenchmark):
+    def _parse_wait_time(self, string):
+        values = string.split(':')
+
+        start = None
+        end = None
+        try:
+            if len(values) == 2:
+                start = float(values[0])
+                end = float(values[1])
+                if start > end:
+                    raise
+            elif len(values) == 1:
+                start = float(values[0])
+                end = start
+            else:
+                raise
+        except:
+            raise argparse.ArgumentTypeError(
+                "'" + string + "' is not a range of numbers. Expected form is N:M where N < M")
+
+        return start, end
+
+    def initialize(self):
+        self.benchmark_description = "Measure time to open a new tab for a given browser."
+        self.response_handler = NewTabBenchmark.ResponseHandler(self)
+        self.start_time = None
+        self.stop_time = None
+        self.stop_signal_was_received = Event()
+
+    def run_iteration(self):
+        self.start_time = time.time() * 1000
+        self.open_tab()
+        while self.stop_time is None:
+            self.stop_signal_was_received.wait()
+        result = self.stop_time - self.start_time
+        self.stop_time = None
+        self.stop_signal_was_received.clear()
+        self.close_tab()
+        return result
+
+    def group_init(self):
+        self.launch_browser()
+
+    def will_parse_arguments(self):
+        self.argument_parser.add_argument('-g', '--groups', type=int,
+            help='number of groups of iterations to run (default: {})'.format(self.iteration_groups))
+        self.argument_parser.add_argument('-w', '--wait-time', type=self._parse_wait_time,
+            help='wait time to use between iterations or range to scan (format is "N" or "N:M" where N < M, default: {}:{})'.format(self.wait_time_low, self.wait_time_high))
+
+    def did_parse_arguments(self, args):
+        if args.groups:
+            self.iteration_groups = args.groups
+        if args.wait_time:
+            self.wait_time_low, self.wait_time_high = args.wait_time
+
+    @staticmethod
+    def ResponseHandler(new_tab_benchmark):
+        class Handler(DefaultLaunchTimeHandler):
+            def get_test_page(self):
+                return '''<!DOCTYPE html>
+                <html>
+                  <head>
+                    <title>New Tab Benchmark</title>
+                    <meta http-equiv="Content-Type" content="text/html" />
+                    <script>
+                        function sendDone() {
+                            var time = performance.timing.navigationStart
+                            var request = new XMLHttpRequest();
+                            request.open("POST", "done", false);
+                            request.setRequestHeader('Content-Type', 'application/json');
+                            request.send(JSON.stringify(time));
+                        }
+                        window.onload = sendDone;
+                    </script>
+                  </head>
+                  <body>
+                    <h1>New Tab Benchmark</h1>
+                  </body>
+                </html>
+                '''
+
+            def on_receive_stop_time(self, stop_time):
+                new_tab_benchmark.stop_time = stop_time
+                new_tab_benchmark.stop_signal_was_received.set()
+
+        return Handler
+
+
+if __name__ == '__main__':
+    NewTabBenchmark().run()
diff --git a/PerformanceTests/LaunchTime/startup.py b/PerformanceTests/LaunchTime/startup.py
new file mode 100755 (executable)
index 0000000..f36cdaf
--- /dev/null
@@ -0,0 +1,61 @@
+#! /usr/bin/env python
+import time
+from threading import Event
+
+from launch_time import LaunchTimeBenchmark, DefaultLaunchTimeHandler
+
+
+class StartupBenchmark(LaunchTimeBenchmark):
+    def initialize(self):
+        self.benchmark_description = 'Measure startup time of a given browser.'
+        self.response_handler = StartupBenchmark.ResponseHandler(self)
+        self.start_time = None
+        self.stop_time = None
+        self.stop_signal_was_received = Event()
+
+    def run_iteration(self):
+        self.start_time = time.time() * 1000
+        self.open_tab()
+        while self.stop_time is None:
+            self.stop_signal_was_received.wait()
+        result = self.stop_time - self.start_time
+        self.stop_time = None
+        self.stop_signal_was_received.clear()
+        self.quit_browser()
+        return result
+
+    @staticmethod
+    def ResponseHandler(startup_benchmark):
+        class Handler(DefaultLaunchTimeHandler):
+            def get_test_page(self):
+                return '''<!DOCTYPE html>
+                <html>
+                  <head>
+                    <title>Startup Benchmark</title>
+                    <meta http-equiv="Content-Type" content="text/html" />
+                    <script>
+                        function sendDone() {
+                            const time = Date.now();
+                            const request = new XMLHttpRequest();
+                            request.open("POST", "done", false);
+                            request.setRequestHeader('Content-Type', 'application/json');
+                            request.send(JSON.stringify(time));
+                        }
+                        window.onload = sendDone;
+                    </script>
+                  </head>
+                  <body>
+                    <h1>Startup Benchmark</h1>
+                  </body>
+                </html>
+                '''
+
+            def on_receive_stop_time(self, stop_time):
+                startup_benchmark.stop_time = stop_time
+                startup_benchmark.stop_signal_was_received.set()
+
+        return Handler
+
+
+if __name__ == '__main__':
+    StartupBenchmark().run()
diff --git a/PerformanceTests/LaunchTime/thirdparty/__init__.py b/PerformanceTests/LaunchTime/thirdparty/__init__.py
new file mode 100644 (file)
index 0000000..d19eb2d
--- /dev/null
@@ -0,0 +1,143 @@
+# Copyright (C) 2010 Chris Jerdonek (cjerdonek@webkit.org)
+# Copyright (C) 2018 Apple Inc. All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+# 1.  Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+# 2.  Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in the
+#     documentation and/or other materials provided with the distribution.
+#
+# THIS SOFTWARE IS PROVIDED BY APPLE INC. AND ITS CONTRIBUTORS ``AS IS'' AND
+# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+# DISCLAIMED. IN NO EVENT SHALL APPLE INC. OR ITS CONTRIBUTORS BE LIABLE FOR
+# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+# This module is required for Python to treat this directory as a package.
+
+"""Autoinstalls third-party code required by WebKit."""
+
+
+import codecs
+import json
+import os
+import re
+import sys
+import urllib2
+import imp
+
+from collections import namedtuple
+from distutils import spawn
+
+webkitpy_path = os.path.join(os.path.dirname(__file__), '../../../Tools/Scripts/webkitpy')
+autoinstall_path = os.path.join(webkitpy_path, 'common/system/autoinstall.py')
+filesystem_path = os.path.join(webkitpy_path, 'common/system/filesystem.py')
+
+AutoInstaller = imp.load_source('autoinstall', autoinstall_path).AutoInstaller
+FileSystem = imp.load_source('filesystem', filesystem_path).FileSystem
+
+_THIRDPARTY_DIR = os.path.dirname(__file__)
+_AUTOINSTALLED_DIR = os.path.join(_THIRDPARTY_DIR, "autoinstalled")
+
+# Putting the autoinstall code into LaunchTime/thirdparty/__init__.py
+# ensures that no autoinstalling occurs until a caller imports from
+# webkitpy.thirdparty.  This is useful if the caller wants to configure
+# logging prior to executing autoinstall code.
+
+# FIXME: If any of these servers is offline, webkit-patch breaks (and maybe
+# other scripts do, too). See <http://webkit.org/b/42080>.
+
+# We put auto-installed third-party modules in this directory--
+#
+#     LaunchTime/thirdparty/autoinstalled
+
+fs = FileSystem()
+fs.maybe_make_directory(_AUTOINSTALLED_DIR)
+
+init_path = fs.join(_AUTOINSTALLED_DIR, "__init__.py")
+if not fs.exists(init_path):
+    fs.write_text_file(init_path, "")
+
+readme_path = fs.join(_AUTOINSTALLED_DIR, "README")
+if not fs.exists(readme_path):
+    fs.write_text_file(readme_path,
+        "This directory is auto-generated by WebKit and is "
+        "safe to delete.\nIt contains needed third-party Python "
+        "packages automatically downloaded from the web.")
+
+
+class AutoinstallImportHook(object):
+    def __init__(self, filesystem=None):
+        self._fs = filesystem or FileSystem()
+
+    def _ensure_autoinstalled_dir_is_in_sys_path(self):
+        # Some packages require that they are installed somewhere under a directory in sys.path.
+        if _AUTOINSTALLED_DIR not in sys.path:
+            sys.path.insert(0, _AUTOINSTALLED_DIR)
+
+    def find_module(self, fullname, _):
+        # This method will run before each import. See http://www.python.org/dev/peps/pep-0302/
+        if '.autoinstalled' not in fullname:
+            return
+
+        # Note: all of the methods must follow the "_install_XXX" convention in
+        # order for autoinstall_everything(), below, to work properly.
+        if '.tornado' in fullname:
+            self._install_tornado()
+
+    def _install_tornado(self):
+        self._ensure_autoinstalled_dir_is_in_sys_path()
+        self._install("https://files.pythonhosted.org/packages/45/ec/f2a03a0509bcfca336bef23a3dab0d07504893af34fd13064059ba4a0503/tornado-5.1.tar.gz",
+            "tornado-5.1/tornado")
+
+    @staticmethod
+    def greater_than_equal_to_version(minimum, version):
+        # Compare component-wise; pad the shorter list with zeros so that a
+        # version like '5.1' compares correctly against a minimum of '5.1.0'.
+        minimum_parts = [int(part) for part in minimum.split('.')]
+        version_parts = [int(part) for part in version.split('.')]
+        length = max(len(minimum_parts), len(version_parts))
+        return version_parts + [0] * (length - len(version_parts)) >= minimum_parts + [0] * (length - len(minimum_parts))
+
+    def _install(self, url, url_subpath=None, target_name=None):
+        installer = AutoInstaller(target_dir=_AUTOINSTALLED_DIR)
+        installer.install(url=url, url_subpath=url_subpath, target_name=target_name)
+
+    def get_latest_pypi_url(self, package_name, url_subpath_format='{name}-{version}/{lname}'):
+        json_url = "https://pypi.python.org/pypi/%s/json" % package_name
+        response = urllib2.urlopen(json_url)
+        data = json.load(response)
+        url = data['urls'][1]['url']
+        subpath = url_subpath_format.format(name=package_name, version=data['info']['version'], lname=package_name.lower())
+        return (url, subpath)
+
+    def install_binary(self, url, name):
+        self._install(url=url, target_name=name)
+        directory = os.path.join(_AUTOINSTALLED_DIR, name)
+        os.chmod(os.path.join(directory, name), 0755)
+        open(os.path.join(directory, '__init__.py'), 'w+').close()
+
+
+_hook = AutoinstallImportHook()
+sys.meta_path.append(_hook)
+
+
+def autoinstall_everything():
+    install_methods = [method for method in dir(_hook.__class__) if method.startswith('_install_')]
+    for method in install_methods:
+        getattr(_hook, method)()
+
+
+def get_os_info():
+    import platform
+    os_name = platform.system()
+    os_type = platform.machine()[-2:]
+    return (os_name, os_type)