test-webkitpy: clean up logging to make it ninja-esque
author dpranke@chromium.org <dpranke@chromium.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Mon, 16 Jul 2012 21:13:29 +0000 (21:13 +0000)
committer dpranke@chromium.org <dpranke@chromium.org@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Mon, 16 Jul 2012 21:13:29 +0000 (21:13 +0000)
https://bugs.webkit.org/show_bug.cgi?id=91297

Reviewed by Adam Barth.

This patch changes the output of test-webkitpy to be closer to
what ninja produces. Namely:

If you are running with stderr writing to a tty (and not -v), we
will no longer print '...' as tests complete. Instead, we use a
metered stream and print entries of the form:
   '[X/Y] test_name passed/failed/erred'
where X is the current test number and Y is the total number of tests.

If you are running with stderr piped to a file or other non-tty
object (or with -v), you get the same entries, but written one
line per test.
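
As a rough sketch (not the actual webkitpy code; the names here are
illustrative), the meter's choice between in-place updates and plain
line-per-test output boils down to something like this:

    import sys

    def test_line(num_completed, num_tests, test_name, suffix):
        # The '[X/Y] test_name passed/failed/erred' format described above.
        return '[%d/%d] %s%s' % (num_completed, num_tests, test_name, suffix)

    def emit(stream, line, verbose=False):
        if stream.isatty() and not verbose:
            # tty and not -v: overwrite the current meter line in place.
            stream.write('\r' + line)
            stream.flush()
        else:
            # Piped to a file (or -v): one full line per test.
            stream.write(line + '\n')

    emit(sys.stderr, test_line(3, 120, 'webkitpy.test.some_unittest.SomeTest.test_foo', ' passed'))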

In addition, if tests fail or err out, you get the stack trace
immediately; you don't have to wait until the end of the run.
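
For example, a failing test now shows up in the stream right away,
along these lines (illustrative output):

    [12/120] webkitpy.test.some_unittest.SomeTest.test_bar failed:
      Traceback (most recent call last):
        ...
      AssertionError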

Lastly, this change cleans up the unit tests for test-webkitpy
itself so that they don't confuse the logger and work cleanly
with multiple processes. Between this and the cleanup that
happened in MessagePool as it landed, it looks like we can now
run the multiprocessing tests in parallel.
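
The logging fix in the unit tests amounts to saving and restoring
the level of every handler on the root logger. A condensed sketch
of the pattern (the real setUp/tearDown code is in the diff below):

    import logging
    import unittest

    class QuietLoggingTest(unittest.TestCase):
        def setUp(self):
            # Silence every existing root-logger handler so test-webkitpy's
            # own log output doesn't leak into these tests' output.
            self.root_logger = logging.getLogger()
            self.log_handlers = self.root_logger.handlers[:]
            self.log_levels = [handler.level for handler in self.log_handlers]
            for handler in self.log_handlers:
                handler.level = logging.CRITICAL

        def tearDown(self):
            # Restore the saved levels in their original order.
            for handler, level in zip(self.log_handlers, self.log_levels):
                handler.level = level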

* Scripts/webkitpy/test/finder.py:
(Finder._default_names):
* Scripts/webkitpy/test/finder_unittest.py:
(FinderTest.setUp):
(FinderTest.tearDown):
* Scripts/webkitpy/test/printer.py:
(Printer.__init__):
(Printer.configure):
(Printer.print_started_test):
(Printer.print_finished_test):
(Printer._test_line):
(Printer.print_result):
* Scripts/webkitpy/test/runner.py:
(Runner.run):
* Scripts/webkitpy/test/runner_unittest.py:
(RunnerTest.setUp):
(RunnerTest):
(RunnerTest.tearDown):
(RunnerTest.assert_run):
(RunnerTest.test_regular):
(RunnerTest.test_verbose):
(RunnerTest.test_timing):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@122764 268f45cc-cd09-0410-ab3c-d52691b4dbfc

Tools/ChangeLog
Tools/Scripts/webkitpy/test/finder.py
Tools/Scripts/webkitpy/test/finder_unittest.py
Tools/Scripts/webkitpy/test/printer.py
Tools/Scripts/webkitpy/test/runner.py
Tools/Scripts/webkitpy/test/runner_unittest.py

diff --git a/Tools/ChangeLog b/Tools/ChangeLog
index b06a18a..f734646 100644
@@ -1,5 +1,84 @@
 2012-07-16  Dirk Pranke  <dpranke@chromium.org>
 
+        test-webkitpy: clean up logging to make it ninja-esque
+        https://bugs.webkit.org/show_bug.cgi?id=91297
+
+        Reviewed by Adam Barth.
+
+        This patch changes the output of test-webkitpy to be closer to
+        what ninja produces. Namely:
+
+        If you are running with stderr writing to a tty (and not -v), we
+        will no longer print '...' as tests complete. Instead, we use a
+        metered stream and print entries of the form:
+           '[X/Y] test_name passed/failed/erred'
+        where X is the current test number and Y is the total number of tests.
+
+        If you are running with stderr piped to a file or other non-tty
+        object (or with -v), you get the same entries, but written one
+        line per test.
+
+        In addition, if tests fail or err out, you get the stack trace
+        immediately; you don't have to wait until the end of the run.
+
+        Lastly, this change cleans up the unit tests for test-webkitpy
+        itself so that they don't confuse the logger and work cleanly
+        with multiple processes. Between this and the cleanup that
+        happened in MessagePool as it landed, it looks like we can now
+        run the multiprocessing tests in parallel.
+
+        * Scripts/webkitpy/test/finder.py:
+        (Finder._default_names):
+        * Scripts/webkitpy/test/finder_unittest.py:
+        (FinderTest.setUp):
+        (FinderTest.tearDown):
+        * Scripts/webkitpy/test/printer.py:
+        (Printer.__init__):
+        (Printer.configure):
+        (Printer.print_started_test):
+        (Printer.print_finished_test):
+        (Printer._test_line):
+        (Printer.print_result):
+        * Scripts/webkitpy/test/runner.py:
+        (Runner.run):
+        * Scripts/webkitpy/test/runner_unittest.py:
+        (RunnerTest.setUp):
+        (RunnerTest):
+        (RunnerTest.tearDown):
+        (RunnerTest.assert_run):
+        (RunnerTest.test_regular):
+        (RunnerTest.test_verbose):
+        (RunnerTest.test_timing):
+
+2012-07-16  Dirk Pranke  <dpranke@chromium.org>
+
+        test-webkitpy: handle failures properly when running in parallel
+        https://bugs.webkit.org/show_bug.cgi?id=91416
+
+        Reviewed by Adam Barth.
+
+        It turns out that unittest.TestResults contain a handle to the
+        test method itself, which isn't picklable; it's sufficient to just
+        store the test name instead of the actual method. By doing so
+        we can move the test_name() method from the printer to the
+        runner where it belongs (so the printer is less dependent on the
+        unittest framework's data structures).
+
+        This change should really have a test but I don't know how to
+        write one that properly captures the behavior and won't cause
+        test-webkitpy itself to fail. I've verified the fix by hand, at
+        least, in the meantime.
+
+        * Scripts/webkitpy/test/printer.py:
+        (Printer.__init__):
+        (Printer.print_result):
+        * Scripts/webkitpy/test/runner.py:
+        (_test_name):
+        (Runner.all_test_names):
+        (_Worker.handle):
+
+2012-07-16  Dirk Pranke  <dpranke@chromium.org>
+
         test-webkitpy: handle failures properly when running in parallel
         https://bugs.webkit.org/show_bug.cgi?id=91416
 
diff --git a/Tools/Scripts/webkitpy/test/finder.py b/Tools/Scripts/webkitpy/test/finder.py
index 0b14f04..fcbb0e9 100644
@@ -165,11 +165,6 @@ class Finder(object):
                                    'webkitpy.tool')
                 self._exclude(modules, win32_blacklist, 'fail horribly on win32', 54526)
 
-        if skip_if_parallel:
-            serial_tests = ('webkitpy.common.system.executive_unittest.ExecutiveTest.test_running_in_parallel',
-                            'webkitpy.test')
-            self._exclude(modules, serial_tests, 'tests fail when run in parallel with other tests', 12345)
-
         return modules
 
     def _exclude(self, modules, module_prefixes, reason, bugid):
diff --git a/Tools/Scripts/webkitpy/test/finder_unittest.py b/Tools/Scripts/webkitpy/test/finder_unittest.py
index 09048b1..386c579 100644
@@ -47,18 +47,15 @@ class FinderTest(unittest.TestCase):
         # Here we have to jump through a hoop to make sure test-webkitpy doesn't log
         # any messages from these tests :(.
         self.root_logger = logging.getLogger()
-        self.log_handler = None
-        for h in self.root_logger.handlers:
-            if getattr(h, 'name', None) == 'webkitpy.test.printer':
-                self.log_handler = h
-                break
-        if self.log_handler:
-            self.log_level = self.log_handler.level
-            self.log_handler.level = logging.CRITICAL
+        self.log_levels = []
+        self.log_handlers = self.root_logger.handlers[:]
+        for handler in self.log_handlers:
+            self.log_levels.append(handler.level)
+            handler.level = logging.CRITICAL
 
     def tearDown(self):
-        if self.log_handler:
-            self.log_handler.setLevel(self.log_level)
+        for handler in self.log_handlers:
+            handler.level = self.log_levels.pop(0)
 
     def test_additional_system_paths(self):
         self.assertEquals(self.finder.additional_paths(['/usr']),
diff --git a/Tools/Scripts/webkitpy/test/printer.py b/Tools/Scripts/webkitpy/test/printer.py
index 53f7e53..042fba1 100644
@@ -25,6 +25,7 @@ import logging
 import StringIO
 
 from webkitpy.common.system import outputcapture
+from webkitpy.layout_tests.views.metered_stream import MeteredStream
 
 _log = logging.getLogger(__name__)
 
@@ -32,7 +33,14 @@ _log = logging.getLogger(__name__)
 class Printer(object):
     def __init__(self, stream, options=None):
         self.stream = stream
+        self.meter = None
         self.options = options
+        self.num_tests = 0
+        self.num_completed = 0
+        self.running_tests = []
+        self.completed_tests = []
+        if options:
+            self.configure(options)
 
     def configure(self, options):
         self.options = options
@@ -47,6 +55,8 @@ class Printer(object):
         elif options.verbose == 2:
             log_level = logging.DEBUG
 
+        self.meter = MeteredStream(self.stream, (options.verbose == 2))
+
         handler = logging.StreamHandler(self.stream)
         # We constrain the level on the handler rather than on the root
         # logger itself.  This is probably better because the handler is
@@ -93,58 +103,64 @@ class Printer(object):
             outputcapture.OutputCapture.stream_wrapper = _CaptureAndPassThroughStream
 
     def print_started_test(self, source, test_name):
+        self.running_tests.append(test_name)
+        if len(self.running_tests) > 1:
+            suffix = ' (+%d)' % (len(self.running_tests) - 1)
+        else:
+            suffix = ''
+
         if self.options.verbose:
-            self.stream.write(test_name)
+            write = self.meter.write_update
+        else:
+            write = self.meter.write_throttled_update
+
+        write(self._test_line(self.running_tests[0], suffix))
 
     def print_finished_test(self, result, test_name, test_time, failure, err):
-        timing = ''
-        if self.options.timing:
-            timing = ' %.4fs' % test_time
-        if self.options.verbose:
-            if failure:
-                msg = ' failed'
-            elif err:
-                msg = ' erred'
+        write = self.meter.writeln
+        if failure:
+            lines = failure[0][1].splitlines() + ['']
+            suffix = ' failed:'
+        elif err:
+            lines = err[0][1].splitlines() + ['']
+            suffix = ' erred:'
+        else:
+            suffix = ' passed'
+            lines = []
+            if self.options.verbose:
+                write = self.meter.writeln
             else:
-                msg = ' passed'
-            self.stream.write(msg + timing + '\n')
+                write = self.meter.write_throttled_update
+        if self.options.timing:
+            suffix += ' %.4fs' % test_time
+
+        self.num_completed += 1
+
+        if test_name == self.running_tests[0]:
+            self.completed_tests.insert(0, [test_name, suffix, lines])
         else:
-            if failure:
-                msg = 'F'
-            elif err:
-                msg = 'E'
+            self.completed_tests.append([test_name, suffix, lines])
+        self.running_tests.remove(test_name)
+
+        for test_name, msg, lines in self.completed_tests:
+            if lines:
+                self.meter.writeln(self._test_line(test_name, msg))
+                for line in lines:
+                    self.meter.writeln('  ' + line)
             else:
-                msg = '.'
-            self.stream.write(msg)
+                write(self._test_line(test_name, msg))
+        self.completed_tests = []
 
-    def print_result(self, result, run_time):
-        self.stream.write('\n')
-
-        for (test_name, err) in result.errors:
-            self.stream.write("=" * 80 + '\n')
-            self.stream.write("ERROR: " + test_name + '\n')
-            self.stream.write("-" * 80 + '\n')
-            for line in err.splitlines():
-                self.stream.write(line + '\n')
-            self.stream.write('\n')
-
-        for (test_name, failure) in result.failures:
-            self.stream.write("=" * 80 + '\n')
-            self.stream.write("FAILURE: " + test_name + '\n')
-            self.stream.write("-" * 80 + '\n')
-            for line in failure.splitlines():
-                self.stream.write(line + '\n')
-            self.stream.write('\n')
-
-        self.stream.write('-' * 80 + '\n')
-        self.stream.write('Ran %d test%s in %.3fs\n' %
-            (result.testsRun, result.testsRun != 1 and "s" or "", run_time))
+    def _test_line(self, test_name, suffix):
+        return '[%d/%d] %s%s' % (self.num_completed, self.num_tests, test_name, suffix)
 
+    def print_result(self, result, run_time):
+        write = self.meter.writeln
+        write('Ran %d test%s in %.3fs' % (result.testsRun, result.testsRun != 1 and "s" or "", run_time))
         if result.wasSuccessful():
-            self.stream.write('\nOK\n')
+            write('\nOK\n')
         else:
-            self.stream.write('FAILED (failures=%d, errors=%d)\n' %
-                (len(result.failures), len(result.errors)))
+            write('FAILED (failures=%d, errors=%d)\n' % (len(result.failures), len(result.errors)))
 
 
 class _CaptureAndPassThroughStream(object):
diff --git a/Tools/Scripts/webkitpy/test/runner.py b/Tools/Scripts/webkitpy/test/runner.py
index 8c415de..fd8af6f 100644
@@ -60,6 +60,7 @@ class Runner(object):
     def run(self, suite):
         run_start_time = time.time()
         all_test_names = self.all_test_names(suite)
+        self.printer.num_tests = len(all_test_names)
 
         with message_pool.get(self, self.worker_factory, int(self.options.child_processes)) as pool:
             pool.run(('test', test_name) for test_name in all_test_names)
diff --git a/Tools/Scripts/webkitpy/test/runner_unittest.py b/Tools/Scripts/webkitpy/test/runner_unittest.py
index 9436b8c..07c5c31 100644
@@ -20,6 +20,7 @@
 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 
+import logging
 import re
 import StringIO
 import unittest
@@ -69,7 +70,22 @@ class FakeLoader(object):
 
 
 class RunnerTest(unittest.TestCase):
-    def assert_run(self, options):
+    def setUp(self):
+        # Here we have to jump through a hoop to make sure test-webkitpy doesn't log
+        # any messages from these tests :(.
+        self.root_logger = logging.getLogger()
+        self.log_levels = []
+        self.log_handlers = self.root_logger.handlers[:]
+        for handler in self.log_handlers:
+            self.log_levels.append(handler.level)
+            handler.level = logging.CRITICAL
+
+    def tearDown(self):
+        for handler in self.log_handlers:
+            handler.level = self.log_levels.pop(0)
+
+    def assert_run(self, verbose=0, timing=False, child_processes=1, quiet=False):
+        options = MockOptions(verbose=verbose, timing=timing, child_processes=child_processes, quiet=quiet, pass_through=False)
         stream = StringIO.StringIO()
         loader = FakeLoader(('test1 (Foo)', '.', ''),
                             ('test2 (Foo)', 'F', 'test2\nfailed'),
@@ -83,13 +99,13 @@ class RunnerTest(unittest.TestCase):
         # FIXME: check the output from the test
 
     def test_regular(self):
-        self.assert_run(MockOptions(verbose=0, timing=False, child_processes=1))
+        self.assert_run()
 
     def test_verbose(self):
-        self.assert_run(MockOptions(verbose=1, timing=False, child_processes=1))
+        self.assert_run(verbose=1)
 
     def test_timing(self):
-        self.assert_run(MockOptions(verbose=0, timing=True, child_processes=1))
+        self.assert_run(timing=True)
 
 
 if __name__ == '__main__':