Teach webkitpy how to check leaks and treat leaks as test failures
author:    simon.fraser@apple.com <simon.fraser@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Wed, 29 Aug 2018 17:51:32 +0000 (17:51 +0000)
committer: simon.fraser@apple.com <simon.fraser@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
           Wed, 29 Aug 2018 17:51:32 +0000 (17:51 +0000)
https://bugs.webkit.org/show_bug.cgi?id=189067

Reviewed by Darin Adler.

Tools:

Add a new "--world-leaks" argument to run-webkit-tests. When enabled, DRT/WTR are launched
with a --world-leaks argument (renamed in this patch for consistency). This enables the
behavior added in r235408: they check for leaked documents after each test and, at the end of
a single test (with --run-singly) or of a set of tests run in one DRT/WTR instance, handle
the "#CHECK FOR WORLD LEAKS" command to report still-live documents.
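The per-instance leak check boils down to sending the "#CHECK FOR WORLD LEAKS" command and grouping the reported documents by the test that was loaded when they leaked. A minimal sketch of such a parser follows; the line format shown here is an assumption for illustration, and the real parsing lives in Driver._parse_world_leaks_output:

```python
# Sketch of parsing the leak-report block that DRT/WTR emit in response to
# "#CHECK FOR WORLD LEAKS". The 'TEST:'/'ABANDONED DOCUMENT:' line format is
# an assumption for illustration, not the verified wire format.
def parse_world_leaks_output(output_lines):
    """Group leaked-document URLs by the test active when they leaked."""
    leaks_by_test = {}
    current_test = None
    for line in output_lines:
        if line.startswith('TEST: '):
            current_test = line[len('TEST: '):]
        elif line.startswith('ABANDONED DOCUMENT: ') and current_test:
            url = line[len('ABANDONED DOCUMENT: '):]
            leaks_by_test.setdefault(current_test, []).append(url)
    return leaks_by_test
```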

LayoutTestRunner in webkitpy now has the notion of doing "post-tests work", invoked via _finished_test_group(),
which sends the "#CHECK FOR WORLD LEAKS" command to the runner and parses the resulting output block.
If that block includes leaks, the existing TestResult is converted into a LEAK failure
in TestRunResults.change_result_to_failure(). Leaks are then added to the output JSON for display in results.html.
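The conversion step can be illustrated with a stripped-down stand-in for the webkitpy classes (names simplified; not the actual API surface):

```python
# Simplified illustrations of the TestResult/FailureDocumentLeak interplay
# described above; these are not the real webkitpy classes.
class FailureDocumentLeak(object):
    def __init__(self, leaked_document_urls):
        self.leaked_document_urls = leaked_document_urls


class TestResult(object):
    def __init__(self, test_name, failures=None):
        self.test_name = test_name
        self.failures = failures or []

    def convert_to_failure(self, failure_result):
        # Fold the leak failures reported by the post-tests check into
        # this result, turning a pass into a LEAK failure.
        self.failures.extend(failure_result.failures)


existing = TestResult('fast/harness/leaky.html')
leak_result = TestResult(
    'fast/harness/leaky.html',
    [FailureDocumentLeak(['http://localhost:8800/fast/harness/leaky.html'])])
existing.convert_to_failure(leak_result)
```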

Unit tests are updated with some leak examples.

* DumpRenderTree/mac/DumpRenderTree.mm:
(initializeGlobalsFromCommandLineOptions):
* Scripts/webkitpy/common/net/resultsjsonparser_unittest.py:
(ParsedJSONResultsTest):
* Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py:
(LayoutTestRunner._annotate_results_with_additional_failures):
(LayoutTestRunner._handle_finished_test_group):
(Worker.handle):
(Worker._run_test):
(Worker._do_post_tests_work):
(Worker._finished_test_group):
(Worker._run_test_in_another_thread):
* Scripts/webkitpy/layout_tests/layout_package/json_layout_results_generator.py:
(JSONLayoutResultsGenerator):
* Scripts/webkitpy/layout_tests/models/test_expectations.py:
(TestExpectationParser):
(TestExpectations):
* Scripts/webkitpy/layout_tests/models/test_expectations_unittest.py:
(Base.get_basic_tests):
* Scripts/webkitpy/layout_tests/models/test_failures.py:
(determine_result_type):
(FailureLeak):
(FailureLeak.__init__):
(FailureLeak.message):
(FailureDocumentLeak):
(FailureDocumentLeak.__init__):
(FailureDocumentLeak.message):
* Scripts/webkitpy/layout_tests/models/test_results.py:
(TestResult.convert_to_failure):
* Scripts/webkitpy/layout_tests/models/test_run_results.py:
(TestRunResults.change_result_to_failure):
(_interpret_test_failures):
(summarize_results):
* Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py:
(get_result):
(run_results):
(summarized_results):
* Scripts/webkitpy/layout_tests/run_webkit_tests.py:
(parse_args):
* Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py:
(parse_args):
(RunTest.test_check_for_world_leaks):
* Scripts/webkitpy/port/driver.py:
(DriverPostTestOutput):
(DriverPostTestOutput.__init__):
(Driver.do_post_tests_work):
(Driver._parse_world_leaks_output):
(Driver.cmd_line):
(DriverProxy.do_post_tests_work):
* Scripts/webkitpy/port/test.py:
(unit_test_list):
* WebKitTestRunner/Options.cpp:
(WTR::OptionsHandler::OptionsHandler):
* WebKitTestRunner/TestController.cpp:
(WTR::TestController::checkForWorldLeaks):

LayoutTests:

Put some fake leaks in full_results.json, and update results.html to show a table
of leaks when results are expanded.
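For reference, a leaky test's entry in full_results.json carries an "actual" value containing LEAK plus a "leaks" array of leaked documents, along these lines (the URL is a made-up example):

```python
# Shape of a leaky test's per-test entry in full_results.json, as extended by
# this patch; the document URL below is a made-up example.
leaky_test_entry = {
    "report": "REGRESSION",
    "expected": "PASS",
    "actual": "LEAK",
    "has_stderr": True,
    "leaks": [
        {"document": "http://localhost:8800/example/leaky-test.html"},
    ],
}
```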

* fast/harness/full_results.json:
* fast/harness/results-expected.txt:
* fast/harness/results.html:

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@235467 268f45cc-cd09-0410-ab3c-d52691b4dbfc

21 files changed:
LayoutTests/ChangeLog
LayoutTests/fast/harness/full_results.json
LayoutTests/fast/harness/results-expected.txt
LayoutTests/fast/harness/results.html
Tools/ChangeLog
Tools/DumpRenderTree/mac/DumpRenderTree.mm
Tools/Scripts/webkitpy/common/net/resultsjsonparser_unittest.py
Tools/Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py
Tools/Scripts/webkitpy/layout_tests/layout_package/json_layout_results_generator.py
Tools/Scripts/webkitpy/layout_tests/models/test_expectations.py
Tools/Scripts/webkitpy/layout_tests/models/test_expectations_unittest.py
Tools/Scripts/webkitpy/layout_tests/models/test_failures.py
Tools/Scripts/webkitpy/layout_tests/models/test_results.py
Tools/Scripts/webkitpy/layout_tests/models/test_run_results.py
Tools/Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py
Tools/Scripts/webkitpy/layout_tests/run_webkit_tests.py
Tools/Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py
Tools/Scripts/webkitpy/port/driver.py
Tools/Scripts/webkitpy/port/test.py
Tools/WebKitTestRunner/Options.cpp
Tools/WebKitTestRunner/TestController.cpp

index 647ffee..ce591a9 100644 (file)
@@ -1,3 +1,17 @@
+2018-08-29  Simon Fraser  <simon.fraser@apple.com>
+
+        Teach webkitpy how to check leaks and treat leaks as test failures
+        https://bugs.webkit.org/show_bug.cgi?id=189067
+
+        Reviewed by Darin Adler.
+        
+        Put some fake leaks in full_results.json, and update results.html to show a table
+        of leaks when results are expanded.
+
+        * fast/harness/full_results.json:
+        * fast/harness/results-expected.txt:
+        * fast/harness/results.html:
+
 2018-08-29  Truitt Savell  <tsavell@apple.com>
 
         Missed adding expctations to ios for webkit.org/b/188985
index 3e85d2a..10f7b85 100644 (file)
@@ -11,6 +11,26 @@ ADD_RESULTS({
                         "report": "REGRESSION",
                         "expected": "PASS",
                         "actual": "TEXT"
+                    },
+                    "image-fail-with-leak.html": {
+                        "report": "REGRESSION",
+                        "expected": "PASS",
+                        "actual": "IMAGE LEAK",
+                        "image_diff_percent": 0.26,
+                        "leaks": [{
+                            "document": "http://localhost:8800/WebKit/cache-storage/image-fail-with-leak.html",
+                        }]
+                    },
+                    "leaky-worker.html": {
+                        "report": "REGRESSION",
+                        "expected": "PASS",
+                        "actual": "LEAK",
+                        "has_stderr": true,
+                        "leaks": [{
+                            "document": "http://localhost:8800/WebKit/cache-storage/leaky-worker.html",
+                            }, {
+                            "document": "http://localhost:8800/WebKit/cache-storage/resources/leaky-worker-subframe.html"
+                        }]
                     }
                 }
             },
@@ -69,6 +89,14 @@ ADD_RESULTS({
                     "expected": "IMAGE",
                     "actual": "PASS",
                     "reftest_type": ["!="]
+                },
+                "spelling-leaky-ref.html": {
+                    "expected": "LEAK",
+                    "actual": "LEAK",
+                    "reftest_type": ["!="],
+                    "leaks": [{
+                        "document": "file:///Volumes/Data/slave/highsierra-release-tests-wk2/build/LayoutTests/editing/spelling/spelling-leaky-ref.html"
+                    }]
                 }
             }
         },
@@ -121,20 +149,20 @@ ADD_RESULTS({
         }
     },
     "skipped": 5367,
-    "num_regressions": 2,
+    "num_regressions": 7,
     "other_crashes": {
         "DumpRenderTree-54888": {},
         "DumpRenderTree-56804": {},
     },
     "interrupted": false,
-    "num_missing": 0,
+    "num_missing": 1,
     "layout_tests_dir": "/Volumes/Data/slave/highsierra-release-tests-wk2/build/LayoutTests",
     "version": 4,
     "num_passes": 49387,
     "pixel_tests_enabled": false,
     "date": "05:38PM on August 15, 2018",
     "has_pretty_patch": true,
-    "fixable": 6360,
+    "fixable": 5,
     "num_flaky": 0,
     "uses_expectations_file": true,
     "revision": "234905"
index 1b50530..d5318e6 100644 (file)
@@ -1,6 +1,7 @@
 Use the i, j, k and l keys to navigate, e, c to expand and collapse, and f to flag
 expand all collapse all options
 Only unexpected results
+Toggle images
 Use newlines in flagged list
 Tests that crashed (1): flag all
 
@@ -9,12 +10,14 @@ Other crashes (2): flag all
 
 +DumpRenderTree-54888  crash log
 +DumpRenderTree-56804  crash log
-Tests that failed text/pixel/audio diff (3): flag all
+Tests that failed text/pixel/audio diff (5): flag all
 
- test  results         actual failure  expected failure        history
+ test  results image results   actual failure  expected failure        history
 +css1/font_properties/font_family.html expected actual diff pretty diff        images  text missing            history
 +http/tests/storageAccess/request-storage-access-top-frame.html        expected actual diff pretty diff                text    pass timeout    history
 +http/wpt/cache-storage/cache-put-keys.https.any.worker.html   expected actual diff pretty diff                text    pass    history
++http/wpt/cache-storage/image-fail-with-leak.html              images diff (0.26%)     image leak      pass    history
++http/wpt/cache-storage/leaky-worker.html                      leak    pass    history
 Tests that had no expected results (probably new) (1): flag all
 
 test   results image results   actual failure  expected failure        history
@@ -22,9 +25,10 @@ svg/batik/smallFonts.svg                     missing         history
 Tests that timed out (1): flag all
 
 +platform/mac/media/audio-session-category-video-paused.html   expected actual diff pretty diff        history
-Tests that had stderr output (2): flag all
+Tests that had stderr output (3): flag all
 
 +http/tests/contentextensions/top-url.html     stderr  history
++http/wpt/cache-storage/leaky-worker.html      stderr  history
 +platform/mac/media/audio-session-category-video-paused.html   stderr  history
 Flaky tests (failed the first run and passed on retry) (1): flag all
 
index 8b291ad..be2b7c4 100644 (file)
@@ -40,6 +40,11 @@ td:not(:first-of-type) {
 
 th, td {
     padding: 1px 4px;
+    vertical-align: top;
+}
+
+td:nth-child(1) {  
+    min-width: 35em;
 }
 
 th:empty, td:empty {
@@ -51,6 +56,15 @@ th {
     -moz-user-select: none;
 }
 
+dt > sup {
+    vertical-align:text-top;
+    font-size:75%;
+}
+
+sup > a {
+    text-decoration: none;
+}
+
 .content-container {
     min-height: 0;
 }
@@ -138,6 +152,13 @@ tbody.flagged .flag {
     vertical-align: top;
 }
 
+.leaks > table {
+    margin: 4px;
+}
+.leaks > td {
+    padding-left: 20px;
+}
+
 #options {
     background-color: white;
 }
@@ -369,6 +390,11 @@ class TestResult
         return this.info.actual.indexOf('AUDIO') != -1;
     }
 
+    hasLeak()
+    {
+        return this.info.actual.indexOf('LEAK') != -1;
+    }
+
     isCrash()
     {
         return this.info.actual == 'CRASH';
@@ -787,6 +813,17 @@ class TestResultsController
         return basePath;
     }
 
+    convertToLayoutTestBaseRelativeURL(fullURL)
+    {
+        if (fullURL.startsWith('file://')) {
+            let urlPrefix = 'file://' + this.layoutTestsBasePath();
+            if (fullURL.startsWith(urlPrefix))
+                return fullURL.substring(urlPrefix.length);
+        }
+        
+        return fullURL;
+    }
+
     shouldUseTracLinks()
     {
         return !this.testResults.layoutTestsDir() || !location.toString().indexOf('file://') == 0;
@@ -1020,9 +1057,47 @@ class TestResultsController
             Utils.appendHTML(newCell, result);    
         }
 
+        if (testResult.hasLeak())
+            newCell.appendChild(TestResultsController._makeLeaksCell(testResult));
+
         newRow.appendChild(newCell);
+
         return newRow;
     }
+    
+    static _makeLeaksCell(testResult)
+    {
+        let container = document.createElement('div');
+        container.className = 'result-container leaks';
+        
+        let label = document.createElement('div');
+        label.className = 'label';
+        label.textContent = "Leaks";
+        container.appendChild(label);
+
+        let leaksTable = document.createElement('table');
+        
+        for (let leak of testResult.info.leaks) {
+            let leakRow = document.createElement('tr');
+
+            for (let leakedObjectType in leak) {
+                let th = document.createElement('th');
+                th.textContent = leakedObjectType;
+                let td = document.createElement('td');
+                
+                let url = leak[leakedObjectType]; // FIXME: when we track leaks other than documents, this might not be a URL.
+                td.textContent = controller.convertToLayoutTestBaseRelativeURL(url)
+                
+                leakRow.appendChild(th);
+                leakRow.appendChild(td);
+            }
+            
+            leaksTable.appendChild(leakRow);
+        }
+        
+        container.appendChild(leaksTable);
+        return container;
+    }
 
     static _resultIframe(src)
     {
index 6c746cf..3a7eb4d 100644 (file)
@@ -1,3 +1,79 @@
+2018-08-29  Simon Fraser  <simon.fraser@apple.com>
+
+        Teach webkitpy how to check leaks and treat leaks as test failures
+        https://bugs.webkit.org/show_bug.cgi?id=189067
+
+        Reviewed by Darin Adler.
+        
+        Add a new "--world-leaks" argument to run-webkit-tests. When enabled, DRT/WTR are launched
+        with a --world-leaks argument (renamed in this patch for consistency). This enables the
+        behavior added in r235408: they check for leaked documents after each test and, at the end of
+        a single test (with --run-singly) or of a set of tests run in one DRT/WTR instance, handle
+        the "#CHECK FOR WORLD LEAKS" command to report still-live documents.
+        
+        LayoutTestRunner in webkitpy now has the notion of doing "post-tests work", invoked via _finished_test_group(),
+        which sends the "#CHECK FOR WORLD LEAKS" command to the runner and parses the resulting output block.
+        If that block includes leaks, the existing TestResult is converted into a LEAK failure
+        in TestRunResults.change_result_to_failure(). Leaks are then added to the output JSON for display in results.html.
+
+        Unit tests are updated with some leak examples.
+
+        * DumpRenderTree/mac/DumpRenderTree.mm:
+        (initializeGlobalsFromCommandLineOptions):
+        * Scripts/webkitpy/common/net/resultsjsonparser_unittest.py:
+        (ParsedJSONResultsTest):
+        * Scripts/webkitpy/layout_tests/controllers/layout_test_runner.py:
+        (LayoutTestRunner._annotate_results_with_additional_failures):
+        (LayoutTestRunner._handle_finished_test_group):
+        (Worker.handle):
+        (Worker._run_test):
+        (Worker._do_post_tests_work):
+        (Worker._finished_test_group):
+        (Worker._run_test_in_another_thread):
+        * Scripts/webkitpy/layout_tests/layout_package/json_layout_results_generator.py:
+        (JSONLayoutResultsGenerator):
+        * Scripts/webkitpy/layout_tests/models/test_expectations.py:
+        (TestExpectationParser):
+        (TestExpectations):
+        * Scripts/webkitpy/layout_tests/models/test_expectations_unittest.py:
+        (Base.get_basic_tests):
+        * Scripts/webkitpy/layout_tests/models/test_failures.py:
+        (determine_result_type):
+        (FailureLeak):
+        (FailureLeak.__init__):
+        (FailureLeak.message):
+        (FailureDocumentLeak):
+        (FailureDocumentLeak.__init__):
+        (FailureDocumentLeak.message):
+        * Scripts/webkitpy/layout_tests/models/test_results.py:
+        (TestResult.convert_to_failure):
+        * Scripts/webkitpy/layout_tests/models/test_run_results.py:
+        (TestRunResults.change_result_to_failure):
+        (_interpret_test_failures):
+        (summarize_results):
+        * Scripts/webkitpy/layout_tests/models/test_run_results_unittest.py:
+        (get_result):
+        (run_results):
+        (summarized_results):
+        * Scripts/webkitpy/layout_tests/run_webkit_tests.py:
+        (parse_args):
+        * Scripts/webkitpy/layout_tests/run_webkit_tests_integrationtest.py:
+        (parse_args):
+        (RunTest.test_check_for_world_leaks):
+        * Scripts/webkitpy/port/driver.py:
+        (DriverPostTestOutput):
+        (DriverPostTestOutput.__init__):
+        (Driver.do_post_tests_work):
+        (Driver._parse_world_leaks_output):
+        (Driver.cmd_line):
+        (DriverProxy.do_post_tests_work):
+        * Scripts/webkitpy/port/test.py:
+        (unit_test_list):
+        * WebKitTestRunner/Options.cpp:
+        (WTR::OptionsHandler::OptionsHandler):
+        * WebKitTestRunner/TestController.cpp:
+        (WTR::TestController::checkForWorldLeaks):
+
 2018-08-29  David Kilzer  <ddkilzer@apple.com>
 
         Remove empty directories from from svn.webkit.org repository
index 2ad03a0..383a389 100644 (file)
@@ -1121,7 +1121,7 @@ static void initializeGlobalsFromCommandLineOptions(int argc, const char *argv[]
         {"allow-any-certificate-for-allowed-hosts", no_argument, &allowAnyHTTPSCertificateForAllowedHosts, YES},
         {"show-webview", no_argument, &showWebView, YES},
         {"print-test-count", no_argument, &printTestCount, YES},
-        {"check-for-world-leaks", no_argument, &checkForWorldLeaks, NO},
+        {"world-leaks", no_argument, &checkForWorldLeaks, NO},
         {nullptr, 0, nullptr, 0}
     };
 
index 2f21f44..67a2ef6 100644 (file)
@@ -58,7 +58,13 @@ class ParsedJSONResultsTest(unittest.TestCase):
                 },
                 "prototype-strawberry.html": {
                     "expected": "PASS",
-                    "actual": "FAIL PASS"
+                    "actual": "FAIL PASS",
+                    "leaks": {
+                        "documents": [
+                            "file:///Volumes/Data/slave/webkit/build/LayoutTests/fast/dom/prototype-strawberry.html",
+                            "about:blank"
+                        ]
+                    }
                 }
             }
         },
@@ -106,7 +112,13 @@ class ParsedJSONResultsTest(unittest.TestCase):
                 },
                 "prototype-strawberry.html": {
                     "expected": "PASS",
-                    "actual": "FAIL PASS"
+                    "actual": "FAIL PASS",
+                    "leaks": {
+                        "documents": [
+                            "file:///Volumes/Data/slave/webkit/build/LayoutTests/fast/dom/prototype-strawberry.html",
+                            "about:blank"
+                        ]
+                    }
                 }
             }
         },
index 3e52e61..076547d 100644 (file)
@@ -192,6 +192,16 @@ class LayoutTestRunner(object):
 
         self._interrupt_if_at_failure_limits(run_results)
 
+    def _annotate_results_with_additional_failures(self, run_results, results):
+        for new_result in results:
+            existing_result = run_results.results_by_name.get(new_result.test_name)
+            # When running a chunk (--run-chunk), results_by_name contains all the tests, but (confusingly) all_tests only contains those in the chunk that was run,
+            # and we don't want to modify the results of a test that didn't run. existing_result.test_number is only non-None for tests that ran.
+            if existing_result and existing_result.test_number is not None:
+                was_expected = self._expectations.matches_an_expected_result(new_result.test_name, existing_result.type, self._options.pixel_tests or existing_result.reftest_type)
+                now_expected = self._expectations.matches_an_expected_result(new_result.test_name, new_result.type, self._options.pixel_tests or new_result.reftest_type)
+                run_results.change_result_to_failure(existing_result, new_result, was_expected, now_expected)
+
     def start_servers(self):
         if self._needs_http and not self._did_start_http_server and not self._port.is_http_server_running():
             self._printer.write_update('Starting HTTP server ...')
@@ -232,6 +242,9 @@ class LayoutTestRunner(object):
     def _handle_finished_test(self, worker_name, result, log_messages=[]):
         self._update_summary_with_result(self._current_run_results, result)
 
+    def _handle_finished_test_group(self, worker_name, overlay_results, log_messages=[]):
+        self._annotate_results_with_additional_failures(self._current_run_results, overlay_results)
+
 
 class Worker(object):
     def __init__(self, caller, results_directory, options):
@@ -269,6 +282,8 @@ class Worker(object):
         for test_input in test_inputs:
             self._run_test(test_input, test_list_name)
 
+        self._finished_test_group(test_inputs)
+
     def _update_test_input(self, test_input):
         if test_input.reference_files is None:
             # Lazy initialization.
@@ -302,6 +317,29 @@ class Worker(object):
 
         self._clean_up_after_test(test_input, result)
 
+    def _do_post_tests_work(self, driver):
+        additional_results = []
+        if not driver:
+            return additional_results
+
+        post_test_output = driver.do_post_tests_work()
+        if post_test_output:
+            for test_name, doc_list in post_test_output.world_leaks_dict.iteritems():
+                additional_results.append(test_results.TestResult(test_name, [test_failures.FailureDocumentLeak(doc_list)]))
+        return additional_results
+
+    def _finished_test_group(self, test_inputs):
+        _log.debug("%s finished test group" % self._name)
+
+        if self._driver and self._driver.has_crashed():
+            self._kill_driver()
+
+        additional_results = []
+        if not self._options.run_singly:
+            additional_results = self._do_post_tests_work(self._driver)
+
+        self._caller.post('finished_test_group', additional_results)
+
     def stop(self):
         _log.debug("%s cleaning up" % self._name)
         self._kill_driver()
@@ -396,6 +434,11 @@ class Worker(object):
             # thread's results.
             _log.error('Test thread hung: killing all DumpRenderTrees')
             failures = [test_failures.FailureTimeout()]
+        else:
+            failure_results = self._do_post_tests_work(driver)
+            for failure_result in failure_results:
+                if failure_result.test_name == result.test_name:
+                    result.convert_to_failure(failure_result)
 
         driver.stop()
 
index 1bc10e6..d0c2201 100644 (file)
@@ -49,6 +49,7 @@ class JSONLayoutResultsGenerator(json_results_generator.JSONResultsGenerator):
                        test_expectations.TEXT: "F",
                        test_expectations.AUDIO: "A",
                        test_expectations.MISSING: "O",
+                       test_expectations.LEAK: "L",
                        test_expectations.IMAGE_PLUS_TEXT: "Z"}
 
     def __init__(self, port, builder_name, build_name, build_number,
index e54ac94..af4d1be 100644 (file)
@@ -43,7 +43,7 @@ _log = logging.getLogger(__name__)
 # FIXME: range() starts with 0 which makes if expectation checks harder
 # as PASS is 0.
 (PASS, FAIL, TEXT, IMAGE, IMAGE_PLUS_TEXT, AUDIO, TIMEOUT, CRASH, SKIP, WONTFIX,
- SLOW, DUMPJSCONSOLELOGINSTDERR, REBASELINE, MISSING, FLAKY, NOW, NONE) = range(17)
+ SLOW, LEAK, DUMPJSCONSOLELOGINSTDERR, REBASELINE, MISSING, FLAKY, NOW, NONE) = range(18)
 
 # FIXME: Perhas these two routines should be part of the Port instead?
 BASELINE_SUFFIX_LIST = ('png', 'wav', 'txt')
@@ -241,9 +241,10 @@ class TestExpectationParser(object):
     # FIXME: Update the original modifiers list and remove this once the old syntax is gone.
     _expectation_tokens = {
         'Crash': 'CRASH',
+        'Leak': 'LEAK',
         'Failure': 'FAIL',
         'ImageOnlyFailure': 'IMAGE',
         'Missing': 'MISSING',
         'Pass': 'PASS',
         'Rebaseline': 'REBASELINE',
         'Skip': 'SKIP',
@@ -794,6 +796,7 @@ class TestExpectations(object):
                     'timeout': TIMEOUT,
                     'crash': CRASH,
                     'missing': MISSING,
+                    'leak': LEAK,
                     'skip': SKIP}
 
     # (aggregated by category, pass/fail/skip, type)
@@ -806,9 +809,10 @@ class TestExpectations(object):
                                 AUDIO: 'audio failures',
                                 CRASH: 'crashes',
                                 TIMEOUT: 'timeouts',
-                                MISSING: 'missing results'}
+                                MISSING: 'missing results',
+                                LEAK: 'leaks'}
 
-    EXPECTATION_ORDER = (PASS, CRASH, TIMEOUT, MISSING, FAIL, IMAGE, SKIP)
+    EXPECTATION_ORDER = (PASS, CRASH, TIMEOUT, MISSING, FAIL, IMAGE, LEAK, SKIP)
 
     BUILD_TYPES = ('debug', 'release')
 
index 07595d2..6bda973 100644 (file)
@@ -53,6 +53,7 @@ class Base(unittest.TestCase):
         return ['failures/expected/text.html',
                 'failures/expected/image_checksum.html',
                 'failures/expected/crash.html',
+                'failures/expected/leak.html',
                 'failures/expected/missing_text.html',
                 'failures/expected/image.html',
                 'passes/text.html']
@@ -61,6 +62,7 @@ class Base(unittest.TestCase):
         return """
 Bug(test) failures/expected/text.html [ Failure ]
 Bug(test) failures/expected/crash.html [ WontFix ]
+Bug(test) failures/expected/crash.html [ Leak ]
 Bug(test) failures/expected/missing_image.html [ Rebaseline Missing ]
 Bug(test) failures/expected/image_checksum.html [ WontFix ]
 Bug(test) failures/expected/image.html [ WontFix Mac ]
@@ -89,6 +91,7 @@ class BasicTests(Base):
         self.parse_exp(self.get_basic_expectations())
         self.assert_exp('failures/expected/text.html', FAIL)
         self.assert_exp('failures/expected/image_checksum.html', PASS)
+        self.assert_exp('failures/expected/leak.html', PASS)
         self.assert_exp('passes/text.html', PASS)
         self.assert_exp('failures/expected/image.html', PASS)
 
index 7aeb8cf..b7476ca 100644 (file)
@@ -59,6 +59,8 @@ def determine_result_type(failure_list):
         return test_expectations.TIMEOUT
     elif FailureEarlyExit in failure_types:
         return test_expectations.SKIP
+    elif FailureDocumentLeak in failure_types:
+        return test_expectations.LEAK
     elif (FailureMissingResult in failure_types or
           FailureMissingImage in failure_types or
           FailureMissingImageHash in failure_types or
@@ -160,6 +162,23 @@ class FailureCrash(TestFailure):
         writer.write_crash_log(crashed_driver_output.crash_log)
 
 
+class FailureLeak(TestFailure):
+    def __init__(self):
+        super(FailureLeak, self).__init__()
+
+    def message(self):
+        return "leak"
+
+
+class FailureDocumentLeak(FailureLeak):
+    def __init__(self, leaked_document_urls):
+        super(FailureDocumentLeak, self).__init__()
+        self.leaked_document_urls = leaked_document_urls
+
+    def message(self):
+        return "test leaked document%s %s" % ("s" if len(self.leaked_document_urls) > 1 else "", ', '.join(self.leaked_document_urls))
+
+
 class FailureMissingResult(FailureText):
     def message(self):
         return "-expected.txt was missing"
index 82e1fb2..922a96e 100644 (file)
@@ -68,6 +68,12 @@ class TestResult(object):
     def __hash__(self):
         return self.test_name.__hash__()
 
+    def convert_to_failure(self, failure_result):
+        if self.type is failure_result.type:
+            return
+        self.failures.extend(failure_result.failures)
+        self.type = test_failures.determine_result_type(self.failures)
+
     def has_failure_matching_types(self, *failure_classes):
         for failure in self.failures:
             if type(failure) in failure_classes:
index 5021a2f..197520d 100644 (file)
@@ -93,6 +93,39 @@ class TestRunResults(object):
         if test_is_slow:
             self.slow_tests.add(test_result.test_name)
 
+    def change_result_to_failure(self, existing_result, new_result, existing_expected, new_expected):
+        assert existing_result.test_name == new_result.test_name
+        if existing_result.type is new_result.type:
+            return
+
+        self.tests_by_expectation[existing_result.type].remove(existing_result.test_name)
+        self.tests_by_expectation[new_result.type].add(new_result.test_name)
+
+        had_failures = len(existing_result.failures) > 0
+
+        existing_result.convert_to_failure(new_result)
+
+        if not had_failures and len(existing_result.failures):
+            self.total_failures += 1
+
+        if len(existing_result.failures):
+            self.failures_by_name[existing_result.test_name] = existing_result.failures
+
+        if not existing_expected and new_expected:
+            # test changed from unexpected to expected
+            self.expected += 1
+            self.unexpected_results_by_name.pop(existing_result.test_name, None)
+            self.unexpected -= 1
+            if had_failures:
+                self.unexpected_failures -= 1
+        else:
+            # test changed from expected to unexpected
+            self.expected -= 1
+            self.unexpected_results_by_name[existing_result.test_name] = existing_result
+            self.unexpected += 1
+            if len(existing_result.failures):
+                self.unexpected_failures += 1
+
     def merge(self, test_run_results):
         if not test_run_results:
             return self
@@ -132,6 +165,7 @@ class RunDetails(object):
 
 def _interpret_test_failures(failures):
     test_dict = {}
+
     failure_types = [type(failure) for failure in failures]
     # FIXME: get rid of all this is_* values once there is a 1:1 map between
     # TestFailure type and test_expectations.EXPECTATION.
@@ -144,6 +178,14 @@ def _interpret_test_failures(failures):
     if test_failures.FailureMissingImage in failure_types or test_failures.FailureMissingImageHash in failure_types:
         test_dict['is_missing_image'] = True
 
+    if test_failures.FailureDocumentLeak in failure_types:
+        leaks = []
+        for failure in failures:
+            if isinstance(failure, test_failures.FailureDocumentLeak):
+                for url in failure.leaked_document_urls:
+                    leaks.append({"document": url})
+        test_dict['leaks'] = leaks
+
     if 'image_diff_percent' not in test_dict:
         for failure in failures:
             if isinstance(failure, test_failures.FailureImageHashMismatch) or isinstance(failure, test_failures.FailureReftestMismatch):
@@ -178,8 +220,8 @@ def summarize_results(port_obj, expectations, initial_results, retry_results, en
     num_missing = 0
     num_regressions = 0
     keywords = {}
-    for expecation_string, expectation_enum in test_expectations.TestExpectations.EXPECTATIONS.iteritems():
-        keywords[expectation_enum] = expecation_string.upper()
+    for expectation_string, expectation_enum in test_expectations.TestExpectations.EXPECTATIONS.iteritems():
+        keywords[expectation_enum] = expectation_string.upper()
 
     for modifier_string, modifier_enum in test_expectations.TestExpectations.MODIFIERS.iteritems():
         keywords[modifier_enum] = modifier_string.upper()
@@ -225,6 +267,10 @@ def summarize_results(port_obj, expectations, initial_results, retry_results, en
             if test_name in initial_results.unexpected_results_by_name:
                 num_missing += 1
                 test_dict['report'] = 'MISSING'
+        elif result_type == test_expectations.LEAK:
+            if test_name in initial_results.unexpected_results_by_name:
+                num_regressions += 1
+                test_dict['report'] = 'REGRESSION'
         elif test_name in initial_results.unexpected_results_by_name:
             if retry_results and test_name not in retry_results.unexpected_results_by_name:
                 actual.extend(expectations.model().get_expectations_string(test_name).split(" "))
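The new `LEAK` branch above counts an unexpected leak as a regression, mirroring the `MISSING` branch. A reduced sketch of just that classification step (the `LEAK` constant value and function name here are illustrative, not webkitpy API):

```python
# Illustrative stand-in for test_expectations.LEAK.
LEAK = 'LEAK'

def classify_leak(result_type, test_name, unexpected_results_by_name):
    """An unexpected LEAK result is reported as a REGRESSION."""
    num_regressions = 0
    report = None
    if result_type == LEAK and test_name in unexpected_results_by_name:
        num_regressions += 1
        report = 'REGRESSION'
    return num_regressions, report

print(classify_leak(LEAK, 'failures/unexpected/leak.html',
                    {'failures/unexpected/leak.html': None}))  # -> (1, 'REGRESSION')
```

An expected leak (one listed with `[ Leak ]` in TestExpectations) never reaches this branch as unexpected, so it contributes nothing to the regression count.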
index 78f7397..cbda199 100644

@@ -43,12 +43,14 @@ def get_result(test_name, result_type=test_expectations.PASS, run_time=0):
         failures = [test_failures.FailureAudioMismatch()]
     elif result_type == test_expectations.CRASH:
         failures = [test_failures.FailureCrash()]
+    elif result_type == test_expectations.LEAK:
+        failures = [test_failures.FailureDocumentLeak(['http://localhost:8000/failures/expected/leak.html'])]
     return test_results.TestResult(test_name, failures=failures, test_run_time=run_time)
 
 
 def run_results(port):
     tests = ['passes/text.html', 'failures/expected/timeout.html', 'failures/expected/crash.html', 'failures/expected/hang.html',
-             'failures/expected/audio.html']
+             'failures/expected/audio.html', 'failures/expected/leak.html']
     expectations = test_expectations.TestExpectations(port, tests)
     expectations.parse_all_expectations()
     return test_run_results.TestRunResults(expectations, len(tests))
@@ -63,16 +65,19 @@ def summarized_results(port, expected, passing, flaky, include_passes=False):
         initial_results.add(get_result('failures/expected/audio.html', test_expectations.AUDIO), expected, test_is_slow)
         initial_results.add(get_result('failures/expected/timeout.html', test_expectations.TIMEOUT), expected, test_is_slow)
         initial_results.add(get_result('failures/expected/crash.html', test_expectations.CRASH), expected, test_is_slow)
+        initial_results.add(get_result('failures/expected/leak.html', test_expectations.LEAK), expected, test_is_slow)
     elif passing:
         initial_results.add(get_result('passes/text.html'), expected, test_is_slow)
         initial_results.add(get_result('failures/expected/audio.html'), expected, test_is_slow)
         initial_results.add(get_result('failures/expected/timeout.html'), expected, test_is_slow)
         initial_results.add(get_result('failures/expected/crash.html'), expected, test_is_slow)
+        initial_results.add(get_result('failures/expected/leak.html'), expected, test_is_slow)
     else:
         initial_results.add(get_result('passes/text.html', test_expectations.TIMEOUT), expected, test_is_slow)
         initial_results.add(get_result('failures/expected/audio.html', test_expectations.AUDIO), expected, test_is_slow)
         initial_results.add(get_result('failures/expected/timeout.html', test_expectations.CRASH), expected, test_is_slow)
         initial_results.add(get_result('failures/expected/crash.html', test_expectations.TIMEOUT), expected, test_is_slow)
+        initial_results.add(get_result('failures/expected/leak.html', test_expectations.CRASH), expected, test_is_slow)
 
         # we only list hang.html here, since normally this is WontFix
         initial_results.add(get_result('failures/expected/hang.html', test_expectations.TIMEOUT), expected, test_is_slow)
@@ -82,6 +87,7 @@ def summarized_results(port, expected, passing, flaky, include_passes=False):
         retry_results.add(get_result('passes/text.html'), True, test_is_slow)
         retry_results.add(get_result('failures/expected/timeout.html'), True, test_is_slow)
         retry_results.add(get_result('failures/expected/crash.html'), True, test_is_slow)
+        retry_results.add(get_result('failures/expected/leak.html'), True, test_is_slow)
     else:
         retry_results = None
 
index ced6b9d..4e8e315 100755
@@ -290,6 +290,7 @@ def parse_args(args):
         optparse.make_option('--display-server', choices=['xvfb', 'xorg', 'weston', 'wayland'], default='xvfb',
             help='"xvfb": Use a virtualized X11 server. "xorg": Use the current X11 session. '
                  '"weston": Use a virtualized Weston server. "wayland": Use the current wayland session.'),
+        optparse.make_option("--world-leaks", action="store_true", default=False, help="Check for world leaks (currently, only documents). Differs from --leaks in that this uses internal instrumentation, rather than external tools."),
     ]))
 
     option_group_definitions.append(("iOS Options", [
index 7ab327a..f04cb58 100644
@@ -67,6 +67,10 @@ def parse_args(extra_args=None, tests_included=False, new_results=False, print_n
 
     if not '--child-processes' in extra_args:
         args.extend(['--child-processes', 1])
+
+    if not '--world-leaks' in extra_args:
+        args.append('--world-leaks')
+
     args.extend(extra_args)
     if not tests_included:
         # We use the glob to test that globbing works.
@@ -332,6 +336,9 @@ class RunTest(unittest.TestCase, StreamTestingMixin):
     def test_gc_between_tests(self):
         self.assertTrue(passing_run(['--gc-between-tests']))
 
+    def test_check_for_world_leaks(self):
+        self.assertTrue(passing_run(['--world-leaks']))
+
     def test_complex_text(self):
         self.assertTrue(passing_run(['--complex-text']))
 
index ea287fe..a803adb 100644
@@ -34,6 +34,7 @@ import shlex
 import sys
 import time
 import os
+from collections import defaultdict
 
 from os.path import normpath
 from webkitpy.common.system import path
@@ -119,6 +120,13 @@ class DriverOutput(object):
             self.error = re.sub(pattern[0], pattern[1], self.error)
 
 
+class DriverPostTestOutput(object):
+    """Groups data collected for a set of tests, collected after all those testse have run
+    (for example, data about leaked objects)"""
+    def __init__(self, world_leaks_dict):
+        self.world_leaks_dict = world_leaks_dict
+
+
 class Driver(object):
     """object for running test(s) using DumpRenderTree/WebKitTestRunner."""
 
@@ -242,6 +250,38 @@ class Driver(object):
             crashed_process_name=self._crashed_process_name,
             crashed_pid=self._crashed_pid, crash_log=crash_log, pid=pid)
 
+    def do_post_tests_work(self):
+        if not self._port.get_option('world_leaks'):
+            return None
+
+        if not self._server_process:
+            return None
+
+        _log.debug('Checking for world leaks...')
+        self._server_process.write('#CHECK FOR WORLD LEAKS\n')
+        deadline = time.time() + 20
+        block = self._read_block(deadline, '', wait_for_stderr_eof=True)
+
+        _log.debug('World leak result: %s' % (block.decoded_content))
+
+        return self._parse_world_leaks_output(block.decoded_content)
+
+    def _parse_world_leaks_output(self, output):
+        tests_with_world_leaks = defaultdict(list)
+
+        last_test = None
+        for line in output.splitlines():
+            m = re.match('^TEST: (.+)$', line)
+            if m:
+                last_test = self.uri_to_test(m.group(1))
+            m = re.match('^ABANDONED DOCUMENT: (.+)$', line)
+            if m:
+                leaked_document_url = m.group(1)
+                if last_test:
+                    tests_with_world_leaks[last_test].append(leaked_document_url)
+
+        return DriverPostTestOutput(tests_with_world_leaks)
+
     def _get_crash_log(self, stdout, stderr, newer_than):
         return self._port._get_crash_log(self._crashed_process_name, self._crashed_pid, stdout, stderr, newer_than, target_host=self._target_host)
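The `#CHECK FOR WORLD LEAKS` response parsed above is a simple line protocol: a `TEST:` line names the most recently run test, and each following `ABANDONED DOCUMENT:` line records a document that leaked during it. A self-contained sketch of that loop (here `uri_to_test` is replaced by an identity function, since the real URI-to-test mapping is port-specific):

```python
import re
from collections import defaultdict

def parse_world_leaks_output(output, uri_to_test=lambda uri: uri):
    """Same protocol as Driver._parse_world_leaks_output: group leaked
    document URLs under the test that was running when they leaked."""
    tests_with_world_leaks = defaultdict(list)
    last_test = None
    for line in output.splitlines():
        m = re.match('^TEST: (.+)$', line)
        if m:
            last_test = uri_to_test(m.group(1))
        m = re.match('^ABANDONED DOCUMENT: (.+)$', line)
        if m and last_test:
            tests_with_world_leaks[last_test].append(m.group(1))
    return tests_with_world_leaks

sample = """TEST: file:///LayoutTests/failures/unexpected/leak.html
ABANDONED DOCUMENT: file:///LayoutTests/failures/unexpected/leak.html
ABANDONED DOCUMENT: file:///LayoutTests/failures/unexpected/leak-subframe.html"""
print(dict(parse_world_leaks_output(sample)))
```

Lines that match neither pattern are ignored, and an `ABANDONED DOCUMENT:` line before any `TEST:` line is dropped, which matches the guard on `last_test` in the driver code.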
 
@@ -432,6 +472,8 @@ class Driver(object):
             cmd.append('--accelerated-drawing')
         if self._port.get_option('remote_layer_tree'):
             cmd.append('--remote-layer-tree')
+        if self._port.get_option('world_leaks'):
+            cmd.append('--world-leaks')
         if self._port.get_option('threaded'):
             cmd.append('--threaded')
         if self._no_timeout:
@@ -705,6 +747,9 @@ class DriverProxy(object):
 
         return self._driver.run_test(driver_input, stop_when_done)
 
+    def do_post_tests_work(self):
+        return self._driver.do_post_tests_work()
+
     def has_crashed(self):
         return self._driver.has_crashed()
 
index df5e67d..f2d6c6f 100644
@@ -97,12 +97,12 @@ class TestList(object):
 #
 # These numbers may need to be updated whenever we add or delete tests.
 #
-TOTAL_TESTS = 72
+TOTAL_TESTS = 74
 TOTAL_SKIPS = 9
-TOTAL_RETRIES = 14
+TOTAL_RETRIES = 15
 
 UNEXPECTED_PASSES = 7
-UNEXPECTED_FAILURES = 17
+UNEXPECTED_FAILURES = 18
 
 
 def unit_test_list():
@@ -115,6 +115,7 @@ def unit_test_list():
     tests.add('failures/expected/timeout.html', timeout=True)
     tests.add('failures/expected/hang.html', hang=True)
     tests.add('failures/expected/missing_text.html', expected_text=None)
+    tests.add('failures/expected/leak.html', leak=True)
     tests.add('failures/expected/image.html',
               actual_image='image_fail-pngtEXtchecksum\x00checksum_fail',
               expected_image='image-pngtEXtchecksum\x00checksum-png')
@@ -176,6 +177,7 @@ layer at (0,0) size 800x34
     tests.add('failures/unexpected/skip_pass.html')
     tests.add('failures/unexpected/text.html', actual_text='text_fail-txt')
     tests.add('failures/unexpected/timeout.html', timeout=True)
+    tests.add('failures/unexpected/leak.html', leak=True)
     tests.add('http/tests/passes/text.html')
     tests.add('http/tests/passes/image.html')
     tests.add('http/tests/ssl/text.html')
@@ -280,6 +282,7 @@ def add_unit_tests_to_mock_filesystem(filesystem):
     if not filesystem.exists(LAYOUT_TEST_DIR + '/platform/test/TestExpectations'):
         filesystem.write_text_file(LAYOUT_TEST_DIR + '/platform/test/TestExpectations', """
 Bug(test) failures/expected/crash.html [ Crash ]
+Bug(test) failures/expected/leak.html [ Leak ]
 Bug(test) failures/expected/image.html [ ImageOnlyFailure ]
 Bug(test) failures/expected/audio.html [ Failure ]
 Bug(test) failures/expected/image_checksum.html [ ImageOnlyFailure ]
@@ -591,5 +594,17 @@ class TestDriver(Driver):
             crashed_pid=crashed_pid, crash_log=crash_log,
             test_time=time.time() - start_time, timeout=test.timeout, error=test.error, pid=self.pid)
 
+    def do_post_tests_work(self):
+        if not self._port.get_option('world_leaks'):
+            return None
+
+        test_world_leaks_output = """TEST: file:///test.checkout/LayoutTests/failures/expected/leak.html
+ABANDONED DOCUMENT: file:///test.checkout/LayoutTests//failures/expected/leak.html
+TEST: file:///test.checkout/LayoutTests/failures/unexpected/leak.html
+ABANDONED DOCUMENT: file:///test.checkout/LayoutTests//failures/expected/leak.html
+TEST: file:///test.checkout/LayoutTests/failures/unexpected/leak.html
+ABANDONED DOCUMENT: file:///test.checkout/LayoutTests//failures/expected/leak-subframe.html"""
+        return self._parse_world_leaks_output(test_world_leaks_output)
+
     def stop(self):
         self.started = False
index a51440b..9dd37b0 100644
@@ -135,7 +135,7 @@ OptionsHandler::OptionsHandler(Options& o)
     optionList.append(Option("--allow-any-certificate-for-allowed-hosts", "Allows any HTTPS certificate for an allowed host.", handleOptionAllowAnyHTTPSCertificateForAllowedHosts));
     optionList.append(Option("--show-webview", "Show the WebView during test runs (for debugging)", handleOptionShowWebView));
     optionList.append(Option("--show-touches", "Show the touches during test runs (for debugging)", handleOptionShowTouches));
-    optionList.append(Option("--check-for-world-leaks", "Check for leaks of world objects (currently, documents)", handleOptionCheckForWorldLeaks));
+    optionList.append(Option("--world-leaks", "Check for leaks of world objects (currently, documents)", handleOptionCheckForWorldLeaks));
 
     optionList.append(Option(0, 0, handleOptionUnmatched));
 }
index 68a4d12..b63da01 100644
@@ -950,6 +950,9 @@ void TestController::updateLiveDocumentsAfterTest()
 
 void TestController::checkForWorldLeaks()
 {
+    if (!TestController::singleton().mainWebView())
+        return;
+
     AsyncTask([]() {
         // This runs at the end of a series of tests. It clears caches, runs a GC and then fetches the list of documents.
         WKRetainPtr<WKStringRef> messageName = adoptWK(WKStringCreateWithUTF8CString("CheckForWorldLeaks"));