run-webkit-tests does not copy all crash logs for layout test failures on iOS
https://bugs.webkit.org/show_bug.cgi?id=150056
Tools:
<rdar://problem/9280656>
Patch by Aakash Jain <aakash_jain@apple.com> on 2015-10-20
Reviewed by Alexey Proskuryakov.
* Scripts/webkitpy/common/system/crashlogs.py:
(CrashLogs.find_all_logs): Generic method to find all crash logs.
(CrashLogs._find_all_logs_darwin): Darwin-based method to find all crash logs.
It iterates through the log directory and returns all logs newer than the given timestamp.
* Scripts/webkitpy/common/system/crashlogs_unittest.py:
(CrashLogsTest.create_crash_logs_darwin): Creates sample crash logs and verifies them.
(CrashLogsTest.test_find_all_log_darwin): Test case for the find_all_logs method above.
(CrashLogsTest.test_find_log_darwin): Restructured to share code with other methods.
* Scripts/webkitpy/layout_tests/controllers/manager.py:
(Manager.run): Modified start_time to start counting before simulator launch
so that we can capture crashes during simulator launch.
(Manager._look_for_new_crash_logs): Browse through the list of crashes and append
any test that is not already marked as CRASH to the run_results.
* Scripts/webkitpy/layout_tests/models/test_expectations.py:
(TestExpectationsModel.get_expectations_string): Return PASS in case there
are no expectations defined for this test.
* Scripts/webkitpy/layout_tests/models/test_run_results.py:
(summarize_results): Add other_crashes as a separate category in full_results.json.
* Scripts/webkitpy/port/ios.py:
(IOSSimulatorPort._merge_crash_logs): Merge unique crash logs from two dictionaries.
(IOSSimulatorPort._look_for_all_crash_logs_in_log_dir): Get the crash logs
from the log directory.
(IOSSimulatorPort.look_for_new_crash_logs): Uses the above method to get crash
logs from the log directory and merges them with the logs of tests that already crashed.
LayoutTests:
<rdar://problem/22239750>
Patch by Aakash Jain <aakash_jain@apple.com> on 2015-10-20
Reviewed by Alexey Proskuryakov.
* fast/harness/results.html: Added a column for Other crashes, which contains
all the newly found crashes from the crash-log directory. Added the method forOtherCrashes,
which processes the other_crashes section of full_results.json. Also fixed the method
splitExtension to handle the case when there is no extension.
git-svn-id: https://svn.webkit.org/repository/webkit/trunk@191374 268f45cc-cd09-0410-ab3c-d52691b4dbfc
+2015-10-20 Aakash Jain <aakash_jain@apple.com>
+
+ run-webkit-tests does not copy all crash logs for layout test failures on iOS
+ https://bugs.webkit.org/show_bug.cgi?id=150056
+ <rdar://problem/22239750>
+
+ Reviewed by Alexey Proskuryakov.
+
+ * fast/harness/results.html: Added a column for Other crashes, which contains
+ all the newly found crashes from the crash-log directory. Added the method forOtherCrashes,
+ which processes the other_crashes section of full_results.json. Also fixed the method
+ splitExtension to handle the case when there is no extension.
+
2015-10-20 Mark Lam <mark.lam@apple.com>
YarrPatternConstructor::containsCapturingTerms() should not assume that its terms.size() is greater than 0.
if (!g_state) {
g_state = {
crashTests: [],
+ crashOther: [],
flakyPassTests: [],
hasHttpTests: false,
hasImageFailures: false,
function splitExtension(test)
{
var index = test.lastIndexOf('.');
+ if (index == -1) {
+ return [test, ""];
+ }
return [test.substring(0, index), test.substring(index + 1)];
}
}
}
+function forOtherCrashes()
+{
+ var tree = globalState().results.other_crashes;
+ for (var key in tree) {
+ var testObject = tree[key];
+ testObject.name = key;
+ globalState().crashOther.push(testObject);
+ }
+}
+
function hasUnexpected(tests)
{
return tests.some(function (test) { return !test.isExpected; });
function generatePage()
{
forEachTest(processGlobalStateFor);
+ forOtherCrashes();
var html = "";
if (globalState().crashTests.length)
html += testList(globalState().crashTests, 'Tests that crashed', 'crash-tests-table');
+ if (globalState().crashOther.length)
+ html += testList(globalState().crashOther, 'Other Crashes', 'crash-tests-table');
+
html += failingTestsTable(globalState().failingTests,
'Tests that failed text/pixel/audio diff', 'results-table');
+2015-10-20 Aakash Jain <aakash_jain@apple.com>
+
+ run-webkit-tests does not copy all crash logs for layout test failures on iOS
+ https://bugs.webkit.org/show_bug.cgi?id=150056
+ <rdar://problem/9280656>
+
+ Reviewed by Alexey Proskuryakov.
+
+ * Scripts/webkitpy/common/system/crashlogs.py:
+ (CrashLogs.find_all_logs): Generic method to find all crash logs.
+ (CrashLogs._find_all_logs_darwin): Darwin-based method to find all crash logs.
+ It iterates through the log directory and returns all logs newer than the given timestamp.
+ * Scripts/webkitpy/common/system/crashlogs_unittest.py:
+ (CrashLogsTest.create_crash_logs_darwin): Creates sample crash logs and verifies them.
+ (CrashLogsTest.test_find_all_log_darwin): Test case for the find_all_logs method above.
+ (CrashLogsTest.test_find_log_darwin): Restructured to share code with other methods.
+ * Scripts/webkitpy/layout_tests/controllers/manager.py:
+ (Manager.run): Modified start_time to start counting before simulator launch
+ so that we can capture crashes during simulator launch.
+ (Manager._look_for_new_crash_logs): Browse through the list of crashes and append
+ any test that is not already marked as CRASH to the run_results.
+ * Scripts/webkitpy/layout_tests/models/test_expectations.py:
+ (TestExpectationsModel.get_expectations_string): Return PASS in case there
+ are no expectations defined for this test.
+ * Scripts/webkitpy/layout_tests/models/test_run_results.py:
+ (summarize_results): Add other_crashes as a separate category in full_results.json.
+ * Scripts/webkitpy/port/ios.py:
+ (IOSSimulatorPort._merge_crash_logs): Merge unique crash logs from two dictionaries.
+ (IOSSimulatorPort._look_for_all_crash_logs_in_log_dir): Get the crash logs
+ from the log directory.
+ (IOSSimulatorPort.look_for_new_crash_logs): Uses the above method to get crash
+ logs from the log directory and merges them with the logs of tests that already crashed.
+
2015-10-20 Dana Burkart <dburkart@apple.com>
Fix the build
return self._find_newest_log_win(process_name, pid, include_errors, newer_than)
return None
+ def find_all_logs(self, include_errors=False, newer_than=None):
+ if self._host.platform.is_mac():
+ return self._find_all_logs_darwin(include_errors, newer_than)
+ return None
+
def _log_directory_darwin(self):
log_directory = self._host.filesystem.expanduser("~")
log_directory = self._host.filesystem.join(log_directory, "Library", "Logs")
if include_errors and errors:
return errors
return None
+
+ def _find_all_logs_darwin(self, include_errors, newer_than):
+ def is_crash_log(fs, dirpath, basename):
+ return basename.endswith(".crash")
+
+ log_directory = self._log_directory_darwin()
+ logs = self._host.filesystem.files_under(log_directory, file_filter=is_crash_log)
+ first_line_regex = re.compile(r'^Process:\s+(?P<process_name>.*) \[(?P<pid>\d+)\]$')
+ errors = ''
+ crash_logs = {}
+ for path in reversed(sorted(logs)):
+ try:
+ if not newer_than or self._host.filesystem.mtime(path) > newer_than:
+ result_name = "Unknown"
+ pid = 0
+ log_contents = self._host.filesystem.read_text_file(path)
+ match = first_line_regex.match(log_contents[0:log_contents.find('\n')])
+ if match:
+ process_name = match.group('process_name')
+ pid = str(match.group('pid'))
+ result_name = process_name + "-" + pid
+
+ while result_name in crash_logs:
+ result_name = result_name + "-1"
+ crash_logs[result_name] = errors + log_contents
+ except IOError, e:
+ if include_errors:
+ errors += "ERROR: Failed to read '%s': %s\n" % (path, str(e))
+ except OSError, e:
+ if include_errors:
+ errors += "ERROR: Failed to read '%s': %s\n" % (path, str(e))
+
+ if include_errors and errors and len(crash_logs) == 0:
+ return errors
+ return crash_logs
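For context, a minimal usage sketch of the new API (not part of the patch; the timestamp value is hypothetical):

    from webkitpy.common.system.systemhost import SystemHost
    from webkitpy.common.system.crashlogs import CrashLogs

    host = SystemHost()
    crash_logs = CrashLogs(host)
    start_time = 0  # hypothetical; real callers pass the start time of the test run
    all_logs = crash_logs.find_all_logs(include_errors=True, newer_than=start_time)
    # all_logs maps "ProcessName-pid" (or "Unknown" when the Process: header cannot
    # be parsed) to the raw contents of each crash log newer than start_time.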
""".format(process_name=process_name, pid=pid)
class CrashLogsTest(unittest.TestCase):
- def test_find_log_darwin(self):
+ def create_crash_logs_darwin(self):
if not SystemHost().platform.is_mac():
return
- older_mock_crash_report = make_mock_crash_report_darwin('DumpRenderTree', 28528)
- mock_crash_report = make_mock_crash_report_darwin('DumpRenderTree', 28530)
- newer_mock_crash_report = make_mock_crash_report_darwin('DumpRenderTree', 28529)
- other_process_mock_crash_report = make_mock_crash_report_darwin('FooProcess', 28527)
- misformatted_mock_crash_report = 'Junk that should not appear in a crash report' + make_mock_crash_report_darwin('DumpRenderTree', 28526)[200:]
- files = {}
- files['/Users/mock/Library/Logs/DiagnosticReports/DumpRenderTree_2011-06-13-150718_quadzen.crash'] = older_mock_crash_report
- files['/Users/mock/Library/Logs/DiagnosticReports/DumpRenderTree_2011-06-13-150719_quadzen.crash'] = mock_crash_report
- files['/Users/mock/Library/Logs/DiagnosticReports/DumpRenderTree_2011-06-13-150720_quadzen.crash'] = newer_mock_crash_report
- files['/Users/mock/Library/Logs/DiagnosticReports/DumpRenderTree_2011-06-13-150721_quadzen.crash'] = None
- files['/Users/mock/Library/Logs/DiagnosticReports/DumpRenderTree_2011-06-13-150722_quadzen.crash'] = other_process_mock_crash_report
- files['/Users/mock/Library/Logs/DiagnosticReports/DumpRenderTree_2011-06-13-150723_quadzen.crash'] = misformatted_mock_crash_report
- filesystem = MockFileSystem(files)
- crash_logs = CrashLogs(MockSystemHost(filesystem=filesystem))
+ self.older_mock_crash_report = make_mock_crash_report_darwin('DumpRenderTree', 28528)
+ self.mock_crash_report = make_mock_crash_report_darwin('DumpRenderTree', 28530)
+ self.newer_mock_crash_report = make_mock_crash_report_darwin('DumpRenderTree', 28529)
+ self.other_process_mock_crash_report = make_mock_crash_report_darwin('FooProcess', 28527)
+ self.misformatted_mock_crash_report = 'Junk that should not appear in a crash report' + make_mock_crash_report_darwin('DumpRenderTree', 28526)[200:]
+ self.files = {}
+ self.files['/Users/mock/Library/Logs/DiagnosticReports/DumpRenderTree_2011-06-13-150718_quadzen.crash'] = self.older_mock_crash_report
+ self.files['/Users/mock/Library/Logs/DiagnosticReports/DumpRenderTree_2011-06-13-150719_quadzen.crash'] = self.mock_crash_report
+ self.files['/Users/mock/Library/Logs/DiagnosticReports/DumpRenderTree_2011-06-13-150720_quadzen.crash'] = self.newer_mock_crash_report
+ self.files['/Users/mock/Library/Logs/DiagnosticReports/DumpRenderTree_2011-06-13-150721_quadzen.crash'] = None
+ self.files['/Users/mock/Library/Logs/DiagnosticReports/DumpRenderTree_2011-06-13-150722_quadzen.crash'] = self.other_process_mock_crash_report
+ self.files['/Users/mock/Library/Logs/DiagnosticReports/DumpRenderTree_2011-06-13-150723_quadzen.crash'] = self.misformatted_mock_crash_report
+ self.filesystem = MockFileSystem(self.files)
+ crash_logs = CrashLogs(MockSystemHost(filesystem=self.filesystem))
+ logs = self.filesystem.files_under('/Users/mock/Library/Logs/DiagnosticReports/')
+ for path in reversed(sorted(logs)):
+ self.assertTrue(path in self.files.keys())
+ return crash_logs
+
+ def test_find_all_log_darwin(self):
+ crash_logs = self.create_crash_logs_darwin()
+ all_logs = crash_logs.find_all_logs()
+ self.assertEqual(len(all_logs), 5)
+
+ for test, crash_log in all_logs.iteritems():
+ self.assertTrue(crash_log in self.files.values())
+ self.assertTrue(test == "Unknown" or int(test.split("-")[1]) in range(28527, 28531))
+
+ def test_find_log_darwin(self):
+ crash_logs = self.create_crash_logs_darwin()
log = crash_logs.find_newest_log("DumpRenderTree")
- self.assertMultiLineEqual(log, newer_mock_crash_report)
+ self.assertMultiLineEqual(log, self.newer_mock_crash_report)
log = crash_logs.find_newest_log("DumpRenderTree", 28529)
- self.assertMultiLineEqual(log, newer_mock_crash_report)
+ self.assertMultiLineEqual(log, self.newer_mock_crash_report)
log = crash_logs.find_newest_log("DumpRenderTree", 28530)
- self.assertMultiLineEqual(log, mock_crash_report)
+ self.assertMultiLineEqual(log, self.mock_crash_report)
log = crash_logs.find_newest_log("DumpRenderTree", 28531)
self.assertIsNone(log)
log = crash_logs.find_newest_log("DumpRenderTree", newer_than=1.0)
def bad_mtime(path):
raise OSError('OSError: No such file or directory')
- filesystem.read_text_file = bad_read
+ self.filesystem.read_text_file = bad_read
log = crash_logs.find_newest_log("DumpRenderTree", 28531, include_errors=True)
self.assertIn('IOError: No such file or directory', log)
- filesystem = MockFileSystem(files)
- crash_logs = CrashLogs(MockSystemHost(filesystem=filesystem))
- filesystem.mtime = bad_mtime
+ self.filesystem = MockFileSystem(self.files)
+ crash_logs = CrashLogs(MockSystemHost(filesystem=self.filesystem))
+ self.filesystem.mtime = bad_mtime
log = crash_logs.find_newest_log("DumpRenderTree", newer_than=1.0, include_errors=True)
self.assertIn('OSError: No such file or directory', log)
from webkitpy.layout_tests.layout_package import json_results_generator
from webkitpy.layout_tests.models import test_expectations
from webkitpy.layout_tests.models import test_failures
+from webkitpy.layout_tests.models import test_results
from webkitpy.layout_tests.models import test_run_results
from webkitpy.layout_tests.models.test_input import TestInput
from webkitpy.layout_tests.models.test_run_results import INTERRUPTED_EXIT_STATUS
tests_to_run, tests_to_skip = self._prepare_lists(paths, test_names)
self._printer.print_found(len(test_names), len(tests_to_run), self._options.repeat_each, self._options.iterations)
+ start_time = time.time()
# Check to make sure we're not skipping every test.
if not tests_to_run:
if not self._set_up_run(tests_to_run):
return test_run_results.RunDetails(exit_code=-1)
- start_time = time.time()
enabled_pixel_tests_in_retry = False
try:
initial_results = self._run_tests(tests_to_run, tests_to_skip, self._options.repeat_each, self._options.iterations,
writer = TestResultWriter(self._port._filesystem, self._port, self._port.results_directory(), test)
writer.write_crash_log(crash_log)
+ # Check if this crashing 'test' is already in the list of crashed_processes; if not, add it to the run_results
+ if not any(process[0] == test for process in crashed_processes):
+ result = test_results.TestResult(test)
+ result.type = test_expectations.CRASH
+ result.is_other_crash = True
+ run_results.add(result, expected=False, test_is_slow=False)
+ _log.debug("Adding results for other crash: " + str(test))
+
def _clobber_old_results(self):
# Just clobber the actual test results directories since the other
# files in the results directory are explicitly used for cross-run
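As a rough sketch (the process name and pid below are made up), this is how a crash found only in the crash-log directory gets recorded, so that summarize_results() can later route it into the separate other_crashes section:

    from webkitpy.layout_tests.models import test_expectations
    from webkitpy.layout_tests.models import test_results

    # The 'test name' for such a crash is the "ProcessName-pid" key from the crash
    # log dictionary; is_other_crash keeps it out of the normal per-test results.
    result = test_results.TestResult('WebProcess-1234')  # hypothetical key
    result.type = test_expectations.CRASH
    result.is_other_crash = True
    # run_results.add(result, expected=False, test_is_slow=False)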
def get_expectations_string(self, test):
"""Returns the expectatons for the given test as an uppercase string.
If there are no expectations for the test, then "PASS" is returned."""
- expectations = self.get_expectations(test)
+ try:
+ expectations = self.get_expectations(test)
+ except:
+ return "PASS"
retval = []
for expectation in expectations:
self.shard_name = ''
self.total_run_time = 0 # The time taken to run the test plus any references, compute diffs, etc.
self.test_number = None
+ self.is_other_crash = False
def __eq__(self, other):
return (self.test_name == other.test_name and
'tests': a dict of tests -> {'expected': '...', 'actual': '...'}
"""
results = {}
- results['version'] = 3
+ results['version'] = 4
tbe = initial_results.tests_by_expectation
tbt = initial_results.tests_by_timeline
keywords[modifier_enum] = modifier_string.upper()
tests = {}
+ other_crashes_dict = {}
for test_name, result in initial_results.results_by_name.iteritems():
# Note that if a test crashed in the original run, we ignore
if result_type == test_expectations.SKIP:
continue
+ if result.is_other_crash:
+ other_crashes_dict[test_name] = {}
+ continue
+
test_dict = {}
if result.has_stderr:
test_dict['has_stderr'] = True
results['layout_tests_dir'] = port_obj.layout_tests_dir()
results['has_pretty_patch'] = port_obj.pretty_patch.pretty_patch_available()
results['pixel_tests_enabled'] = port_obj.get_option('pixel_tests')
+ results['other_crashes'] = other_crashes_dict
try:
# We only use the svn revision for using trac links in the results.html file,
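For illustration only (the keys below are hypothetical), the new section appears in full_results.json next to the regular per-test results:

    # Hypothetical fragment of full_results.json after this change; each entry is
    # keyed by "ProcessName-pid" and is currently an empty dict.
    results_fragment = {
        "version": 4,
        "other_crashes": {
            "WebProcess-1234": {},
            "DumpRenderTree-28530": {},
        },
    }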
def testing_device(self):
return Simulator().lookup_or_create_device(self.simulator_device_type.name + ' WebKit Tester', self.simulator_device_type, self.simulator_runtime)
+ def _merge_crash_logs(self, logs, new_logs, crashed_processes):
+ for test, crash_log in new_logs.iteritems():
+ try:
+ process_name = test.split("-")[0]
+ pid = int(test.split("-")[1])
+ except IndexError:
+ continue
+ if not any(entry[1] == process_name and entry[2] == pid for entry in crashed_processes):
+ # if this is a new crash, then append the logs
+ logs[test] = crash_log
+ return logs
+
+ def _look_for_all_crash_logs_in_log_dir(self, newer_than):
+ crash_log = CrashLogs(self.host)
+ return crash_log.find_all_logs(include_errors=True, newer_than=newer_than)
+
def look_for_new_crash_logs(self, crashed_processes, start_time):
crash_logs = {}
for (test_name, process_name, pid) in crashed_processes:
if not crash_log:
continue
crash_logs[test_name] = crash_log
- return crash_logs
+ all_crash_log = self._look_for_all_crash_logs_in_log_dir(start_time)
+ return self._merge_crash_logs(crash_logs, all_crash_log, crashed_processes)
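A small illustration (data made up) of the merge above: entries from the log directory are keyed by "ProcessName-pid", and any entry whose process name and pid already appear in crashed_processes is skipped:

    # Hypothetical inputs to _merge_crash_logs(); crashed_processes holds
    # (test_name, process_name, pid) tuples for crashes already attributed to tests.
    crashed_processes = [('fast/dom/foo.html', 'DumpRenderTree', 28530)]
    logs = {'fast/dom/foo.html': '...log already captured for the known crash...'}
    new_logs = {
        'DumpRenderTree-28530': '...same crash, rediscovered in the log directory...',
        'WebProcess-1234': '...previously unreported crash...',
    }
    # After merging, only the previously unreported crash is added:
    # logs == {'fast/dom/foo.html': '...', 'WebProcess-1234': '...'}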
def look_for_new_samples(self, unresponsive_processes, start_time):
sample_files = {}