Perf dashboard should show the results of A/B testing
https://bugs.webkit.org/show_bug.cgi?id=141500

Reviewed by Chris Dumez.

Added support for fetching test_runs for a specific test group in /api/runs/, and used it in the
analysis task page to fetch results for each test group.
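The test group is passed as a query parameter on the existing per-configuration JSON endpoint. A minimal sketch of how the v2 UI builds the request path, mirroring RunsData.fetchRuns in public/v2/data.js (the numeric IDs below are made up for illustration):

```javascript
// Builds the relative path the v2 UI fetches; testGroupId is optional.
function runsPath(platformId, metricId, testGroupId) {
    var filename = platformId + '-' + metricId + '.json';
    if (testGroupId)
        filename += '?testGroup=' + testGroupId;
    return '../api/runs/' + filename;
}

runsPath(7, 12, 3); // → '../api/runs/7-12.json?testGroup=3'
runsPath(7, 12);    // → '../api/runs/7-12.json'
```

When testGroup is present, the server skips the cache headers, since A/B testing results change as build requests complete.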

Merged App.createChartData into App.Manifest.fetchRunsWithPlatformAndMetric so that App.BuildRequest
can use the formatter.
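After the merge, the promise resolves with the formatted chart data directly instead of raw runs. A rough sketch of the resolved shape, with field names taken from the patch (the time-series values are stand-ins, and the formatter is a plain stub for d3.format):

```javascript
// Sketch of what fetchRunsWithPlatformAndMetric now resolves with.
function makeResult(platform, metric, runs, unit, useSI) {
    return {
        platform: platform,
        metric: metric,
        data: {
            current: runs.current,
            baseline: runs.baseline || null,
            target: runs.target || null,
            unit: unit,
            // The real code uses d3.format('.4s') / d3.format('.4g').
            formatter: function (v) { return useSI ? v.toExponential(3) : v.toPrecision(4); },
            smallerIsBetter: unit != 'fps' && unit != '/s',
        },
    };
}
```

Consumers such as App.Pane and App.BuildRequest then read result.data.formatter and result.data.unit without re-deriving them.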

* public/api/runs.php:
(fetch_runs_for_config_and_test_group): Added.
(fetch_runs_for_config): Just return the fetched rows since main will format them with RunsGenerator.
(main): Use fetch_runs_for_config_and_test_group to fetch rows when a test group id is specified. Also
use RunsGenerator to format results.
(RunsGenerator): Added.
(RunsGenerator::__construct): Added.
(RunsGenerator::add_runs): Added.
(RunsGenerator::format_run): Moved.
(RunsGenerator::parse_revisions_array): Moved.

* public/v2/analysis.js:
(App.TestGroup): Fixed a typo. The property on a test group that refers to an analysis task is "task".
(App.TestGroup._fetchChartData): Added. Fetches all A/B testing results for this group.
(App.BuildRequest.configLetter): Renamed from config since this returns a letter that identifies the
configuration associated with this build request, such as "A" or "B".
(App.BuildRequest.statusLabel): Added the missing label for failed build requests.
(App.BuildRequest.url): Added. Returns the URL associated with this build request.
(App.BuildRequest._fetchMean): Added. Retrieves the mean and the build number for this request via
_fetchChartData.

* public/v2/app.js:
(App.Pane._fetch): Set chartData directly here.
(App.Pane._updateMovingAverageAndEnvelope): Renamed from _computeChartData. No longer sets chartData
now that it's done in App.Pane._fetch.
(App.AnalysisTaskController._fetchedRuns): Updated per createChartData merge.

* public/v2/data.js:
(Measurement.prototype.buildId): Added.
(TimeSeries.prototype.findPointByBuild): Added.

* public/v2/index.html: Fixed a bug where the build status URL was broken. We can't use the link-to helper
since url is not an Ember routed path.

* public/v2/manifest.js:
(App.Manifest.fetchRunsWithPlatformAndMetric): Takes testGroupId as the fourth argument. Merged
App.createChartData here so that App.BuildRequest can use the formatter.
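The two data.js additions pair up: each run in the JSON now carries its build id, and a time series point can be looked up by it, which is how App.BuildRequest finds its mean. A simplified sketch, where Measurement and TimeSeries are cut-down stand-ins for the real classes:

```javascript
// Minimal stand-ins for the classes in public/v2/data.js.
function Measurement(raw) { this._raw = raw; }
Measurement.prototype.buildId = function () { return this._raw['build']; };

function TimeSeries(series) { this._series = series; }
TimeSeries.prototype.findPointByBuild = function (buildId) {
    // Loose equality, since the JSON may carry build ids as strings.
    return this._series.find(function (point) { return point.measurement.buildId() == buildId; });
};

var series = new TimeSeries([{ measurement: new Measurement({ build: 42 }) }]);
series.findPointByBuild(42); // the point whose run came from build 42
```

findPointByBuild returns undefined when no point matches, which _fetchMean treats as "results not in yet".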
git-svn-id: https://svn.webkit.org/repository/webkit/trunk@180000 268f45cc-cd09-0410-ab3c-d52691b4dbfc
+2015-02-12 Ryosuke Niwa <rniwa@webkit.org>
+
+ Perf dashboard should show the results of A/B testing
+ https://bugs.webkit.org/show_bug.cgi?id=141500
+
+ Reviewed by Chris Dumez.
+
+ Added support for fetching test_runs for a specific test group in /api/runs/, and used it in the
+ analysis task page to fetch results for each test group.
+
+ Merged App.createChartData into App.Manifest.fetchRunsWithPlatformAndMetric so that App.BuildRequest
+ can use the formatter.
+
+ * public/api/runs.php:
+ (fetch_runs_for_config_and_test_group): Added.
+ (fetch_runs_for_config): Just return the fetched rows since main will format them with RunsGenerator.
+ (main): Use fetch_runs_for_config_and_test_group to fetch rows when a test group id is specified. Also
+ use RunsGenerator to format results.
+ (RunsGenerator): Added.
+ (RunsGenerator::__construct): Added.
+ (RunsGenerator::add_runs): Added.
+ (RunsGenerator::format_run): Moved.
+ (RunsGenerator::parse_revisions_array): Moved.
+
+ * public/v2/analysis.js:
+ (App.TestGroup): Fixed a typo. The property on a test group that refers to an analysis task is "task".
+ (App.TestGroup._fetchChartData): Added. Fetches all A/B testing results for this group.
+ (App.BuildRequest.configLetter): Renamed from config since this returns a letter that identifies the
+ configuration associated with this build request, such as "A" or "B".
+ (App.BuildRequest.statusLabel): Added the missing label for failed build requests.
+ (App.BuildRequest.url): Added. Returns the URL associated with this build request.
+ (App.BuildRequest._fetchMean): Added. Retrieves the mean and the build number for this request via
+ _fetchChartData.
+
+ * public/v2/app.js:
+ (App.Pane._fetch): Set chartData directly here.
+ (App.Pane._updateMovingAverageAndEnvelope): Renamed from _computeChartData. No longer sets chartData
+ now that it's done in App.Pane._fetch.
+ (App.AnalysisTaskController._fetchedRuns): Updated per createChartData merge.
+
+ * public/v2/data.js:
+ (Measurement.prototype.buildId): Added.
+ (TimeSeries.prototype.findPointByBuild): Added.
+
+ * public/v2/index.html: Fixed a bug where the build status URL was broken. We can't use the link-to helper
+ url is not an Ember routed path.
+
+ * public/v2/manifest.js:
+ (App.Manifest.fetchRunsWithPlatformAndMetric): Takes testGroupId as the fourth argument. Merged
+ App.createChartData here so that App.BuildRequest can use the formatter.
+
2015-02-12 Ryosuke Niwa <rniwa@webkit.org>
v2 UI should adjust the number of ticks on dashboards based on screen size
require('../include/json-header.php');
+function fetch_runs_for_config_and_test_group($db, $config, $test_group_id) {
+ return $db->query_and_fetch_all('
+ SELECT test_runs.*, builds.*, array_agg((commit_repository, commit_revision, commit_time)) AS revisions
+ FROM builds
+ LEFT OUTER JOIN build_commits ON commit_build = build_id
+ LEFT OUTER JOIN commits ON build_commit = commit_id,
+ test_runs, build_requests
+ WHERE run_build = build_id AND run_config = $1 AND request_build = build_id AND request_group = $2
+ GROUP BY build_id, run_id', array($config['config_id'], $test_group_id));
+}
+
function fetch_runs_for_config($db, $config) {
- $raw_runs = $db->query_and_fetch_all('
+ return $db->query_and_fetch_all('
SELECT test_runs.*, builds.*, array_agg((commit_repository, commit_revision, commit_time)) AS revisions
FROM builds
LEFT OUTER JOIN build_commits ON commit_build = build_id
LEFT OUTER JOIN commits ON build_commit = commit_id, test_runs
WHERE run_build = build_id AND run_config = $1 AND NOT EXISTS (SELECT * FROM build_requests WHERE request_build = build_id)
GROUP BY build_id, run_id', array($config['config_id']));
-
- $formatted_runs = array();
- if (!$raw_runs)
- return $formatted_runs;
-
- foreach ($raw_runs as $run)
- array_push($formatted_runs, format_run($run));
-
- return $formatted_runs;
-}
-
-function parse_revisions_array($postgres_array) {
- // e.g. {"(WebKit,131456,\"2012-10-16 14:53:00\")","(Chromium,162004,)"}
- $outer_array = json_decode('[' . trim($postgres_array, '{}') . ']');
- $revisions = array();
- foreach ($outer_array as $item) {
- $name_and_revision = explode(',', trim($item, '()'));
- if (!$name_and_revision[0])
- continue;
- $time = strtotime(trim($name_and_revision[2], '"')) * 1000;
- $revisions[trim($name_and_revision[0], '"')] = array(trim($name_and_revision[1], '"'), $time);
- }
- return $revisions;
-}
-
-function format_run($run) {
- return array(
- 'id' => intval($run['run_id']),
- 'mean' => floatval($run['run_mean_cache']),
- 'iterationCount' => intval($run['run_iteration_count_cache']),
- 'sum' => floatval($run['run_sum_cache']),
- 'squareSum' => floatval($run['run_square_sum_cache']),
- 'revisions' => parse_revisions_array($run['revisions']),
- 'buildTime' => strtotime($run['build_time']) * 1000,
- 'buildNumber' => intval($run['build_number']),
- 'builder' => $run['build_builder']);
}
function main($path) {
if (!$db->connect())
exit_with_error('DatabaseConnectionFailure');
- // FIXME: We should support revalication as well as caching results in the server side.
- $maxage = config('jsonCacheMaxAge');
- header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $maxage) . ' GMT');
- header("Cache-Control: maxage=$maxage");
-
$platform_id = intval($parts[0]);
$metric_id = intval($parts[1]);
$config_rows = $db->query_and_fetch_all('SELECT config_id, config_type, config_platform, config_metric
if (!$config_rows)
exit_with_error('ConfigurationNotFound');
- $results = array();
+ $test_group_id = array_get($_GET, 'testGroup');
+ if ($test_group_id)
+ $test_group_id = intval($test_group_id);
+ else {
+ // FIXME: We should support revalidation as well as caching results on the server side.
+ $maxage = config('jsonCacheMaxAge');
+ header('Expires: ' . gmdate('D, d M Y H:i:s', time() + $maxage) . ' GMT');
+ header("Cache-Control: max-age=$maxage");
+ }
+
+ $generator = new RunsGenerator();
+
foreach ($config_rows as $config) {
- if ($runs = fetch_runs_for_config($db, $config))
- $results[$config['config_type']] = $runs;
+ if ($test_group_id)
+ $raw_runs = fetch_runs_for_config_and_test_group($db, $config, $test_group_id);
+ else
+ $raw_runs = fetch_runs_for_config($db, $config);
+ $generator->add_runs($config['config_type'], $raw_runs);
+ }
+
+ exit_with_success($generator->results());
+}
+
+class RunsGenerator {
+ function __construct() {
+ $this->results = array();
}
- exit_with_success($results);
+ function &results() { return $this->results; }
+
+ function add_runs($name, $raw_runs) {
+ $formatted_runs = array();
+ if ($raw_runs) {
+ foreach ($raw_runs as $run)
+ array_push($formatted_runs, self::format_run($run));
+ }
+ $this->results[$name] = $formatted_runs;
+ return $formatted_runs;
+ }
+
+ private static function format_run($run) {
+ return array(
+ 'id' => intval($run['run_id']),
+ 'mean' => floatval($run['run_mean_cache']),
+ 'iterationCount' => intval($run['run_iteration_count_cache']),
+ 'sum' => floatval($run['run_sum_cache']),
+ 'squareSum' => floatval($run['run_square_sum_cache']),
+ 'revisions' => self::parse_revisions_array($run['revisions']),
+ 'build' => intval($run['build_id']),
+ 'buildTime' => strtotime($run['build_time']) * 1000,
+ 'buildNumber' => intval($run['build_number']),
+ 'builder' => $run['build_builder']);
+ }
+
+ private static function parse_revisions_array($postgres_array) {
+ // e.g. {"(WebKit,131456,\"2012-10-16 14:53:00\")","(Chromium,162004,)"}
+ $outer_array = json_decode('[' . trim($postgres_array, '{}') . ']');
+ $revisions = array();
+ foreach ($outer_array as $item) {
+ $name_and_revision = explode(',', trim($item, '()'));
+ if (!$name_and_revision[0])
+ continue;
+ $time = strtotime(trim($name_and_revision[2], '"')) * 1000;
+ $revisions[trim($name_and_revision[0], '"')] = array(trim($name_and_revision[1], '"'), $time);
+ }
+ return $revisions;
+ }
}
main(array_key_exists('PATH_INFO', $_SERVER) ? explode('/', trim($_SERVER['PATH_INFO'], '/')) : array());
});
App.TestGroup = App.NameLabelModel.extend({
- analysisTask: DS.belongsTo('analysisTask'),
+ task: DS.belongsTo('analysisTask'),
author: DS.attr('string'),
createdAt: DS.attr('date'),
buildRequests: DS.hasMany('buildRequests'),
});
return rootSetIds;
}.property('buildRequests'),
+ _fetchChartData: function ()
+ {
+ var task = this.get('task');
+ if (!task)
+ return null;
+ var self = this;
+ return App.Manifest.fetchRunsWithPlatformAndMetric(this.store,
+ task.get('platform').get('id'), task.get('metric').get('id'), this.get('id')).then(
+ function (result) { self.set('chartData', result.data); },
+ function (error) {
+ // FIXME: Somehow this never gets called.
+ alert('Failed to fetch the results: ' + error);
+ return null;
+ });
+ }.observes('task', 'task.platform', 'task.metric').on('init'),
});
App.TestGroup.create = function (analysisTask, name, rootSets, repetitionCount)
return this.get('order') + 1;
}.property('order'),
rootSet: DS.attr('number'),
- config: function ()
+ configLetter: function ()
{
var rootSets = this.get('testGroup').get('rootSets');
var index = rootSets.indexOf(this.get('rootSet'));
return 'Scheduled';
case 'running':
return 'Running';
+ case 'failed':
+ return 'Failed';
case 'completed':
return 'Finished';
}
}.property('status'),
+ url: DS.attr('string'),
build: DS.attr('number'),
+ _fetchMean: function ()
+ {
+ var testGroup = this.get('testGroup');
+ if (!testGroup)
+ return;
+ var chartData = testGroup.get('chartData');
+ if (!chartData)
+ return;
+
+ var point = chartData.current.findPointByBuild(this.get('build'));
+ if (!point)
+ return;
+ this.set('mean', chartData.formatter(point.value) + (chartData.unit ? ' ' + chartData.unit : ''));
+ this.set('buildNumber', point.measurement.buildNumber());
+ }.observes('build', 'testGroup', 'testGroup.chartData').on('init'),
});
App.Manifest.fetchRunsWithPlatformAndMetric(this.get('store'), platformId, metricId).then(function (result) {
self.set('platform', result.platform);
self.set('metric', result.metric);
- self.set('fetchedData', result);
- self._computeChartData();
+ self.set('chartData', result.data);
+ self._updateMovingAverageAndEnvelope();
}, function (result) {
if (!result || typeof(result) === "string")
self.set('failure', 'Failed to fetch the JSON with an error: ' + result);
return chosenStrategy;
},
- _computeChartData: function ()
+ _updateMovingAverageAndEnvelope: function ()
{
- if (!this.get('fetchedData'))
+ var chartData = this.get('chartData');
+ if (!chartData)
return;
- var chartData = App.createChartData(this.get('fetchedData'));
-
var movingAverageStrategy = this.get('chosenMovingAverageStrategy');
this._updateStrategyConfigIfNeeded(movingAverageStrategy, 'movingAverageConfig');
this._updateStrategyConfigIfNeeded(envelopingStrategy, 'envelopingConfig');
chartData.movingAverage = this._computeMovingAverageAndOutliers(chartData, movingAverageStrategy, envelopingStrategy);
-
- this.set('chartData', chartData);
}.observes('chosenMovingAverageStrategy', 'chosenMovingAverageStrategy.parameterList.@each.value',
'chosenEnvelopingStrategy', 'chosenEnvelopingStrategy.parameterList.@each.value'),
_computeMovingAverageAndOutliers: function (chartData, movingAverageStrategy, envelopingStrategy)
},
});
-App.createChartData = function (data)
-{
- var runs = data.runs;
- return {
- current: runs.current.timeSeriesByCommitTime(),
- baseline: runs.baseline ? runs.baseline.timeSeriesByCommitTime() : null,
- target: runs.target ? runs.target.timeSeriesByCommitTime() : null,
- unit: data.unit,
- formatter: data.useSI ? d3.format('.4s') : d3.format('.4g'),
- deltaFormatter: data.useSI ? d3.format('+.2s') : d3.format('+.2g'),
- smallerIsBetter: data.smallerIsBetter,
- };
-}
-
App.encodePrettifiedJSON = function (plain)
{
function numberIfPossible(string) {
});
}));
},
- _fetchedRuns: function (data)
+ _fetchedRuns: function (result)
{
- var runs = data.runs;
-
- var currentTimeSeries = runs.current.timeSeriesByCommitTime();
+ var chartData = result.data;
+ var currentTimeSeries = chartData.current;
if (!currentTimeSeries)
return; // FIXME: Report an error.
highlightedItems[start.measurement.id()] = true;
highlightedItems[end.measurement.id()] = true;
- var chartData = App.createChartData(data);
var formatedPoints = currentTimeSeries.seriesBetweenPoints(start, end).map(function (point, index) {
return {
id: point.measurement.id(),
measurement: point.measurement,
label: 'Point ' + (index + 1),
- value: chartData.formatter(point.value) + (data.unit ? ' ' + data.unit : ''),
+ value: chartData.formatter(point.value) + (chartData.unit ? ' ' + chartData.unit : ''),
};
});
return this._latestCommitTime || this._buildTime;
}
+Measurement.prototype.buildId = function ()
+{
+ return this._raw['build'];
+}
+
Measurement.prototype.buildNumber = function ()
{
return this._raw['buildNumber'];
// FIXME: We need to devise a way to fetch runs in multiple chunks so that
// we don't have to fetch the entire time series to just show the last 3 days.
-RunsData.fetchRuns = function (platformId, metricId)
+RunsData.fetchRuns = function (platformId, metricId, testGroupId)
{
var filename = platformId + '-' + metricId + '.json';
+ if (testGroupId)
+ filename += '?testGroup=' + testGroupId;
+
return new Ember.RSVP.Promise(function (resolve, reject) {
$.getJSON('../api/runs/' + filename, function (data) {
if (data.status != 'OK') {
this._max = max;
}
+TimeSeries.prototype.findPointByBuild = function (buildId)
+{
+ return this._series.find(function (point) { return point.measurement.buildId() == buildId; });
+}
+
TimeSeries.prototype.findPointByMeasurementId = function (measurementId)
{
return this._series.find(function (point) { return point.measurement.id() == measurementId; });
{{#each buildRequests}}
<tr>
<td>{{orderLabel}}</td>
- <td>{{config}}</td>
- <td>{{#if url}}{{#link-to url}}{{statusLabel}}{{/link-to}}{{else}}{{statusLabel}}{{/if}}</td>
- <td>{{build}}</td>
+ <td>{{configLetter}}</td>
+ <td><a {{bind-attr href=url}}>{{statusLabel}}</a></td>
+ <td>{{buildNumber}}</td>
<td>{{mean}}</td>
</tr>
{{/each}}
dashboards.forEach(function (dashboard) { self._dashboardByName[dashboard.get('name')] = dashboard; });
this._defaultDashboardName = dashboards.length ? dashboards[0].get('name') : null;
},
- fetchRunsWithPlatformAndMetric: function (store, platformId, metricId)
+ fetchRunsWithPlatformAndMetric: function (store, platformId, metricId, testGroupId)
{
return Ember.RSVP.all([
- RunsData.fetchRuns(platformId, metricId),
+ RunsData.fetchRuns(platformId, metricId, testGroupId),
this.fetch(store),
]).then(function (values) {
var runs = values[0];
}[suffix];
var smallerIsBetter = unit != 'fps' && unit != '/s'; // Assume smaller is better for unit-less metrics.
- return {platform: platform, metric: metric, runs: runs, unit: unit, useSI: unit == 'bytes', smallerIsBetter: smallerIsBetter};
+ var useSI = unit == 'bytes';
+ return {
+ platform: platform,
+ metric: metric,
+ data: {
+ current: runs.current.timeSeriesByCommitTime(),
+ baseline: runs.baseline ? runs.baseline.timeSeriesByCommitTime() : null,
+ target: runs.target ? runs.target.timeSeriesByCommitTime() : null,
+ unit: unit,
+ formatter: useSI ? d3.format('.4s') : d3.format('.4g'),
+ deltaFormatter: useSI ? d3.format('+.2s') : d3.format('+.2g'),
+ smallerIsBetter: smallerIsBetter,
+ }
+ };
});
},
}).create();