It should be fun and easy to run every possible JavaScript benchmark from the command line
author    fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Tue, 30 Sep 2014 21:11:57 +0000 (21:11 +0000)
committer fpizlo@apple.com <fpizlo@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
          Tue, 30 Sep 2014 21:11:57 +0000 (21:11 +0000)
https://bugs.webkit.org/show_bug.cgi?id=137245

Reviewed by Oliver Hunt.

PerformanceTests:

This adds the scaffolding for running Octane version 2 inside run-jsc-benchmarks.
In the future we should just land Octane2 in this directory, and run-jsc-benchmarks
should be changed to point directly at this directory instead of requiring the
Octane path to be specified in the configuration file.
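
Each wrapper exposes the same three hooks. For example, the splay wrapper
(reproduced in full in the diff below) pairs each hook with the corresponding
Octane entry point:

    function jscSetUp() {
        SplaySetup();
    }

    function jscTearDown() {
        SplayTearDown();
    }

    function jscRun() {
        SplayRun();
    }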

* Octane: Added.
* Octane/wrappers: Added.
* Octane/wrappers/jsc-box2d.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-boyer.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-closure.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-decrypt.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-deltablue.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-earley.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-encrypt.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-gbemu.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-jquery.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-mandreel.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-navier-stokes.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-pdfjs.js: Added.
(jscSetUp.PdfJS_window.console.log):
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-raytrace.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-regexp.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-richards.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-splay.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-typescript.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):
* Octane/wrappers/jsc-zlib.js: Added.
(jscSetUp):
(jscTearDown):
(jscRun):

Tools:

We previously had Tools/Scripts/bencher.  Then we stopped adding things to it because we
weren't sure about the licensing of things like Kraken and Octane.  Various people ended up
having their own private scripts for doing benchmark runs, and didn't share them in the open
source community, because of fears about the shady licensing of the benchmark suites that
they were running. The dominant version of this was "run-jsc-benchmarks", which has a lot of
excellent power - it can run benchmarks through jsc, DumpRenderTree, or
WebKitTestRunner; it can run tests on any number of remote machines; and it has inside
knowledge about how to run *a lot* of test suites. Many of those test suites are not public,
but some of them are. The non-public tests are exclusively those that were not made by any
WebKit contributor, but which JSC/WebKit devs found useful for testing.

This change fixes that weirdness by releasing run-jsc-benchmarks. Test suites whose
licenses are incompatible with WebKit's (to the extent that they cannot be safely
checked into WebKit svn at all) can still be run by passing their paths via a
configuration file. The default configuration file is ~/.run-jsc-benchmarks. The most important benchmark
suites are Octane version 2 and Kraken version 1.1. We should probably check Octane 2 into
WebKit eventually because it seems that the license is fine. Kraken, on the other hand, will
probably never be checked in because there is no license text anywhere in that benchmark.
A valid ~/.run-jsc-benchmarks file will just be something like:

    {
        "OctanePath": "/path/to/Octane2",
        "KrakenPath": "/path/to/Kraken-1.1/tests/kraken-1.1"
    }

If your ~/.run-jsc-benchmarks file omits the directory for any particular test suite, then
run-jsc-benchmarks will just gracefully avoid running that test suite.
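
As a sketch of that graceful-skip behavior (illustrative only; the variable names
mirror globals that appear in the script, but this is not the script's actual
code):

    # Load the optional configuration file; any suite whose path is omitted
    # or does not exist on disk is quietly excluded from the run.
    require 'json'
    require 'pathname'

    $includeOctane = true
    $includeKraken = true

    configPath = Pathname.new(ENV["HOME"]) + ".run-jsc-benchmarks"
    config = configPath.exist? ? JSON.parse(File.read(configPath.to_s)) : {}

    octanePath = config["OctanePath"]   # nil when the key is omitted
    krakenPath = config["KrakenPath"]

    $includeOctane = false unless octanePath and File.directory?(octanePath)
    $includeKraken = false unless krakenPath and File.directory?(krakenPath)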

Finally, a word about policy: it is understood that different organizations that do
development on JSC may find themselves having internal benchmarks that they cannot share
because of weird licensing. It happens - usually because the organization doing JSC
development found some test in the wild that is owned by someone else and therefore cannot
be shared. So, we should consider it acceptable to write patches against run-jsc-benchmarks
that add support for some new kind of benchmark suite even if the suite is not made public
as part of the same patch - so long as the patch isn't too invasive. An example of
non-invasiveness is the DSPJS suite, which is implemented using some new classes (like
DSPJSAmmoJSRegularBenchmark) and some calls to otherwise reusable functions (like
emitSelfContainedBenchRunCode). It is obviously super helpful if a benchmark suite can be
completely open-sourced and committed to the WebKit repo - but the reality is that this
can't always be done safely.

* Scripts/bencher: Removed.
* Scripts/run-jsc-benchmarks: Added.

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@174123 268f45cc-cd09-0410-ab3c-d52691b4dbfc

21 files changed:
PerformanceTests/ChangeLog
PerformanceTests/Octane/wrappers/jsc-box2d.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-boyer.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-closure.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-decrypt.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-deltablue.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-earley.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-encrypt.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-gbemu.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-jquery.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-mandreel.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-navier-stokes.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-pdfjs.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-raytrace.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-regexp.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-richards.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-splay.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-typescript.js [new file with mode: 0644]
PerformanceTests/Octane/wrappers/jsc-zlib.js [new file with mode: 0644]
Tools/ChangeLog
Tools/Scripts/run-jsc-benchmarks [moved from Tools/Scripts/bencher with 50% similarity]

index 1d201f3..62b34f8 100644 (file)
@@ -1,3 +1,91 @@
+2014-09-29  Filip Pizlo  <fpizlo@apple.com>
+
+        It should be fun and easy to run every possible JavaScript benchmark from the command line
+        https://bugs.webkit.org/show_bug.cgi?id=137245
+
+        Reviewed by Oliver Hunt.
+        
+        This adds the scaffolding for running Octane version 2 inside run-jsc-benchmarks.
+        In the future we should just land Octane2 in this directory, and run-jsc-benchmarks
+        should be changed to point directly at this directory instead of requiring the
+        Octane path to be specified in the configuration file.
+
+        * Octane: Added.
+        * Octane/wrappers: Added.
+        * Octane/wrappers/jsc-box2d.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-boyer.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-closure.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-decrypt.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-deltablue.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-earley.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-encrypt.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-gbemu.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-jquery.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-mandreel.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-navier-stokes.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-pdfjs.js: Added.
+        (jscSetUp.PdfJS_window.console.log):
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-raytrace.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-regexp.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-richards.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-splay.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-typescript.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+        * Octane/wrappers/jsc-zlib.js: Added.
+        (jscSetUp):
+        (jscTearDown):
+        (jscRun):
+
 2014-09-28  Sungmann Cho  <sungmann.cho@navercorp.com>
 
         Fix some minor typos: psuedo -> pseudo
diff --git a/PerformanceTests/Octane/wrappers/jsc-box2d.js b/PerformanceTests/Octane/wrappers/jsc-box2d.js
new file mode 100644 (file)
index 0000000..060b5d2
--- /dev/null
@@ -0,0 +1,11 @@
+function jscSetUp() {
+    setupBox2D();
+}
+
+function jscTearDown() {
+    world = null;
+}
+
+function jscRun() {
+    runBox2D();
+}
diff --git a/PerformanceTests/Octane/wrappers/jsc-boyer.js b/PerformanceTests/Octane/wrappers/jsc-boyer.js
new file mode 100644 (file)
index 0000000..7e614cd
--- /dev/null
@@ -0,0 +1,6 @@
+function jscSetUp() { }
+function jscTearDown() { }
+
+function jscRun() {
+    BgL_nboyerzd2benchmarkzd2();
+}
diff --git a/PerformanceTests/Octane/wrappers/jsc-closure.js b/PerformanceTests/Octane/wrappers/jsc-closure.js
new file mode 100644 (file)
index 0000000..1808459
--- /dev/null
@@ -0,0 +1,11 @@
+function jscSetUp() {
+    setupCodeLoad()
+}
+
+function jscTearDown() {
+    tearDownCodeLoad();
+}
+
+function jscRun() {
+    runCodeLoadClosure();
+}
diff --git a/PerformanceTests/Octane/wrappers/jsc-decrypt.js b/PerformanceTests/Octane/wrappers/jsc-decrypt.js
new file mode 100644 (file)
index 0000000..9311368
--- /dev/null
@@ -0,0 +1,9 @@
+var encrypted = "0599c0900f04d914aa11176b75f1ff8040b99a4cd008dd526d41fff88acc006a0eab4a6ec8d390b300c169c6dc2c23aa7767ba83f336b7f8eaade253c22eb7c78c98881d5e89b2827592f73baea3e32f10b2b1fba83dda854c9c6a96467fca0d1e2f5aa4d595b62f65b2eb258aaef9e73a407511c24085df025de6bbbfb32764";
+
+function jscSetUp() { }
+function jscTearDown() { }
+
+function jscRun() {
+    decrypt();
+}
+
diff --git a/PerformanceTests/Octane/wrappers/jsc-deltablue.js b/PerformanceTests/Octane/wrappers/jsc-deltablue.js
new file mode 100644 (file)
index 0000000..e96cca9
--- /dev/null
@@ -0,0 +1,6 @@
+function jscSetUp() { }
+function jscTearDown() { }
+
+function jscRun() {
+    deltaBlue();
+}
diff --git a/PerformanceTests/Octane/wrappers/jsc-earley.js b/PerformanceTests/Octane/wrappers/jsc-earley.js
new file mode 100644 (file)
index 0000000..6dd8d1a
--- /dev/null
@@ -0,0 +1,6 @@
+function jscSetUp() { }
+function jscTearDown() { }
+
+function jscRun() {
+    BgL_earleyzd2benchmarkzd2();
+}
diff --git a/PerformanceTests/Octane/wrappers/jsc-encrypt.js b/PerformanceTests/Octane/wrappers/jsc-encrypt.js
new file mode 100644 (file)
index 0000000..8e26521
--- /dev/null
@@ -0,0 +1,7 @@
+function jscSetUp() { }
+function jscTearDown() { }
+
+function jscRun() {
+    encrypt();
+}
+
diff --git a/PerformanceTests/Octane/wrappers/jsc-gbemu.js b/PerformanceTests/Octane/wrappers/jsc-gbemu.js
new file mode 100644 (file)
index 0000000..e0c816e
--- /dev/null
@@ -0,0 +1,11 @@
+function jscSetUp() {
+    setupGameboy();
+}
+
+function jscTearDown() {
+  decoded_gameboy_rom = null;
+}
+
+function jscRun() {
+    runGameboy();
+}
diff --git a/PerformanceTests/Octane/wrappers/jsc-jquery.js b/PerformanceTests/Octane/wrappers/jsc-jquery.js
new file mode 100644 (file)
index 0000000..9d5a0cf
--- /dev/null
@@ -0,0 +1,11 @@
+function jscSetUp() {
+    setupCodeLoad()
+}
+
+function jscTearDown() {
+    tearDownCodeLoad();
+}
+
+function jscRun() {
+    runCodeLoadJQuery();
+}
diff --git a/PerformanceTests/Octane/wrappers/jsc-mandreel.js b/PerformanceTests/Octane/wrappers/jsc-mandreel.js
new file mode 100644 (file)
index 0000000..aeb2009
--- /dev/null
@@ -0,0 +1,11 @@
+function jscSetUp() {
+    setupMandreel();
+}
+
+function jscTearDown() {
+    tearDownMandreel();
+}
+
+function jscRun() {
+    runMandreel();
+}
diff --git a/PerformanceTests/Octane/wrappers/jsc-navier-stokes.js b/PerformanceTests/Octane/wrappers/jsc-navier-stokes.js
new file mode 100644 (file)
index 0000000..08bdac1
--- /dev/null
@@ -0,0 +1,14 @@
+function jscSetUp()
+{
+    setupNavierStokes();
+}
+
+function jscTearDown()
+{
+    tearDownNavierStokes();
+}
+
+function jscRun()
+{
+    runNavierStokes();
+}
diff --git a/PerformanceTests/Octane/wrappers/jsc-pdfjs.js b/PerformanceTests/Octane/wrappers/jsc-pdfjs.js
new file mode 100644 (file)
index 0000000..d69b40a
--- /dev/null
@@ -0,0 +1,26 @@
+function jscSetUp() {
+    canvas_logs = [];
+    PdfJS_window.console = {log:function(){}}
+    PdfJS_window.__timeouts__ = [];
+    PdfJS_window.__resources__ = {};
+    setupPdfJS();
+}
+
+function jscTearDown() {
+  for (var i = 0; i < canvas_logs.length; ++i) {
+    var log_length = canvas_logs[i].length;
+    var log_hash = hash(canvas_logs[i].join(" "));
+    var expected_length = 36788;
+    var expected_hash = 939524096;
+    if (log_length !== expected_length || log_hash !== expected_hash) {
+      var message = "PdfJS produced incorrect output: " +
+          "expected " + expected_length + " " + expected_hash + ", " +
+          "got " + log_length + " " + log_hash;
+      throw message;
+    }
+  }
+}
+
+function jscRun() {
+    runPdfJS();
+}
diff --git a/PerformanceTests/Octane/wrappers/jsc-raytrace.js b/PerformanceTests/Octane/wrappers/jsc-raytrace.js
new file mode 100644 (file)
index 0000000..7e6ce37
--- /dev/null
@@ -0,0 +1,6 @@
+function jscSetUp() { }
+function jscTearDown() { }
+
+function jscRun() {
+    renderScene();
+}
diff --git a/PerformanceTests/Octane/wrappers/jsc-regexp.js b/PerformanceTests/Octane/wrappers/jsc-regexp.js
new file mode 100644 (file)
index 0000000..7e0f8b8
--- /dev/null
@@ -0,0 +1,13 @@
+function jscSetUp() {
+    BenchmarkSuite.ResetRNG();
+    RegExpSetup();
+}
+
+function jscTearDown() {
+    RegExpTearDown();
+}
+
+function jscRun() {
+    RegExpRun();
+}
+
diff --git a/PerformanceTests/Octane/wrappers/jsc-richards.js b/PerformanceTests/Octane/wrappers/jsc-richards.js
new file mode 100644 (file)
index 0000000..5c3166a
--- /dev/null
@@ -0,0 +1,7 @@
+function jscSetUp() { }
+function jscTearDown() { }
+
+function jscRun() {
+    runRichards();
+}
+
diff --git a/PerformanceTests/Octane/wrappers/jsc-splay.js b/PerformanceTests/Octane/wrappers/jsc-splay.js
new file mode 100644 (file)
index 0000000..4370326
--- /dev/null
@@ -0,0 +1,12 @@
+function jscSetUp() {
+    SplaySetup();
+}
+
+function jscTearDown() {
+    SplayTearDown();
+}
+
+function jscRun() {
+    SplayRun();
+}
+
diff --git a/PerformanceTests/Octane/wrappers/jsc-typescript.js b/PerformanceTests/Octane/wrappers/jsc-typescript.js
new file mode 100644 (file)
index 0000000..df7a55e
--- /dev/null
@@ -0,0 +1,11 @@
+function jscSetUp() {
+    setupTypescript();
+}
+
+function jscTearDown() {
+    tearDownTypescript();
+}
+
+function jscRun() {
+    runTypescript();
+}
diff --git a/PerformanceTests/Octane/wrappers/jsc-zlib.js b/PerformanceTests/Octane/wrappers/jsc-zlib.js
new file mode 100644 (file)
index 0000000..c7d7373
--- /dev/null
@@ -0,0 +1,12 @@
+var read;
+
+function jscSetUp() {
+}
+
+function jscTearDown() {
+    tearDownZlib();
+}
+
+function jscRun() {
+    runZlib();
+}
index c0e76ae..9b12bab 100644 (file)
@@ -1,3 +1,54 @@
+2014-09-29  Filip Pizlo  <fpizlo@apple.com>
+
+        It should be fun and easy to run every possible JavaScript benchmark from the command line
+        https://bugs.webkit.org/show_bug.cgi?id=137245
+
+        Reviewed by Oliver Hunt.
+        
+        We previously had Tools/Scripts/bencher.  Then we stopped adding things to it because we
+        weren't sure about the licensing of things like Kraken and Octane.  Various people ended up
+        having their own private scripts for doing benchmark runs, and didn't share them in the open
+        source community, because of fears about the shady licensing of the benchmark suites that
+        they were running. The dominant version of this was "run-jsc-benchmarks", which has a lot of
+        excellent power - it can run benchmarks through jsc, DumpRenderTree, or
+        WebKitTestRunner; it can run tests on any number of remote machines; and it has inside
+        knowledge about how to run *a lot* of test suites. Many of those test suites are not public,
+        but some of them are. The non-public tests are exclusively those that were not made by any
+        WebKit contributor, but which JSC/WebKit devs found useful for testing.
+
+        This change fixes that weirdness by releasing run-jsc-benchmarks. Test suites whose
+        licenses are incompatible with WebKit's (to the extent that they cannot be safely
+        checked into WebKit svn at all) can still be run by passing their paths via a
+        configuration file. The default configuration file is ~/.run-jsc-benchmarks. The most important benchmark
+        suites are Octane version 2 and Kraken version 1.1. We should probably check Octane 2 into
+        WebKit eventually because it seems that the license is fine. Kraken, on the other hand, will
+        probably never be checked in because there is no license text anywhere in that benchmark.
+        A valid ~/.run-jsc-benchmarks file will just be something like:
+        
+            {
+                "OctanePath": "/path/to/Octane2",
+                "KrakenPath": "/path/to/Kraken-1.1/tests/kraken-1.1"
+            }
+        
+        If your ~/.run-jsc-benchmarks file omits the directory for any particular test suite, then
+        run-jsc-benchmarks will just gracefully avoid running that test suite.
+        
+        Finally, a word about policy: it is understood that different organizations that do
+        development on JSC may find themselves having internal benchmarks that they cannot share
+        because of weird licensing. It happens - usually because the organization doing JSC
+        development found some test in the wild that is owned by someone else and therefore cannot
+        be shared. So, we should consider it acceptable to write patches against run-jsc-benchmarks
+        that add support for some new kind of benchmark suite even if the suite is not made public
+        as part of the same patch - so long as the patch isn't too invasive. An example of
+        non-invasiveness is the DSPJS suite, which is implemented using some new classes (like
+        DSPJSAmmoJSRegularBenchmark) and some calls to otherwise reusable functions (like
+        emitSelfContainedBenchRunCode). It is obviously super helpful if a benchmark suite can be
+        completely open-sourced and committed to the WebKit repo - but the reality is that this
+        can't always be done safely.
+
+        * Scripts/bencher: Removed.
+        * Scripts/run-jsc-benchmarks: Added.
+
 2014-09-30  Roger Fong  <roger_fong@apple.com>
 
         [Windows] Back to 2 child processes for NRWT on Windows.
similarity index 50%
rename from Tools/Scripts/bencher
rename to Tools/Scripts/run-jsc-benchmarks
index 1f0e684..1747d05 100755 (executable)
@@ -1,6 +1,6 @@
 #!/usr/bin/env ruby
 
-# Copyright (C) 2011 Apple Inc. All rights reserved.
+# Copyright (C) 2011, 2012, 2013, 2014 Apple Inc. All rights reserved.
 #
 # Redistribution and use in source and binary forms, with or without
 # modification, are permitted provided that the following conditions
@@ -27,7 +27,7 @@ require 'rubygems'
 
 require 'getoptlong'
 require 'pathname'
-require 'tempfile'
+require 'shellwords'
 require 'socket'
 
 begin
@@ -37,31 +37,28 @@ rescue LoadError => e
   exit 1
 end
 
-# Configuration
+SCRIPT_PATH = Pathname.new(__FILE__).realpath
 
-CONFIGURATION_FLNM = ENV["HOME"]+"/.bencher"
+raise unless SCRIPT_PATH.dirname.basename.to_s == "Scripts"
+raise unless SCRIPT_PATH.dirname.dirname.basename.to_s == "Tools"
 
-unless FileTest.exist? CONFIGURATION_FLNM
-  $stderr.puts "Error: no configuration file at ~/.bencher."
-  $stderr.puts "This file should contain paths to SunSpider, V8, and Kraken, as well as a"
-  $stderr.puts "temporary directory that bencher can use for its remote mode. It should be"
-  $stderr.puts "formatted in JSON.  For example:"
-  $stderr.puts "{"
-  $stderr.puts "    \"sunSpiderPath\": \"/Volumes/Data/pizlo/OpenSource/PerformanceTests/SunSpider/tests/sunspider-1.0\","
-  $stderr.puts "    \"v8Path\": \"/Volumes/Data/pizlo/OpenSource/PerformanceTests/SunSpider/tests/v8-v6\","
-  $stderr.puts "    \"krakenPath\": \"/Volumes/Data/pizlo/kraken/kraken-e119421cb325/tests/kraken-1.1\","
-  $stderr.puts "    \"tempPath\": \"/Volumes/Data/pizlo/bencher/temp\""
-  $stderr.puts "}"
-  exit 1
-end
+OPENSOURCE_PATH = SCRIPT_PATH.dirname.dirname.dirname
+
+SUNSPIDER_PATH = OPENSOURCE_PATH + "PerformanceTests" + "SunSpider" + "tests" + "sunspider-1.0"
+LONGSPIDER_PATH = OPENSOURCE_PATH + "PerformanceTests" + "LongSpider"
+V8_PATH = OPENSOURCE_PATH + "PerformanceTests" + "SunSpider" + "tests" + "v8-v6"
+JSREGRESS_PATH = OPENSOURCE_PATH + "LayoutTests" + "js" + "regress" + "script-tests"
+OCTANE_WRAPPER_PATH = OPENSOURCE_PATH + "PerformanceTests" + "Octane" + "wrappers"
+
+TEMP_PATH = OPENSOURCE_PATH + "BenchmarkTemp"
 
-CONFIGURATION = JSON.parse(File::read(CONFIGURATION_FLNM))
+if TEMP_PATH.exist?
+  raise unless TEMP_PATH.directory?
+else
+  Dir.mkdir(TEMP_PATH)
+end
 
-SUNSPIDER_PATH = CONFIGURATION["sunSpiderPath"]
-V8_PATH = CONFIGURATION["v8Path"]
-KRAKEN_PATH = CONFIGURATION["krakenPath"]
-TEMP_PATH = CONFIGURATION["tempPath"]
-BENCH_DATA_PATH = TEMP_PATH + "/benchdata"
+BENCH_DATA_PATH = TEMP_PATH + "benchdata"
 
 IBR_LOOKUP=[0.00615583, 0.0975, 0.22852, 0.341628, 0.430741, 0.500526, 0.555933, 
             0.600706, 0.637513, 0.668244, 0.694254, 0.716537, 0.735827, 0.752684, 
@@ -208,12 +205,22 @@ IBR_LOOKUP=[0.00615583, 0.0975, 0.22852, 0.341628, 0.430741, 0.500526, 0.555933,
 # Run-time configuration parameters (can be set with command-line options)
 
 $rerun=1
-$inner=3
+$inner=1
 $warmup=1
 $outer=4
+$quantum=1000
 $includeSunSpider=true
+$includeLongSpider=true
 $includeV8=true
 $includeKraken=true
+$includeJSBench=true
+$includeJSRegress=true
+$includeAsmBench=true
+$includeDSPJS=true
+$includeBrowsermarkJS=false
+$includeBrowsermarkDOM=false
+$includeOctane=true
+$includeCompressionBench = true
 $measureGC=false
 $benchmarkPattern=nil
 $verbosity=0
@@ -225,8 +232,13 @@ $remoteHosts=[]
 $alsoLocal=false
 $sshOptions=[]
 $vms = []
+$environment = {}
 $needToCopyVMs = false
 $dontCopyVMs = false
+$allDRT = true
+$outputName = nil
+$sunSpiderWarmup = true
+$configPath = Pathname.new(ENV["HOME"]) + ".run-jsc-benchmarks"
 
 $prepare = true
 $run = true
@@ -240,21 +252,19 @@ def smallUsage
 end
 
 def usage
-  puts "bencher [options] <vm1> [<vm2> ...]"
+  puts "run-jsc-benchmarks [options] <vm1> [<vm2> ...]"
   puts
   puts "Runs one or more JavaScript runtimes against SunSpider, V8, and/or Kraken"
-  puts "benchmarks, and reports detailed statistics.  What makes bencher special is"
-  puts "that each benchmark/VM configuration is run in a single VM invocation, and"
-  puts "the invocations are run in random order.  This minimizes systematics due to"
+  puts "benchmarks, and reports detailed statistics.  What makes run-jsc-benchmarks"
+  puts "special is that each benchmark/VM configuration is run in a single VM invocation,"
+  puts "and the invocations are run in random order.  This minimizes systematics due to"
   puts "one benchmark polluting the running time of another.  The fine-grained"
   puts "interleaving of VM invocations further minimizes systematics due to changes in"
   puts "the performance or behavior of your machine."
   puts 
-  puts "Bencher is highly configurable.  You can compare as many VMs as you like.  You"
-  puts "can change the amount of warm-up iterations, number of iterations executed per"
-  puts "VM invocation, and the number of VM invocations per benchmark.  By default,"
-  puts "SunSpider, VM, and Kraken are all run; but you can run any combination of these"
-  puts "suites."
+  puts "Run-jsc-benchmarks is highly configurable.  You can compare as many VMs as you"
+  puts "like.  You can change the amount of warm-up iterations, number of iterations"
+  puts "executed per VM invocation, and the number of VM invocations per benchmark."
   puts
   puts "The <vm> should be either a path to a JavaScript runtime executable (such as"
   puts "jsc), or a string of the form <name>:<path>, where the <path> is the path to"
@@ -262,6 +272,13 @@ def usage
   puts "configuration for the purposeof reporting.  If no name is given, a generic name"
   puts "of the form Conf#<n> will be ascribed to the configuration automatically."
   puts
+  puts "It's also possible to specify per-VM environment variables. For example, you"
+  puts "might specify a VM like Foo:JSC_useJIT=false:/path/to/jsc, in which case the"
+  puts "harness will set the JSC_useJIT environment variable to false just before running"
+  puts "the given VM. Note that the harness will not unset the environment variable, so"
+  puts "you must ensure that your other VMs will use the opposite setting"
+  puts "(JSC_useJIT=true in this case)."
+  puts
   puts "Options:"
   puts "--rerun <n>          Set the number of iterations of the benchmark that"
   puts "                     contribute to the measured run time.  Default is #{$rerun}."
@@ -270,22 +287,34 @@ def usage
   puts "--outer <n>          Set the number of runtime invocations for each benchmark."
   puts "                     Default is #{$outer}."
   puts "--warmup <n>         Set the number of warm-up runs per invocation.  Default"
-  puts "                     is #{$warmup}."
-  puts "--timing-mode        Set the way that bencher measures time.  Possible values"
+  puts "                     is #{$warmup}. This has a different effect on different kinds"
+  puts "                     benchmarks. Some benchmarks have no notion of warm-up."
+  puts "--no-ss-warmup       Disable SunSpider-based warm-up runs."
+  puts "--quantum <n>        Set the duration in milliseconds for which an iteration of"
+  puts "                     a throughput benchmark should be run.  Default is #{$quantum}."
+  puts "--timing-mode        Set the way that time is measured.  Possible values"
   puts "                     are 'preciseTime' and 'date'.  Default is 'preciseTime'."
   puts "--force-vm-kind      Turn off auto-detection of VM kind, and assume that it is"
-  puts "                     the one specified.  Valid arguments are 'jsc' or"
-  puts "                     'DumpRenderTree'."
-  puts "--force-vm-copy      Force VM builds to be copied to bencher's working directory."
+  puts "                     the one specified.  Valid arguments are 'jsc'"
+  puts "                     'DumpRenderTree', or 'WebKitTestRunner'."
+  puts "--force-vm-copy      Force VM builds to be copied to the working directory."
   puts "                     This may reduce pathologies resulting from path names."
   puts "--dont-copy-vms      Don't copy VMs even when doing a remote benchmarking run;"
   puts "                     instead assume that they are already there."
-  puts "--v8-only            Only run V8."
-  puts "--sunspider-only     Only run SunSpider."
-  puts "--kraken-only        Only run Kraken."
-  puts "--exclude-v8         Exclude V8 (only run SunSpider and Kraken)."
-  puts "--exclude-sunspider  Exclude SunSpider (only run V8 and Kraken)."
-  puts "--exclude-kraken     Exclude Kraken (only run SunSpider and V8)."
+  puts "--sunspider          Only run SunSpider."
+  puts "--v8-spider          Only run V8."
+  puts "--kraken             Only run Kraken."
+  puts "--js-bench           Only run JSBench."
+  puts "--js-regress         Only run JSRegress."
+  puts "--dsp                Only run DSP."
+  puts "--asm-bench          Only run AsmBench."
+  puts "--browsermark-js     Only run browsermark-js."
+  puts "--browsermark-dom    Only run browsermark-dom."
+  puts "--octane             Only run Octane."
+  puts "--compression-bench  Only run compression bench"
+  puts "                     The default is to run all benchmarks. The above options can"
+  puts "                     be combined to run any subset (so --sunspider --dsp will run"
+  puts "                     both SunSpider and DSP)."
   puts "--benchmarks         Only run benchmarks matching the given regular expression."
   puts "--measure-gc         Turn off manual calls to gc(), so that GC time is measured."
   puts "                     Works best with large values of --inner.  You can also say"
@@ -295,7 +324,7 @@ def usage
   puts "--brief              Print only the final result for each VM."
   puts "--silent             Don't print progress. This might slightly reduce some"
   puts "                     performance perturbation."
-  puts "--remote <sshhosts>  Performance performance measurements remotely, on the given"
+  puts "--remote <sshhosts>  Perform performance measurements remotely, on the given"
   puts "                     SSH host(s). Easiest way to use this is to specify the SSH"
   puts "                     user@host string. However, you can also supply a comma-"
   puts "                     separated list of SSH hosts. Alternatively, you can use this"
@@ -304,25 +333,35 @@ def usage
   puts "                     you specified to all of the hosts."
   puts "--ssh-options        Pass additional options to SSH."
   puts "--local              Also do a local benchmark run even when doing --remote."
-  puts "--prepare-only       Only prepare the bencher runscript (a shell script that"
+  puts "--vms                Use a JSON file to specify which VMs to run, as opposed to"
+  puts "                     specifying them on the command line."
+  puts "--prepare-only       Only prepare the runscript (a shell script that"
   puts "                     invokes the VMs to run benchmarks) but don't run it."
   puts "--analyze            Only read the output of the runscript but don't do anything"
-  puts "                     else. This requires passing the same arguments to bencher"
-  puts "                     that you passed when running --prepare-only."
+  puts "                     else. This requires passing the same arguments that you"
+  puts "                     passed when running --prepare-only."
+  puts "--output-name        Base of the filenames to put results into. Will write a file"
+  puts "                     called <base>_report.txt and <base>.json.  By default this"
+  puts "                     name is automatically synthesized from the machine name,"
+  puts "                     date, set of benchmarks run, and set of configurations."
+  puts "--environment        JSON file that specifies the environment variables that should"
+  puts "                     be used for particular VMs and benchmarks."
+  puts "--config <path>      Specify the path of the configuration file. Defaults to"
+  puts "                     ~/.run-jsc-benchmarks"
   puts "--help or -h         Display this message."
   puts
   puts "Example:"
-  puts "bencher TipOfTree:/Volumes/Data/pizlo/OpenSource/WebKitBuild/Release/jsc MyChanges:/Volumes/Data/pizlo/secondary/OpenSource/WebKitBuild/Release/jsc"
+  puts "run-jsc-benchmarks TipOfTree:/Volumes/Data/pizlo/OpenSource/WebKitBuild/Release/jsc MyChanges:/Volumes/Data/pizlo/secondary/OpenSource/WebKitBuild/Release/jsc"
   exit 1
 end
 
 def fail(reason)
   if reason.respond_to? :backtrace
-    puts "FAILED: #{reason}"
+    puts "FAILED: #{reason.inspect}"
     puts "Stack trace:"
     puts reason.backtrace.join("\n")
   else
-    puts "FAILED: #{reason}"
+    puts "FAILED: #{reason.inspect}"
   end
   smallUsage
 end
@@ -360,12 +399,12 @@ def computeMean(array)
 end
 
 def computeGeometricMean(array)
-  mult=1.0
+  sum = 0.0
   array.each {
     | value |
-    mult*=value
+    sum += Math.log(value)
   }
-  mult**(1.0/array.length)
+  Math.exp(sum * (1.0/array.length))
 end
 
 def computeHarmonicMean(array)
@@ -404,8 +443,25 @@ def inverseBetaRegularized(n)
   IBR_LOOKUP[n-1]
 end
 
-def numToStr(num)
-  "%.4f"%(num.to_f)
+def numToStr(num, decimalShift)
+  ("%." + (4 + decimalShift).to_s + "f") % (num.to_f)
+end
+
+class CantSay
+  def initialize
+  end
+  
+  def shortForm
+    " "
+  end
+  
+  def longForm
+    ""
+  end
+  
+  def to_s
+    ""
+  end
 end
   
 class NoChange
@@ -420,7 +476,7 @@ class NoChange
   end
   
   def longForm
-    "  might be #{numToStr(@amountFaster)}x faster"
+    "  might be #{numToStr(@amountFaster, 0)}x faster"
   end
   
   def to_s
@@ -444,7 +500,7 @@ class Faster
   end
   
   def longForm
-    "^ definitely #{numToStr(@amountFaster)}x faster"
+    "^ definitely #{numToStr(@amountFaster, 0)}x faster"
   end
   
   def to_s
@@ -464,7 +520,7 @@ class Slower
   end
   
   def longForm
-    "! definitely #{numToStr(@amountSlower)}x slower"
+    "! definitely #{numToStr(@amountSlower, 0)}x slower"
   end
   
   def to_s
@@ -484,7 +540,7 @@ class MayBeSlower
   end
   
   def longForm
-    "? might be #{numToStr(@amountSlower)}x slower"
+    "? might be #{numToStr(@amountSlower, 0)}x slower"
   end
   
   def to_s
@@ -496,13 +552,39 @@ class MayBeSlower
   end
 end
 
+def jsonSanitize(value)
+  if value.is_a? Fixnum
+    value
+  elsif value.is_a? Float
+    if value.nan? or value.infinite?
+      value.to_s
+    else
+      value
+    end
+  elsif value.is_a? Array
+    value.map{|v| jsonSanitize(v)}
+  elsif value.nil?
+    value
+  else
+    raise "Unrecognized value #{value.inspect}"
+  end
+end
+
 class Stats
   def initialize
     @array = []
   end
   
   def add(value)
-    if value.is_a? Stats
+    if not value or not @array
+      @array = nil
+    elsif value.is_a? Float
+      if value.nan? or value.infinite?
+        @array = nil
+      else
+        @array << value
+      end
+    elsif value.is_a? Stats
       add(value.array)
     elsif value.respond_to? :each
       value.each {
@@ -513,6 +595,23 @@ class Stats
       @array << value.to_f
     end
   end
+  
+  def status
+    if @array
+      :ok
+    else
+      :error
+    end
+  end
+  
+  def error?
+    # TODO: We're probably still not handling this case correctly. 
+    not @array or @array.empty?
+  end
+  
+  def ok?
+    not not @array
+  end
     
   def array
     @array
@@ -582,6 +681,8 @@ class Stats
   end
   
   def compareTo(other)
+    return CantSay.new unless ok? and other.ok?
+    
     if upper < other.lower
       Faster.new(other.mean/mean)
     elsif lower > other.upper
@@ -596,6 +697,14 @@ class Stats
   def to_s
     "size = #{size}, mean = #{mean}, stdDev = #{stdDev}, stdErr = #{stdErr}, confInt = #{confInt}"
   end
+  
+  def jsonMap
+    if ok?
+      {"data"=>jsonSanitize(@array), "mean"=>jsonSanitize(mean), "confInt"=>jsonSanitize(confInt)}
+    else
+      "ERROR"
+    end
+  end
 end
 
 def doublePuts(out1,out2,msg)
@@ -619,7 +728,7 @@ class Benchfile < File
     else
       basename = name + @@counter.to_s
     end
-    filename = BENCH_DATA_PATH + "/" + basename
+    filename = BENCH_DATA_PATH + basename
     @@counter += 1
     raise "Benchfile #{filename} already exists" if FileTest.exist?(filename)
     [basename, filename]
@@ -643,40 +752,210 @@ def ensureFile(key, filename)
   end
   $dataFiles[key]
 end
+
+# Helper for files that cannot be renamed.
+$absoluteFiles={}
+def ensureAbsoluteFile(filename, basedir=nil)
+  return if $absoluteFiles[filename]
+  filename = Pathname.new(filename)
+
+  directory = Pathname.new('')
+  if basedir and filename.dirname != basedir
+    remainingPath = filename.dirname
+    while remainingPath != basedir
+      directory = remainingPath.basename + directory
+      remainingPath = remainingPath.dirname
+    end
+    if not $absoluteFiles[directory]
+      cmd = "mkdir -p #{Shellwords.shellescape((BENCH_DATA_PATH + directory).to_s)}"
+      $stderr.puts ">> #{cmd}" if $verbosity >= 2
+      raise unless system(cmd)
+      intermediateDirectory = Pathname.new(directory)
+      while intermediateDirectory.basename.to_s != "."
+        $absoluteFiles[intermediateDirectory] = true
+        intermediateDirectory = intermediateDirectory.dirname
+      end
+    end
+  end
+  
+  cmd = "cp #{Shellwords.shellescape(filename.to_s)} #{Shellwords.shellescape((BENCH_DATA_PATH + directory + filename.basename).to_s)}"
+  $stderr.puts ">> #{cmd}" if $verbosity >= 2
+  raise unless system(cmd)
+  $absoluteFiles[filename] = true
+end
+
+# Helper for large benchmarks with lots of files and directories.
+def ensureBenchmarkFiles(rootdir)
+    toProcess = [rootdir]
+    while not toProcess.empty?
+      currdir = toProcess.pop
+      Dir.foreach(currdir.to_s) {
+        | filename |
+        path = currdir + filename
+        next if filename.match(/^\./)
+        toProcess.push(path) if File.directory?(path.to_s)
+        ensureAbsoluteFile(path, rootdir) if File.file?(path.to_s)
+      }
+    end
+end
+
+class JSCommand
+  attr_reader :js, :html
+  def initialize(js, html)
+    @js = js
+    @html = html
+  end
+end
+
+def loadCommandForFile(key, filename)
+  file = ensureFile(key, filename)
+  JSCommand.new("load(#{file.inspect});", "<script src=#{file.inspect}></script>")
+end
+
+def simpleCommand(command)
+  JSCommand.new(command, "<script type=\"text/javascript\">#{command}</script>")
+end
+
+# Benchmark that consists of a single file and must be loaded in its own global object each
+# time (i.e. run()).
+class SingleFileTimedBenchmarkParameters
+  attr_reader :benchPath
+  
+  def initialize(benchPath)
+    @benchPath = benchPath
+  end
+  
+  def kind
+    :singleFileTimedBenchmark
+  end
+end
+
+# Benchmark that consists of one or more data files that should be loaded globally, followed
+# by a command to run the benchmark.
+class MultiFileTimedBenchmarkParameters
+  attr_reader :dataPaths, :command
+
+  def initialize(dataPaths, command)
+    @dataPaths = dataPaths
+    @command = command
+  end
+  
+  def kind
+    :multiFileTimedBenchmark
+  end
+end
+
+# Benchmark that consists of one or more data files that should be loaded globally, followed
+# by a command to run a short tick of the benchmark. The benchmark should be run for as many
+# ticks as possible, for one quantum (quantum is 1000ms by default).
+class ThroughputBenchmarkParameters
+  attr_reader :dataPaths, :setUpCommand, :command, :tearDownCommand, :doWarmup, :deterministic, :minimumIterations
+
+  def initialize(dataPaths, setUpCommand, command, tearDownCommand, doWarmup, deterministic, minimumIterations)
+    @dataPaths = dataPaths
+    @setUpCommand = setUpCommand
+    @command = command
+    @tearDownCommand = tearDownCommand
+    @doWarmup = doWarmup
+    @deterministic = deterministic
+    @minimumIterations = minimumIterations
+  end
+  
+  def kind
+    :throughputBenchmark
+  end
+end
+
+# Benchmark that can only run in DumpRenderTree or WebKitTestRunner, that has its own callback for reporting
+# results. Other than that it's just like SingleFileTimedBenchmark.
+class SingleFileTimedCallbackBenchmarkParameters
+  attr_reader :callbackDecl, :benchPath
   
-def emitBenchRunCodeFile(name, plan, benchDataPath, benchPath)
+  def initialize(callbackDecl, benchPath)
+    @callbackDecl = callbackDecl
+    @benchPath = benchPath
+  end
+  
+  def kind
+    :singleFileTimedCallbackBenchmark
+  end
+end
+
+def emitTimerFunctionCode(file)
+  case $timeMode
+  when :preciseTime
+    doublePuts($stderr,file,"function __bencher_curTimeMS() {")
+    doublePuts($stderr,file,"   return preciseTime()*1000")
+    doublePuts($stderr,file,"}")
+  when :date
+    doublePuts($stderr,file,"function __bencher_curTimeMS() {")
+    doublePuts($stderr,file,"   return Date.now()")
+    doublePuts($stderr,file,"}")
+  else
+    raise
+  end
+end
+
+def emitBenchRunCodeFile(name, plan, benchParams)
   case plan.vm.vmType
   when :jsc
     Benchfile.create("bencher") {
       | file |
-      case $timeMode
-      when :preciseTime
-        doublePuts($stderr,file,"function __bencher_curTimeMS() {")
-        doublePuts($stderr,file,"   return preciseTime()*1000")
-        doublePuts($stderr,file,"}")
-      when :date
-        doublePuts($stderr,file,"function __bencher_curTimeMS() {")
-        doublePuts($stderr,file,"   return Date.now()")
-        doublePuts($stderr,file,"}")
-      else
-        raise
-      end
-
-      doublePuts($stderr,file,"if (typeof noInline == 'undefined') noInline = function(){};")
-
-      if benchDataPath
-        doublePuts($stderr,file,"load(#{benchDataPath.inspect});")
+      emitTimerFunctionCode(file)
+      
+      if benchParams.kind == :multiFileTimedBenchmark
+        benchParams.dataPaths.each {
+          | path |
+          doublePuts($stderr,file,"load(#{path.inspect});")
+        }
         doublePuts($stderr,file,"gc();")
         doublePuts($stderr,file,"for (var __bencher_index = 0; __bencher_index < #{$warmup+$inner}; ++__bencher_index) {")
-        doublePuts($stderr,file,"   before = __bencher_curTimeMS();")
+        doublePuts($stderr,file,"   var __before = __bencher_curTimeMS();")
         $rerun.times {
-          doublePuts($stderr,file,"   load(#{benchPath.inspect});")
+          doublePuts($stderr,file,"   #{benchParams.command.js}")
         }
-        doublePuts($stderr,file,"   after = __bencher_curTimeMS();")
-        doublePuts($stderr,file,"   if (__bencher_index >= #{$warmup}) print(\"#{name}: #{plan.vm}: #{plan.iteration}: \" + (__bencher_index - #{$warmup}) + \": Time: \"+(after-before));");
+        doublePuts($stderr,file,"   var __after = __bencher_curTimeMS();")
+        doublePuts($stderr,file,"   if (__bencher_index >= #{$warmup}) print(\"#{name}: #{plan.vm}: #{plan.iteration}: \" + (__bencher_index - #{$warmup}) + \": Time: \"+(__after-__before));");
         doublePuts($stderr,file,"   gc();") unless plan.vm.shouldMeasureGC
         doublePuts($stderr,file,"}")
+      elsif benchParams.kind == :throughputBenchmark
+        emitTimerFunctionCode(file)
+        benchParams.dataPaths.each {
+          | path |
+          doublePuts($stderr,file,"load(#{path.inspect});")
+        }
+        doublePuts($stderr,file,"#{benchParams.setUpCommand.js}")
+        if benchParams.doWarmup
+          warmup = $warmup
+        else
+          warmup = 0
+        end
+        doublePuts($stderr,file,"for (var __bencher_index = 0; __bencher_index < #{warmup + $inner}; __bencher_index++) {")
+        doublePuts($stderr,file,"    var __before = __bencher_curTimeMS();")
+        doublePuts($stderr,file,"    var __after = __before;")
+        doublePuts($stderr,file,"    var __runs = 0;")
+        doublePuts($stderr,file,"    var __expected = #{$quantum};")
+        doublePuts($stderr,file,"    while (true) {")
+        $rerun.times {
+          doublePuts($stderr,file,"       #{benchParams.command.js}")
+        }
+        doublePuts($stderr,file,"       __runs++;")
+        doublePuts($stderr,file,"       __after = __bencher_curTimeMS();")
+        if benchParams.deterministic
+          doublePuts($stderr,file,"       if (true) {")
+        else
+          doublePuts($stderr,file,"       if (__after - __before >= __expected) {")
+        end
+        doublePuts($stderr,file,"           if (__runs >= #{benchParams.minimumIterations} || __bencher_index < #{warmup})")
+        doublePuts($stderr,file,"               break;")
+        doublePuts($stderr,file,"           __expected += #{$quantum}")
+        doublePuts($stderr,file,"       }")
+        doublePuts($stderr,file,"    }")
+        doublePuts($stderr,file,"    if (__bencher_index >= #{warmup}) print(\"#{name}: #{plan.vm}: #{plan.iteration}: \" + (__bencher_index - #{warmup}) + \": Time: \"+((__after-__before)/__runs));")
+        doublePuts($stderr,file,"}")
+        doublePuts($stderr,file,"#{benchParams.tearDownCommand.js}")
       else
+        raise unless benchParams.kind == :singleFileTimedBenchmark
         doublePuts($stderr,file,"function __bencher_run(__bencher_what) {")
         doublePuts($stderr,file,"   var __bencher_before = __bencher_curTimeMS();")
         $rerun.times {
@@ -686,17 +965,26 @@ def emitBenchRunCodeFile(name, plan, benchDataPath, benchPath)
         doublePuts($stderr,file,"   return __bencher_after - __bencher_before;")
         doublePuts($stderr,file,"}")
         $warmup.times {
-          doublePuts($stderr,file,"__bencher_run(#{benchPath.inspect})")
+          doublePuts($stderr,file,"__bencher_run(#{benchParams.benchPath.inspect})")
           doublePuts($stderr,file,"gc();") unless plan.vm.shouldMeasureGC
         }
         $inner.times {
           | innerIndex |
-          doublePuts($stderr,file,"print(\"#{name}: #{plan.vm}: #{plan.iteration}: #{innerIndex}: Time: \"+__bencher_run(#{benchPath.inspect}));")
+          doublePuts($stderr,file,"print(\"#{name}: #{plan.vm}: #{plan.iteration}: #{innerIndex}: Time: \"+__bencher_run(#{benchParams.benchPath.inspect}));")
           doublePuts($stderr,file,"gc();") unless plan.vm.shouldMeasureGC
         }
       end
     }
-  when :dumpRenderTree
+  when :dumpRenderTree, :webkitTestRunner
+    case $timeMode
+    when :preciseTime
+      curTime = "(testRunner.preciseTime()*1000)"
+    when :date
+      curTime = "(Date.now())"
+    else
+      raise
+    end
+
     mainCode = Benchfile.create("bencher") {
       | file |
       doublePuts($stderr,file,"__bencher_count = 0;")
@@ -722,7 +1010,6 @@ def emitBenchRunCodeFile(name, plan, benchDataPath, benchPath)
       doublePuts($stderr,file,"if (window.testRunner) {")
       doublePuts($stderr,file,"    testRunner.dumpAsText(window.enablePixelTesting);")
       doublePuts($stderr,file,"    testRunner.waitUntilDone();")
-      doublePuts($stderr,file,"    noInline = testRunner.neverInlineFunction || function(){};")
       doublePuts($stderr,file,"}")
       doublePuts($stderr,file,"")
       doublePuts($stderr,file,"function debug(msg)")
@@ -741,6 +1028,10 @@ def emitBenchRunCodeFile(name, plan, benchDataPath, benchPath)
       doublePuts($stderr,file,"function reportResult(result) {")
       doublePuts($stderr,file,"    __bencher_continuation(result);")
       doublePuts($stderr,file,"}")
+      if benchParams.kind == :singleFileTimedCallbackBenchmark
+        doublePuts($stderr,file,"")
+        doublePuts($stderr,file,benchParams.callbackDecl)
+      end
       doublePuts($stderr,file,"")
       doublePuts($stderr,file,"function __bencher_runImpl(continuation) {")
       doublePuts($stderr,file,"    function doit() {")
@@ -749,10 +1040,53 @@ def emitBenchRunCodeFile(name, plan, benchDataPath, benchPath)
       doublePuts($stderr,file,"        var testFrame = document.getElementById(\"testframe\");")
       doublePuts($stderr,file,"        testFrame.contentDocument.open();")
       doublePuts($stderr,file,"        testFrame.contentDocument.write(\"<!DOCTYPE html>\\n<head></head><body><div id=\\\"console\\\"></div>\");")
-      if benchDataPath
-        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"<script src=\\\"#{benchDataPath}\\\"></script>\");")
+      if benchParams.kind == :throughputBenchmark or benchParams.kind == :multiFileTimedBenchmark
+        benchParams.dataPaths.each {
+          | path |
+          doublePuts($stderr,file,"        testFrame.contentDocument.write(\"<script src=#{path.inspect.inspect[1..-2]}></script>\");")
+        }
+      end
+      if benchParams.kind == :throughputBenchmark
+        if benchParams.doWarmup
+          warmup = $warmup
+        else
+          warmup = 0
+        end
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"<script type=\\\"text/javascript\\\">\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"#{benchParams.setUpCommand.js}\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"var __bencher_before = #{curTime};\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"var __bencher_after = __bencher_before;\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"var __bencher_expected = #{$quantum};\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"var __bencher_runs = 0;\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"while (true) {\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"    #{benchParams.command.js}\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"    __bencher_runs++;\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"    __bencher_after = #{curTime};\");")
+        if benchParams.deterministic
+          doublePuts($stderr,file,"        testFrame.contentDocument.write(\"    if (true) {\");")
+        else
+          doublePuts($stderr,file,"        testFrame.contentDocument.write(\"    if (__bencher_after - __bencher_before >= __bencher_expected) {\");")
+        end
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"        if (__bencher_runs >= #{benchParams.minimumIterations} || window.parent.__bencher_count < #{warmup})\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"            break;\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"        __bencher_expected += #{$quantum}\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"    }\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"}\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"#{benchParams.tearDownCommand.js}\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"window.parent.reportResult((__bencher_after - __bencher_before) / __bencher_runs);\");")
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"</script>\");")
+      else
+        doublePuts($stderr,file,"        testFrame.contentDocument.write(\"<script type=\\\"text/javascript\\\">var __bencher_before = #{curTime};</script>\");")
+        if benchParams.kind == :multiFileTimedBenchmark
+          doublePuts($stderr,file,"        testFrame.contentDocument.write(#{benchParams.command.html.inspect});")
+        else
+          doublePuts($stderr,file,"        testFrame.contentDocument.write(\"<script src=#{benchParams.benchPath.inspect.inspect[1..-2]}></script>\");")
+        end
+        unless benchParams.kind == :singleFileTimedCallbackBenchmark
+          doublePuts($stderr,file,"        testFrame.contentDocument.write(\"<script type=\\\"text/javascript\\\">window.parent.reportResult(#{curTime} - __bencher_before);</script>\");")
+        end
       end
-      doublePuts($stderr,file,"        testFrame.contentDocument.write(\"<script type=\\\"text/javascript\\\">if (window.testRunner) noInline=window.testRunner.neverInlineFunction || function(){}; if (typeof noInline == 'undefined') noInline=function(){}; __bencher_before = Date.now();</script><script src=\\\"#{benchPath}\\\"></script><script type=\\\"text/javascript\\\">window.parent.reportResult(Date.now() - __bencher_before);</script></body></html>\");")
+      doublePuts($stderr,file,"        testFrame.contentDocument.write(\"</body></html>\");")
       doublePuts($stderr,file,"        testFrame.contentDocument.close();")
       doublePuts($stderr,file,"    }")
       doublePuts($stderr,file,"    __bencher_continuation = continuation;")
@@ -769,30 +1103,87 @@ def emitBenchRunCodeFile(name, plan, benchDataPath, benchPath)
   end
 end
 
-def emitBenchRunCode(name, plan, benchDataPath, benchPath)
-  plan.vm.emitRunCode(emitBenchRunCodeFile(name, plan, benchDataPath, benchPath))
+def emitBenchRunCode(name, plan, benchParams)
+  plan.vm.emitRunCode(emitBenchRunCodeFile(name, plan, benchParams), plan)
+end
+
+class FileCreator
+  def initialize(filename)
+    @filename = filename
+    @state = :empty
+  end
+  
+  def puts(text)
+    $script.print "echo #{Shellwords.shellescape(text)}"
+    if @state == :empty
+      $script.print " > "
+      @state = :nonEmpty
+    else
+      $script.print " >> "
+    end
+    $script.puts "#{Shellwords.shellescape(@filename)}"
+  end
+  
+  def close
+    if @state == :empty
+      $script.puts "rm -f #{Shellwords.shellescape(text)}"
+      $script.puts "touch #{Shellwords.shellescape(text)}"
+    end
+  end
+  
+  def self.open(filename)
+    outp = FileCreator.new(filename)
+    yield outp
+    outp.close
+  end
+end
+
+def emitSelfContainedBenchRunCode(name, plan, targetFile, configFile, benchmark)
+  FileCreator.open(configFile) {
+    | outp |
+    outp.puts "__bencher_message = \"#{name}: #{plan.vm}: #{plan.iteration}: \";"
+    outp.puts "__bencher_warmup = #{$warmup};"
+    outp.puts "__bencher_inner = #{$inner};"
+    outp.puts "__bencher_benchmark = #{benchmark.to_json};"
+    case $timeMode
+    when :preciseTime
+      outp.puts "__bencher_curTime = (function(){ return testRunner.preciseTime() * 1000; });"
+    when :date
+      outp.puts "__bencher_curTime = (function(){ return Date.now(); });"
+    else
+      raise
+    end
+  }
+  
+  plan.vm.emitRunCode(targetFile, plan)
 end
 
-def planForDescription(plans, benchFullname, vmName, iteration)
-  raise unless benchFullname =~ /\//
+def planForDescription(string, plans, benchFullname, vmName, iteration)
+  raise "Unexpected benchmark full name: #{benchFullname.inspect}, string: #{string.inspect}" unless benchFullname =~ /\//
   suiteName = $~.pre_match
+  return nil if suiteName == "WARMUP"
   benchName = $~.post_match
   result = plans.select{|v| v.suite.name == suiteName and v.benchmark.name == benchName and v.vm.name == vmName and v.iteration == iteration}
-  raise unless result.size == 1
+  raise "Unexpected result dimensions: #{result.inspect}, string: #{string.inspect}" unless result.size == 1
   result[0]
 end
 
 class ParsedResult
-  attr_reader :plan, :innerIndex, :time
+  attr_reader :plan, :innerIndex, :time, :result
   
   def initialize(plan, innerIndex, time)
     @plan = plan
     @innerIndex = innerIndex
-    @time = time
+    if time == :crashed
+      @result = :error
+    else
+      @time = time
+      @result = :success
+    end
     
     raise unless @plan.is_a? BenchPlan
     raise unless @innerIndex.is_a? Integer
-    raise unless @time.is_a? Numeric
+    raise unless @time.is_a? Numeric or @result == :error
   end
   
   def benchmark
@@ -811,14 +1202,29 @@ class ParsedResult
     plan.iteration
   end
   
+  def self.create(plan, innerIndex, time)
+    if plan
+      ParsedResult.new(plan, innerIndex, time)
+    else
+      nil
+    end
+  end
+  
   def self.parse(plans, string)
-    if string =~ /([a-zA-Z0-9\/-]+): ([a-zA-Z0-9_# ]+): ([0-9]+): ([0-9]+): Time: /
+    if string =~ /([a-zA-Z0-9\/_.-]+): ([a-zA-Z0-9_#. ]+): ([0-9]+): ([0-9]+): Time: /
+      benchFullname = $1
+      vmName = $2
+      outerIndex = $3.to_i
+      innerIndex = $4.to_i
+      time = $~.post_match.to_f
+      ParsedResult.create(planForDescription(string, plans, benchFullname, vmName, outerIndex), innerIndex, time)
+    elsif string =~ /([a-zA-Z0-9\/_.-]+): ([a-zA-Z0-9_#. ]+): ([0-9]+): ([0-9]+): CRASHED/
       benchFullname = $1
       vmName = $2
       outerIndex = $3.to_i
       innerIndex = $4.to_i
       time = $~.post_match.to_f
-      ParsedResult.new(planForDescription(plans, benchFullname, vmName, outerIndex), innerIndex, time)
+      ParsedResult.create(planForDescription(string, plans, benchFullname, vmName, outerIndex), innerIndex, :crashed)
     else
       nil
     end
@@ -826,17 +1232,22 @@ class ParsedResult
 end
 
 class VM
+  @@extraEnvSet = {}
+  
   def initialize(origPath, name, nameKind, svnRevision)
     @origPath = origPath.to_s
     @path = origPath.to_s
     @name = name
     @nameKind = nameKind
+    @extraEnv = {}
     
     if $forceVMKind
       @vmType = $forceVMKind
     else
       if @origPath =~ /DumpRenderTree$/
         @vmType = :dumpRenderTree
+      elsif @origPath =~ /WebKitTestRunner$/
+        @vmType = :webkitTestRunner
       else
         @vmType = :jsc
       end
@@ -845,7 +1256,7 @@ class VM
     @svnRevision = svnRevision
     
     # Try to detect information about the VM.
-    if path =~ /\/WebKitBuild\/Release\/([a-zA-Z]+)$/
+    if path =~ /\/WebKitBuild\/(Release|Debug)\/([a-zA-Z]+)$/
       @checkoutPath = $~.pre_match
       # FIXME: Use some variant of this: 
       # <bdash>   def retrieve_revision
@@ -878,11 +1289,13 @@ class VM
     end
     
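+    # Work out which directories must go onto the DYLD search paths. @libPath
+    # is a list because bundle-style layouts need both Contents/Resources and
+    # Contents/Frameworks next to the binary.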
     if @path =~ /\/Release\/([a-zA-Z]+)$/
-      @libPath, @relativeBinPath = $~.pre_match+"/Release", "./#{$1}"
+      @libPath, @relativeBinPath = [$~.pre_match+"/Release"], "./#{$1}"
+    elsif @path =~ /\/Debug\/([a-zA-Z]+)$/
+      @libPath, @relativeBinPath = [$~.pre_match+"/Debug"], "./#{$1}"
     elsif @path =~ /\/Contents\/Resources\/([a-zA-Z]+)$/
-      @libPath = $~.pre_match
+      @libPath = [$~.pre_match + "/Contents/Resources", $~.pre_match + "/Contents/Frameworks"]
     elsif @path =~ /\/JavaScriptCore.framework\/Resources\/([a-zA-Z]+)$/
-      @libPath, @relativeBinPath = $~.pre_match, $&[1..-1]
+      @libPath, @relativeBinPath = [$~.pre_match], $&[1..-1]
     end
   end
   
@@ -894,15 +1307,23 @@ class VM
     end
   end
   
+  def addExtraEnv(key, val)
+    @extraEnv[key] = val
+    @@extraEnvSet[key] = true
+  end
+  
   def copyIntoBenchPath
     raise unless canCopyIntoBenchPath
     basename, filename = Benchfile.uniqueFilename("vm")
     raise unless Dir.mkdir(filename)
-    cmd = "cp -a #{@libPath.inspect}/* #{filename.inspect}"
-    $stderr.puts ">> #{cmd}" if $verbosity>=2
-    raise unless system(cmd)
+    @libPath.each {
+      | libPathPart |
+      cmd = "cp -a #{Shellwords.shellescape(libPathPart)}/* #{Shellwords.shellescape(filename.to_s)}"
+      $stderr.puts ">> #{cmd}" if $verbosity>=2
+      raise unless system(cmd)
+    }
     @path = "#{basename}/#{@relativeBinPath}"
-    @libPath = basename
+    @libPath = [basename]
   end
   
   def to_s
@@ -941,23 +1362,49 @@ class VM
     @svnRevision
   end
   
+  def extraEnv
+    @extraEnv
+  end
+  
   def printFunction
     case @vmType
     when :jsc
       "print"
-    when :dumpRenderTree
+    when :dumpRenderTree, :webkitTestRunner
       "debug"
     else
       raise @vmType
     end
   end
   
-  def emitRunCode(fileToRun)
+  def emitRunCode(fileToRun, plan)
     myLibPath = @libPath
-    myLibPath = "" unless myLibPath
-    $script.puts "export DYLD_LIBRARY_PATH=#{myLibPath.to_s.inspect}"
-    $script.puts "export DYLD_FRAMEWORK_PATH=#{myLibPath.to_s.inspect}"
-    $script.puts "#{path} #{fileToRun}"
+    myLibPath = [] unless myLibPath
+    @@extraEnvSet.keys.each {
+      | key |
+      $script.puts "unset #{Shellwords.shellescape(key)}"
+    }
+    $script.puts "export DYLD_LIBRARY_PATH=#{Shellwords.shellescape(myLibPath.join(':').to_s)}"
+    $script.puts "export DYLD_FRAMEWORK_PATH=#{Shellwords.shellescape(myLibPath.join(':').to_s)}"
+    @extraEnv.each_pair {
+      | key, val |
+      $script.puts "export #{Shellwords.shellescape(key)}=#{Shellwords.shellescape(val)}"
+    }
+    plan.environment.each_pair {
+      | key, val |
+      $script.puts "export #{Shellwords.shellescape(key)}=#{Shellwords.shellescape(val)}"
+    }
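+    # On a nonzero exit status, emit one CRASHED sentinel per inner iteration
+    # so the parser records an error result for this plan.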
+    $script.puts "#{path} #{fileToRun} 2>&1 || {"
+    $script.puts "    echo " + Shellwords.shellescape("#{name} failed to run!") + " 1>&2"
+    $inner.times {
+      | iteration |
+      $script.puts "    echo " + Shellwords.shellescape("#{plan.prefix}: #{iteration}: CRASHED")
+    }
+    $script.puts "}"
+    plan.environment.keys.each {
+      | key |
+      $script.puts "export #{Shellwords.shellescape(key)}="
+    }
   end
 end
 
@@ -977,7 +1424,11 @@ class StatsAccumulator
     result = Stats.new
     @stats.each {
       | stat |
-      result.add(yield stat)
+      if stat.ok?
+        result.add(yield stat)
+      else
+        result.add(nil)
+      end
     }
     result
   end
@@ -1008,6 +1459,20 @@ module Benchmark
   def to_s
     fullname
   end
+  
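+  # A benchmark's weight is how many times each of its samples is added to the
+  # suite- and VM-level accumulators (see BenchPlan#parseResult); heavier
+  # benchmarks therefore pull harder on the reported means.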
+  def weight
+    1
+  end
+  
+  def weightString
+    raise unless weight.is_a? Fixnum
+    raise unless weight >= 1
+    if weight == 1
+      ""
+    else
+      "x#{weight} "
+    end
+  end
 end
 
 class SunSpiderBenchmark
@@ -1018,11 +1483,11 @@ class SunSpiderBenchmark
   end
   
   def emitRunCode(plan)
-    emitBenchRunCode(fullname, plan, nil, ensureFile("SunSpider-#{@name}", "#{SUNSPIDER_PATH}/#{@name}.js"))
+    emitBenchRunCode(fullname, plan, SingleFileTimedBenchmarkParameters.new(ensureFile("SunSpider-#{@name}", "#{SUNSPIDER_PATH}/#{@name}.js")))
   end
 end
 
-class V8Benchmark
+class LongSpiderBenchmark
   include Benchmark
   
   def initialize(name)
@@ -1030,11 +1495,11 @@ class V8Benchmark
   end
   
   def emitRunCode(plan)
-    emitBenchRunCode(fullname, plan, nil, ensureFile("V8-#{@name}", "#{V8_PATH}/v8-#{@name}.js"))
+    emitBenchRunCode(fullname, plan, SingleFileTimedBenchmarkParameters.new(ensureFile("LongSpider-#{@name}", "#{LONGSPIDER_PATH}/#{@name}.js")))
   end
 end
 
-class KrakenBenchmark
+class V8Benchmark
   include Benchmark
   
   def initialize(name)
@@ -1042,99 +1507,453 @@ class KrakenBenchmark
   end
   
   def emitRunCode(plan)
-    emitBenchRunCode(fullname, plan, ensureFile("KrakenData-#{@name}", "#{KRAKEN_PATH}/#{@name}-data.js"), ensureFile("Kraken-#{@name}", "#{KRAKEN_PATH}/#{@name}.js"))
+    emitBenchRunCode(fullname, plan, SingleFileTimedBenchmarkParameters.new(ensureFile("V8-#{@name}", "#{V8_PATH}/v8-#{@name}.js")))
   end
 end
 
-class BenchmarkSuite
-  def initialize(name, path, preferredMean)
-    @name = name
-    @path = path
-    @preferredMean = preferredMean
-    @benchmarks = []
-  end
-  
-  def name
-    @name
-  end
+class V8RealBenchmark
+  include Benchmark
   
-  def to_s
-    @name
-  end
+  attr_reader :v8SuiteName
   
-  def path
-    @path
+  def initialize(v8SuiteName, name, weight, minimumIterations)
+    @v8SuiteName = v8SuiteName
+    @name = name
+    @weight = weight
+    @minimumIterations = minimumIterations
   end
   
-  def add(benchmark)
-    if not $benchmarkPattern or "#{@name}/#{benchmark.name}" =~ $benchmarkPattern
-      benchmark.benchmarkSuite = self
-      @benchmarks << benchmark
-    end
+  def weight
+    @weight
   end
   
-  def benchmarks
-    @benchmarks
+  def emitRunCode(plan)
+    emitBenchRunCode(fullname, plan, ThroughputBenchmarkParameters.new(["base", @v8SuiteName, "jsc-#{@name}"].collect{|v| ensureFile("V8Real-#{v}", "#{V8_REAL_PATH}/#{v}.js")}, simpleCommand("jscSetUp();"), simpleCommand("jscRun();"), simpleCommand("jscTearDown();"), true, false, @minimumIterations))
   end
+end
+
+class OctaneBenchmark
+  include Benchmark
   
-  def benchmarkForName(name)
-    result = @benchmarks.select{|v| v.name == name}
-    raise unless result.length == 1
-    result[0]
+  def initialize(files, name, weight, doWarmup, deterministic, minimumIterations)
+    @files = files
+    @name = name
+    @weight = weight
+    @doWarmup = doWarmup
+    @deterministic = deterministic
+    @minimumIterations = minimumIterations
   end
   
-  def empty?
-    @benchmarks.empty?
+  def weight
+    @weight
   end
   
-  def retain_if
-    @benchmarks.delete_if {
-      | benchmark |
-      not yield benchmark
+  def emitRunCode(plan)
+    files = []
+    files += (["base"] + @files).collect {
+      | v |
+      ensureFile("Octane-#{v}", "#{OCTANE_PATH}/#{v}.js")
     }
+    files += ["jsc-#{@name}"].collect {
+      | v |
+      ensureFile("Octane-#{v}", "#{OCTANE_WRAPPER_PATH}/#{v}.js")
+    }
+    emitBenchRunCode(fullname, plan, ThroughputBenchmarkParameters.new(files, simpleCommand("jscSetUp();"), simpleCommand("jscRun();"), simpleCommand("jscTearDown();"), @doWarmup, @deterministic, @minimumIterations))
   end
+end
+
+class KrakenBenchmark
+  include Benchmark
   
-  def preferredMean
-    @preferredMean
+  def initialize(name)
+    @name = name
   end
   
-  def computeMean(stat)
-    stat.send @preferredMean
+  def emitRunCode(plan)
+    emitBenchRunCode(fullname, plan, MultiFileTimedBenchmarkParameters.new([ensureFile("KrakenData-#{@name}", "#{KRAKEN_PATH}/#{@name}-data.js")], loadCommandForFile("Kraken-#{@name}", "#{KRAKEN_PATH}/#{@name}.js")))
   end
 end
 
-class BenchRunPlan
-  def initialize(benchmark, vm, iteration)
-    @benchmark = benchmark
-    @vm = vm
-    @iteration = iteration
+class JSBenchBenchmark
+  include Benchmark
+  
+  attr_reader :jsBenchMode
+  
+  def initialize(name, jsBenchMode)
+    @name = name
+    @jsBenchMode = jsBenchMode
   end
   
-  def benchmark
-    @benchmark
+  def emitRunCode(plan)
+    callbackDecl  = "function JSBNG_handleResult(result) {\n"
+    callbackDecl += "    if (result.error) {\n"
+    callbackDecl += "        console.log(\"Did not run benchmark correctly!\");\n"
+    callbackDecl += "        quit();\n"
+    callbackDecl += "    }\n"
+    callbackDecl += "    reportResult(result.time);\n"
+    callbackDecl += "}\n"
+    emitBenchRunCode(fullname, plan, SingleFileTimedCallbackBenchmarkParameters.new(callbackDecl, ensureFile("JSBench-#{@name}", "#{JSBENCH_PATH}/#{@name}/#{@jsBenchMode}.js")))
   end
+end
+
+class JSRegressBenchmark
+  include Benchmark
   
-  def suite
-    @benchmark.benchmarkSuite
+  def initialize(name)
+    @name = name
   end
   
-  def vm
-    @vm
+  def emitRunCode(plan)
+    emitBenchRunCode(fullname, plan, SingleFileTimedBenchmarkParameters.new(ensureFile("JSRegress-#{@name}", "#{JSREGRESS_PATH}/#{@name}.js")))
   end
+end
+
+class AsmBenchBenchmark
+  include Benchmark
   
-  def iteration
-    @iteration
+  def initialize(name)
+    @name = name
   end
   
-  def emitRunCode
-    @benchmark.emitRunCode(self)
+  def emitRunCode(plan)
+    emitBenchRunCode(fullname, plan, SingleFileTimedBenchmarkParameters.new(ensureFile("AsmBench-#{@name}", "#{ASMBENCH_PATH}/#{@name}.js")))
   end
 end
 
-class BenchmarkOnVM
-  def initialize(benchmark, suiteOnVM)
+class CompressionBenchBenchmark
+  include Benchmark
+  
+  def initialize(files, name, model)
+    @files = files
+    @name = name
+    @name = name + "-" + model if !model.empty?
+    @name = @name.gsub(" ", "-").downcase
+    @scriptName = name
+    @weight = 1
+    @doWarmup = true
+    @deterministic = true
+    @minimumIterations = 1
+    @model = model
+  end
+  
+  def weight
+    @weight
+  end
+  
+  def emitRunCode(plan)
+    emitBenchRunCode(fullname, plan, ThroughputBenchmarkParameters.new((["base"] + @files + ["jsc-#{@scriptName}"]).collect{|v| ensureFile("Compression-#{v}", "#{COMPRESSIONBENCH_PATH}/#{v}.js")}, simpleCommand("jscSetUp('#{@model}');"), simpleCommand("jscRun();"), simpleCommand("jscTearDown();"), @doWarmup, @deterministic, @minimumIterations))
+  end
+end
+
+class DSPJSFiltrrBenchmark
+  include Benchmark
+  
+  def initialize(name, filterKey)
+    @name = name
+    @filterKey = filterKey
+  end
+  
+  def emitRunCode(plan)
+    ensureAbsoluteFile(DSPJS_FILTRR_PATH + "filtrr.js")
+    ensureAbsoluteFile(DSPJS_FILTRR_PATH + "filtrr_back.jpg")
+    ensureAbsoluteFile(DSPJS_FILTRR_PATH + "filtrr-jquery.min.js")
+    ensureAbsoluteFile(DSPJS_FILTRR_PATH + "filtrr-bencher.html")
+    emitSelfContainedBenchRunCode(fullname, plan, "filtrr-bencher.html", "bencher-config.js", @filterKey)
+  end
+end
+
+class DSPJSVP8Benchmark
+  include Benchmark
+  
+  def initialize
+    @name = "route9-vp8"
+  end
+  
+  def weight
+    5
+  end
+  
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(DSPJS_ROUTE9_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, "route9-bencher.html", "bencher-config.js", "")
+  end
+end
+
+class DSPStarfieldBenchmark
+  include Benchmark
+
+  def initialize
+    @name = "starfield"
+  end
+  
+  def weight
+    5
+  end
+
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(DSPJS_STARFIELD_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, "starfield-bencher.html", "bencher-config.js", "")
+  end
+end
+
+class DSPJSJSLinuxBenchmark
+  include Benchmark
+  def initialize
+    @name = "bellard-jslinux"
+  end
+
+  def weight
+    5
+  end
+
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(DSPJS_JSLINUX_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, "jslinux-bencher.html", "bencher-config.js", "")
+  end
+end
+
+class DSPJSQuake3Benchmark
+  include Benchmark
+
+  def initialize
+    @name = "zynaps-quake3"
+  end
+
+  def weight
+    5
+  end
+
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(DSPJS_QUAKE3_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, "quake-bencher.html", "bencher-config.js", "")
+  end
+end
+
+class DSPJSMandelbrotBenchmark
+  include Benchmark
+
+  def initialize
+    @name = "zynaps-mandelbrot"
+  end
+
+  def weight
+    5
+  end
+
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(DSPJS_MANDELBROT_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, "mandelbrot-bencher.html", "bencher-config.js", "")
+  end
+end
+
+class DSPJSAmmoJSASMBenchmark
+  include Benchmark
+
+  def initialize
+    @name = "ammojs-asm-js"
+  end
+
+  def weight
+    5
+  end
+
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(DSPJS_AMMOJS_ASMJS_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, "ammo-asmjs-bencher.html", "bencher-config.js", "")
+  end
+end
+
+class DSPJSAmmoJSRegularBenchmark
+  include Benchmark
+
+  def initialize
+    @name = "ammojs-regular-js"
+  end
+
+  def weight
+    5
+  end
+
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(DSPJS_AMMOJS_REGULAR_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, "ammo-regular-bencher.html", "bencher-config.js", "")
+  end
+end
+
+class BrowsermarkJSBenchmark
+  include Benchmark
+    
+  def initialize(name)
+    @name = name
+  end
+  
+  def emitRunCode(plan)
+    emitBenchRunCode(fullname, plan, ThroughputBenchmarkParameters.new([ensureFile(name, "#{BROWSERMARK_JS_PATH}/#{name}/test.js"), ensureFile("browsermark-bencher", "#{BROWSERMARK_JS_PATH}/browsermark-bencher.js")], simpleCommand("jscSetUp();"), simpleCommand("jscRun();"), simpleCommand("jscTearDown();"), true, false, 32))
+  end
+end
+
+class BrowsermarkDOMBenchmark
+  include Benchmark
+    
+  def initialize(name)
+    @name = name
+  end
+  
+  def emitRunCode(plan)
+    ensureBenchmarkFiles(BROWSERMARK_PATH)
+    emitSelfContainedBenchRunCode(fullname, plan, "tests/benchmarks/dom/#{name}/index.html", "bencher-config.js", name)
+  end
+end
+
+class BenchmarkSuite
+  def initialize(name, preferredMean, decimalShift)
+    @name = name
+    @preferredMean = preferredMean
+    @benchmarks = []
+    @subSuites = []
+    @decimalShift = decimalShift
+  end
+  
+  def name
+    @name
+  end
+  
+  def to_s
+    @name
+  end
+  
+  def decimalShift
+    @decimalShift
+  end
+  
+  def addIgnoringPattern(benchmark)
+    benchmark.benchmarkSuite = self
+    @benchmarks << benchmark
+  end
+  
+  def add(benchmark)
+    if not $benchmarkPattern or "#{@name}/#{benchmark.name}" =~ $benchmarkPattern
+      addIgnoringPattern(benchmark)
+    end
+  end
+  
+  def addSubSuite(subSuite)
+    @subSuites << subSuite
+  end
+  
+  def benchmarks
+    @benchmarks
+  end
+  
+  def benchmarkForName(name)
+    result = @benchmarks.select{|v| v.name == name}
+    raise unless result.length == 1
+    result[0]
+  end
+  
+  def hasBenchmark(benchmark)
+    array = @benchmarks.select{|v| v == benchmark}
+    raise unless array.length == 1 or array.length == 0
+    array.length == 1
+  end
+  
+  def subSuites
+    @subSuites
+  end
+  
+  def suites
+    [self] + @subSuites
+  end
+  
+  def suitesWithBenchmark(benchmark)
+    result = [self]
+    @subSuites.each {
+      | subSuite |
+      if subSuite.hasBenchmark(benchmark)
+        result << subSuite
+      end
+    }
+    result
+  end
+  
+  def empty?
+    @benchmarks.empty?
+  end
+  
+  def retain_if
+    @benchmarks.delete_if {
+      | benchmark |
+      not yield benchmark
+    }
+  end
+  
+  def preferredMean
+    @preferredMean
+  end
+  
+  def computeMean(stat)
+    if stat.ok?
+      (stat.send @preferredMean) * (10 ** decimalShift)
+    else
+      nil
+    end
+  end
+end
+
+class BenchRunPlan
+  def initialize(benchmark, vm, iteration)
+    @benchmark = benchmark
+    @vm = vm
+    @iteration = iteration
+    @environment = {}
+    if $environment.has_key?(vm.name)
+      if $environment[vm.name].has_key?(benchmark.benchmarkSuite.name)
+        if $environment[vm.name][benchmark.benchmarkSuite.name].has_key?(benchmark.name)
+          @environment = $environment[vm.name][benchmark.benchmarkSuite.name][benchmark.name]
+        end
+      end
+    end
+  end
+  
+  def benchmark
+    @benchmark
+  end
+  
+  def suite
+    @benchmark.benchmarkSuite
+  end
+  
+  def vm
+    @vm
+  end
+  
+  def iteration
+    @iteration
+  end
+  
+  def environment
+    @environment
+  end
+  
+  def prefix
+    "#{@benchmark.fullname}: #{vm.name}: #{iteration}"
+  end
+  
+  def emitRunCode
+    @benchmark.emitRunCode(self)
+  end
+  
+  def to_s
+    benchmark.to_s + "/" + vm.to_s
+  end
+end
+
+class BenchmarkOnVM
+  def initialize(benchmark, suiteOnVM, subSuitesOnVM)
     @benchmark = benchmark
     @suiteOnVM = suiteOnVM
+    @subSuitesOnVM = subSuitesOnVM
     @stats = Stats.new
   end
   
@@ -1162,6 +1981,10 @@ class BenchmarkOnVM
     @suiteOnVM
   end
   
+  def subSuitesOnVM
+    @subSuitesOnVM
+  end
+  
   def stats
     @stats
   end
@@ -1173,6 +1996,17 @@ class BenchmarkOnVM
   end
 end
 
+class NamedStatsAccumulator < StatsAccumulator
+  def initialize(name)
+    super()
+    @name = name
+  end
+  
+  def reportingName
+    @name
+  end
+end
+
 class SuiteOnVM < StatsAccumulator
   def initialize(vm, vmStats, suite)
     super()
@@ -1196,6 +2030,10 @@ class SuiteOnVM < StatsAccumulator
   def vm
     @vm
   end
+
+  def reportingName
+    @vm.name
+  end
   
   def vmStats
     raise unless @vmStats
@@ -1203,6 +2041,32 @@ class SuiteOnVM < StatsAccumulator
   end
 end
 
+class SubSuiteOnVM < StatsAccumulator
+  def initialize(vm, suite)
+    super()
+    @vm = vm
+    @suite = suite
+    raise unless @vm.is_a? VM
+    raise unless @suite.is_a? BenchmarkSuite
+  end
+  
+  def to_s
+    "#{@suite} on #{@vm}"
+  end
+  
+  def suite
+    @suite
+  end
+  
+  def vm
+    @vm
+  end
+  
+  def reportingName
+    @vm.name
+  end
+end
+
 class BenchPlan
   def initialize(benchmarkOnVM, iteration)
     @benchmarkOnVM = benchmarkOnVM
@@ -1236,8 +2100,14 @@ class BenchPlan
   def parseResult(result)
     raise unless result.plan == self
     @benchmarkOnVM.parseResult(result)
-    @benchmarkOnVM.vmStats.statsForIteration(@iteration, result.innerIndex).add(result.time)
-    @benchmarkOnVM.suiteOnVM.statsForIteration(@iteration, result.innerIndex).add(result.time)
+    benchmark.weight.times {
+      @benchmarkOnVM.vmStats.statsForIteration(@iteration, result.innerIndex).add(result.time)
+      @benchmarkOnVM.suiteOnVM.statsForIteration(@iteration, result.innerIndex).add(result.time)
+      @benchmarkOnVM.subSuitesOnVM.each {
+        | subSuite |
+        subSuite.statsForIteration(@iteration, result.innerIndex).add(result.time)
+      }
+    }
   end
 end
 
@@ -1250,7 +2120,7 @@ def lpad(str,chars)
 end
 
 def rpad(str,chars)
-  while str.length<chars
+  while str.length < chars
     str+=" "
   end
   str
@@ -1266,15 +2136,17 @@ def center(str,chars)
   str
 end
 
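+# Each suite carries a decimalShift: BenchmarkSuite#computeMean scales its
+# means by 10**decimalShift, and the padding below compensates, so score-style
+# suites (Octane, shift 1) and long-running ones (Kraken, shift -1) stay
+# column-aligned.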
-def statsToStr(stats)
-  if $inner*$outer == 1
-    string = numToStr(stats.mean)
+def statsToStr(stats, decimalShift)
+  if stats.error?
+    lpad(center("ERROR", 10+10+2), 12+10+2)
+  elsif $inner*$outer == 1
+    string = numToStr(stats.mean, decimalShift)
     raise unless string =~ /\./
     left = $~.pre_match
     right = $~.post_match
-    lpad(left,12)+"."+rpad(right,9)
+    lpad(left, 13 - decimalShift) + "." + rpad(right, 10 + decimalShift)
   else
-    lpad(numToStr(stats.mean),11)+"+-"+rpad(numToStr(stats.confInt),9)
+    lpad(numToStr(stats.mean, decimalShift), 12) + "+-" + rpad(numToStr(stats.confInt, decimalShift), 10)
   end
 end
 
@@ -1305,11 +2177,9 @@ end
 def runAndGetResults
   results = nil
   Dir.chdir(BENCH_DATA_PATH) {
-    IO.popen("sh ./runscript", "r") {
-      | inp |
-      results = inp.read
-    }
-    raise "Script did not complete correctly: #{$?}" unless $?.success?
+    $stderr.puts ">> sh ./runscript" if $verbosity >= 2
+    raise "Script did not complete correctly: #{$?}" unless system("sh ./runscript > runlog")
+    results = IO::read("runlog")
   }
   raise unless results
   results
@@ -1318,14 +2188,20 @@ end
 def parseAndDisplayResults(results)
   vmStatses = []
   $vms.each {
-    vmStatses << StatsAccumulator.new
+    | vm |
+    vmStatses << NamedStatsAccumulator.new(vm.name)
   }
   
   suitesOnVMs = []
   suitesOnVMsForSuite = {}
+  subSuitesOnVMsForSubSuite = {}
   $suites.each {
     | suite |
     suitesOnVMsForSuite[suite] = []
+    suite.subSuites.each {
+      | subSuite |
+      subSuitesOnVMsForSubSuite[subSuite] = []
+    }
   }
   suitesOnVMsForVM = {}
   $vms.each {
@@ -1346,12 +2222,25 @@ def parseAndDisplayResults(results)
     $suites.each {
       | suite |
       suiteOnVM = SuiteOnVM.new(vm, vmStats, suite)
+      subSuitesOnVM = suite.subSuites.map {
+        | subSuite |
+        result = SubSuiteOnVM.new(vm, subSuite)
+        subSuitesOnVMsForSubSuite[subSuite] << result
+        result
+      }
       suitesOnVMs << suiteOnVM
       suitesOnVMsForSuite[suite] << suiteOnVM
       suitesOnVMsForVM[vm] << suiteOnVM
       suite.benchmarks.each {
         | benchmark |
-        benchmarkOnVM = BenchmarkOnVM.new(benchmark, suiteOnVM)
+        subSuitesOnVMForThisBenchmark = []
+        subSuitesOnVM.each {
+          | subSuiteOnVM |
+          if subSuiteOnVM.suite.hasBenchmark(benchmark)
+            subSuitesOnVMForThisBenchmark << subSuiteOnVM
+          end
+        }
+        benchmarkOnVM = BenchmarkOnVM.new(benchmark, suiteOnVM, subSuitesOnVMForThisBenchmark)
         benchmarksOnVMs << benchmarkOnVM
         benchmarksOnVMsForBenchmark[benchmark] << benchmarkOnVM
       }
@@ -1377,7 +2266,7 @@ def parseAndDisplayResults(results)
     elsif line =~ /HARDWARE:hw\.model: /
       hwmodel = $~.post_match.chomp
     else
-      result = ParsedResult.parse(plans, line.chomp)
+      result = ParsedResult.parse(plans, line)
       if result
         result.plan.parseResult(result)
       end
@@ -1406,7 +2295,11 @@ def parseAndDisplayResults(results)
         
         # curResult now holds 1 sample for each of the means computed in the above
         # loop. Compute the geomean over this, and store it.
-        result.add(curResult.geometricMean)
+        if curResult.ok?
+          result.add(curResult.geometricMean)
+        else
+          result.add(nil)
+        end
       }
     }
 
@@ -1431,58 +2324,68 @@ def parseAndDisplayResults(results)
     }
   end
 
-  reportName =
-    (if ($vms.collect {
+  if $outputName
+    reportName = $outputName
+  else
+    reportName =
+      (if ($vms.collect {
+             | vm |
+             vm.nameKind
+           }.index :auto)
+         ""
+       else
+         text = $vms.collect {
            | vm |
-           vm.nameKind
-         }.index :auto)
-       ""
-     else
-       $vms.collect {
-         | vm |
-         vm.to_s
-       }.join("_") + "_"
-     end) +
-    ($suites.collect {
-       | suite |
-       suite.to_s
-     }.join("")) + "_" +
-    (if hostname
-       hostname + "_"
-     else
-       ""
-     end)+
-    (begin
-       time = Time.now
-       "%04d%02d%02d_%02d%02d" %
-         [ time.year, time.month, time.day,
-           time.hour, time.min ]
-     end) +
-    "_benchReport.txt"
+           vm.to_s
+         }.join("_") + "_"
+         if text.size >= 40
+           ""
+         else
+           text
+         end
+       end) +
+      ($suites.collect {
+         | suite |
+         suite.to_s
+       }.join("")) + "_" +
+      (if hostname
+         hostname + "_"
+       else
+         ""
+       end)+
+      (begin
+         time = Time.now
+         "%04d%02d%02d_%02d%02d" %
+           [ time.year, time.month, time.day,
+             time.hour, time.min ]
+       end)
+  end
 
   unless $brief
-    puts "Generating benchmark report at #{reportName}"
+    puts "Generating benchmark report at #{Dir.pwd}/#{reportName}_report.txt"
+    puts "And raw data at #{Dir.pwd}/#{reportName}.json"
   end
   
   outp = $stdout
+  json = {}
   begin
-    outp = File.open(reportName,"w")
+    outp = File.open(reportName + "_report.txt","w")
   rescue => e
-    $stderr.puts "Error: could not save report to #{reportName}: #{e}"
+    $stderr.puts "Error: could not save report to #{reportName}_report.txt: #{e}"
     $stderr.puts
   end
   
   def createVMsString
     result = ""
-    result += "   " if $suites.size > 1
-    result += rpad("", $benchpad)
+    result += "   " if $allSuites.size > 1
+    result += rpad("", $benchpad + $weightpad)
     result += " "
     $vms.size.times {
       | index |
       if index != 0
         result += " "+NoChange.new(0).shortForm
       end
-      result += lpad(center($vms[index].name, 9+9+2), 11+9+2)
+      result += lpad(center($vms[index].name, 10+10+2), 12+10+2)
     }
     result += "    "
     if $vms.size >= 3
@@ -1493,16 +2396,24 @@ def parseAndDisplayResults(results)
     result
   end
   
+  def andJoin(list)
+    if list.size == 1
+      list[0].to_s
+    elsif list.size == 2
+      "#{list[0]} and #{list[1]}"
+    else
+      "#{list[0..-2].join(', ')}, and #{list[-1]}"
+    end
+  end
+  
+  json["vms"] = $vms.collect{|v| v.name}
+  json["suites"] = {}
+  json["runlog"] = results
+  
   columns = [createVMsString.size, 78].max
   
   outp.print "Benchmark report for "
-  if $suites.size == 1
-    outp.print $suites[0].to_s
-  elsif $suites.size == 2
-    outp.print "#{$suites[0]} and #{$suites[1]}"
-  else
-    outp.print "#{$suites[0..-2].join(', ')}, and #{$suites[-1]}"
-  end
+  outp.print andJoin($suites)
   if hostname
     outp.print " on #{hostname}"
   end
@@ -1510,17 +2421,7 @@ def parseAndDisplayResults(results)
     outp.print " (#{hwmodel})"
   end
   outp.puts "."
-  outp.puts
-  
-  # This looks stupid; revisit later.
-  if false
-    $suites.each {
-      | suite |
-      outp.puts "#{suite} at #{suite.path}"
-    }
-    
-    outp.puts
-  end
+  outp.puts
   
   outp.puts "VMs tested:"
   $vms.each {
@@ -1530,6 +2431,10 @@ def parseAndDisplayResults(results)
       outp.print " (r#{vm.svnRevision})"
     end
     outp.puts
+    vm.extraEnv.each_pair {
+      | key, val |
+      outp.puts "    export #{key}=#{val}"
+    }
   }
   
   outp.puts
@@ -1564,21 +2469,24 @@ def parseAndDisplayResults(results)
     outp.puts createVMsString
   end
   
-  def summaryStats(outp, accumulators, name, &proc)
-    outp.print "   " if $suites.size > 1
-    outp.print rpad(name, $benchpad)
+  def summaryStats(outp, json, jsonKey, accumulators, name, decimalShift, &proc)
+    resultingJson = {}
+    outp.print "   " if $allSuites.size > 1
+    outp.print rpad(name, $benchpad + $weightpad)
     outp.print " "
     accumulators.size.times {
       | index |
       if index != 0
         outp.print " "+accumulators[index].stats(&proc).compareTo(accumulators[index-1].stats(&proc)).shortForm
       end
-      outp.print statsToStr(accumulators[index].stats(&proc))
+      outp.print statsToStr(accumulators[index].stats(&proc), decimalShift)
+      resultingJson[accumulators[index].reportingName] = accumulators[index].stats(&proc).jsonMap
     }
     if accumulators.size>=2
       outp.print("    "+accumulators[-1].stats(&proc).compareTo(accumulators[0].stats(&proc)).longForm)
     end
     outp.puts
+    json[jsonKey] = resultingJson
   end
   
   def meanName(currentMean, preferredMean)
@@ -1589,18 +2497,18 @@ def parseAndDisplayResults(results)
     result
   end
   
-  def allSummaryStats(outp, accumulators, preferredMean)
-    summaryStats(outp, accumulators, meanName("arithmetic", preferredMean)) {
+  def allSummaryStats(outp, json, accumulators, preferredMean, decimalShift)
+    summaryStats(outp, json, "<arithmetic>", accumulators, meanName("arithmetic", preferredMean), decimalShift) {
       | stat |
       stat.arithmeticMean
     }
     
-    summaryStats(outp, accumulators, meanName("geometric", preferredMean)) {
+    summaryStats(outp, json, "<geometric>", accumulators, meanName("geometric", preferredMean), decimalShift) {
       | stat |
       stat.geometricMean
     }
     
-    summaryStats(outp, accumulators, meanName("harmonic", preferredMean)) {
+    summaryStats(outp, json, "<harmonic>", accumulators, meanName("harmonic", preferredMean), decimalShift) {
       | stat |
       stat.harmonicMean
     }
@@ -1608,16 +2516,29 @@ def parseAndDisplayResults(results)
   
   $suites.each {
     | suite |
+    suiteJson = {}
+    subSuiteJsons = {}
+    suite.subSuites.each {
+      | subSuite |
+      subSuiteJsons[subSuite] = {}
+    }
+    
     printVMs(outp)
-    if $suites.size > 1
-      outp.puts "#{suite.name}:"
+    if $allSuites.size > 1
+      outp.puts(andJoin(suite.suites.map{|v| v.name}) + ":")
     else
       outp.puts
     end
     suite.benchmarks.each {
       | benchmark |
-      outp.print "   " if $suites.size > 1
-      outp.print rpad(benchmark.name, $benchpad)
+      benchmarkJson = {}
+      outp.print "   " if $allSuites.size > 1
+      outp.print rpad(benchmark.name, $benchpad) + rpad(benchmark.weightString, $weightpad)
+      if benchmark.name.size > $benchNameClip
+        outp.puts
+        outp.print "   " if $allSuites.size > 1
+        outp.print((" " * $benchpad) + (" " * $weightpad))
+      end
       outp.print " "
       myConfigs = benchmarksOnVMsForBenchmark[benchmark]
       myConfigs.size.times {
@@ -1625,40 +2546,68 @@ def parseAndDisplayResults(results)
         if index != 0
           outp.print " "+myConfigs[index].stats.compareTo(myConfigs[index-1].stats).shortForm
         end
-        outp.print statsToStr(myConfigs[index].stats)
+        outp.print statsToStr(myConfigs[index].stats, suite.decimalShift)
+        benchmarkJson[myConfigs[index].vm.name] = myConfigs[index].stats.jsonMap
       }
       if $vms.size>=2
         outp.print("    "+myConfigs[-1].stats.compareTo(myConfigs[0].stats).to_s)
       end
       outp.puts
+      suiteJson[benchmark.name] = benchmarkJson
+      suite.subSuites.each {
+        | subSuite |
+        if subSuite.hasBenchmark(benchmark)
+          subSuiteJsons[subSuite][benchmark.name] = benchmarkJson
+        end
+      }
     }
     outp.puts
-    allSummaryStats(outp, suitesOnVMsForSuite[suite], suite.preferredMean)
-    outp.puts if $suites.size > 1
+    unless suite.subSuites.empty?
+      suite.subSuites.each {
+        | subSuite |
+        outp.puts "#{subSuite.name}:"
+        allSummaryStats(outp, subSuiteJsons[subSuite], subSuitesOnVMsForSubSuite[subSuite], subSuite.preferredMean, subSuite.decimalShift)
+        outp.puts
+      }
+      outp.puts "#{suite.name} including #{andJoin(suite.subSuites.map{|v| v.name})}:"
+    end
+    allSummaryStats(outp, suiteJson, suitesOnVMsForSuite[suite], suite.preferredMean, suite.decimalShift)
+    outp.puts if $allSuites.size > 1
+    
+    json["suites"][suite.name] = suiteJson
+    suite.subSuites.each {
+      | subSuite |
+      json["suites"][subSuite.name] = subSuiteJsons[subSuite]
+    }
   }
   
   if $suites.size > 1
     printVMs(outp)
     outp.puts "All benchmarks:"
-    allSummaryStats(outp, vmStatses, nil)
+    allSummaryStats(outp, json, vmStatses, nil, 0)
+    
+    scaledResultJson = {}
     
     outp.puts
     printVMs(outp)
     outp.puts "Geomean of preferred means:"
     outp.print "   "
-    outp.print rpad("<scaled-result>", $benchpad)
+    outp.print rpad("<scaled-result>", $benchpad + $weightpad)
     outp.print " "
     $vms.size.times {
       | index |
       if index != 0
         outp.print " "+overallResults[index].compareTo(overallResults[index-1]).shortForm
       end
-      outp.print statsToStr(overallResults[index])
+      outp.print statsToStr(overallResults[index], 0)
+      scaledResultJson[$vms[index].name] = overallResults[index].jsonMap
     }
     if overallResults.size>=2
       outp.print("    "+overallResults[-1].compareTo(overallResults[0]).longForm)
     end
     outp.puts
+    
+    json["<scaled-result>"] = scaledResultJson
   end
   outp.puts
   
@@ -1668,7 +2617,7 @@ def parseAndDisplayResults(results)
   
   if outp != $stdout and not $brief
     puts
-    File.open(reportName) {
+    File.open(reportName + "_report.txt") {
       | inp |
       puts inp.read
     }
@@ -1679,7 +2628,10 @@ def parseAndDisplayResults(results)
     puts(overallResults.collect{|stats| stats.confInt}.join("\t"))
   end
   
-  
+  File.open(reportName + ".json", "w") {
+    | outp |
+    outp.puts json.to_json
+  }
 end
 
 begin
@@ -1688,8 +2640,17 @@ begin
   def resetBenchOptionsIfNecessary
     unless $sawBenchOptions
       $includeSunSpider = false
+      $includeLongSpider = false
       $includeV8 = false
       $includeKraken = false
+      $includeJSBench = false
+      $includeJSRegress = false
+      $includeAsmBench = false
+      $includeDSPJS = false
+      $includeBrowsermarkJS = false
+      $includeBrowsermarkDOM = false
+      $includeOctane = false
+      $includeCompressionBench = false
       $sawBenchOptions = true
     end
   end
@@ -1698,16 +2659,22 @@ begin
                  ['--inner', GetoptLong::REQUIRED_ARGUMENT],
                  ['--outer', GetoptLong::REQUIRED_ARGUMENT],
                  ['--warmup', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--no-ss-warmup', GetoptLong::NO_ARGUMENT],
+                 ['--quantum', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--minimum', GetoptLong::REQUIRED_ARGUMENT],
                  ['--timing-mode', GetoptLong::REQUIRED_ARGUMENT],
-                 ['--sunspider-only', GetoptLong::NO_ARGUMENT],
-                 ['--v8-only', GetoptLong::NO_ARGUMENT],
-                 ['--kraken-only', GetoptLong::NO_ARGUMENT],
-                 ['--exclude-sunspider', GetoptLong::NO_ARGUMENT],
-                 ['--exclude-v8', GetoptLong::NO_ARGUMENT],
-                 ['--exclude-kraken', GetoptLong::NO_ARGUMENT],
                  ['--sunspider', GetoptLong::NO_ARGUMENT],
-                 ['--v8', GetoptLong::NO_ARGUMENT],
+                 ['--longspider', GetoptLong::NO_ARGUMENT],
+                 ['--v8-spider', GetoptLong::NO_ARGUMENT],
                  ['--kraken', GetoptLong::NO_ARGUMENT],
+                 ['--js-bench', GetoptLong::NO_ARGUMENT],
+                 ['--js-regress', GetoptLong::NO_ARGUMENT],
+                 ['--asm-bench', GetoptLong::NO_ARGUMENT],
+                 ['--dsp', GetoptLong::NO_ARGUMENT],
+                 ['--browsermark-js', GetoptLong::NO_ARGUMENT],
+                 ['--browsermark-dom', GetoptLong::NO_ARGUMENT],
+                 ['--octane', GetoptLong::NO_ARGUMENT],
+                 ['--compression-bench', GetoptLong::NO_ARGUMENT],
                  ['--benchmarks', GetoptLong::REQUIRED_ARGUMENT],
                  ['--measure-gc', GetoptLong::OPTIONAL_ARGUMENT],
                  ['--force-vm-kind', GetoptLong::REQUIRED_ARGUMENT],
@@ -1723,6 +2690,9 @@ begin
                  ['--prepare-only', GetoptLong::NO_ARGUMENT],
                  ['--analyze', GetoptLong::REQUIRED_ARGUMENT],
                  ['--vms', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--output-name', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--environment', GetoptLong::REQUIRED_ARGUMENT],
+                 ['--config', GetoptLong::REQUIRED_ARGUMENT],
                  ['--help', '-h', GetoptLong::NO_ARGUMENT]).each {
     | opt, arg |
     case opt
@@ -1734,6 +2704,12 @@ begin
       $outer = intArg(opt,arg,1,nil)
     when '--warmup'
       $warmup = intArg(opt,arg,0,nil)
+    when '--no-ss-warmup'
+      $sunSpiderWarmup = false
+    when '--quantum'
+      $quantum = intArg(opt,arg,1,nil)
+    when '--minimum'
+      $minimum = intArg(opt,arg,1,nil)
     when '--timing-mode'
       if arg.upcase == "PRECISETIME"
         $timeMode = :preciseTime
@@ -1750,40 +2726,54 @@ begin
         $forceVMKind = :jsc
       elsif arg.upcase == "DUMPRENDERTREE"
         $forceVMKind = :dumpRenderTree
+      elsif arg.upcase == "WEBKITTESTRUNNER"
+        $forceVMKind = :webkitTestRunner
       elsif arg.upcase == "AUTO"
         $forceVMKind = nil
       else
-        quickFail("Expected either 'jsc' or 'DumpRenderTree' for --force-vm-kind, but got '#{arg}'.",
+        quickFail("Expected 'jsc', 'DumpRenderTree', or 'WebKitTestRunner' for --force-vm-kind, but got '#{arg}'.",
                   "Invalid argument for command-line option")
       end
     when '--force-vm-copy'
       $needToCopyVMs = true
     when '--dont-copy-vms'
       $dontCopyVMs = true
-    when '--sunspider-only'
-      $includeV8 = false
-      $includeKraken = false
-    when '--v8-only'
-      $includeSunSpider = false
-      $includeKraken = false
-    when '--kraken-only'
-      $includeSunSpider = false
-      $includeV8 = false
-    when '--exclude-sunspider'
-      $includeSunSpider = false
-    when '--exclude-v8'
-      $includeV8 = false
-    when '--exclude-kraken'
-      $includeKraken = false
     when '--sunspider'
       resetBenchOptionsIfNecessary
       $includeSunSpider = true
-    when '--v8'
+    when '--longspider'
+      resetBenchOptionsIfNecessary
+      $includeLongSpider = true
+    when '--v8-spider'
       resetBenchOptionsIfNecessary
       $includeV8 = true
     when '--kraken'
       resetBenchOptionsIfNecessary
       $includeKraken = true
+    when '--js-bench'
+      resetBenchOptionsIfNecessary
+      $includeJSBench = true
+    when '--js-regress'
+      resetBenchOptionsIfNecessary
+      $includeJSRegress = true
+    when '--asm-bench'
+      resetBenchOptionsIfNecessary
+      $includeAsmBench = true
+    when '--dsp'
+      resetBenchOptionsIfNecessary
+      $includeDSPJS = true
+    when '--browsermark-js'
+      resetBenchOptionsIfNecessary
+      $includeBrowsermarkJS = true
+    when '--browsermark-dom'
+      resetBenchOptionsIfNecessary
+      $includeBrowsermarkDOM = true
+    when '--octane'
+      resetBenchOptionsIfNecessary
+      $includeOctane = true
+    when '--compression-bench'
+      resetBenchOptionsIfNecessary
+      $includeCompressionBench = true
     when '--benchmarks'
       $benchmarkPattern = Regexp.new(arg)
     when '--measure-gc'
@@ -1811,6 +2801,32 @@ begin
       $prepare = false
       $run = false
       $analyze << arg
+    when '--output-name'
+      $outputName = arg
+    when '--vms'
+      JSON::parse(IO::read(arg)).each {
+        | vmDescription |
+        path = Pathname.new(vmDescription["path"]).realpath
+        if vmDescription["name"]
+          name = vmDescription["name"]
+          nameKind = :given
+        else
+          name = "Conf\##{$vms.length+1}"
+          nameKind = :auto
+        end
+        vm = VM.new(path, name, nameKind, nil)
+        if vmDescription["env"]
+          vmDescription["env"].each_pair {
+            | key, val |
+            vm.addExtraEnv(key, val)
+          }
+        end
+        $vms << vm
+      }
+    when '--environment'
+      $environment = JSON::parse(IO::read(arg))
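+      # JSON of the form { "<vm>": { "<suite>": { "<benchmark>": { "VAR": "value" } } } },
+      # consumed by BenchRunPlan#initialize.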
+    when '--config'
+      $configPath = Pathname.new(arg)
     when '--help'
       usage
     else
@@ -1818,12 +2834,72 @@ begin
     end
   }
   
+  # Figure out the configuration
+  if $configPath.file?
+    config = JSON::parse(IO::read($configPath.to_s))
+  else
+    config = {}
+  end
+  OCTANE_PATH = config["OctanePath"]
+  BROWSERMARK_PATH = config["BrowserMarkPath"]
+  BROWSERMARK_JS_PATH = config["BrowserMarkJSPath"]
+  BROWSERMARK_DOM_PATH = config["BrowserMarkDOMPath"]
+  ASMBENCH_PATH = config["AsmBenchPath"]
+  COMPRESSIONBENCH_PATH = config["CompressionBenchPath"]
+  DSPJS_FILTRR_PATH = config["DSPJSFiltrrPath"]
+  DSPJS_ROUTE9_PATH = config["DSPJSRoute9Path"]
+  DSPJS_STARFIELD_PATH = config["DSPJSStarfieldPath"]
+  DSPJS_QUAKE3_PATH = config["DSPJSQuake3Path"]
+  DSPJS_MANDELBROT_PATH = config["DSPJSMandelbrotPath"]
+  DSPJS_JSLINUX_PATH = config["DSPJSLinuxPath"]
+  DSPJS_AMMOJS_ASMJS_PATH = config["DSPJSAmmoJSAsmJSPath"]
+  DSPJS_AMMOJS_REGULAR_PATH = config["DSPJSAmmoJSRegularPath"]
+  JSBENCH_PATH = config["JSBenchPath"]
+  KRAKEN_PATH = config["KrakenPath"]
+  
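+  # A minimal example of the configuration file (default: ~/.run-jsc-benchmarks),
+  # with illustrative paths:
+  #
+  #   {
+  #       "OctanePath": "/Users/me/benchmarks/Octane2",
+  #       "KrakenPath": "/Users/me/benchmarks/kraken-1.1/tests"
+  #   }
+  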
   # If the --dont-copy-vms option was passed, it overrides the --force-vm-copy option.
   if $dontCopyVMs
     $needToCopyVMs = false
   end
   
-  SUNSPIDER = BenchmarkSuite.new("SunSpider", SUNSPIDER_PATH, :arithmeticMean)
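+  # Each remaining argument names a VM binary, optionally prefixed with a label
+  # and with VAR=value environment settings. An illustrative invocation:
+  #
+  #   run-jsc-benchmarks TipOfTree:/WebKit/WebKitBuild/Release/jsc Mine:JSC_useDFGJIT=false:/other/WebKitBuild/Release/jsc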
+  ARGV.each {
+    | vm |
+    if vm =~ /([a-zA-Z0-9_ ]+):/
+      name = $1
+      nameKind = :given
+      vm = $~.post_match
+    else
+      name = "Conf\##{$vms.length+1}"
+      nameKind = :auto
+    end
+    envs = []
+    while vm =~ /([a-zA-Z0-9_]+)=([a-zA-Z0-9_:]+):/
+      envs << [$1, $2]
+      vm = $~.post_match
+    end
+    $stderr.puts "#{name}: #{vm}" if $verbosity >= 1
+    vm = VM.new(Pathname.new(vm).realpath, name, nameKind, nil)
+    envs.each {
+      | pair |
+      vm.addExtraEnv(pair[0], pair[1])
+    }
+    $vms << vm
+  }
+  
+  if $vms.empty?
+    quickFail("Please specify at least one configuration on the command line.",
+              "Insufficient arguments")
+  end
+  
+  $vms.each {
+    | vm |
+    if vm.vmType == :jsc
+      $allDRT = false
+    end
+  }
+  
+  SUNSPIDER = BenchmarkSuite.new("SunSpider", :arithmeticMean, 0)
+  WARMUP = BenchmarkSuite.new("WARMUP", :arithmeticMean, 0)
   ["3d-cube", "3d-morph", "3d-raytrace", "access-binary-trees",
    "access-fannkuch", "access-nbody", "access-nsieve",
    "bitops-3bit-bits-in-byte", "bitops-bits-in-byte", "bitops-bitwise-and",
@@ -1834,16 +2910,52 @@ begin
    "string-unpack-code", "string-validate-input"].each {
     | name |
     SUNSPIDER.add SunSpiderBenchmark.new(name)
+    WARMUP.addIgnoringPattern SunSpiderBenchmark.new(name)
+  }
+
+  LONGSPIDER = BenchmarkSuite.new("LongSpider", :geometricMean, 0)
+  ["3d-cube", "3d-morph", "3d-raytrace", "access-binary-trees",
+   "access-fannkuch", "access-nbody", "access-nsieve",
+   "bitops-3bit-bits-in-byte", "bitops-bits-in-byte", "bitops-nsieve-bits",
+   "controlflow-recursive", "crypto-aes", "crypto-md5", "crypto-sha1",
+   "date-format-tofte", "date-format-xparb", "math-cordic",
+   "math-partial-sums", "math-spectral-norm", "string-base64",
+   "string-fasta", "string-tagcloud"].each {
+    | name |
+    LONGSPIDER.add LongSpiderBenchmark.new(name)
   }
 
-  V8 = BenchmarkSuite.new("V8", V8_PATH, :geometricMean)
+  V8 = BenchmarkSuite.new("V8Spider", :geometricMean, 0)
   ["crypto", "deltablue", "earley-boyer", "raytrace",
    "regexp", "richards", "splay"].each {
     | name |
     V8.add V8Benchmark.new(name)
   }
 
-  KRAKEN = BenchmarkSuite.new("Kraken", KRAKEN_PATH, :arithmeticMean)
+  OCTANE = BenchmarkSuite.new("Octane", :geometricMean, 1)
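+  # Each entry is [files, name, weight, doWarmup, deterministic,
+  # minimumIterations], mirroring OctaneBenchmark#initialize; the matching
+  # wrappers/jsc-<name>.js file supplies jscSetUp/jscRun/jscTearDown.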
+  [[["crypto"], "encrypt", 1, true, false, 32],
+   [["crypto"], "decrypt", 1, true, false, 32],
+   [["deltablue"], "deltablue", 2, true, false, 32],
+   [["earley-boyer"], "earley", 1, true, false, 32],
+   [["earley-boyer"], "boyer", 1, true, false, 32],
+   [["navier-stokes"], "navier-stokes", 2, true, false, 16],
+   [["raytrace"], "raytrace", 2, true, false, 32],
+   [["richards"], "richards", 2, true, false, 32],
+   [["splay"], "splay", 2, true, false, 32],
+   [["regexp"], "regexp", 2, false, false, 16],
+   [["pdfjs"], "pdfjs", 2, false, false, 4],
+   [["mandreel"], "mandreel", 2, false, false, 4],
+   [["gbemu-part1", "gbemu-part2"], "gbemu", 2, false, false, 4],
+   [["code-load"], "closure", 1, false, false, 16],
+   [["code-load"], "jquery", 1, false, false, 16],
+   [["box2d"], "box2d", 2, false, false, 8],
+   [["zlib", "zlib-data"], "zlib", 2, false, true, 3],
+   [["typescript", "typescript-input", "typescript-compiler"], "typescript", 2, false, true, 1]].each {
+    | args |
+    OCTANE.add OctaneBenchmark.new(*args)
+  }
+
+  KRAKEN = BenchmarkSuite.new("Kraken", :arithmeticMean, -1)
   ["ai-astar", "audio-beat-detection", "audio-dft", "audio-fft",
    "audio-oscillator", "imaging-darkroom", "imaging-desaturate",
    "imaging-gaussian-blur", "json-parse-financial",
@@ -1853,93 +2965,172 @@ begin
     | name |
     KRAKEN.add KrakenBenchmark.new(name)
   }
-
-  ARGV.each {
-    | vm |
-    if vm =~ /([a-zA-Z0-9_ ]+):/
-      name = $1
-      nameKind = :given
-      vm = $~.post_match
-    else
-      name = "Conf\##{$vms.length+1}"
-      nameKind = :auto
+  
+  JSBENCH = BenchmarkSuite.new("JSBench", :arithmeticMean, 0)
+  [["amazon", "urm"], ["facebook", "urem"], ["google", "urem"], ["twitter", "urem"],
+   ["yahoo", "urem"]].each {
+    | nameAndMode |
+    JSBENCH.add JSBenchBenchmark.new(*nameAndMode)
+  }
+  
+  JSREGRESS = BenchmarkSuite.new("JSRegress", :geometricMean, 0)
+  Dir.foreach(JSREGRESS_PATH) {
+    | filename |
+    if filename =~ /\.js$/
+      name = $~.pre_match
+      JSREGRESS.add JSRegressBenchmark.new(name)
     end
-    $stderr.puts "#{name}: #{vm}" if $verbosity >= 1
-    $vms << VM.new(Pathname.new(vm).realpath, name, nameKind, nil)
   }
   
-  if $vms.empty?
-    quickFail("Please specify at least on configuraiton on the command line.",
-              "Insufficient arguments")
+  ASMBENCH = BenchmarkSuite.new("AsmBench", :geometricMean, 0)
+  if ASMBENCH_PATH
+    Dir.foreach(ASMBENCH_PATH) {
+      | filename |
+      if filename =~ /\.js$/
+        name = $~.pre_match
+        ASMBENCH.add AsmBenchBenchmark.new(name)
+      end
+    }
   end
+
+  COMPRESSIONBENCH = BenchmarkSuite.new("CompressionBench", :geometricMean, 0)
+  [[["huffman", "compression-data"], "huffman", ""],
+   [["arithmetic", "compression-data"], "arithmetic", "Simple"],
+   [["arithmetic", "compression-data"], "arithmetic", "Precise"],
+   [["arithmetic", "compression-data"], "arithmetic", "Complex Precise"],
+   [["arithmetic", "compression-data"], "arithmetic", "Precise Order 0"],
+   [["arithmetic", "compression-data"], "arithmetic", "Precise Order 1"],
+   [["arithmetic", "compression-data"], "arithmetic", "Precise Order 2"],
+   [["arithmetic", "compression-data"], "arithmetic", "Simple Order 1"],
+   [["arithmetic", "compression-data"], "arithmetic", "Simple Order 2"],
+   [["lz-string", "compression-data"], "lz-string", ""]
+  ].each {
+    | args |
+    COMPRESSIONBENCH.add CompressionBenchBenchmark.new(*args)
+  }
+
+  DSPJS = BenchmarkSuite.new("DSP", :geometricMean, 0)
+  DSPJS.add DSPJSFiltrrBenchmark.new("filtrr-posterize-tint", "e2")
+  DSPJS.add DSPJSFiltrrBenchmark.new("filtrr-tint-contrast-sat-bright", "e5")
+  DSPJS.add DSPJSFiltrrBenchmark.new("filtrr-tint-sat-adj-contr-mult", "e7")
+  DSPJS.add DSPJSFiltrrBenchmark.new("filtrr-blur-overlay-sat-contr", "e8")
+  DSPJS.add DSPJSFiltrrBenchmark.new("filtrr-sat-blur-mult-sharpen-contr", "e9")
+  DSPJS.add DSPJSFiltrrBenchmark.new("filtrr-sepia-bias", "e10")
+  DSPJS.add DSPJSVP8Benchmark.new
+  DSPJS.add DSPStarfieldBenchmark.new
+  DSPJS.add DSPJSJSLinuxBenchmark.new
+  DSPJS.add DSPJSQuake3Benchmark.new
+  DSPJS.add DSPJSMandelbrotBenchmark.new
+  DSPJS.add DSPJSAmmoJSASMBenchmark.new
+  DSPJS.add DSPJSAmmoJSRegularBenchmark.new
+
+  BROWSERMARK_JS = BenchmarkSuite.new("BrowsermarkJS", :geometricMean, 1)
+  ["array_blur", "array_weighted", "string_chat", "string_filter", "string_weighted"].each {
+    | name |
+    BROWSERMARK_JS.add BrowsermarkJSBenchmark.new(name)
+  }
   
-  $vms.each {
-    | vm |
-    if vm.vmType != :jsc and $timeMode != :date
-      $timeMode = :date
-      $stderr.puts "Warning: using Date.now() instead of preciseTime() because #{vm} doesn't support the latter."
-    end
+  BROWSERMARK_DOM = BenchmarkSuite.new("BrowsermarkDOM", :geometricMean, 1)
+  ["advanced_search", "create_source", "dynamic_create", "search"].each {
+    | name |
+    BROWSERMARK_DOM.add BrowsermarkDOMBenchmark.new(name)
   }
+
+  $suites = []
   
-  if FileTest.exist? BENCH_DATA_PATH
-    cmd = "rm -rf #{BENCH_DATA_PATH}"
-    $stderr.puts ">> #{cmd}" if $verbosity >= 2
-    raise unless system cmd
+  if $includeSunSpider and not SUNSPIDER.empty?
+    $suites << SUNSPIDER
   end
   
-  Dir.mkdir BENCH_DATA_PATH
+  if $includeLongSpider and not LONGSPIDER.empty?
+    $suites << LONGSPIDER
+  end
   
-  if $needToCopyVMs
-    canCopyIntoBenchPath = true
-    $vms.each {
-      | vm |
-      canCopyIntoBenchPath = false unless vm.canCopyIntoBenchPath
-    }
-    
-    if canCopyIntoBenchPath
-      $vms.each {
-        | vm |
-        $stderr.puts "Copying #{vm} into #{BENCH_DATA_PATH}..."
-        vm.copyIntoBenchPath
-      }
-      $stderr.puts "All VMs are in place."
+  if $includeV8 and not V8.empty?
+    $suites << V8
+  end
+  
+  if $includeOctane and not OCTANE.empty?
+    if OCTANE_PATH
+      $suites << OCTANE
     else
-      $stderr.puts "Warning: don't know how to copy some VMs into #{BENCH_DATA_PATH}, so I won't do it."
+      $stderr.puts "Warning: refusing to run Octane because \"OctanePath\" isn't set in #{$configPath}."
     end
   end
   
-  if $measureGC and $measureGC != true
-    found = false
-    $vms.each {
-      | vm |
-      if vm.name == $measureGC
-        found = true
-      end
-    }
-    unless found
-      $stderr.puts "Warning: --measure-gc option ignored because no VM is named #{$measureGC}"
+  if $includeKraken and not KRAKEN.empty?
+    if KRAKEN_PATH
+      $suites << KRAKEN
+    else
+      $stderr.puts "Warning: refusing to run Kraken because \"KrakenPath\" isn't set in #{$configPath}."
     end
   end
   
-  if $outer*$inner == 1
-    $stderr.puts "Warning: will only collect one sample per benchmark/VM.  Confidence interval calculation will fail."
+  if $includeJSBench and not JSBENCH.empty?
+    if $allDRT
+      if JSBENCH_PATH
+        $suites << JSBENCH
+      else
+        $stderr.puts "Warning: refusing to run JSBench because \"JSBenchPath\" isn't set in #{$configPath}"
+      end
+    else
+      $stderr.puts "Warning: refusing to run JSBench because not all VMs are DumpRenderTree or WebKitTestRunner."
+    end
   end
   
-  $stderr.puts "Using timeMode = #{$timeMode}." if $verbosity >= 1
-  
-  $suites = []
+  if $includeJSRegress and not JSREGRESS.empty?
+    $suites << JSREGRESS
+  end
   
-  if $includeSunSpider and not SUNSPIDER.empty?
-    $suites << SUNSPIDER
+  if $includeAsmBench and not ASMBENCH.empty?
+    if ASMBENCH_PATH
+      $suites << ASMBENCH
+    else
+      $stderr.puts "Warning: refusing to run AsmBench because \"AsmBenchPath\" isn't set in #{$configPath}."
+    end
   end
   
-  if $includeV8 and not V8.empty?
-    $suites << V8
+  if $includeDSPJS and not DSPJS.empty?
+    if $allDRT
+      if DSPJS_FILTRR_PATH and DSPJS_ROUTE9_PATH and DSPJS_STARFIELD_PATH and DSPJS_QUAKE3_PATH and DSPJS_MANDELBROT_PATH and DSPJS_JSLINUX_PATH and DSPJS_AMMOJS_ASMJS_PATH and DSPJS_AMMOJS_REGULAR_PATH
+        $suites << DSPJS
+      else
+        $stderr.puts "Warning: refusing to run DSPJS because one of the following isn't set in #{$configPath}: \"DSPJSFiltrrPath\", \"DSPJSRoute9Path\", \"DSPJSStarfieldPath\", \"DSPJSQuake3Path\", \"DSPJSMandelbrotPath\", \"DSPJSLinuxPath\", \"DSPJSAmmoJSAsmJSPath\", \"DSPJSAmmoJSRegularPath\"."
+      end
+    else
+      $stderr.puts "Warning: refusing to run DSPJS because not all VMs are DumpRenderTree or WebKitTestRunner."
+    end
   end
   
-  if $includeKraken and not KRAKEN.empty?
-    $suites << KRAKEN
+  if $includeBrowsermarkJS and not BROWSERMARK_JS.empty?
+    if BROWSERMARK_PATH and BROWSERMARK_JS_PATH
+      $suites << BROWSERMARK_JS
+    else
+      $stderr.puts "Warning: refusing to run Browsermark-JS because one of the following isn't set in #{$configPath}: \"BrowserMarkPath\" or \"BrowserMarkJSPath\"."
+    end
+  end
+
+  if $includeBrowsermarkDOM and not BROWSERMARK_DOM.empty?
+    if $allDRT
+      if BROWSERMARK_PATH and BROWSERMARK_JS_PATH and BROWSERMARK_DOM_PATH
+        $suites << BROWSERMARK_DOM
+      else
+        $stderr.puts "Warning: refusing to run Browsermark-DOM because one of the following isn't set in #{$configPath}: \"BrowserMarkPath\", \"BrowserMarkJSPath\", or \"BrowserMarkDOMPath\"."
+      end
+    else
+      $stderr.puts "Warning: refusing to run Browsermark-DOM because not all VMs are DumpRenderTree or WebKitTestRunner."
+    end
+  end
+
+  if $includeCompressionBench and not COMPRESSIONBENCH.empty?
+    if COMPRESSIONBENCH_PATH
+      $suites << COMPRESSIONBENCH
+    else
+      $stderr.puts "Warning: refusing to run CompressionBench because \"CompressionBenchPath\" isn't set in #{$configPath}"
+    end
   end
+
+  $allSuites = $suites.map{|v| v.suites}.flatten(1)
   
   $benchmarks = []
   $suites.each {
@@ -1947,6 +3138,17 @@ begin
     $benchmarks += suite.benchmarks
   }
   
+  if $suites.empty? or $benchmarks.empty?
+    $stderr.puts "No benchmarks found.  Bailing out."
+    exit 1
+  end
+  
+  if $outer*$inner == 1
+    $stderr.puts "Warning: will only collect one sample per benchmark/VM.  Confidence interval calculation will fail."
+  end
+  
+  $stderr.puts "Using timeMode = #{$timeMode}." if $verbosity >= 1
+  
   $runPlans = []
   $vms.each {
     | vm |
@@ -1961,26 +3163,103 @@ begin
   
   $runPlans.shuffle!
   
+  if $sunSpiderWarmup
+    warmupPlans = []
+    $vms.each {
+      | vm |
+      WARMUP.benchmarks.each {
+        | benchmark |
+        warmupPlans << BenchRunPlan.new(benchmark, vm, 0)
+      }
+    }
+    
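+    # The warmup plans run before the shuffled real plans; their output still
+    # reaches the parser, but the WARMUP suite parses to nil and is discarded.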
+    $runPlans = warmupPlans.shuffle + $runPlans
+  end
+  
   $suitepad = $suites.collect {
     | suite |
     suite.to_s.size
   }.max + 1
   
-  $benchpad = ($benchmarks +
-               ["<arithmetic> *", "<geometric> *", "<harmonic> *"]).collect {
+  $planpad = $runPlans.collect {
+    | plan |
+    plan.to_s.size
+  }.max + 1
+  
+  maxBenchNameLength =
+    ($benchmarks + ["<arithmetic> *", "<geometric> *", "<harmonic> *"]).collect {
     | benchmark |
     if benchmark.respond_to? :name
       benchmark.name.size
     else
       benchmark.size
     end
-  }.max + 1
+  }.max
+  $benchNameClip = 40
+  $benchpad = [maxBenchNameLength, $benchNameClip].min + 1
+  
+  $weightpad = $benchmarks.collect {
+    | benchmark |
+    benchmark.weightString.size
+  }.max
 
   $vmpad = $vms.collect {
     | vm |
     vm.to_s.size
   }.max + 1
   
+  $analyze.each_with_index {
+    | filename, index |
+    if index >= 1
+      puts
+    end
+    parseAndDisplayResults(IO::read(filename))
+  }
+  
+  if not $prepare and not $run
+    exit 0
+  end
+  
+  if FileTest.exist? BENCH_DATA_PATH
+    cmd = "rm -rf #{BENCH_DATA_PATH}"
+    $stderr.puts ">> #{cmd}" if $verbosity >= 2
+    raise unless system cmd
+  end
+  
+  Dir.mkdir BENCH_DATA_PATH
+  
+  if $needToCopyVMs
+    canCopyIntoBenchPath = true
+    $vms.each {
+      | vm |
+      canCopyIntoBenchPath = false unless vm.canCopyIntoBenchPath
+    }
+    
+    if canCopyIntoBenchPath
+      $vms.each {
+        | vm |
+        $stderr.puts "Copying #{vm} into #{BENCH_DATA_PATH}..."
+        vm.copyIntoBenchPath
+      }
+      $stderr.puts "All VMs are in place."
+    else
+      $stderr.puts "Warning: don't know how to copy some VMs into #{BENCH_DATA_PATH}, so I won't do it."
+    end
+  end
+  
+  if $measureGC and $measureGC != true
+    found = false
+    $vms.each {
+      | vm |
+      if vm.name == $measureGC
+        found = true
+      end
+    }
+    unless found
+      $stderr.puts "Warning: --measure-gc option ignored because no VM is named #{$measureGC}"
+    end
+  end
+  
   if $prepare
     File.open("#{BENCH_DATA_PATH}/runscript", "w") {
       | file |
@@ -1996,15 +3275,15 @@ begin
         | plan, idx |
         if $verbosity == 0 and not $silent
           text1 = lpad(idx.to_s,$runPlans.size.to_s.size)+"/"+$runPlans.size.to_s
-          text2 = plan.benchmark.to_s+"/"+plan.vm.to_s
-          file.puts("echo "+("\r#{text1} #{rpad(text2,$suitepad+1+$benchpad+1+$vmpad)}".inspect)[0..-2]+"\\c\" 1>&2")
-          file.puts("echo "+("\r#{text1} #{text2}".inspect)[0..-2]+"\\c\" 1>&2")
+          text2 = plan.to_s
+          file.puts("echo " + Shellwords.shellescape("\r#{text1} #{rpad(text2,$planpad)}") + "\"\\c\" 1>&2")
+          file.puts("echo " + Shellwords.shellescape("\r#{text1} #{text2}") + "\"\\c\" 1>&2")
         end
         plan.emitRunCode
       }
       if $verbosity == 0 and not $silent
-        file.puts("echo "+("\r#{$runPlans.size}/#{$runPlans.size} #{' '*($suitepad+1+$benchpad+1+$vmpad)}".inspect)[0..-2]+"\\c\" 1>&2")
-        file.puts("echo "+("\r#{$runPlans.size}/#{$runPlans.size}".inspect)+" 1>&2")
+        file.puts("echo " + Shellwords.shellescape("\r#{$runPlans.size}/#{$runPlans.size} #{' '*($suitepad+1+$benchpad+1+$vmpad)}") + "\"\\c\" 1>&2")
+        file.puts("echo " + Shellwords.shellescape("\r#{$runPlans.size}/#{$runPlans.size}") + " 1>&2")
       end
     }
   end
@@ -2020,14 +3299,14 @@ begin
       
       def grokHost(host)
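+        # Split an optional ":port" suffix into ssh's -p flag, e.g.
+        # "remote.example.com:2222" becomes "-p 2222 remote.example.com".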
         if host =~ /:([0-9]+)$/
-          "-p " + $1 + " " + $~.pre_match.inspect
+          "-p " + $1 + " " + Shellwords.shellescape($~.pre_match)
         else
-          host.inspect
+          Shellwords.shellescape(host)
         end
       end
       
       def sshRead(host, command)
-        cmd = "ssh #{$sshOptions.collect{|x| x.inspect}.join(' ')} #{grokHost(host)} #{command.inspect}"
+        cmd = "ssh #{$sshOptions.collect{|x| Shellwords.shellescape(x)}.join(' ')} #{grokHost(host)} #{Shellwords.shellescape(command)}"
         $stderr.puts ">> #{cmd}" if $verbosity>=2
         result = ""
         IO.popen(cmd, "r") {
@@ -2043,7 +3322,7 @@ begin
       end
       
       def sshWrite(host, command, data)
-        cmd = "ssh #{$sshOptions.collect{|x| x.inspect}.join(' ')} #{grokHost(host)} #{command.inspect}"
+        cmd = "ssh #{$sshOptions.collect{|x| Shellwords.shellescape(x)}.join(' ')} #{grokHost(host)} #{Shellwords.shellescape(command)}"
         $stderr.puts ">> #{cmd}" if $verbosity>=2
         IO.popen(cmd, "w") {
           | outp |
@@ -2059,11 +3338,11 @@ begin
         remoteTempPath = JSON::parse(sshRead(host, "cat ~/.bencher"))["tempPath"]
         raise unless remoteTempPath
         
-        sshWrite(host, "cd #{remoteTempPath.inspect} && rm -rf benchdata && tar -xz", IO::read("#{TEMP_PATH}/payload.tar.gz"))
+        sshWrite(host, "cd #{Shellwords.shellescape(remoteTempPath)} && rm -rf benchdata && tar -xz", IO::read("#{TEMP_PATH}/payload.tar.gz"))
         
         $stderr.puts "Running on #{host}..." if $verbosity==0
         
-        parseAndDisplayResults(sshRead(host, "cd #{(remoteTempPath+'/benchdata').inspect} && sh runscript"))
+        parseAndDisplayResults(sshRead(host, "cd #{Shellwords.shellescape(remoteTempPath + '/benchdata')} && sh runscript"))
       }
     end
     
@@ -2076,14 +3355,6 @@ begin
     end
   end
   
-  $analyze.each_with_index {
-    | filename, index |
-    if index >= 1
-      puts
-    end
-    parseAndDisplayResults(IO::read(filename))
-  }
-  
   if $prepare and not $run and $analyze.empty?
     puts wrap("Benchmarking script and data are in #{BENCH_DATA_PATH}. You can run "+
               "the benchmarks and get the results by doing:", 78)
@@ -2098,4 +3369,3 @@ rescue => e
   fail(e)
 end
   
-