WebCore: Add support for AudioNode "tailTime()" and "latencyTime()"
author jer.noble@apple.com <jer.noble@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Tue, 13 Mar 2012 00:09:18 +0000 (00:09 +0000)
committer jer.noble@apple.com <jer.noble@apple.com@268f45cc-cd09-0410-ab3c-d52691b4dbfc>
Tue, 13 Mar 2012 00:09:18 +0000 (00:09 +0000)
https://bugs.webkit.org/show_bug.cgi?id=74750

Reviewed by Chris Rogers.

No new tests; optimization of existing code path, so covered by existing tests.

To account for AudioNodes which may continue to generate non-silent output for a certain
amount of time after their input has gone silent, add two new virtual functions, tailTime()
and latencyTime().
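
As a hedged sketch of how these values are meant to be consumed (the helper below is
illustrative only and is not part of this patch): once a node's input has been silent for
longer than tailTime() plus latencyTime(), its output can safely be treated as silent.

    // Illustrative helper, not added by this patch: a node whose input has been
    // silent for longer than its tail plus its processing latency can no longer
    // be producing non-silent output.
    bool outputCanBeConsideredSilent(const AudioNode& node, double secondsOfSilentInput)
    {
        return secondsOfSilentInput > node.tailTime() + node.latencyTime();
    }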

* webaudio/AudioNode.h:
(WebCore::AudioNode::tailTime): Added. Pure virtual.
(WebCore::AudioNode::latencyTime): Added. Pure virtual.
* platform/audio/AudioProcessor.h:
(WebCore::AudioProcessor::tailTime): Added. Pure virtual.
(WebCore::AudioProcessor::latencyTime): Added. Pure virtual.
* platform/audio/AudioDSPKernel.h:
(WebCore::AudioDSPKernel::tailTime): Added. Pure virtual.
(WebCore::AudioDSPKernel::latencyTime): Added. Pure virtual.

Added tailTime() and latencyTime() overrides to the following classes:
* platform/audio/AudioDSPKernelProcessor.cpp:
(WebCore::AudioDSPKernelProcessor::tailTime):
(WebCore::AudioDSPKernelProcessor::latencyTime):
* platform/audio/AudioDSPKernelProcessor.h:
* platform/audio/DynamicsCompressor.h:
(WebCore::DynamicsCompressor::tailTime):
(WebCore::DynamicsCompressor::latencyTime):
* platform/audio/EqualPowerPanner.h:
* platform/audio/HRTFPanner.cpp:
(WebCore::HRTFPanner::tailTime):
(WebCore::HRTFPanner::latencyTime):
* platform/audio/HRTFPanner.h:
* platform/audio/Panner.h:
* webaudio/AudioBasicProcessorNode.cpp:
(WebCore::AudioBasicProcessorNode::tailTime):
(WebCore::AudioBasicProcessorNode::latencyTime):
* webaudio/AudioBasicProcessorNode.h:
* webaudio/AudioChannelMerger.h:
* webaudio/AudioChannelSplitter.h:
* webaudio/AudioDestinationNode.h:
* webaudio/AudioGainNode.h:
* webaudio/AudioPannerNode.h:
* webaudio/AudioSourceNode.h:
* webaudio/BiquadDSPKernel.cpp:
(WebCore::BiquadDSPKernel::tailTime):
(WebCore::BiquadDSPKernel::latencyTime):
* webaudio/BiquadDSPKernel.h:
* webaudio/BiquadFilterNode.h:
* webaudio/ConvolverNode.cpp:
(WebCore::ConvolverNode::tailTime):
(WebCore::ConvolverNode::latencyTime):
* webaudio/ConvolverNode.h:
* webaudio/DelayDSPKernel.cpp:
(WebCore::DelayDSPKernel::tailTime):
(WebCore::DelayDSPKernel::latencyTime):
* webaudio/DelayDSPKernel.h:
* webaudio/DelayProcessor.h:
* webaudio/DynamicsCompressorNode.cpp:
(WebCore::DynamicsCompressorNode::tailTime):
(WebCore::DynamicsCompressorNode::latencyTime):
* webaudio/DynamicsCompressorNode.h:
* webaudio/JavaScriptAudioNode.cpp:
(WebCore::JavaScriptAudioNode::tailTime):
(WebCore::JavaScriptAudioNode::latencyTime):
* webaudio/JavaScriptAudioNode.h:
* webaudio/RealtimeAnalyserNode.h:
* webaudio/WaveShaperDSPKernel.h:

The following functions were added as support for the new AudioNode and AudioProcessor functions:
* platform/audio/Biquad.cpp:
(WebCore::Biquad::latencyFrames):
* platform/audio/Biquad.h:
* platform/audio/Reverb.cpp:
(WebCore::Reverb::latencyFrames):
* platform/audio/ReverbConvolver.h:
(WebCore::ReverbConvolver::latencyFrames):
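
These helpers report their latency in sample-frames; the latencyTime() overrides above turn
frames into seconds by dividing by the sample rate. A minimal sketch of that conversion
(the function name is illustrative, not from this patch):

    // Convert a latency expressed in sample-frames into seconds.
    double latencyTimeFromFrames(size_t latencyFrames, double sampleRate)
    {
        return sampleRate > 0 ? latencyFrames / sampleRate : 0;
    }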

The following functions were made const-correct:
* platform/audio/HRTFPanner.h:
(WebCore::HRTFPanner::fftSize):
* platform/audio/Reverb.h:
(WebCore::Reverb::impulseResponseLength):

git-svn-id: https://svn.webkit.org/repository/webkit/trunk@110507 268f45cc-cd09-0410-ab3c-d52691b4dbfc

37 files changed:
Source/WebCore/ChangeLog
Source/WebCore/platform/audio/AudioDSPKernel.h
Source/WebCore/platform/audio/AudioDSPKernelProcessor.cpp
Source/WebCore/platform/audio/AudioDSPKernelProcessor.h
Source/WebCore/platform/audio/AudioProcessor.h
Source/WebCore/platform/audio/DynamicsCompressor.h
Source/WebCore/platform/audio/EqualPowerPanner.h
Source/WebCore/platform/audio/HRTFPanner.cpp
Source/WebCore/platform/audio/HRTFPanner.h
Source/WebCore/platform/audio/Panner.h
Source/WebCore/platform/audio/Reverb.cpp
Source/WebCore/platform/audio/Reverb.h
Source/WebCore/platform/audio/ReverbConvolver.cpp
Source/WebCore/platform/audio/ReverbConvolver.h
Source/WebCore/webaudio/AudioBasicProcessorNode.cpp
Source/WebCore/webaudio/AudioBasicProcessorNode.h
Source/WebCore/webaudio/AudioChannelMerger.h
Source/WebCore/webaudio/AudioChannelSplitter.h
Source/WebCore/webaudio/AudioDestinationNode.h
Source/WebCore/webaudio/AudioGainNode.h
Source/WebCore/webaudio/AudioNode.h
Source/WebCore/webaudio/AudioPannerNode.h
Source/WebCore/webaudio/AudioSourceNode.h
Source/WebCore/webaudio/BiquadDSPKernel.cpp
Source/WebCore/webaudio/BiquadDSPKernel.h
Source/WebCore/webaudio/BiquadFilterNode.h
Source/WebCore/webaudio/ConvolverNode.cpp
Source/WebCore/webaudio/ConvolverNode.h
Source/WebCore/webaudio/DelayDSPKernel.cpp
Source/WebCore/webaudio/DelayDSPKernel.h
Source/WebCore/webaudio/DelayProcessor.h
Source/WebCore/webaudio/DynamicsCompressorNode.cpp
Source/WebCore/webaudio/DynamicsCompressorNode.h
Source/WebCore/webaudio/JavaScriptAudioNode.cpp
Source/WebCore/webaudio/JavaScriptAudioNode.h
Source/WebCore/webaudio/RealtimeAnalyserNode.h
Source/WebCore/webaudio/WaveShaperDSPKernel.h

index 6e1ed83..a47e56c 100644 (file)
@@ -1,3 +1,90 @@
+2012-03-12  Jer Noble  <jer.noble@apple.com>
+
+        WebCore: Add support for AudioNode "tailTime()" and "latencyTime()"
+        https://bugs.webkit.org/show_bug.cgi?id=74750
+
+        Reviewed by Chris Rogers.
+
+        No new tests; optimization of existing code path, so covered by existing tests.
+
+        To account for AudioNodes which may continue to generate non-silent output for a certain
+        amount of time after their input has gone silent, add two new virtual functions, tailTime()
+        and latencyTime().
+
+        * webaudio/AudioNode.h:
+        (WebCore::AudioNode::tailTime): Added. Pure virtual.
+        (WebCore::AudioNode::latencyTime): Added. Pure virtual.
+        * platform/audio/AudioProcessor.h:
+        (WebCore::AudioProcessor::tailTime): Added. Pure virtual.
+        (WebCore::AudioProcessor::latencyTime): Added. Pure virtual.
+        * platform/audio/AudioDSPKernel.h:
+        (WebCore::AudioDSPKernel::tailTime): Added. Pure virtual.
+        (WebCore::AudioDSPKernel::latencyTime): Added. Pure virtual.
+
+        Added tailTime() and latencyTime() overrides to the following classes:
+        * platform/audio/AudioDSPKernelProcessor.cpp:
+        (WebCore::AudioDSPKernelProcessor::tailTime):
+        (WebCore::AudioDSPKernelProcessor::latencyTime):
+        * platform/audio/AudioDSPKernelProcessor.h:
+        * platform/audio/DynamicsCompressor.h:
+        (WebCore::DynamicsCompressor::tailTime):
+        (WebCore::DynamicsCompressor::latencyTime):
+        * platform/audio/EqualPowerPanner.h:
+        * platform/audio/HRTFPanner.cpp:
+        (WebCore::HRTFPanner::tailTime):
+        (WebCore::HRTFPanner::latencyTime):
+        * platform/audio/HRTFPanner.h:
+        * platform/audio/Panner.h:
+        * webaudio/AudioBasicProcessorNode.cpp:
+        (WebCore::AudioBasicProcessorNode::tailTime):
+        (WebCore::AudioBasicProcessorNode::latencyTime):
+        * webaudio/AudioBasicProcessorNode.h:
+        * webaudio/AudioChannelMerger.h:
+        * webaudio/AudioChannelSplitter.h:
+        * webaudio/AudioDestinationNode.h:
+        * webaudio/AudioGainNode.h:
+        * webaudio/AudioPannerNode.h:
+        * webaudio/AudioSourceNode.h:
+        * webaudio/BiquadDSPKernel.cpp:
+        (WebCore::BiquadDSPKernel::tailTime):
+        (WebCore::BiquadDSPKernel::latencyTime):
+        * webaudio/BiquadDSPKernel.h:
+        * webaudio/BiquadFilterNode.h:
+        * webaudio/ConvolverNode.cpp:
+        (WebCore::ConvolverNode::tailTime):
+        (WebCore::ConvolverNode::latencyTime):
+        * webaudio/ConvolverNode.h:
+        * webaudio/DelayDSPKernel.cpp:
+        (WebCore::DelayDSPKernel::tailTime):
+        (WebCore::DelayDSPKernel::latencyTime):
+        * webaudio/DelayDSPKernel.h:
+        * webaudio/DelayProcessor.h:
+        * webaudio/DynamicsCompressorNode.cpp:
+        (WebCore::DynamicsCompressorNode::tailTime):
+        (WebCore::DynamicsCompressorNode::latencyTime):
+        * webaudio/DynamicsCompressorNode.h:
+        * webaudio/JavaScriptAudioNode.cpp:
+        (WebCore::JavaScriptAudioNode::tailTime):
+        (WebCore::JavaScriptAudioNode::latencyTime):
+        * webaudio/JavaScriptAudioNode.h:
+        * webaudio/RealtimeAnalyserNode.h:
+        * webaudio/WaveShaperDSPKernel.h:
+
+        The following functions were added as support for the new AudioNode and AudioProcessor functions:
+        * platform/audio/Biquad.cpp:
+        (WebCore::Biquad::latencyFrames):
+        * platform/audio/Biquad.h:
+        * platform/audio/Reverb.cpp:
+        (WebCore::Reverb::latencyFrames):
+        * platform/audio/ReverbConvolver.h:
+        (WebCore::ReverbConvolver::latencyFrames):
+
+        The following functions were made const-correct:
+        * platform/audio/HRTFPanner.h:
+        (WebCore::HRTFPanner::fftSize):
+        * platform/audio/Reverb.h:
+        (WebCore::Reverb::impulseResponseLength):
+
 2012-03-12  Anders Carlsson  <andersca@apple.com>
 
         WebTileLayers should be opaque
index f33c9ed..f501d65 100644 (file)
@@ -63,6 +63,9 @@ public:
     AudioDSPKernelProcessor* processor() { return m_kernelProcessor; }
     const AudioDSPKernelProcessor* processor() const { return m_kernelProcessor; }
 
+    virtual double tailTime() const = 0;
+    virtual double latencyTime() const = 0;
+
 protected:
     AudioDSPKernelProcessor* m_kernelProcessor;
     float m_sampleRate;
index 5f9139f..0289089 100644 (file)
@@ -115,6 +115,18 @@ void AudioDSPKernelProcessor::setNumberOfChannels(unsigned numberOfChannels)
         m_numberOfChannels = numberOfChannels;
 }
 
+double AudioDSPKernelProcessor::tailTime() const
+{
+    // It is expected that all the kernels have the same tailTime.
+    return !m_kernels.isEmpty() ? m_kernels.first()->tailTime() : 0;
+}
+
+double AudioDSPKernelProcessor::latencyTime() const
+{
+    // It is expected that all the kernels have the same latencyTime.
+    return !m_kernels.isEmpty() ? m_kernels.first()->latencyTime() : 0;
+}
+
 } // namespace WebCore
 
 #endif // ENABLE(WEB_AUDIO)
index 7f8f81d..a4fc33f 100644 (file)
@@ -65,6 +65,9 @@ public:
 
     unsigned numberOfChannels() const { return m_numberOfChannels; }
 
+    virtual double tailTime() const OVERRIDE;
+    virtual double latencyTime() const OVERRIDE;
+
 protected:
     unsigned m_numberOfChannels;
     Vector<OwnPtr<AudioDSPKernel> > m_kernels;
index 469f833..4f240c8 100644 (file)
@@ -65,6 +65,9 @@ public:
 
     float sampleRate() const { return m_sampleRate; }
 
+    virtual double tailTime() const = 0;
+    virtual double latencyTime() const = 0;
+
 protected:
     bool m_initialized;
     float m_sampleRate;
index b35294d..5201328 100644 (file)
@@ -78,6 +78,9 @@ public:
     float sampleRate() const { return m_sampleRate; }
     float nyquist() const { return m_sampleRate / 2; }
 
+    double tailTime() const { return 0; }
+    double latencyTime() const { return m_compressor.latencyFrames() / static_cast<double>(sampleRate()); }
+
 protected:
     unsigned m_numberOfChannels;
 
index 016cd4a..012e214 100644 (file)
@@ -39,6 +39,9 @@ public:
 
     virtual void reset() { m_isFirstRender = true; }
 
+    virtual double tailTime() const OVERRIDE { return 0; }
+    virtual double latencyTime() const OVERRIDE { return 0; }
+
 private:
     // For smoothing / de-zippering
     bool m_isFirstRender;
index 4c69932..1cc9ff8 100644 (file)
@@ -294,6 +294,21 @@ void HRTFPanner::pan(double desiredAzimuth, double elevation, const AudioBus* in
     }
 }
 
+double HRTFPanner::tailTime() const
+{
+    // Because HRTFPanner is implemented with a DelayKernel and a FFTConvolver, the tailTime of the HRTFPanner
+    // is the sum of the tailTime of the DelayKernel and the tailTime of the FFTConvolver, which is MaxDelayTimeSeconds
+    // and fftSize() / 2, respectively.
+    return MaxDelayTimeSeconds + (fftSize() / 2) / static_cast<double>(sampleRate());
+}
+
+double HRTFPanner::latencyTime() const
+{
+    // The latency of a FFTConvolver is also fftSize() / 2, and is in addition to its tailTime of the
+    // same value.
+    return (fftSize() / 2) / static_cast<double>(sampleRate());
+}
+
 } // namespace WebCore
 
 #endif // ENABLE(WEB_AUDIO)
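
For a rough sense of scale (a hedged illustration: neither constant appears in this hunk, so
the fftSizeForSampleRate() result of 512 and a MaxDelayTimeSeconds of 0.002 are assumptions):

    // Hypothetical values for a 44.1 kHz context, for illustration only.
    const double sampleRate = 44100;
    const double maxDelayTimeSeconds = 0.002; // assumed value of MaxDelayTimeSeconds
    const size_t fftSize = 512;               // assumed fftSizeForSampleRate(44100)
    double tail = maxDelayTimeSeconds + (fftSize / 2) / sampleRate; // ~0.0078 s
    double latency = (fftSize / 2) / sampleRate;                    // ~0.0058 s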
index f5af1d1..785cf09 100644 (file)
@@ -40,11 +40,14 @@ public:
     virtual void pan(double azimuth, double elevation, const AudioBus* inputBus, AudioBus* outputBus, size_t framesToProcess);
     virtual void reset();
 
-    size_t fftSize() { return fftSizeForSampleRate(m_sampleRate); }
+    size_t fftSize() const { return fftSizeForSampleRate(m_sampleRate); }
     static size_t fftSizeForSampleRate(float sampleRate);
 
     float sampleRate() const { return m_sampleRate; }
 
+    virtual double tailTime() const OVERRIDE;
+    virtual double latencyTime() const OVERRIDE;
+
 private:
     // Given an azimuth angle in the range -180 -> +180, returns the corresponding azimuth index for the database,
     // and azimuthBlend which is an interpolation value from 0 -> 1.
index d8b8dd0..f8b240e 100644 (file)
@@ -57,6 +57,9 @@ public:
 
     virtual void reset() = 0;
 
+    virtual double tailTime() const = 0;
+    virtual double latencyTime() const = 0;
+
 protected:
     Panner(PanningModel model) : m_panningModel(model) { }
 
index 122e21b..47468a1 100644 (file)
@@ -228,6 +228,11 @@ void Reverb::reset()
         m_convolvers[i]->reset();
 }
 
+size_t Reverb::latencyFrames() const
+{
+    return !m_convolvers.isEmpty() ? m_convolvers.first()->latencyFrames() : 0;
+}
+
 } // namespace WebCore
 
 #endif // ENABLE(WEB_AUDIO)
index 779e7bb..0ae50d6 100644 (file)
@@ -48,7 +48,8 @@ public:
     void process(const AudioBus* sourceBus, AudioBus* destinationBus, size_t framesToProcess);
     void reset();
 
-    unsigned impulseResponseLength() const { return m_impulseResponseLength; }
+    size_t impulseResponseLength() const { return m_impulseResponseLength; }
+    size_t latencyFrames() const;
 
 private:
     void initialize(AudioBus* impulseResponseBuffer, size_t renderSliceSize, size_t maxFFTSize, size_t numberOfChannels, bool useBackgroundThreads);
index c6ab54e..7459f3b 100644 (file)
@@ -224,6 +224,13 @@ void ReverbConvolver::reset()
     m_inputBuffer.reset();
 }
 
+size_t ReverbConvolver::latencyFrames() const
+{
+    // FIXME: ConvolverNode should not incur processing latency
+    // <https://bugs.webkit.org/show_bug.cgi?id=75564>
+    return m_minFFTSize / 2;
+}
+
 } // namespace WebCore
 
 #endif // ENABLE(WEB_AUDIO)
index 370b872..c3d309f 100644 (file)
@@ -62,6 +62,7 @@ public:
     bool useBackgroundThreads() const { return m_useBackgroundThreads; }
     void backgroundThreadEntry();
 
+    size_t latencyFrames() const;
 private:
     Vector<OwnPtr<ReverbConvolverStage> > m_stages;
     Vector<OwnPtr<ReverbConvolverStage> > m_backgroundStages;
index 81b5d96..084dd7b 100644 (file)
@@ -136,6 +136,16 @@ unsigned AudioBasicProcessorNode::numberOfChannels()
     return output(0)->numberOfChannels();
 }
 
+double AudioBasicProcessorNode::tailTime() const
+{
+    return m_processor->tailTime();
+}
+
+double AudioBasicProcessorNode::latencyTime() const
+{
+    return m_processor->latencyTime();
+}
+
 } // namespace WebCore
 
 #endif // ENABLE(WEB_AUDIO)
index 36cf2b4..1973023 100644 (file)
@@ -55,6 +55,9 @@ public:
     unsigned numberOfChannels();
 
 protected:
+    virtual double tailTime() const OVERRIDE;
+    virtual double latencyTime() const OVERRIDE;
+
     AudioProcessor* processor() { return m_processor.get(); }
     OwnPtr<AudioProcessor> m_processor;
 };
index e773dae..35dd06d 100644 (file)
@@ -51,6 +51,9 @@ public:
     virtual void checkNumberOfChannelsForInput(AudioNodeInput*);
 
 private:
+    virtual double tailTime() const OVERRIDE { return 0; }
+    virtual double latencyTime() const OVERRIDE { return 0; }
+
     AudioChannelMerger(AudioContext*, float sampleRate);
 };
 
index 71b0ef4..c82650a 100644 (file)
@@ -44,6 +44,9 @@ public:
     virtual void reset();
 
 private:
+    virtual double tailTime() const OVERRIDE { return 0; }
+    virtual double latencyTime() const OVERRIDE { return 0; }
+
     AudioChannelSplitter(AudioContext*, float sampleRate);
 };
 
index d07833d..760c638 100644 (file)
@@ -54,6 +54,9 @@ public:
     virtual void startRendering() = 0;
     
 protected:
+    virtual double tailTime() const OVERRIDE { return 0; }
+    virtual double latencyTime() const OVERRIDE { return 0; }
+
     // Counts the number of sample-frames processed by the destination.
     size_t m_currentSampleFrame;
 };
index 626ef2c..69a64b4 100644 (file)
@@ -55,6 +55,9 @@ public:
     AudioGain* gain() { return m_gain.get(); }                                   
     
 private:
+    virtual double tailTime() const OVERRIDE { return 0; }
+    virtual double latencyTime() const OVERRIDE { return 0; }
+
     AudioGainNode(AudioContext*, float sampleRate);
 
     float m_lastGain; // for de-zippering
index dffd711..b9f27b4 100644 (file)
@@ -137,6 +137,13 @@ public:
 
     bool isMarkedForDeletion() const { return m_isMarkedForDeletion; }
 
+    // tailTime() is the length of time (not counting latency time) where non-zero output may occur after continuous silent input.
+    virtual double tailTime() const = 0;
+    // latencyTime() is the length of time it takes for non-zero output to appear after non-zero input is provided. This only applies to
+    // processing delay which is an artifact of the processing algorithm chosen and is *not* part of the intrinsic desired effect. For 
+    // example, a "delay" effect is expected to delay the signal, and thus would not be considered latency.
+    virtual double latencyTime() const = 0;
+
 protected:
     // Inputs and outputs must be created before the AudioNode is initialized.
     void addInput(PassOwnPtr<AudioNodeInput>);
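
To make the distinction above concrete, compare two kinds of nodes in this patch:
DelayDSPKernel reports its maximum delay as tailTime() and zero latencyTime(), because the
delay is the desired effect, while ConvolverNode reports its impulse response length as tail
and the convolver's FFT block buffering as latency. A hedged, self-contained sketch of the
same idea (the types and fields are hypothetical):

    // An intentional delay counts as tail, not latency.
    struct HypotheticalDelayTiming {
        double maxDelaySeconds;
        double tailTime() const { return maxDelaySeconds; }
        double latencyTime() const { return 0; }
    };

    // FFT block buffering is a processing artifact, so it counts as latency;
    // the impulse response ring-out counts as tail.
    struct HypotheticalConvolverTiming {
        double impulseResponseSeconds;
        double fftBlockSeconds;
        double tailTime() const { return impulseResponseSeconds; }
        double latencyTime() const { return fftBlockSeconds; }
    };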
index 18275f6..961ef3c 100644 (file)
@@ -124,6 +124,9 @@ public:
     AudioGain* distanceGain() { return m_distanceGain.get(); }                                        
     AudioGain* coneGain() { return m_coneGain.get(); }                                        
 
+    virtual double tailTime() const OVERRIDE { return m_panner ? m_panner->tailTime() : 0; }
+    virtual double latencyTime() const OVERRIDE { return m_panner ? m_panner->latencyTime() : 0; }
+
 private:
     AudioPannerNode(AudioContext*, float sampleRate);
 
index a6bdd42..92a95a0 100644 (file)
@@ -39,6 +39,9 @@ public:
         : AudioNode(context, sampleRate)
     {
     }
+protected:
+    virtual double tailTime() const OVERRIDE { return 0; }
+    virtual double latencyTime() const OVERRIDE { return 0; }
 };
 
 } // namespace WebCore
index 9faac65..cfed1da 100644 (file)
 
 #include "BiquadProcessor.h"
 #include "FloatConversion.h"
+#include <limits.h>
 #include <wtf/Vector.h>
 
 namespace WebCore {
 
+// FIXME: As a recursive linear filter, depending on its parameters, a biquad filter can have
+// an infinite tailTime. In practice, Biquad filters do not usually (except for very high resonance values) 
+// have a tailTime of longer than approx. 200ms. This value could possibly be calculated based on the
+// settings of the Biquad.
+static const double MaxBiquadDelayTime = 0.2;
+
 void BiquadDSPKernel::updateCoefficientsIfNecessary(bool useSmoothing, bool forceUpdate)
 {
     if (forceUpdate || biquadProcessor()->filterCoefficientsDirty()) {
@@ -134,6 +141,16 @@ void BiquadDSPKernel::getFrequencyResponse(int nFrequencies,
     m_biquad.getFrequencyResponse(nFrequencies, frequency.data(), magResponse, phaseResponse);
 }
 
+double BiquadDSPKernel::tailTime() const
+{
+    return MaxBiquadDelayTime;
+}
+
+double BiquadDSPKernel::latencyTime() const
+{
+    return 0;
+}
+
 } // namespace WebCore
 
 #endif // ENABLE(WEB_AUDIO)
index a21e24c..0775559 100644 (file)
@@ -52,6 +52,10 @@ public:
                               const float* frequencyHz,
                               float* magResponse,
                               float* phaseResponse);
+
+    virtual double tailTime() const OVERRIDE;
+    virtual double latencyTime() const OVERRIDE;
+
 protected:
     Biquad m_biquad;
     BiquadProcessor* biquadProcessor() { return static_cast<BiquadProcessor*>(processor()); }
index 545b060..ccd6cde 100644 (file)
@@ -63,6 +63,7 @@ public:
     void getFrequencyResponse(const Float32Array* frequencyHz,
                               Float32Array* magResponse,
                               Float32Array* phaseResponse);
+
 private:
     BiquadFilterNode(AudioContext*, float sampleRate);
 
index 6afc095..ab7593e 100644 (file)
@@ -151,6 +151,16 @@ AudioBuffer* ConvolverNode::buffer()
     return m_buffer.get();
 }
 
+double ConvolverNode::tailTime() const
+{
+    return m_reverb ? m_reverb->impulseResponseLength() / static_cast<double>(sampleRate()) : 0;
+}
+
+double ConvolverNode::latencyTime() const
+{
+    return m_reverb ? m_reverb->latencyFrames() / static_cast<double>(sampleRate()) : 0;
+}
+
 } // namespace WebCore
 
 #endif // ENABLE(WEB_AUDIO)
index 12f4172..6d58918 100644 (file)
@@ -56,9 +56,13 @@ public:
 
     bool normalize() const { return m_normalize; }
     void setNormalize(bool normalize) { m_normalize = normalize; }
+
 private:
     ConvolverNode(AudioContext*, float sampleRate);
 
+    virtual double tailTime() const OVERRIDE;
+    virtual double latencyTime() const OVERRIDE;
+
     OwnPtr<Reverb> m_reverb;
     RefPtr<AudioBuffer> m_buffer;
 
index fb31e2d..2bd5759 100644 (file)
@@ -134,6 +134,16 @@ void DelayDSPKernel::reset()
     m_buffer.zero();
 }
 
+double DelayDSPKernel::tailTime() const
+{
+    return m_maxDelayTime;
+}
+
+double DelayDSPKernel::latencyTime() const
+{
+    return 0;
+}
+
 } // namespace WebCore
 
 #endif // ENABLE(WEB_AUDIO)
index 79a3956..1d556ea 100644 (file)
@@ -44,7 +44,10 @@ public:
     double maxDelayTime() const { return m_maxDelayTime; }
     
     void setDelayFrames(double numberOfFrames) { m_desiredDelayFrames = numberOfFrames; }
-    
+
+    virtual double tailTime() const OVERRIDE;
+    virtual double latencyTime() const OVERRIDE;
+
 private:
     AudioFloatArray m_buffer;
     double m_maxDelayTime;
index 43c5e0c..33fb937 100644 (file)
@@ -46,6 +46,7 @@ public:
 
     double maxDelayTime() { return m_maxDelayTime; }
 private:
+
     RefPtr<AudioParam> m_delayTime;
     double m_maxDelayTime;
 };
index 5afe2d6..ef70479 100644 (file)
@@ -106,6 +106,16 @@ void DynamicsCompressorNode::uninitialize()
     AudioNode::uninitialize();
 }
 
+double DynamicsCompressorNode::tailTime() const
+{
+    return m_dynamicsCompressor->tailTime();
+}
+
+double DynamicsCompressorNode::latencyTime() const
+{
+    return m_dynamicsCompressor->latencyTime();
+}
+
 } // namespace WebCore
 
 #endif // ENABLE(WEB_AUDIO)
index 32d3cb1..cf7e63a 100644 (file)
@@ -57,6 +57,9 @@ public:
     AudioParam* reduction() { return m_reduction.get(); }
 
 private:
+    virtual double tailTime() const OVERRIDE;
+    virtual double latencyTime() const OVERRIDE;
+
     DynamicsCompressorNode(AudioContext*, float sampleRate);
 
     OwnPtr<DynamicsCompressor> m_dynamicsCompressor;
index bf83147..ea3d7aa 100644 (file)
@@ -267,6 +267,16 @@ ScriptExecutionContext* JavaScriptAudioNode::scriptExecutionContext() const
     return const_cast<JavaScriptAudioNode*>(this)->context()->document();
 }
 
+double JavaScriptAudioNode::tailTime() const
+{
+    return std::numeric_limits<double>::infinity();
+}
+
+double JavaScriptAudioNode::latencyTime() const
+{
+    return std::numeric_limits<double>::infinity();
+}
+
 } // namespace WebCore
 
 #endif // ENABLE(WEB_AUDIO)
index 5743588..65049cb 100644 (file)
@@ -78,6 +78,9 @@ public:
     using AudioNode::deref;
     
 private:
+    virtual double tailTime() const OVERRIDE;
+    virtual double latencyTime() const OVERRIDE;
+
     JavaScriptAudioNode(AudioContext*, float sampleRate, size_t bufferSize, unsigned numberOfInputs, unsigned numberOfOutputs);
 
     static void fireProcessEventDispatch(void* userData);
index c37c6c1..022179f 100644 (file)
@@ -65,6 +65,9 @@ public:
     void getByteTimeDomainData(Uint8Array* array) { m_analyser.getByteTimeDomainData(array); }
 
 private:
+    virtual double tailTime() const OVERRIDE { return 0; }
+    virtual double latencyTime() const OVERRIDE { return 0; }
+
     RealtimeAnalyserNode(AudioContext*, float sampleRate);
 
     RealtimeAnalyser m_analyser;
index c725f4d..8d8e417 100644 (file)
@@ -44,6 +44,8 @@ public:
     // AudioDSPKernel
     virtual void process(const float* source, float* dest, size_t framesToProcess);
     virtual void reset() { }
+    virtual double tailTime() const OVERRIDE { return 0; }
+    virtual double latencyTime() const OVERRIDE { return 0; }
     
 protected:
     WaveShaperProcessor* waveShaperProcessor() { return static_cast<WaveShaperProcessor*>(processor()); }