Buff Averages (Volume-Weighted Fast/Slow MAs)
fast_period = 5 | slow_period = 20

Overview
Buff Dormeier introduced the Buff Averages in February 2001 in his article "Buff Up Your Moving Averages" in Stocks and Commodities magazine, demonstrating how volume-weighted moving averages improve both responsiveness and reliability compared to simple moving averages. The indicator calculates two volume-weighted moving averages over different periods, multiplying each price by its corresponding volume before averaging, so that price movements with greater market participation carry more weight in the result.

This approach recognizes that price changes accompanied by heavy volume represent stronger conviction from market participants than those occurring on light volume, making the averages more responsive to meaningful price movements while filtering out low-volume noise. Dormeier's formula divides the sum of price times volume by the sum of volume over the specified period, creating averages that naturally accelerate toward prices supported by heavy trading and resist being pulled by prices on thin volume.

The dual-average system with fast and slow periods enables classic crossover strategies while incorporating volume analysis, providing clearer signals than traditional moving-average systems. By weighting prices in proportion to traded volume, Buff Averages reveal the true center of market activity where most shares actually changed hands, rather than the simple arithmetic mean that treats all time periods equally regardless of participation.
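The weighting described above can be illustrated in plain Python (a sketch of the formula only, not the library's implementation): each window's value is Σ(price·volume) / Σ(volume).

```python
prices  = [100.0, 101.5, 103.0, 102.0, 104.0]
volumes = [1200.0, 800.0, 1000.0, 900.0, 1100.0]

# Volume-weighted average over the full 5-bar window:
vwma = sum(p * v for p, v in zip(prices, volumes)) / sum(volumes)

# Simple (unweighted) mean for comparison:
sma = sum(prices) / len(prices)

print(vwma, sma)  # 102.08 vs 102.1 -- the heavy-volume 100.0 bar pulls the VWMA down
```

The two values differ only slightly here, but the direction of the difference shows the mechanism: the 1200-share bar at 100.0 drags the volume-weighted average below the arithmetic mean.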
Implementation Examples
Compute fast and slow volume-weighted averages:
use vectorta::indicators::buff_averages::{buff_averages, BuffAveragesInput, BuffAveragesParams};
use vectorta::utilities::data_loader::{Candles, read_candles_from_csv};
// Using price and volume slices
let prices = vec![100.0, 101.5, 103.0, 102.0, 104.0];
let volumes = vec![1200.0, 800.0, 1000.0, 900.0, 1100.0];
let params = BuffAveragesParams { fast_period: Some(5), slow_period: Some(20) };
let input = BuffAveragesInput::from_slices(&prices, &volumes, params);
let out = buff_averages(&input)?; // out.fast_buff, out.slow_buff
// Using Candles (volume is taken from candles)
let candles: Candles = read_candles_from_csv("data/sample.csv")?;
let input = BuffAveragesInput::with_default_candles(&candles); // source="close", fast=5, slow=20
let out = buff_averages(&input)?;
// Iterate the fast/slow results
for (f, s) in out.fast_buff.iter().zip(out.slow_buff.iter()) {
println!("fast={}, slow={}", f, s);
}

API Reference
Input Methods ▼
// From price+volume slices
BuffAveragesInput::from_slices(&[f64], &[f64], BuffAveragesParams) -> BuffAveragesInput
// From candles with custom source (volume is taken from candles)
BuffAveragesInput::from_candles(&Candles, &str, BuffAveragesParams) -> BuffAveragesInput
// From candles with defaults (source="close", fast=5, slow=20)
BuffAveragesInput::with_default_candles(&Candles) -> BuffAveragesInput
// From price slice only (will error at compute time without volume)
BuffAveragesInput::from_slice(&[f64], BuffAveragesParams) -> BuffAveragesInput

Parameters Structure ▼
pub struct BuffAveragesParams {
pub fast_period: Option<usize>, // Default: 5
pub slow_period: Option<usize>, // Default: 20
}

Output Structure ▼
pub struct BuffAveragesOutput {
pub fast_buff: Vec<f64>, // Fast VWMA
pub slow_buff: Vec<f64>, // Slow VWMA
}

Validation, Warmup & NaNs ▼
- fast_period > 0 and slow_period > 0; each must be ≤ len.
- At least slow_period valid points after the first finite price; else BuffAveragesError::NotEnoughValidData.
- Price and volume lengths must match; else BuffAveragesError::MismatchedDataLength. Missing volume via from_slice → BuffAveragesError::MissingVolumeData.
- Leading indices before warmup (warm = first_non_nan + slow_period - 1) are NaN in outputs; post-warm values are finite.
- Per index, NaN in price or volume removes that bar from the numerator/denominator; if ∑(volume) == 0, the value is 0.0.
- Streaming emits None until warm; after warm it returns (fast, slow) on each update.
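These warmup and NaN rules can be sketched as a pure-Python reference (illustrative only; `buff_average_ref` is a hypothetical helper, not the library's code):

```python
import math

def buff_average_ref(prices, volumes, period):
    # Illustrative sketch of the documented rules, not the library implementation.
    n = len(prices)
    out = [math.nan] * n
    first = next(i for i, p in enumerate(prices) if not math.isnan(p))
    warm = first + period - 1            # leading indices [0, warm) stay NaN
    for i in range(warm, n):
        num = den = 0.0
        for p, v in zip(prices[i - period + 1:i + 1],
                        volumes[i - period + 1:i + 1]):
            if not (math.isnan(p) or math.isnan(v)):  # NaN bars drop from both sums
                num += p * v
                den += v
        out[i] = num / den if den != 0.0 else 0.0     # zero total volume -> 0.0
    return out
```

For example, with the 5-bar sample data above and `period=3`, the first two outputs are NaN and the first finite value is 304200 / 3000 = 101.4.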
Error Handling ▼
use vectorta::indicators::buff_averages::{buff_averages, BuffAveragesError};
match buff_averages(&input) {
Ok(out) => process(out.fast_buff, out.slow_buff),
Err(BuffAveragesError::EmptyInputData) => eprintln!("no data"),
Err(BuffAveragesError::AllValuesNaN) => eprintln!("all NaN"),
Err(BuffAveragesError::InvalidPeriod { period, data_len }) => {
eprintln!("invalid period {period} for len {data_len}")
}
Err(BuffAveragesError::NotEnoughValidData { needed, valid }) => {
eprintln!("need {needed} valid after first, got {valid}")
}
Err(BuffAveragesError::MismatchedDataLength { price_len, volume_len }) => {
eprintln!("length mismatch: price={price_len}, volume={volume_len}")
}
Err(BuffAveragesError::MissingVolumeData) => eprintln!("volume required"),
}

Python Bindings
Basic Usage ▼
import numpy as np
from vectorta import buff_averages, buff_averages_batch
prices = np.array([100, 101.5, 103, 102, 104], dtype=float)
volumes = np.array([1200, 800, 1000, 900, 1100], dtype=float)
# Single calculation (returns fast, slow as NumPy arrays)
fast, slow = buff_averages(prices, volumes, fast_period=5, slow_period=20, kernel=None)
# Batch calculation
out = buff_averages_batch(
prices,
volumes,
fast_range=(3, 9, 3), # 3,6,9
slow_range=(12, 24, 6), # 12,18,24
kernel=None
)
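How the two (start, stop, step) ranges expand into the parameter grid can be sketched as follows (illustrative: the inclusive stop matches the 3,6,9 / 12,18,24 comments above, but the fast-major cross-product ordering shown here is an assumption):

```python
# Each range expands independently; the grid is their cross product.
fast_values = list(range(3, 9 + 1, 3))     # [3, 6, 9]
slow_values = list(range(12, 24 + 1, 6))   # [12, 18, 24]
combos = [(f, s) for f in fast_values for s in slow_values]
print(len(combos))  # 9 combos: row i of out['fast'] pairs with (fast_periods[i], slow_periods[i])
```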
fast_mat = out['fast'] # shape: [num_combos, len]
slow_mat = out['slow']
fast_periods = out['fast_periods']
slow_periods = out['slow_periods']

Streaming ▼
from vectorta import BuffAveragesStream
stream = BuffAveragesStream(fast_period=5, slow_period=20)
for price, volume in stream_source():
value = stream.update(price, volume)
if value is not None:
fast, slow = value
        handle(fast, slow)

CUDA Acceleration ▼
CUDA helpers are available when the Python package is built with CUDA support.
import numpy as np
from vectorta import (
buff_averages_cuda_batch_dev,
buff_averages_cuda_many_series_one_param_dev,
)
# 1) One series, many parameter combinations (device-side result arrays)
price_f32 = np.asarray(prices, dtype=np.float32)
volume_f32 = np.asarray(volumes, dtype=np.float32)
fast_dev, slow_dev = buff_averages_cuda_batch_dev(
price_f32, volume_f32,
fast_range=(3, 9, 3),
slow_range=(12, 24, 6),
device_id=0,
)
# 2) Many series (time-major), one parameter set
T, N = 10_000, 64
prices_tm = np.empty((T, N), dtype=np.float32)
volumes_tm = np.empty((T, N), dtype=np.float32)
fast_dev, slow_dev = buff_averages_cuda_many_series_one_param_dev(
prices_tm.ravel(), volumes_tm.ravel(),
cols=N, rows=T,
fast_period=5, slow_period=20,
device_id=0,
)

JavaScript/WASM Bindings
Basic Usage ▼
Calculate fast and slow arrays in a single call:
import { buff_averages_js } from 'vectorta-wasm';
const prices = new Float64Array([100, 101.5, 103, 102, 104]);
const volumes = new Float64Array([1200, 800, 1000, 900, 1100]);
// Returns a flat Float64Array: [fast..., slow...]
const flat = buff_averages_js(prices, volumes, 5, 20);
const len = prices.length;
const fast = flat.slice(0, len);
const slow = flat.slice(len);
console.log('Fast:', fast);
console.log('Slow:', slow);

Memory-Efficient Operations ▼
Zero-copy buffer operations for large datasets:
import { buff_averages_alloc, buff_averages_free, buff_averages_into, memory } from 'vectorta-wasm';
const prices = new Float64Array([/* data */]);
const volumes = new Float64Array([/* data */]);
const len = prices.length;
// Allocate WASM memory (allocates 2*len elements internally)
const pricePtr = buff_averages_alloc(len);
const volumePtr = buff_averages_alloc(len);
const outPtr = buff_averages_alloc(len);
// Copy inputs into WASM memory
new Float64Array(memory.buffer, pricePtr, len).set(prices);
new Float64Array(memory.buffer, volumePtr, len).set(volumes);
// Compute directly into pre-allocated output
buff_averages_into(pricePtr, volumePtr, outPtr, len, 5, 20);
// Read results (flat: [fast..., slow...])
const out = new Float64Array(memory.buffer, outPtr, 2 * len).slice();
buff_averages_free(pricePtr, len);
buff_averages_free(volumePtr, len);
buff_averages_free(outPtr, len);

Batch Processing ▼
Unified batch returns values with metadata.
import { buff_averages_batch } from 'vectorta-wasm';
const prices = new Float64Array([/* data */]);
const volumes = new Float64Array([/* data */]);
// Ranges are [start, end, step]
const res = buff_averages_batch(prices, volumes, [3, 9, 3], [12, 24, 6]);
// res: { values, rows, cols, fast_periods, slow_periods }
const { values, rows, cols, fast_periods, slow_periods } = res;
// Fast rows occupy the first half of `values`; slow rows the second half
const firstFastRow = values.slice(0, cols);
const firstSlowRow = values.slice((rows / 2) * cols, (rows / 2 + 1) * cols);

Performance Analysis
AMD Ryzen 9 9950X (CPU) | NVIDIA RTX 4090 (GPU) | Benchmarks: 2026-01-05