Demand Index
len_bs = 19 | len_bs_ma = 19 | len_di_ma = 19 | ma_type = ema

Overview
Demand Index is a price-and-volume pressure oscillator that tries to separate buying demand from selling demand before collapsing both into a single signed line. The implementation starts from high, low, close, and volume, builds a rolling volume baseline, then compares the current composite price level against the previous one. When price rises, the model favors the buy-pressure side; when price falls, it favors the sell-pressure side. Those asymmetric pressure values are then smoothed before the final ratio-style demand index is calculated.
The result is a main demand-index line plus a slower signal line. A positive demand index means the smoothed buy pressure dominates smoothed sell pressure, while a negative reading means the opposite. The selected moving-average family changes the warmup behavior substantially: EMA and RMA become usable quickly, while SMA and WMA require the full window lengths before the main line and signal line can settle.
Defaults: Demand Index uses `len_bs = 19`, `len_bs_ma = 19`, `len_di_ma = 19`, and `ma_type = "ema"`.
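To make the sign convention concrete, here is a deliberately simplified sketch of the buy/sell pressure split described above. This is not the library's exact formula (the real Demand Index damps the opposing side with a volatility-scaled exponent); it only illustrates that rising bars load the buy side, falling bars load the sell side, and the signed ratio reflects whichever side dominates:

```rust
// Simplified illustration only: on an up bar, full volume goes to buy
// pressure and a damped share to sell pressure, and vice versa. The
// library's actual pressure formula is more involved.
fn pressure_split(prev_price: f64, price: f64, volume: f64) -> (f64, f64) {
    let pct = (price - prev_price) / prev_price;
    if pct >= 0.0 {
        (volume, volume / (1.0 + pct.abs()))
    } else {
        (volume / (1.0 + pct.abs()), volume)
    }
}

fn main() {
    let (bp, sp) = pressure_split(100.0, 102.0, 1_000.0);
    // Rising price: buy pressure dominates, so the signed reading is positive.
    let di = if bp >= sp { bp / sp - 1.0 } else { -(sp / bp - 1.0) };
    assert!(di > 0.0);
    println!("bp = {bp:.1}, sp = {sp:.1}, di = {di:.4}");
}
```

In the real indicator both pressure series are smoothed with the selected moving-average family before the final ratio is formed, which is why the warmup behavior depends on `ma_type`.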
Implementation Examples
Compute the main demand-index line and its signal line from OHLCV slices or candle data.
use vector_ta::indicators::demand_index::{
demand_index,
DemandIndexInput,
DemandIndexParams,
};
use vector_ta::utilities::data_loader::{Candles, read_candles_from_csv};
let output = demand_index(&DemandIndexInput::from_slices(
&high,
&low,
&close,
&volume,
DemandIndexParams::default(),
))?;
let candles: Candles = read_candles_from_csv("data/sample.csv")?;
let candle_output = demand_index(&DemandIndexInput::with_default_candles(&candles))?;
println!("di = {:?}", output.demand_index.last());
println!("signal = {:?}", candle_output.signal.last());

API Reference
Input Methods ▼
// From candles
DemandIndexInput::from_candles(&Candles, DemandIndexParams)
-> DemandIndexInput
// From OHLCV slices
DemandIndexInput::from_slices(&[f64], &[f64], &[f64], &[f64], DemandIndexParams)
-> DemandIndexInput
// From candles with default parameters
DemandIndexInput::with_default_candles(&Candles)
-> DemandIndexInput

Parameters Structure ▼
pub struct DemandIndexParams {
pub len_bs: Option<usize>, // default 19
pub len_bs_ma: Option<usize>, // default 19
pub len_di_ma: Option<usize>, // default 19
pub ma_type: Option<String>, // default "ema"
}

Output Structure ▼
pub struct DemandIndexOutput {
pub demand_index: Vec<f64>,
pub signal: Vec<f64>,
}

Validation, Warmup & NaNs ▼
- High, low, close, and volume must all be non-empty slices with identical lengths.
- `len_bs`, `len_bs_ma`, and `len_di_ma` must each be greater than zero and must not exceed the available data length.
- `ma_type` must be one of `ema`, `sma`, `wma`, or `rma`.
- The indicator requires enough valid OHLCV bars to satisfy the selected smoothing chain. The exact signal warmup depends on the average family and can be queried from the streaming API.
- EMA and RMA warm up much faster than SMA and WMA because the latter need full windows for both the pressure averages and the signal average.
- Streaming can emit NaN placeholders during invalid bars while keeping the state machine consistent with the batch path.
- Batch mode rejects invalid kernels and invalid parameter ranges before the grid runs.
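The validation rules above can be mirrored as a small pre-flight check. This is a hypothetical helper, not the library's actual code; it only shows the order and shape of the checks that produce errors like `InconsistentSliceLengths` and `InvalidMaType`:

```rust
// Illustrative pre-flight checks mirroring the documented validation rules.
// Hypothetical helper, not part of vector_ta itself.
fn validate(
    h: &[f64], l: &[f64], c: &[f64], v: &[f64],
    len_bs: usize, len_bs_ma: usize, len_di_ma: usize,
    ma_type: &str,
) -> Result<(), String> {
    let n = h.len();
    if n == 0 {
        return Err("empty input data".into());
    }
    if [l.len(), c.len(), v.len()].iter().any(|&m| m != n) {
        return Err("inconsistent slice lengths".into());
    }
    for (name, len) in [("len_bs", len_bs), ("len_bs_ma", len_bs_ma), ("len_di_ma", len_di_ma)] {
        if len == 0 || len > n {
            return Err(format!("invalid {name}: {len} (data_len {n})"));
        }
    }
    if !matches!(ma_type, "ema" | "sma" | "wma" | "rma") {
        return Err(format!("invalid ma_type: {ma_type}"));
    }
    Ok(())
}

fn main() {
    let x = vec![1.0; 30];
    assert!(validate(&x, &x, &x, &x, 19, 19, 19, "ema").is_ok());
    assert!(validate(&x, &x, &x, &x, 0, 19, 19, "ema").is_err());
    assert!(validate(&x, &x, &x, &x, 19, 19, 19, "hull").is_err());
}
```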
Builder, Streaming & Batch APIs ▼
// Builder
DemandIndexBuilder::new()
.len_bs(usize)
.len_bs_ma(usize)
.len_di_ma(usize)
.ma_type(&str)?
.kernel(Kernel)
.apply_slices(&[f64], &[f64], &[f64], &[f64])
DemandIndexBuilder::new()
.apply(&Candles)
DemandIndexBuilder::new()
.into_stream()
// Stream
DemandIndexStream::try_new(DemandIndexParams)
DemandIndexStream::update(high, low, close, volume) -> Option<(f64, f64)>
DemandIndexStream::get_warmup_period() -> usize
// Batch
DemandIndexBatchBuilder::new()
.len_bs_range(start, end, step)
.len_bs_ma_range(start, end, step)
.len_di_ma_range(start, end, step)
.ma_type(&str)?
.kernel(Kernel)
.apply_slices(&[f64], &[f64], &[f64], &[f64])

Error Handling ▼
pub enum DemandIndexError {
EmptyInputData,
AllValuesNaN,
InvalidLenBs { len_bs: usize, data_len: usize },
InvalidLenBsMa { len_bs_ma: usize, data_len: usize },
InvalidLenDiMa { len_di_ma: usize, data_len: usize },
InvalidMaType { ma_type: String },
InconsistentSliceLengths { high_len: usize, low_len: usize, close_len: usize, volume_len: usize },
NotEnoughValidData { needed: usize, valid: usize },
OutputLengthMismatch { expected: usize, demand_index_got: usize, signal_got: usize },
InvalidRange { start: String, end: String, step: String },
InvalidKernelForBatch(Kernel),
}

Python Bindings
Python exposes a tuple-returning single-run function, a streaming class, and a batch function. The single-run binding returns two NumPy arrays in order: the main demand-index line and the signal line. Batch returns both matrices plus the tested length axes, average-family labels, and the final rows and cols shape.
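As a rough sketch of how the reported `rows` relates to the tested ranges, one combo is produced per entry in the Cartesian product of the length axes (times the number of average families). This assumes each `(start, end, step)` range is inclusive of `start` and steps up to `end`; the library's exact range semantics may differ:

```rust
// Hypothetical helper: enumerate an inclusive (start, end, step) axis,
// e.g. (14, 19, 5) -> [14, 19]. Illustration only.
fn axis(start: usize, end: usize, step: usize) -> Vec<usize> {
    (start..=end).step_by(step.max(1)).collect()
}

fn main() {
    let len_bs = axis(14, 19, 5);     // [14, 19]
    let len_bs_ma = axis(10, 19, 9);  // [10, 19]
    let len_di_ma = axis(9, 19, 10);  // [9, 19]
    // One row per parameter combo (a single ma_type here); cols = data length.
    let rows = len_bs.len() * len_bs_ma.len() * len_di_ma.len();
    assert_eq!(rows, 8);
    println!("{len_bs:?} x {len_bs_ma:?} x {len_di_ma:?} -> rows = {rows}");
}
```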
import numpy as np
from vector_ta import (
demand_index,
demand_index_batch,
DemandIndexStream,
)
high = np.asarray(high_values, dtype=np.float64)
low = np.asarray(low_values, dtype=np.float64)
close = np.asarray(close_values, dtype=np.float64)
volume = np.asarray(volume_values, dtype=np.float64)
demand_index_values, signal_values = demand_index(
high,
low,
close,
volume,
len_bs=19,
len_bs_ma=19,
len_di_ma=19,
ma_type="ema",
kernel="auto",
)
stream = DemandIndexStream(len_bs=19, len_bs_ma=19, len_di_ma=19, ma_type="ema")
print(stream.warmup_period)
print(stream.update(high[-1], low[-1], close[-1], volume[-1]))
batch = demand_index_batch(
high,
low,
close,
volume,
len_bs_range=(14, 19, 5),
len_bs_ma_range=(10, 19, 9),
len_di_ma_range=(9, 19, 10),
ma_type="ema",
kernel="auto",
)
print(batch["len_bs"], batch["ma_types"], batch["rows"], batch["cols"])

JavaScript/WASM Bindings
The WASM layer exposes an object-returning single-run wrapper, a batch wrapper with explicit config, and lower-level allocation and in-place exports. The standard JavaScript path returns typed arrays for the main line and the signal line. The batch wrapper adds the tested parameter lists, combo objects, and the final rows and cols shape.
import init, {
demand_index_js,
demand_index_batch_js,
} from "/pkg/vector_ta.js";
await init();
const high = new Float64Array(highValues);
const low = new Float64Array(lowValues);
const close = new Float64Array(closeValues);
const volume = new Float64Array(volumeValues);
const result = demand_index_js(high, low, close, volume, 19, 19, 19, "ema");
console.log(result.demand_index, result.signal);
const batch = demand_index_batch_js(high, low, close, volume, {
len_bs_range: [14, 19, 5],
len_bs_ma_range: [10, 19, 9],
len_di_ma_range: [9, 19, 10],
ma_type: "ema",
});
console.log(batch.len_bs, batch.ma_types, batch.rows, batch.cols);

CUDA Bindings (Rust)
Additional details for the CUDA bindings can be found inside the VectorTA repository.
Performance Analysis
Across sizes, Rust CPU runs about 1.14× faster than Tulip C in this benchmark.
AMD Ryzen 9 9950X (CPU) | NVIDIA RTX 4090 (GPU)
Related Indicators
Accumulation/Distribution
Technical analysis indicator
Accumulation/Distribution Oscillator
Technical analysis indicator
Balance of Power
Technical analysis indicator
Chaikin Flow Oscillator
Technical analysis indicator
Double Exponential Moving Average
Moving average indicator
Elder Force Index
Technical analysis indicator