On Balance Volume (OBV)
Overview
On Balance Volume (OBV) measures cumulative buying and selling pressure by adding volume on up days and subtracting it on down days. When the closing price exceeds the previous close, the entire session volume contributes positively to the running total; conversely, declining closes subtract their full volume from the cumulative sum. Starting from zero at the first valid bar, OBV builds a continuous record of volume flow that reveals whether accumulation or distribution dominates. The absolute OBV value holds no inherent meaning; traders instead analyze the slope and direction of the line, watching for divergences between OBV and price. A rising OBV alongside rising prices confirms strength, while divergence often signals a reversal. This cumulative volume indicator remains one of the most widely used tools for validating price trends.
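The update rule described above can be sketched in a few lines of plain Python. This is illustrative only, not the vector_ta implementation:

```python
# Plain-Python sketch of the OBV update rule (illustrative, not library code).
def obv_series(close, volume):
    out = []
    total = 0.0
    prev = None
    for c, v in zip(close, volume):
        if prev is not None:
            if c > prev:
                total += v      # up close: add full session volume
            elif c < prev:
                total -= v      # down close: subtract full session volume
            # equal closes leave OBV unchanged
        out.append(total)
        prev = c
    return out

close = [100.0, 101.0, 100.5, 102.0]
volume = [1200.0, 1500.0, 1800.0, 1700.0]
print(obv_series(close, volume))  # [0.0, 1500.0, -300.0, 1400.0]
```

Note that only the sign of the close-to-close change matters; the magnitude of the price move never weights the volume.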
Implementation Examples
Compute OBV from close/volume slices or from candles:
use vector_ta::indicators::obv::{obv, ObvInput, ObvParams};
use vector_ta::utilities::data_loader::{Candles, read_candles_from_csv};
// From slices (close, volume)
let close = vec![100.0, 101.0, 100.5, 102.0];
let volume = vec![1200.0, 1500.0, 1800.0, 1700.0];
let input = ObvInput::from_slices(&close, &volume, ObvParams::default());
let result = obv(&input)?;
// From candles with defaults (source: close + volume)
let candles: Candles = read_candles_from_csv("data/sample.csv")?;
let input = ObvInput::with_default_candles(&candles);
let result = obv(&input)?;
// Access the OBV values
for value in result.values {
println!("OBV: {}", value);
}
API Reference
Input Methods ▼
// From candles (uses close + volume)
ObvInput::from_candles(&Candles, ObvParams) -> ObvInput
// From slices
ObvInput::from_slices(&[f64], &[f64], ObvParams) -> ObvInput
// From candles with default params
ObvInput::with_default_candles(&Candles) -> ObvInput
Parameters Structure ▼
#[derive(Debug, Clone, Default)]
pub struct ObvParams; // No tunable parameters
Output Structure ▼
#[derive(Debug, Clone)]
pub struct ObvOutput {
pub values: Vec<f64>, // cumulative OBV
}
Validation, Warmup & NaNs ▼
- Errors on empty inputs (ObvError::EmptyData) or mismatched lengths (ObvError::DataLengthMismatch).
- Skips leading NaNs: indices before the first bar where both close and volume are finite are NaN; the first valid OBV is 0.0.
- Updates: if Close[t] > Close[t-1], add Volume[t]; if lower, subtract it; if equal, OBV is unchanged.
- Streaming: returns None until the first valid update, then cumulative values thereafter.
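The leading-NaN warmup behavior can be modeled with a short plain-Python sketch. This mirrors only the documented semantics for leading NaNs (it is not the library code, and it does not cover NaNs appearing after the first valid bar):

```python
# Sketch of the documented warmup semantics (illustrative, not library code).
import math

def obv_nan_warmup(close, volume):
    out = []
    total = None   # None until the first bar where both inputs are finite
    prev = None
    for c, v in zip(close, volume):
        if total is None:
            if math.isfinite(c) and math.isfinite(v):
                total = 0.0           # first valid OBV is 0.0
                prev = c
                out.append(total)
            else:
                out.append(math.nan)  # leading NaNs stay NaN
            continue
        if c > prev:
            total += v
        elif c < prev:
            total -= v
        prev = c
        out.append(total)
    return out

close = [float('nan'), 100.0, 101.0]
volume = [1000.0, 1200.0, 1500.0]
print(obv_nan_warmup(close, volume))  # [nan, 0.0, 1500.0]
```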
Error Handling ▼
use vector_ta::indicators::obv::{obv, ObvError};
match obv(&input) {
Ok(output) => process(output.values),
Err(ObvError::EmptyData) => eprintln!("Input data is empty"),
Err(ObvError::DataLengthMismatch { close_len, volume_len }) =>
eprintln!("Length mismatch: close={}, volume={}", close_len, volume_len),
Err(ObvError::AllValuesNaN) => eprintln!("All values are NaN"),
Err(e) => eprintln!("OBV error: {}", e),
}
Python Bindings
Basic Usage ▼
Compute OBV from NumPy arrays of close and volume:
import numpy as np
from vector_ta import obv
close = np.array([100.0, 101.0, 100.5, 102.0], dtype=np.float64)
volume = np.array([1200.0, 1500.0, 1800.0, 1700.0], dtype=np.float64)
# Auto kernel (default)
values = obv(close, volume)
# Or specify a kernel explicitly (e.g., "avx2")
values = obv(close, volume, kernel="avx2")
print(values)  # numpy.ndarray of shape (len(close),)
Streaming Real-time Updates ▼
Process close/volume ticks incrementally:
from vector_ta import ObvStream
stream = ObvStream()
for c, v in feed: # (float, float)
obv_val = stream.update(c, v)
if obv_val is not None:
consume(obv_val)
Batch Processing ▼
OBV has no parameters; batch returns a single row:
import numpy as np
from vector_ta import obv_batch
close = np.asarray([...], dtype=np.float64)
volume = np.asarray([...], dtype=np.float64)
result = obv_batch(close, volume, kernel="auto")
# result['values'].shape == (1, len(close))
print(result['values'])
CUDA Acceleration ▼
CUDA helpers are available when the Python package is built with CUDA support. Inputs must be float32; outputs are device arrays (DLPack / __cuda_array_interface__ compatible).
import numpy as np
from vector_ta import obv_cuda_batch_dev, obv_cuda_many_series_one_param_dev
# One series (float32)
close = np.asarray(load_close(), dtype=np.float32)
volume = np.asarray(load_volume(), dtype=np.float32)
dev = obv_cuda_batch_dev(
close=close,
volume=volume,
device_id=0,
)
# Many series (time-major)
close_tm = np.asarray(load_close_time_major_matrix(), dtype=np.float32)
rows, cols = close_tm.shape
close_tm = close_tm.ravel()
volume_tm = np.asarray(load_volume_time_major_matrix(), dtype=np.float32)
volume_tm = volume_tm.ravel()
dev_tm = obv_cuda_many_series_one_param_dev(
close_tm=close_tm,
volume_tm=volume_tm,
cols=cols,
rows=rows,
device_id=0,
)
JavaScript/WASM Bindings
Basic Usage ▼
Compute OBV from close and volume arrays:
import { obv_js } from 'vectorta-wasm';
const close = new Float64Array([100.0, 101.0, 100.5, 102.0]);
const volume = new Float64Array([1200.0, 1500.0, 1800.0, 1700.0]);
const values = obv_js(close, volume);
console.log('OBV values:', values);
Memory-Efficient Operations ▼
Operate directly on WASM memory with zero extra allocations:
import { obv_alloc, obv_free, obv_into, memory } from 'vectorta-wasm';
const close = new Float64Array([/* ... */]);
const volume = new Float64Array([/* ... */]);
const len = close.length;
// Allocate WASM memory for inputs and output
const closePtr = obv_alloc(len);
const volumePtr = obv_alloc(len);
const outPtr = obv_alloc(len);
// Copy inputs into WASM memory
new Float64Array(memory.buffer, closePtr, len).set(close);
new Float64Array(memory.buffer, volumePtr, len).set(volume);
// Compute directly into output buffer
obv_into(closePtr, volumePtr, outPtr, len);
// Read results (copy out)
const out = new Float64Array(memory.buffer, outPtr, len).slice();
// Free allocated memory
obv_free(closePtr, len);
obv_free(volumePtr, len);
obv_free(outPtr, len);
Batch Processing ▼
OBV batch returns a single row object with shape metadata:
import { obv_batch } from 'vectorta-wasm';
const close = new Float64Array([/* ... */]);
const volume = new Float64Array([/* ... */]);
const { values, rows, cols } = obv_batch(close, volume);
console.log(rows, cols);
console.log(values);
CUDA Bindings (Rust)
use vector_ta::cuda::CudaObv;
let cuda = CudaObv::new(0)?;
let close: Vec<f32> = /* ... */;
let volume: Vec<f32> = /* ... */;
let out = cuda.obv_batch_dev(&close, &volume)?;
let _ = out;
Performance Analysis
Across sizes, Rust CPU runs about 3.44× faster than Tulip C in this benchmark.
AMD Ryzen 9 9950X (CPU) | NVIDIA RTX 4090 (GPU) | Benchmarks: 2026-01-05
Related Indicators
Accumulation/Distribution
Technical analysis indicator
Accumulation/Distribution Oscillator
Technical analysis indicator
Balance of Power
Technical analysis indicator
Chaikin Flow Oscillator
Technical analysis indicator
Elder Force Index
Technical analysis indicator
Ease of Movement
Technical analysis indicator