Negative Volume Index (NVI)
Overview
The Negative Volume Index (NVI) tracks price action during periods of declining volume, operating on the principle that informed traders often act during quieter market sessions. The indicator accumulates price percentage changes only when the current bar's volume falls below the previous bar's; when volume increases or holds steady, NVI carries the previous value forward unchanged. Starting from a baseline of 1000.0 at the first valid observation, the series builds a cumulative record of moves made under lighter participation. Technical analysts monitor NVI trends and moving averages to identify smart-money activity, as sustained NVI advances during low-volume periods often precede broader market rallies. The indicator pairs naturally with the Positive Volume Index to provide a complete picture of volume-stratified price behavior.
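The recurrence described above can be sketched in a few lines of plain Python. This is an illustrative reference, not the library's optimized kernel; `nvi_reference` is a hypothetical helper name:

```python
# Reference sketch of the NVI recurrence (not the library's kernel).
def nvi_reference(close, volume, seed=1000.0):
    values = [seed]  # series is seeded at the first observation
    for t in range(1, len(close)):
        if volume[t] < volume[t - 1]:
            # volume declined: accumulate the close-to-close percentage change
            pct = (close[t] - close[t - 1]) / close[t - 1]
            values.append(values[-1] * (1.0 + pct))
        else:
            # volume rose or held steady: carry the previous value forward
            values.append(values[-1])
    return values

close = [100.0, 101.0, 100.5, 102.0]
volume = [1000.0, 900.0, 950.0, 800.0]
print(nvi_reference(close, volume))
```

With these inputs the second bar (volume 900 < 1000) lifts NVI from 1000.0 to 1010.0, the third bar (volume rose) carries 1010.0 forward, and the fourth bar accumulates another positive change.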
Implementation Examples
Get started with NVI using slices or candles:
use vector_ta::indicators::nvi::{nvi, NviInput, NviParams};
use vector_ta::utilities::data_loader::{Candles, read_candles_from_csv};
// From slices (close, volume)
let close = vec![100.0, 101.0, 100.5, 102.0];
let volume = vec![1_000.0, 900.0, 950.0, 800.0];
let input = NviInput::from_slices(&close, &volume, NviParams);
let result = nvi(&input)?;
// From candles with default source ("close")
let candles: Candles = read_candles_from_csv("data/sample.csv")?;
let input = NviInput::with_default_candles(&candles);
let result = nvi(&input)?;
// Or specify a different close source explicitly
let input = NviInput::from_candles(&candles, "close", NviParams);
let result = nvi(&input)?;
// Access values
for v in result.values { println!("NVI: {}", v); }
API Reference
Input Methods ▼
// From slices (close, volume)
NviInput::from_slices(&[f64], &[f64], NviParams) -> NviInput
// From candles with custom close source
NviInput::from_candles(&Candles, &str, NviParams) -> NviInput
// From candles with defaults (source="close")
NviInput::with_default_candles(&Candles) -> NviInput
Parameters Structure ▼
pub struct NviParams; // no parameters
Output Structure ▼
pub struct NviOutput {
pub values: Vec<f64>, // NVI values (seeded at 1000.0, NaN prefix)
}
Validation, Warmup & NaNs ▼
- Requires matching-length close and volume series; otherwise NviError::MismatchedLength.
- Indices before the first finite pair are NaN. The first valid output is 1000.0.
- Needs at least 2 valid points after the first finite pair; else NviError::NotEnoughValidData.
- Updates only when volume[t] < volume[t−1]; otherwise the value is carried forward unchanged.
- nvi_into_slice: destination length must equal inputs; else NviError::DestinationLengthMismatch.
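These warmup semantics can be sketched in plain Python. This is an illustration of the rules above, not the library code; `nvi_with_warmup` is a hypothetical name:

```python
import math

# Sketch: leading non-finite close/volume pairs map to NaN, the first finite
# pair seeds the series at 1000.0, and later values update only when volume
# decreases (otherwise the previous value is carried forward).
def nvi_with_warmup(close, volume, seed=1000.0):
    n = len(close)
    out = [math.nan] * n
    # locate the first index where both close and volume are finite
    first = next(
        (i for i in range(n)
         if math.isfinite(close[i]) and math.isfinite(volume[i])),
        None,
    )
    if first is None:
        raise ValueError("all close/volume values are NaN")
    out[first] = seed
    for t in range(first + 1, n):
        if volume[t] < volume[t - 1]:
            out[t] = out[t - 1] * (close[t] / close[t - 1])
        else:
            out[t] = out[t - 1]
    return out

close = [float("nan"), 100.0, 101.0, 100.5]
volume = [float("nan"), 1000.0, 900.0, 950.0]
print(nvi_with_warmup(close, volume))
```

Here the first pair is non-finite, so index 0 stays NaN; index 1 seeds at 1000.0; index 2 updates on the volume drop; index 3 carries forward because volume rose.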
Error Handling ▼
use vector_ta::indicators::nvi::{nvi, NviError};
match nvi(&input) {
    Ok(output) => process(output.values),
    Err(NviError::EmptyData) => println!("Input data is empty"),
    Err(NviError::AllCloseValuesNaN) => println!("All close values are NaN"),
    Err(NviError::AllVolumeValuesNaN) => println!("All volume values are NaN"),
    Err(NviError::NotEnoughValidData { needed, valid }) =>
        println!("Need {} data points after first valid, only {}", needed, valid),
    Err(NviError::MismatchedLength { close_len, volume_len }) =>
        println!("Length mismatch: close={}, volume={}", close_len, volume_len),
    Err(NviError::DestinationLengthMismatch { dst_len, close_len, volume_len }) =>
        println!("Output len {} != inputs ({}, {})", dst_len, close_len, volume_len),
    Err(e) => println!("NVI error: {}", e),
}
Python Bindings
Basic Usage ▼
Calculate NVI from NumPy arrays (kernel optional):
import numpy as np
from vector_ta import nvi
close = np.array([100.0, 101.0, 100.5, 102.0], dtype=np.float64)
volume = np.array([1000.0, 900.0, 950.0, 800.0], dtype=np.float64)
# Auto-select kernel ("auto", "scalar", "avx2", "avx512")
values = nvi(close, volume, kernel="auto")
print(values)  # NumPy array, same length as inputs
Streaming Real-time Updates ▼
Update with live close/volume pairs:
from vector_ta import NviStream
stream = NviStream()
for c, v in zip(close, volume):
    val = stream.update(float(c), float(v))
    if val is not None:
        print("NVI:", val)  # 1000.0 at the first valid update
Batch Processing ▼
Compute the single-row NVI batch efficiently:
import numpy as np
from vector_ta import nvi_batch
close = np.asarray(close, dtype=np.float64)
volume = np.asarray(volume, dtype=np.float64)
res = nvi_batch(close, volume, kernel="auto")
print(res["values"].shape) # (1, len(close))
print(res["rows"], res["cols"])
CUDA Acceleration ▼
CUDA helpers are available when the Python package is built with CUDA support. Inputs must be float32; outputs are device arrays (DLPack / __cuda_array_interface__ compatible).
import numpy as np
from vector_ta import nvi_cuda_batch_dev, nvi_cuda_many_series_one_param_dev
# One series (float32)
close = np.asarray(load_close(), dtype=np.float32)
volume = np.asarray(load_volume(), dtype=np.float32)
dev = nvi_cuda_batch_dev(
    close=close,
    volume=volume,
    device_id=0,
)
# Many series (time-major)
close_tm = np.asarray(load_close_time_major_matrix(), dtype=np.float32)
rows, cols = close_tm.shape
close_tm = close_tm.ravel()
volume_tm = np.asarray(load_volume_time_major_matrix(), dtype=np.float32)
volume_tm = volume_tm.ravel()
dev_tm = nvi_cuda_many_series_one_param_dev(
    close_tm=close_tm,
    volume_tm=volume_tm,
    cols=cols,
    rows=rows,
    device_id=0,
)
JavaScript/WASM Bindings
Basic Usage ▼
Calculate NVI in JavaScript/TypeScript:
import { nvi_js } from 'vectorta-wasm';
const close = new Float64Array([100.0, 101.0, 100.5, 102.0]);
const volume = new Float64Array([1000.0, 900.0, 950.0, 800.0]);
const nviValues = nvi_js(close, volume); // Float64Array
console.log('NVI values:', nviValues);
Memory-Efficient Operations ▼
Use zero-copy operations for large datasets:
import { nvi_alloc, nvi_free, nvi_into, memory } from 'vectorta-wasm';
const close = new Float64Array([/* ... */]);
const volume = new Float64Array([/* ... */]);
const len = close.length;
// Allocate WASM memory
const closePtr = nvi_alloc(len);
const volumePtr = nvi_alloc(len);
const outPtr = nvi_alloc(len);
// Copy inputs into WASM memory
new Float64Array(memory.buffer, closePtr, len).set(close);
new Float64Array(memory.buffer, volumePtr, len).set(volume);
// Compute NVI directly into the output buffer
nvi_into(closePtr, volumePtr, outPtr, len);
// Read results (slice to copy out of WASM memory)
const values = new Float64Array(memory.buffer, outPtr, len).slice();
// Free WASM memory
nvi_free(closePtr, len);
nvi_free(volumePtr, len);
nvi_free(outPtr, len);
console.log('NVI values:', values);
Batch Processing ▼
Compute the single-row batch using the pointer API:
import { nvi_alloc, nvi_free, nvi_batch_into, memory } from 'vectorta-wasm';
const len = close.length;
const closePtr = nvi_alloc(len);
const volumePtr = nvi_alloc(len);
const outPtr = nvi_alloc(len);
new Float64Array(memory.buffer, closePtr, len).set(close);
new Float64Array(memory.buffer, volumePtr, len).set(volume);
// Returns number of rows (1); writes values into outPtr
const rows = nvi_batch_into(closePtr, volumePtr, outPtr, len);
const values = new Float64Array(memory.buffer, outPtr, len).slice();
console.log(rows); // 1
console.log(values.length === len);
nvi_free(closePtr, len);
nvi_free(volumePtr, len);
nvi_free(outPtr, len);
CUDA Bindings (Rust)
use vector_ta::cuda::CudaNvi;
let cuda = CudaNvi::new(0)?;
let close: Vec<f32> = /* ... */;
let volume: Vec<f32> = /* ... */;
let out = cuda.nvi_batch_dev(&close, &volume)?;
let _ = out;
Performance Analysis
Across sizes, Rust CPU runs about 1.07× faster than Tulip C in this benchmark.
AMD Ryzen 9 9950X (CPU) | NVIDIA RTX 4090 (GPU) | Benchmarks: 2026-01-08
Related Indicators
Accumulation/Distribution
Accumulation/Distribution Oscillator
Balance of Power
Chaikin Flow Oscillator
Elder Force Index
Ease of Movement