Ease of Movement (EMV)
Overview
Ease of Movement quantifies the relationship between price change and the volume required to produce that change, oscillating around zero to reveal when price moves efficiently versus laboriously. The indicator divides the distance between midpoints by a ratio of volume to trading range, producing positive values when price rises easily on relatively low volume and negative values when declines occur with minimal resistance. Traders monitor EMV crossings above zero as confirmation of breakout strength, since genuine moves typically show positive EMV as price advances without excessive volume churning. Conversely, negative EMV during rallies warns of distribution, as heavy volume accompanies small price gains. The indicator excels at identifying accumulation and distribution phases before major moves, particularly when EMV diverges from price action to signal shifts in supply and demand dynamics beneath the surface price activity.
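In formula terms, a bar's EMV is the move in the high/low midpoint divided by a box ratio of volume to trading range. The sketch below shows the textbook single-bar calculation; the 100,000,000 volume scale is the conventional constant and is an assumption here, not necessarily the exact scaling VectorTA applies:
// Textbook single-bar EMV: midpoint move divided by the box ratio.
// The 100_000_000.0 volume scale is the conventional constant (an assumption,
// not necessarily VectorTA's exact scaling).
fn emv_bar(high: f64, low: f64, prev_high: f64, prev_low: f64, volume: f64) -> f64 {
    let midpoint_move = (high + low) / 2.0 - (prev_high + prev_low) / 2.0;
    let box_ratio = (volume / 100_000_000.0) / (high - low);
    midpoint_move / box_ratio
}
Dividing by the box ratio is what keeps the reading small when heavy volume was needed to produce the move, and makes it large when price travels far on light volume.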
Implementation Examples
Compute EMV from slices or a Candles dataset:
use vectorta::indicators::emv::{emv, EmvInput, EmvParams};
use vectorta::utilities::data_loader::{Candles, read_candles_from_csv};
// Using with H/L/C/V slices
let high = vec![10.0, 12.0, 13.0, 15.0];
let low = vec![5.0, 7.0, 8.0, 10.0];
let close = vec![7.5, 9.0, 10.5, 12.5];
let volume = vec![10000.0, 20000.0, 25000.0, 30000.0];
let input = EmvInput::from_slices(&high, &low, &close, &volume);
let result = emv(&input)?;
// From Candles (fields: open, high, low, close, volume)
let candles: Candles = read_candles_from_csv("data/sample.csv")?;
let input = EmvInput::from_candles(&candles);
let result = emv(&input)?;
// Access EMV values (same length as input)
for value in result.values {
    println!("EMV: {}", value);
}
API Reference
Input Methods ▼
// From Candles
EmvInput::from_candles(&Candles) -> EmvInput
// From H/L/C/V slices
EmvInput::from_slices(&[f64], &[f64], &[f64], &[f64]) -> EmvInput
// Default candles (same as from_candles)
EmvInput::with_default_candles(&Candles) -> EmvInput
Parameters Structure ▼
#[derive(Debug, Clone, Default)]
pub struct EmvParams; // No tunable parameters
Output Structure ▼
pub struct EmvOutput {
pub values: Vec<f64>, // EMV values (length matches input)
}
Validation, Warmup & NaNs ▼
- EmvError::EmptyData if any required slice is empty.
- EmvError::AllValuesNaN if no finite H/L/V value exists in the series.
- EmvError::NotEnoughData { valid } if fewer than two valid points remain after the first finite bar.
- Warmup: indices before and including the first finite bar are NaN; the first computable value is at first + 1. high == low produces NaN and advances the stored midpoint; NaN inputs produce NaN without advancing it. (See the sketch after this list.)
- Streaming: the first valid update seeds the midpoint and returns None; subsequent valid bars return Some(value); zero-range or NaN bars yield None.
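To make the warmup and NaN rules concrete, here is a minimal sketch using the from_slices constructor from the examples above; the input values are illustrative and the comments restate the rules rather than exact EMV magnitudes:
use vectorta::indicators::emv::{emv, EmvInput};
// Index 0 is non-finite, so the first finite bar is index 1.
let high = vec![f64::NAN, 11.0, 12.0, 12.0];
let low = vec![f64::NAN, 9.0, 10.0, 12.0]; // last bar has high == low (zero range)
let close = vec![f64::NAN, 10.0, 11.0, 12.0];
let volume = vec![f64::NAN, 1_000.0, 1_500.0, 2_000.0];
let out = emv(&EmvInput::from_slices(&high, &low, &close, &volume))?;
// out.values[0], out.values[1]: NaN (warmup up to and including the first finite bar)
// out.values[2]: first computable EMV value (index first + 1)
// out.values[3]: NaN because high == low, though the stored midpoint still advances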
Error Handling ▼
use vectorta::indicators::emv::{emv, EmvInput, EmvError};
match emv(&input) {
Ok(output) => process_results(output.values),
Err(EmvError::EmptyData) => eprintln!("emv: empty data"),
Err(EmvError::AllValuesNaN) => eprintln!("emv: all values NaN"),
Err(EmvError::NotEnoughData { valid }) => eprintln!("emv: need at least 2 valid points, found {}", valid),
}
Python Bindings
Basic Usage ▼
Compute EMV using NumPy arrays (kernel is optional: scalar/avx2/avx512/auto):
import numpy as np
from vectorta import emv
high = np.array([10.0, 12.0, 13.0, 15.0])
low = np.array([5.0, 7.0, 8.0, 10.0])
close = np.array([7.5, 9.0, 10.5, 12.5])
volume = np.array([10000.0, 20000.0, 25000.0, 30000.0])
# Basic calculation (Auto kernel)
values = emv(high, low, close, volume, kernel="auto")
print(values)
Streaming Real-time Updates ▼
from vectorta import EmvStream
stream = EmvStream()
for h, l, c, v in feed:  # close is accepted but not used by EMV streaming
    val = stream.update(h, l, c, v)
    if val is not None:
        handle(val)
Batch Processing ▼
import numpy as np
from vectorta import emv_batch
result = emv_batch(high, low, close, volume, kernel="auto")
values = result["values"] # shape: (1, len(series))
print(values.shape)
CUDA Acceleration ▼
CUDA support for EMV is under consideration. APIs will mirror other CUDA-enabled indicators when available.
# Coming soon: CUDA helpers for EMV (batch and multi-series)
JavaScript/WASM Bindings
Basic Usage ▼
Compute EMV with Float64Array inputs:
import { emv_js } from 'vectorta-wasm';
const high = new Float64Array([/* ... */]);
const low = new Float64Array([/* ... */]);
const close = new Float64Array([/* ... */]);
const volume = new Float64Array([/* ... */]);
const values = emv_js(high, low, close, volume);
console.log('EMV values:', values);
Memory-Efficient Operations ▼
Use zero-copy into operations for large datasets:
import { emv_alloc, emv_free, emv_into, memory } from 'vectorta-wasm';
// Assume high/low/close/volume are Float64Array of same length
const n = high.length;
// Allocate WASM memory for inputs and output
const highPtr = emv_alloc(n);
const lowPtr = emv_alloc(n);
const closePtr = emv_alloc(n);
const volumePtr = emv_alloc(n);
const outPtr = emv_alloc(n);
// Copy inputs into WASM memory
new Float64Array(memory.buffer, highPtr, n).set(high);
new Float64Array(memory.buffer, lowPtr, n).set(low);
new Float64Array(memory.buffer, closePtr, n).set(close);
new Float64Array(memory.buffer, volumePtr, n).set(volume);
// Compute directly into pre-allocated output buffer
emv_into(highPtr, lowPtr, closePtr, volumePtr, outPtr, n);
// Read results (copy out if needed)
const values = new Float64Array(memory.buffer, outPtr, n).slice();
// Free allocated memory
emv_free(highPtr, n);
emv_free(lowPtr, n);
emv_free(closePtr, n);
emv_free(volumePtr, n);
emv_free(outPtr, n);
Batch Processing ▼
EMV batch returns a single row (no parameter sweep):
import { emv_batch_js } from 'vectorta-wasm';
const out = emv_batch_js(high, low, close, volume, {});
// out: { values: Float64Array, combos: EmvParams[], rows: 1, cols: N }
const row = Array.from(out.values).slice(0, out.cols);
Performance Analysis
Across sizes, Rust CPU runs about 1.17× faster than Tulip C in this benchmark.
AMD Ryzen 9 9950X (CPU) | NVIDIA RTX 4090 (GPU) | Benchmarks: 2026-01-05
Benchmark note
VectorTA’s EMV implementation includes additional edge-case handling (e.g., guarding against zero-volume inputs) compared to Tulip C. Because of this, Tulip C vs VectorTA timings may not be a strict apples-to-apples comparison.
CUDA note
In our benchmark workload, the Rust CPU implementation is faster than CUDA for this indicator. Prefer the Rust/CPU path unless your workload differs.
Related Indicators
Accumulation/Distribution
Accumulation/Distribution Oscillator
Balance of Power
Chaikin Flow Oscillator
Elder Force Index
Klinger Volume Oscillator