Ease of Movement (EMV)

Overview

Ease of Movement quantifies the relationship between price change and the volume required to produce that change, oscillating around zero to show when price moves efficiently versus laboriously. The indicator divides the change in the bar midpoint by a "box ratio" of volume to trading range, so readings are positive when the midpoint rises and negative when it falls, with large magnitudes when price travels far on relatively light volume. Traders monitor EMV crossings above zero as confirmation of breakout strength, since genuine advances typically show positive EMV as price rises without excessive volume churn. Conversely, weak or negative EMV during rallies warns of distribution, as heavy volume produces only small price gains. The indicator excels at identifying accumulation and distribution phases before major moves, particularly when EMV diverges from price action to signal shifts in supply and demand beneath the surface price activity.
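The classic Arms formula behind this description can be sketched in plain Python. This is an illustration, not VectorTA's implementation: the 100,000,000 scaling constant is the conventional textbook choice (VectorTA's internal scaling may differ), and this sketch always compares against the immediately previous bar's midpoint rather than VectorTA's stored-midpoint NaN semantics.

```python
import math

def emv_series(high, low, volume, scale=100_000_000.0):
    """Sketch of Arms' Ease of Movement over whole lists.

    emv[i] = midpoint_move / box_ratio, where
      midpoint_move = (high[i] + low[i]) / 2 - (high[i-1] + low[i-1]) / 2
      box_ratio     = (volume[i] / scale) / (high[i] - low[i])
    The first bar has no prior midpoint, so it stays NaN.
    """
    out = [float("nan")] * len(high)
    for i in range(1, len(high)):
        rng = high[i] - low[i]
        bad = any(math.isnan(x) for x in (high[i], low[i], volume[i]))
        if bad or rng == 0.0 or volume[i] == 0.0:
            continue  # zero range/volume or NaN input: leave NaN
        mid_move = (high[i] + low[i]) / 2.0 - (high[i - 1] + low[i - 1]) / 2.0
        box_ratio = (volume[i] / scale) / rng
        out[i] = mid_move / box_ratio
    return out

high = [10.0, 12.0, 13.0, 15.0]
low = [5.0, 7.0, 8.0, 10.0]
volume = [10000.0, 20000.0, 25000.0, 30000.0]
print(emv_series(high, low, volume))  # first value is NaN (warmup)
```

With these sample bars the midpoint rises each step, so every computed value is positive, and the smallest reading falls on the bar where volume is heaviest relative to the move.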

Implementation Examples

Compute EMV from slices or a Candles dataset:

use vector_ta::indicators::emv::{emv, EmvInput, EmvParams};
use vector_ta::utilities::data_loader::{Candles, read_candles_from_csv};

// Using with H/L/C/V slices
let high = vec![10.0, 12.0, 13.0, 15.0];
let low = vec![5.0, 7.0, 8.0, 10.0];
let close = vec![7.5, 9.0, 10.5, 12.5];
let volume = vec![10000.0, 20000.0, 25000.0, 30000.0];
let input = EmvInput::from_slices(&high, &low, &close, &volume);
let result = emv(&input)?;

// From Candles (fields: open, high, low, close, volume)
let candles: Candles = read_candles_from_csv("data/sample.csv")?;
let input = EmvInput::from_candles(&candles);
let result = emv(&input)?;

// Access EMV values (same length as input)
for value in &result.values {
    println!("EMV: {}", value);
}

API Reference

Input Methods
// From Candles
EmvInput::from_candles(&Candles) -> EmvInput

// From H/L/C/V slices
EmvInput::from_slices(&[f64], &[f64], &[f64], &[f64]) -> EmvInput

// Default candles (same as from_candles)
EmvInput::with_default_candles(&Candles) -> EmvInput

Parameters Structure
#[derive(Debug, Clone, Default)]
pub struct EmvParams; // No tunable parameters

Output Structure
pub struct EmvOutput {
    pub values: Vec<f64>, // EMV values (length matches input)
}

Validation, Warmup & NaNs
  • EmvError::EmptyData if any required slice is empty.
  • EmvError::AllValuesNaN if no finite H/L/V exists in the series.
  • EmvError::NotEnoughData { valid } if fewer than two valid points after the first finite bar.
  • Warmup: indices before and including the first finite bar are NaN; first computable value is at first + 1.
  • high == low produces NaN and advances the stored midpoint; NaN inputs produce NaN without advancing it.
  • Streaming: first valid update seeds the midpoint and returns None; subsequent valid bars return Some(value); zero‑range or NaN yield None.
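The streaming rules above (seed on the first valid bar, then emit; None on zero range or NaN) can be mirrored in a small plain-Python sketch. This is illustrative only, not the VectorTA implementation: the scaling constant and the close-less signature are assumptions, and EMV ignores the close in any case.

```python
import math

class EmvStreamSketch:
    """Illustrative streaming EMV mirroring the documented semantics."""

    def __init__(self, scale=100_000_000.0):
        self.prev_mid = None   # stored midpoint; None until seeded
        self.scale = scale     # illustrative scaling constant

    def update(self, high, low, volume):
        # NaN input: emit nothing, keep the stored midpoint.
        if any(math.isnan(x) for x in (high, low, volume)):
            return None
        mid = (high + low) / 2.0
        if self.prev_mid is None:
            self.prev_mid = mid  # first valid bar seeds the midpoint
            return None
        rng = high - low
        if rng == 0.0 or volume == 0.0:
            self.prev_mid = mid  # zero range advances the midpoint, emits nothing
            return None
        value = (mid - self.prev_mid) / ((volume / self.scale) / rng)
        self.prev_mid = mid
        return value

s = EmvStreamSketch()
print(s.update(10.0, 5.0, 10000.0))   # None: seeds the midpoint
print(s.update(12.0, 7.0, 20000.0))   # first emitted value
```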

Error Handling
use vector_ta::indicators::emv::{emv, EmvInput, EmvError};

match emv(&input) {
    Ok(output) => process_results(output.values),
    Err(EmvError::EmptyData) => eprintln!("emv: empty data"),
    Err(EmvError::AllValuesNaN) => eprintln!("emv: all values NaN"),
    Err(EmvError::NotEnoughData { valid }) => eprintln!("emv: need at least 2 valid points, found {}", valid),
}

Python Bindings

Basic Usage

Compute EMV using NumPy arrays (kernel is optional: scalar/avx2/avx512/auto):

import numpy as np
from vector_ta import emv

high = np.array([10.0, 12.0, 13.0, 15.0])
low = np.array([5.0, 7.0, 8.0, 10.0])
close = np.array([7.5, 9.0, 10.5, 12.5])
volume = np.array([10000.0, 20000.0, 25000.0, 30000.0])

# Basic calculation (Auto kernel)
values = emv(high, low, close, volume, kernel="auto")
print(values)

Streaming Real-time Updates
from vector_ta import EmvStream

stream = EmvStream()
for h, l, c, v in feed:  # close is accepted but not used by EMV streaming
    val = stream.update(h, l, c, v)
    if val is not None:
        handle(val)

Batch Processing
import numpy as np
from vector_ta import emv_batch

result = emv_batch(high, low, close, volume, kernel="auto")
values = result["values"]  # shape: (1, len(series))
print(values.shape)

CUDA Acceleration

CUDA helpers are available when the Python package is built with CUDA support. Inputs must be float32; outputs are device arrays (DLPack / __cuda_array_interface__ compatible).

import numpy as np
from vector_ta import emv_cuda_batch_dev, emv_cuda_many_series_one_param_dev

# One series (float32)
high_f32 = np.asarray(load_high(), dtype=np.float32)
low_f32 = np.asarray(load_low(), dtype=np.float32)
volume_f32 = np.asarray(load_volume(), dtype=np.float32)

dev = emv_cuda_batch_dev(
    high_f32=high_f32,
    low_f32=low_f32,
    volume_f32=volume_f32,
    device_id=0,
)

# Many series (time-major)
high_tm_f32 = np.asarray(load_high_time_major_matrix(), dtype=np.float32)
low_tm_f32 = np.asarray(load_low_time_major_matrix(), dtype=np.float32)
volume_tm_f32 = np.asarray(load_volume_time_major_matrix(), dtype=np.float32)

dev_tm = emv_cuda_many_series_one_param_dev(
    high_tm_f32=high_tm_f32,
    low_tm_f32=low_tm_f32,
    volume_tm_f32=volume_tm_f32,
    device_id=0,
)

JavaScript/WASM Bindings

Basic Usage

Compute EMV with Float64Array inputs:

import { emv_js } from 'vectorta-wasm';

const high = new Float64Array([/* ... */]);
const low = new Float64Array([/* ... */]);
const close = new Float64Array([/* ... */]);
const volume = new Float64Array([/* ... */]);

const values = emv_js(high, low, close, volume);
console.log('EMV values:', values);

Memory-Efficient Operations

Use zero-copy into operations for large datasets:

import { emv_alloc, emv_free, emv_into, memory } from 'vectorta-wasm';

// Assume high/low/close/volume are Float64Array of same length
const n = high.length;

// Allocate WASM memory for inputs and output
const highPtr = emv_alloc(n);
const lowPtr = emv_alloc(n);
const closePtr = emv_alloc(n);
const volumePtr = emv_alloc(n);
const outPtr = emv_alloc(n);

// Copy inputs into WASM memory
new Float64Array(memory.buffer, highPtr, n).set(high);
new Float64Array(memory.buffer, lowPtr, n).set(low);
new Float64Array(memory.buffer, closePtr, n).set(close);
new Float64Array(memory.buffer, volumePtr, n).set(volume);

// Compute directly into pre-allocated output buffer
emv_into(highPtr, lowPtr, closePtr, volumePtr, outPtr, n);

// Read results (copy out if needed)
const values = new Float64Array(memory.buffer, outPtr, n).slice();

// Free allocated memory
emv_free(highPtr, n);
emv_free(lowPtr, n);
emv_free(closePtr, n);
emv_free(volumePtr, n);
emv_free(outPtr, n);

Batch Processing

EMV batch returns a single row (no parameter sweep):

import { emv_batch_js } from 'vectorta-wasm';

const out = emv_batch_js(high, low, close, volume, {});
// out: { values: Float64Array, combos: EmvParams[], rows: 1, cols: N }
const row = Array.from(out.values).slice(0, out.cols);

CUDA Bindings (Rust)

use vector_ta::cuda::CudaEmv;

let cuda = CudaEmv::new(0)?;

let high: Vec<f32> = vec![/* ... */];
let low: Vec<f32> = vec![/* ... */];
let volume: Vec<f32> = vec![/* ... */];

let out = cuda.emv_batch_dev(&high, &low, &volume)?;
let _ = out;

Performance Analysis

Across sizes, Rust CPU runs about 1.17× faster than Tulip C in this benchmark.


AMD Ryzen 9 9950X (CPU) | NVIDIA RTX 4090 (GPU) | Benchmarks: 2026-02-28

Benchmark note

VectorTA’s EMV implementation includes additional edge-case handling (e.g., guarding against zero-volume inputs) compared to Tulip C. Because of this, Tulip C vs VectorTA timings may not be a strict apples-to-apples comparison.
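As an illustration of the kind of guard involved (a sketch, not VectorTA's actual code path), a zero-volume or zero-range bar makes the EMV division undefined, so the value is forced to NaN rather than propagating an infinity:

```python
import math

def emv_step(mid_move, rng, volume, scale=100_000_000.0):
    """One EMV division with the zero guards discussed above (illustrative)."""
    if volume == 0.0 or rng == 0.0:
        return float("nan")  # guarded: avoids dividing by a zero box ratio
    return mid_move / ((volume / scale) / rng)

print(emv_step(2.0, 5.0, 0.0))      # nan, not +inf
print(emv_step(2.0, 5.0, 20000.0))  # a finite EMV value
```

The extra branch per bar is cheap, but it is work an unguarded implementation simply does not do, which is why the timing comparison is not strictly like-for-like.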

CUDA note

In our benchmark workload, the Rust CPU implementation is faster than CUDA for this indicator. Prefer the Rust/CPU path unless your workload differs.

Related Indicators