Median Price (MEDPRICE)

Overview

Median Price (MEDPRICE) is a lightweight price transform that computes the midpoint between a period’s high and low. Despite the name, it is not a statistical median; it is simply (High + Low) / 2. This makes it less sensitive to close-to-close noise while still reflecting intrabar range expansion and contraction.
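As a quick numeric check (plain Python, independent of the library), the transform is just the range midpoint:

```python
# MEDPRICE is the midpoint of each bar's high/low range,
# not a statistical median of traded prices.
highs = [12.0, 13.5, 11.0]
lows = [10.0, 12.5, 9.0]

medprice = [(h + l) / 2 for h, l in zip(highs, lows)]
print(medprice)  # [11.0, 13.0, 10.0]
```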

MEDPRICE is commonly used as a smoother input series for other indicators (moving averages, oscillators, filters), and as a visual “equilibrium” line for mean‑reversion context. Because it depends only on high and low, it’s also useful when closes are unreliable or when you want to reduce close‑price bias.
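To sketch the "smoother input series" idea, here is a hypothetical plain-Python example feeding MEDPRICE values into a simple moving average instead of closes (the helper names are illustrative, not part of the library API):

```python
def medprice_series(highs, lows):
    # Midpoint of each bar's range: (high + low) / 2.
    return [(h + l) / 2 for h, l in zip(highs, lows)]

def sma(values, period):
    # Simple moving average; None during the warm-up window.
    out = []
    for i in range(len(values)):
        if i + 1 < period:
            out.append(None)
        else:
            window = values[i + 1 - period : i + 1]
            out.append(sum(window) / period)
    return out

highs = [12.0, 13.0, 14.0, 15.0]
lows = [10.0, 11.0, 12.0, 13.0]
mp = medprice_series(highs, lows)  # [11.0, 12.0, 13.0, 14.0]
print(sma(mp, 2))                  # [None, 11.5, 12.5, 13.5]
```

Averaging the midpoint rather than the close dampens close-to-close noise while still tracking the bar's full range.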

Implementation Examples

Compute MEDPRICE from high/low slices or candles:

use vector_ta::indicators::medprice::{medprice, MedpriceInput, MedpriceParams};
use vector_ta::utilities::data_loader::{Candles, read_candles_from_csv};

// Using high/low slices (must be same length)
let high: Vec<f64> = vec![/* ... */];
let low: Vec<f64> = vec![/* ... */];
let input = MedpriceInput::from_slices(&high, &low, MedpriceParams::default());
let out = medprice(&input)?;

// Using candles (defaults: high_source="high", low_source="low")
let candles: Candles = read_candles_from_csv("data/sample.csv")?;
let input = MedpriceInput::with_default_candles(&candles);
let out = medprice(&input)?;

for v in out.values { println!("MEDPRICE: {}", v); }

API Reference

Input Methods
// From slices
MedpriceInput::from_slices(&[f64], &[f64], MedpriceParams) -> MedpriceInput

// From candles with explicit sources (high/low by default)
MedpriceInput::from_candles(&Candles, &str, &str, MedpriceParams) -> MedpriceInput

// Defaults: ("high", "low")
MedpriceInput::with_default_candles(&Candles) -> MedpriceInput
Parameters Structure
// MEDPRICE has no parameters
pub struct MedpriceParams;
Output Structure
pub struct MedpriceOutput {
    pub values: Vec<f64>,
}
Validation, NaNs & Errors
  • high and low must be non-empty and the same length; otherwise medprice returns MedpriceError::EmptyInputData or MedpriceError::DifferentLength.
  • If every high/low pair is NaN, medprice returns MedpriceError::AllValuesNaN.
  • Outputs are NaN until the first index where both high and low are finite; subsequent values compute (high + low) * 0.5.
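The leading-NaN rule above can be illustrated in plain Python (a sketch of the documented behavior, not the library's implementation):

```python
import math

def medprice_with_leading_nans(highs, lows):
    # NaN until the first index where both inputs are finite,
    # then (high + low) * 0.5 for each subsequent pair.
    out = []
    started = False
    for h, l in zip(highs, lows):
        if not started and math.isfinite(h) and math.isfinite(l):
            started = True
        out.append((h + l) * 0.5 if started else float("nan"))
    return out

nan = float("nan")
vals = medprice_with_leading_nans([nan, 12.0, 14.0], [nan, 10.0, 12.0])
# First pair is NaN/NaN -> NaN; remaining pairs -> 11.0, 13.0
```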

Python Bindings

Basic Usage
import numpy as np
from vector_ta import medprice

high = np.asarray([...], dtype=np.float64)
low = np.asarray([...], dtype=np.float64)

values = medprice(high, low, kernel=None)
Streaming
from vector_ta import MedpriceStream

stream = MedpriceStream()
for h, l in zip(high, low):
    v = stream.update(float(h), float(l))
    if v is not None:
        handle(v)
Batch
from vector_ta import medprice_batch

res = medprice_batch(high, low, dummy_range=None, kernel=None)
values = res['values']  # shape: (1, len(high))
CUDA Acceleration

CUDA helpers are available when the Python package is built with CUDA support. Inputs must be float32; outputs are device arrays (DLPack / __cuda_array_interface__ compatible).

import numpy as np
from vector_ta import medprice_cuda_dev, medprice_cuda_batch_dev, medprice_cuda_many_series_one_param_dev

high_f32 = np.asarray(load_high(), dtype=np.float32)
low_f32 = np.asarray(load_low(), dtype=np.float32)

dev = medprice_cuda_dev(high=high_f32, low=low_f32, device_id=0)
dev_batch = medprice_cuda_batch_dev(high=high_f32, low=low_f32, device_id=0)

# Many series (time-major), flattened for this API
high_tm = np.asarray(load_high_time_major_matrix(), dtype=np.float32)
low_tm = np.asarray(load_low_time_major_matrix(), dtype=np.float32)
rows, cols = high_tm.shape
high_tm = high_tm.ravel()
low_tm = low_tm.ravel()

dev_tm = medprice_cuda_many_series_one_param_dev(
    high_tm=high_tm,
    low_tm=low_tm,
    cols=cols,
    rows=rows,
    device_id=0,
)

JavaScript/WASM Bindings

Basic Usage
import { medprice_js } from 'vectorta-wasm';

const high = new Float64Array(loadHighs());
const low = new Float64Array(loadLows());
const values = medprice_js(high, low);
Memory-Efficient Operations
import { medprice_alloc, medprice_free, medprice_into, memory } from 'vectorta-wasm';

const len = high.length;
const highPtr = medprice_alloc(len);
const lowPtr = medprice_alloc(len);
const outPtr = medprice_alloc(len);

new Float64Array(memory.buffer, highPtr, len).set(high);
new Float64Array(memory.buffer, lowPtr, len).set(low);

medprice_into(highPtr, lowPtr, outPtr, len);
const values = new Float64Array(memory.buffer, outPtr, len).slice();

medprice_free(highPtr, len);
medprice_free(lowPtr, len);
medprice_free(outPtr, len);
Batch
import { medprice_batch } from 'vectorta-wasm';

const out = medprice_batch(high, low, { dummy_range: [0, 0, 0] });
console.log(out.rows, out.cols);

CUDA Bindings (Rust)

use vector_ta::cuda::CudaMedprice;

let cuda = CudaMedprice::new(0)?;

let high: Vec<f32> = vec![/* ... */];
let low: Vec<f32> = vec![/* ... */];

let out = cuda.medprice_batch_dev(&high, &low)?;
let _ = out;

Performance Analysis


Across sizes, Rust CPU runs about 1.02× slower than Tulip C in this benchmark.

AMD Ryzen 9 9950X (CPU) | NVIDIA RTX 4090 (GPU) | Benchmarks: 2026-01-05

Related Indicators