Half Causal Estimator

Parameters: slots_per_day = (no default; inferred from candle timestamps or required for slice input) | data_period = 5 | filter_length = 20 | kernel_width = 20 | kernel_type = epanechnikov | confidence_adjust = symmetric | maximum_confidence_adjust = 100 | enable_expected_value = false | extra_smoothing = 0

Overview

Half Causal Estimator is built for repeating intraday structures where the same slot inside a session tends to behave similarly from day to day. It organizes the input into time-of-day slots, learns an expected value for each slot from prior sessions, and then combines the currently observed data with that learned template. The combined series is passed through a configurable kernel smoother, producing an estimate that reacts to current data without ignoring the slot-level rhythm of the instrument.
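The mechanics above can be sketched with a toy example. This is an illustrative reconstruction, not the library's actual algorithm: the fixed 50/50 blend weight, the trailing window, and the exact Epanechnikov weighting are all assumptions made for the sketch.

```python
def toy_half_causal(values, slots_per_day, data_period=5,
                    filter_length=20, kernel_width=20.0, blend=0.5):
    """Toy sketch of the slot-template-plus-smoothing idea (illustrative only)."""
    # 1. Expected value per time-of-day slot, averaged over prior sessions.
    expected = []
    for i, v in enumerate(values):
        slot = i % slots_per_day
        # Same slot on earlier days, limited to the last `data_period` days.
        history = [values[j] for j in range(slot, i, slots_per_day)][-data_period:]
        expected.append(sum(history) / len(history) if history else v)

    # 2. Blend the current observation with the learned slot template
    #    (the 50/50 split is an assumption, not the library's weighting).
    blended = [blend * v + (1.0 - blend) * e for v, e in zip(values, expected)]

    # 3. Trailing Epanechnikov-style smoother over up to `filter_length`
    #    samples: weight ~ 1 - (lag / kernel_width)^2, renormalized per window.
    estimate = []
    for i in range(len(blended)):
        window = blended[max(0, i - filter_length + 1): i + 1]
        n = len(window)
        weights = [max(0.0, 1.0 - ((n - 1 - k) / kernel_width) ** 2)
                   for k in range(n)]
        total = sum(weights)
        estimate.append(sum(w * x for w, x in zip(weights, window)) / total)
    return estimate, expected
```

With `slots_per_day = 3`, for example, sample 3 sees only the slot-0 history from the previous day, so its template value is that prior day's slot-0 observation.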

The candle path can derive the input stream from several session-aware sources such as volume, true-range percentage, close-to-close change percentage, or a synthetic test waveform. Slice input is supported as well, but it requires an explicit slot count because the estimator has to know how to wrap values into recurring intraday positions. When enabled, the expected-value branch is returned alongside the smoothed estimate so it can be inspected directly.
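The slot wrapping that the slice path needs `slots_per_day` for can be shown directly. Hypothetical values; `slots_per_day = 3` stands in for a realistic session length such as 78 five-minute bars:

```python
# A flat intraday slice wrapped into recurring time-of-day slots:
# position i belongs to slot i % slots_per_day.
slots_per_day = 3  # toy session length for illustration
series = [1200.0, 1180.0, 1260.0, 1315.0, 1400.0, 1388.0]

by_slot = {}
for i, value in enumerate(series):
    by_slot.setdefault(i % slots_per_day, []).append(value)

# Slot 0 now holds the first sample of each day, slot 1 the second, and so on,
# which is what lets the estimator learn a per-slot expected value.
```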

Defaults: Half Causal Estimator uses `data_period = 5`, `filter_length = 20`, `kernel_width = 20.0`, `kernel_type = "epanechnikov"`, `confidence_adjust = "symmetric"`, `maximum_confidence_adjust = 100.0`, `enable_expected_value = false`, `extra_smoothing = 0`, and candle source `volume`.

Implementation Examples

Run the estimator on a precomputed intraday series or derive it from candle volume data.

use vector_ta::indicators::half_causal_estimator::{
    half_causal_estimator,
    HalfCausalEstimatorInput,
    HalfCausalEstimatorParams,
};
use vector_ta::utilities::data_loader::{Candles, read_candles_from_csv};

let intraday_values = vec![1200.0, 1180.0, 1260.0, 1315.0, 1400.0, 1388.0];

let slice_output = half_causal_estimator(&HalfCausalEstimatorInput::from_slice(
    &intraday_values,
    HalfCausalEstimatorParams {
        slots_per_day: Some(3),
        data_period: Some(5),
        filter_length: Some(20),
        kernel_width: Some(20.0),
        kernel_type: None,
        confidence_adjust: None,
        maximum_confidence_adjust: Some(100.0),
        enable_expected_value: Some(true),
        extra_smoothing: Some(0),
    },
))?;

let candles: Candles = read_candles_from_csv("data/sample.csv")?;
let candle_output = half_causal_estimator(
    &HalfCausalEstimatorInput::with_default_candles(&candles),
)?;

println!("{:?}", slice_output.estimate.last());
println!("{:?}", slice_output.expected_value.last());
println!("{:?}", candle_output.estimate.last());

API Reference

Input Methods
// From candles
HalfCausalEstimatorInput::from_candles(&Candles, &str, HalfCausalEstimatorParams)
    -> HalfCausalEstimatorInput

// From a precomputed intraday slice
HalfCausalEstimatorInput::from_slice(&[f64], HalfCausalEstimatorParams)
    -> HalfCausalEstimatorInput

// From candles with default params and source "volume"
HalfCausalEstimatorInput::with_default_candles(&Candles)
    -> HalfCausalEstimatorInput

Parameters Structure
pub struct HalfCausalEstimatorParams {
    pub slots_per_day: Option<usize>,
    pub data_period: Option<usize>,
    pub filter_length: Option<usize>,
    pub kernel_width: Option<f64>,
    pub kernel_type: Option<HalfCausalEstimatorKernelType>,
    pub confidence_adjust: Option<HalfCausalEstimatorConfidenceAdjust>,
    pub maximum_confidence_adjust: Option<f64>,
    pub enable_expected_value: Option<bool>,
    pub extra_smoothing: Option<usize>,
}

Output Structure
pub struct HalfCausalEstimatorOutput {
    pub estimate: Vec<f64>,
    pub expected_value: Vec<f64>,
}

pub struct HalfCausalEstimatorBatchOutput {
    pub estimate_values: Vec<f64>,
    pub expected_value_values: Vec<f64>,
    pub combos: Vec<HalfCausalEstimatorParams>,
    pub rows: usize,
    pub cols: usize,
}

Validation, Warmup & NaNs
  • Slice input requires non-empty data and an explicit slots_per_day value of at least 2.
  • Candle input can infer slot placement from timestamps, but minute-granularity inference can fail with UnableToInferMinuteTimeframe or InvalidTimestamp.
  • filter_length must be at least 2, and kernel_width and maximum_confidence_adjust must both be finite.
  • Candle source must be one of volume, tr, change, or test.
  • Streaming warmup is slots_per_day + window_size, reported by get_warmup_period().
  • Batch mode validates axis ranges and rejects unsupported kernels through InvalidKernelForBatch.
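The slice-path checks above can be mirrored in a few lines. This sketch only restates the listed rules; the library itself returns the typed `HalfCausalEstimatorError` variants from the Error Handling section rather than generic exceptions.

```python
import math

def check_slice_params(data, slots_per_day=None, filter_length=20,
                       kernel_width=20.0, maximum_confidence_adjust=100.0):
    """Restate the documented slice-input validation rules (illustrative)."""
    if len(data) == 0:
        raise ValueError("EmptyInputData")
    if all(x != x for x in data):  # NaN is the only float not equal to itself
        raise ValueError("AllValuesNaN")
    if slots_per_day is None:
        raise ValueError("MissingSlotsPerDay")
    if slots_per_day < 2:
        raise ValueError(f"InvalidSlotsPerDay: {slots_per_day}")
    if filter_length < 2:
        raise ValueError(f"InvalidFilterLength: {filter_length}")
    if not math.isfinite(kernel_width):
        raise ValueError(f"InvalidKernelWidth: {kernel_width}")
    if not math.isfinite(maximum_confidence_adjust):
        raise ValueError(f"InvalidMaximumConfidenceAdjust: {maximum_confidence_adjust}")
```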

Builder, Streaming & Batch APIs
// Builder
HalfCausalEstimatorBuilder::new()
    .slots_per_day(usize)
    .data_period(usize)
    .filter_length(usize)
    .kernel_width(f64)
    .kernel_type(HalfCausalEstimatorKernelType)
    .confidence_adjust(HalfCausalEstimatorConfidenceAdjust)
    .maximum_confidence_adjust(f64)
    .enable_expected_value(bool)
    .extra_smoothing(usize)
    .source(String)
    .kernel(Kernel)
    .apply_slice(&[f64])

HalfCausalEstimatorBuilder::new()
    .apply_candles(&Candles)
    .into_stream()

// Stream
HalfCausalEstimatorStream::try_new(HalfCausalEstimatorParams)
HalfCausalEstimatorStream::update(f64) -> (Option<f64>, Option<f64>)
HalfCausalEstimatorStream::get_warmup_period() -> usize

// Batch
HalfCausalEstimatorBatchBuilder::new()
    .slots_per_day(usize)
    .data_period_range(usize, usize, usize)
    .filter_length_range(usize, usize, usize)
    .kernel_width_range(f64, f64, f64)
    .maximum_confidence_adjust_range(f64, f64, f64)
    .extra_smoothing_range(usize, usize, usize)
    .kernel_type(HalfCausalEstimatorKernelType)
    .confidence_adjust(HalfCausalEstimatorConfidenceAdjust)
    .enable_expected_value(bool)
    .apply_slice(&[f64])
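The grid size of a batch sweep follows from the range axes. A minimal sketch, assuming each `(start, stop, step)` axis expands inclusively of its endpoint (the exact endpoint semantics are not stated above):

```python
def expand_axis(start, stop, step):
    """Expand one (start, stop, step) axis, assuming an inclusive endpoint."""
    out, v = [], start
    while v <= stop + 1e-9:  # epsilon guards float accumulation
        out.append(v)
        v += step
    return out

# Axes matching the batch examples in this document: two values each.
ranges = [(5, 10, 5), (12, 24, 12), (10.0, 20.0, 10.0),
          (50.0, 100.0, 50.0), (0, 2, 2)]
axes = [expand_axis(*r) for r in ranges]

n_combos = 1
for axis in axes:
    n_combos *= len(axis)
# rows = number of combos and cols = input length, so the flattened
# estimate_values vector holds rows * cols entries.
```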

Error Handling
pub enum HalfCausalEstimatorError {
    EmptyInputData,
    AllValuesNaN,
    MissingSlotsPerDay,
    InvalidSlotsPerDay { slots_per_day: usize },
    InvalidDataPeriod { data_period: usize },
    InvalidFilterLength { filter_length: usize },
    InvalidKernelWidth { kernel_width: f64 },
    InvalidMaximumConfidenceAdjust { maximum_confidence_adjust: f64 },
    InvalidSource { source_name: String },
    UnableToInferMinuteTimeframe,
    InvalidTimestamp { timestamp: i64 },
    OutputLengthMismatch { expected: usize, estimate_got: usize, expected_value_got: usize },
    InvalidRange { start: String, end: String, step: String },
    InvalidKernelForBatch(Kernel),
}

Python Bindings

Python exposes a scalar estimator, a streaming class, and a batch sweep. The scalar path returns a dictionary with `estimate` and `expected_value` NumPy arrays. Streaming returns a tuple of optional floats. Batch returns reshaped estimate and expected-value matrices plus the tested data-period, filter-length, kernel-width, confidence-cap, and extra-smoothing axes.

import numpy as np
from vector_ta import (
    half_causal_estimator,
    half_causal_estimator_batch,
    HalfCausalEstimatorStream,
)

data = np.asarray(session_values, dtype=np.float64)

result = half_causal_estimator(
    data,
    slots_per_day=78,
    data_period=5,
    filter_length=20,
    kernel_width=20.0,
    kernel_type="epanechnikov",
    confidence_adjust="symmetric",
    maximum_confidence_adjust=100.0,
    enable_expected_value=True,
    extra_smoothing=0,
    kernel="auto",
)

stream = HalfCausalEstimatorStream(
    slots_per_day=78,
    data_period=5,
    filter_length=20,
    kernel_width=20.0,
    kernel_type="epanechnikov",
    confidence_adjust="symmetric",
    maximum_confidence_adjust=100.0,
    enable_expected_value=True,
    extra_smoothing=0,
)
print(stream.update(float(data[-1])))
print(stream.warmup_period)

batch = half_causal_estimator_batch(
    data,
    slots_per_day=78,
    data_period_range=(5, 10, 5),
    filter_length_range=(12, 24, 12),
    kernel_width_range=(10.0, 20.0, 10.0),
    maximum_confidence_adjust_range=(50.0, 100.0, 50.0),
    extra_smoothing_range=(0, 2, 2),
    kernel_type="epanechnikov",
    confidence_adjust="symmetric",
    enable_expected_value=True,
    kernel="auto",
)

print(batch["estimate"].shape)
print(batch["kernel_widths"])
print(batch["rows"], batch["cols"])

JavaScript/WASM Bindings

The WASM layer exposes a scalar function, batch function, and raw allocation helpers. The scalar binding accepts one data array plus a config object and returns `estimate` and `expected_value`. The batch binding returns those flattened matrices together with the tested parameter combos and the result dimensions.

import init, {
  half_causal_estimator_js,
  half_causal_estimator_batch_js,
} from "@vectoralpha/vector_ta";

await init();

const single = half_causal_estimator_js(data, {
  slots_per_day: 78,
  data_period: 5,
  filter_length: 20,
  kernel_width: 20.0,
  kernel_type: "epanechnikov",
  confidence_adjust: "symmetric",
  maximum_confidence_adjust: 100.0,
  enable_expected_value: true,
  extra_smoothing: 0,
});

console.log(single.estimate);
console.log(single.expected_value);

const batch = half_causal_estimator_batch_js(data, {
  slots_per_day: 78,
  data_period_range: [5, 10, 5],
  filter_length_range: [12, 24, 12],
  kernel_width_range: [10.0, 20.0, 10.0],
  maximum_confidence_adjust_range: [50.0, 100.0, 50.0],
  extra_smoothing_range: [0, 2, 2],
  kernel_type: "epanechnikov",
  confidence_adjust: "symmetric",
  enable_expected_value: true,
});

console.log(batch.rows, batch.cols);
console.log(batch.combos);

CUDA Bindings (Rust)

Additional details for the CUDA bindings can be found inside the VectorTA repository.

Performance Analysis

Placeholder data (no recorded benchmarks for this indicator).

AMD Ryzen 9 9950X (CPU) | NVIDIA RTX 4090 (GPU)

Related Indicators