Stochastic Adaptive D

Parameters: k_length = 20 | d_smoothing = 9 | pre_smooth = 20 | attenuation = 2.0

Overview

Stochastic Adaptive D starts by smoothing the incoming high, low, and close series, then builds a stochastic D line on top of that pre-smoothed range. Alongside the standard D output it also computes an adaptive companion line that can move faster or slower depending on the local signal regime, plus a difference series that shows whether the adaptive response is leading or lagging the standard line.
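The pre-smoothing and standard-D stages can be illustrated with a short self-contained sketch. This is not VectorTA's kernel: the adaptive companion line and its attenuation law are internal to the library, so the sketch below (my own simplification) stops at the standard D line, using simple moving averages for both the pre-smooth and the D smoothing.

```python
import numpy as np

def sma(x, n):
    # Trailing simple moving average; warmup values stay NaN,
    # and any NaN inside the window invalidates that output.
    out = np.full(len(x), np.nan)
    for i in range(n - 1, len(x)):
        window = x[i - n + 1 : i + 1]
        if not np.isnan(window).any():
            out[i] = window.mean()
    return out

def standard_d(high, low, close, k_length=20, d_smoothing=9, pre_smooth=20):
    # Step 1: pre-smooth each input series.
    h = sma(np.asarray(high, float), pre_smooth)
    l = sma(np.asarray(low, float), pre_smooth)
    c = sma(np.asarray(close, float), pre_smooth)
    # Step 2: stochastic %K over the pre-smoothed range.
    n = len(c)
    k = np.full(n, np.nan)
    for i in range(k_length - 1, n):
        hw = h[i - k_length + 1 : i + 1]
        lw = l[i - k_length + 1 : i + 1]
        if np.isnan(hw).any() or np.isnan(lw).any() or np.isnan(c[i]):
            continue  # still inside the pre-smoothing warmup
        hh, ll = hw.max(), lw.min()
        rng = hh - ll
        k[i] = 100.0 * (c[i] - ll) / rng if rng > 0 else np.nan
    # Step 3: D = moving average of %K.
    return sma(k, d_smoothing)
```

Note how the warmup compounds: the first valid D value appears only after the pre-smooth, %K, and D windows have all filled with valid bars.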

In VectorTA the indicator supports direct HLC slices, candle input with an explicit source selection, a stateful stream for bar-by-bar updates, and full parameter sweeps across the three integer windows and the attenuation control. Invalid bars reset the rolling stream state, which keeps the pre-smoothing and stochastic windows consistent after gaps or bad ticks.

Defaults: `k_length = 20`, `d_smoothing = 9`, `pre_smooth = 20`, and `attenuation = 2.0`.

Implementation Examples

Compute the standard and adaptive D lines from direct slices or from candle data.

use vector_ta::indicators::stochastic_adaptive_d::{
    stochastic_adaptive_d,
    StochasticAdaptiveDInput,
    StochasticAdaptiveDParams,
};
use vector_ta::utilities::data_loader::{Candles, read_candles_from_csv};

let direct = stochastic_adaptive_d(&StochasticAdaptiveDInput::from_slices(
    &high,
    &low,
    &close,
    StochasticAdaptiveDParams {
        k_length: Some(20),
        d_smoothing: Some(9),
        pre_smooth: Some(20),
        attenuation: Some(2.0),
    },
))?;

let candles: Candles = read_candles_from_csv("data/sample.csv")?;
let from_candles = stochastic_adaptive_d(&StochasticAdaptiveDInput::from_candles(
    &candles,
    "close",
    StochasticAdaptiveDParams::default(),
))?;

println!("standard D = {:?}", direct.standard_d.last());
println!("adaptive D = {:?}", direct.adaptive_d.last());
println!("difference = {:?}", direct.difference.last());
println!("candle difference = {:?}", from_candles.difference.last());

API Reference

Input Methods
StochasticAdaptiveDInput::from_candles(&Candles, "close", StochasticAdaptiveDParams)
    -> StochasticAdaptiveDInput

StochasticAdaptiveDInput::from_slices(&[f64], &[f64], &[f64], StochasticAdaptiveDParams)
    -> StochasticAdaptiveDInput

StochasticAdaptiveDInput::with_default_candles(&Candles)
    -> StochasticAdaptiveDInput
Parameters Structure
pub struct StochasticAdaptiveDParams {
    pub k_length: Option<usize>,     // default 20
    pub d_smoothing: Option<usize>,  // default 9
    pub pre_smooth: Option<usize>,   // default 20
    pub attenuation: Option<f64>,    // default 2.0
}
Output Structure
pub struct StochasticAdaptiveDOutput {
    pub standard_d: Vec<f64>,
    pub adaptive_d: Vec<f64>,
    pub difference: Vec<f64>,
}
Validation, Warmup & NaNs
  • The input high, low, and close slices must have matching lengths and enough valid bars for all three resolved windows.
  • k_length, d_smoothing, and pre_smooth must be positive and fit the available data length.
  • attenuation must be a finite value; otherwise the indicator returns an InvalidAttenuation error.
  • Direct evaluation rejects empty or all-invalid inputs and reports an output-length mismatch if the buffers do not align.
  • The stream resets its pre-smoothing and stochastic state when a non-finite bar arrives and returns None until valid history rebuilds.
  • Batch mode validates all integer and float sweep ranges and rejects non-batch kernels.
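The stream-reset rule above can be sketched with a toy stream that flushes its rolling state whenever a non-finite bar arrives, then returns None until enough valid history rebuilds. This is a deliberate simplification (a single rolling buffer standing in for the real stream's separate pre-smoothing and stochastic state):

```python
import math

class ResettingStream:
    """Toy stream: emits a k_length-bar rolling close average,
    resetting on any non-finite input bar."""

    def __init__(self, k_length=20):
        self.k_length = k_length
        self.buf = []

    def update(self, high, low, close):
        # A non-finite value anywhere in the bar invalidates the rolling state.
        if not all(math.isfinite(v) for v in (high, low, close)):
            self.buf.clear()
            return None
        self.buf.append(close)
        if len(self.buf) < self.k_length:
            return None  # warmup: not enough valid history yet
        self.buf = self.buf[-self.k_length :]
        return sum(self.buf) / self.k_length
```

After a bad tick the stream pays the full warmup cost again, which is what keeps the windows consistent after gaps.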
Builder, Streaming & Batch APIs
StochasticAdaptiveDBuilder::new()
    .k_length(usize)
    .d_smoothing(usize)
    .pre_smooth(usize)
    .attenuation(f64)
    .source("close")
    .kernel(Kernel)
    .apply(&Candles)
    .apply_slices(&[f64], &[f64], &[f64])
    .into_stream()

StochasticAdaptiveDStream::try_new(params)
stream.update(high, low, close) -> Option<(f64, f64, f64)>

StochasticAdaptiveDBatchBuilder::new()
    .k_length_range((start, end, step))
    .d_smoothing_range((start, end, step))
    .pre_smooth_range((start, end, step))
    .attenuation_range((start, end, step))
    .kernel(Kernel)
    .apply(&Candles)
    .apply_slices(&[f64], &[f64], &[f64])

Python Bindings

Python exposes a direct function, a stateful stream class, and a batch helper that returns the three output matrices together with the resolved sweep axes.

from vector_ta import (
    stochastic_adaptive_d,
    stochastic_adaptive_d_batch,
    StochasticAdaptiveDStream,
)

standard_d, adaptive_d, difference = stochastic_adaptive_d(
    high,
    low,
    close,
    k_length=20,
    d_smoothing=9,
    pre_smooth=20,
    attenuation=2.0,
)

stream = StochasticAdaptiveDStream(
    k_length=20,
    d_smoothing=9,
    pre_smooth=20,
    attenuation=2.0,
)
point = stream.update(high[-1], low[-1], close[-1])

batch = stochastic_adaptive_d_batch(
    high,
    low,
    close,
    k_length_range=(16, 20, 2),
    d_smoothing_range=(5, 9, 2),
    pre_smooth_range=(10, 20, 5),
    attenuation_range=(1.5, 2.5, 0.5),
)

print(batch["standard_d"].shape)
print(batch["attenuations"])
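The batch output's sweep axes follow from expanding each (start, end, step) range. Assuming inclusive endpoints (an assumption about VectorTA's range semantics, not confirmed by this page), the grid for the example above can be worked out as follows:

```python
def expand(start, end, step):
    # Expand an inclusive (start, end, step) sweep range into its axis values.
    values, v = [], start
    while v <= end + 1e-12:  # small tolerance for float steps
        values.append(round(v, 10))
        v += step
    return values

k_axis = expand(16, 20, 2)        # [16, 18, 20]
d_axis = expand(5, 9, 2)          # [5, 7, 9]
pre_axis = expand(10, 20, 5)      # [10, 15, 20]
att_axis = expand(1.5, 2.5, 0.5)  # [1.5, 2.0, 2.5]

# One parameter combination per row of each output matrix.
n_rows = len(k_axis) * len(d_axis) * len(pre_axis) * len(att_axis)  # 81
```

Under these assumptions, each of the three returned matrices would have one row per combination, 81 in this example.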

JavaScript/WASM Bindings

The WASM layer provides a direct indicator call, host and raw-pointer into-buffer entry points, and a batch helper for parameter sweeps over the same four controls.

import init, {
  stochastic_adaptive_d,
  stochastic_adaptive_d_batch,
  stochastic_adaptive_d_alloc,
  stochastic_adaptive_d_free,
  stochastic_adaptive_d_into,
  stochastic_adaptive_d_into_host,
  stochastic_adaptive_d_batch_into,
} from "vector-ta-wasm";

await init();

const single = stochastic_adaptive_d(high, low, close, 20, 9, 20, 2.0);
console.log(single.standard_d, single.adaptive_d, single.difference);

const batch = stochastic_adaptive_d_batch(high, low, close, {
  k_length_range: [16, 20, 2],
  d_smoothing_range: [5, 9, 2],
  pre_smooth_range: [10, 20, 5],
  attenuation_range: [1.5, 2.5, 0.5],
});

const ptr = stochastic_adaptive_d_alloc(close.length * 3);
stochastic_adaptive_d_into_host(high, low, close, ptr, 20, 9, 20, 2.0);
stochastic_adaptive_d_free(ptr, close.length * 3);

CUDA Bindings (Rust)

Additional details for the CUDA bindings can be found inside the VectorTA repository.

Performance Analysis

Placeholder data (no recorded benchmarks for this indicator)

In the placeholder comparison, Rust CPU runs about 1.14× faster than Tulip C across input sizes.

AMD Ryzen 9 9950X (CPU) | NVIDIA RTX 4090 (GPU)

Related Indicators