Anti-Volume Stop Loss (AVSL)

Parameters: fast_period = 12 | slow_period = 26 | multiplier = 2

Overview

AVSL builds an adaptive trailing stop that responds to the relationship between price movement and volume activity: it widens when volume contradicts price action and tightens when the two confirm each other. The indicator measures the divergence between volume-weighted moving averages and simple moving averages over the fast and slow periods, then combines that divergence with a volume ratio to produce a dynamic pressure coefficient. When high volume accompanies price moves in the trend direction, AVSL pulls the stop closer as confirmation strengthens; conversely, when volume fails to support price advances or spikes during pullbacks, the stop expands to avoid premature exits. The multiplier parameter controls sensitivity: higher values produce more responsive stops that react quickly to volume anomalies. Traders use AVSL to distinguish healthy low-volume retracements from genuine reversals marked by volume expansion, letting positions breathe through normal consolidations while protecting capital when institutional distribution begins.
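The exact internal formula is implementation-specific, but the mechanism described above can be sketched in a few lines of NumPy. Everything here (the divergence sum, the pressure term, the final subtraction from the low) is a hypothetical simplification for intuition, not the library's actual two-stage computation, whose warmup is longer:

```python
import numpy as np

def rolling_mean(x, n):
    # Simple moving average via cumulative sums; first n-1 slots stay NaN.
    out = np.full_like(x, np.nan, dtype=float)
    c = np.cumsum(np.insert(x, 0, 0.0))
    out[n - 1:] = (c[n:] - c[:-n]) / n
    return out

def avsl_sketch(close, low, volume, fast=12, slow=26, multiplier=2.0):
    # VWMA vs SMA divergence: nonzero when volume-weighted price
    # disagrees with plain average price, i.e. volume contradicts price.
    vwma_fast = rolling_mean(close * volume, fast) / rolling_mean(volume, fast)
    vwma_slow = rolling_mean(close * volume, slow) / rolling_mean(volume, slow)
    sma_fast = rolling_mean(close, fast)
    sma_slow = rolling_mean(close, slow)
    divergence = (vwma_fast - sma_fast) + (vwma_slow - sma_slow)
    # Pressure coefficient: grows with the (relative) divergence magnitude.
    pressure = 1.0 + np.abs(divergence) / np.where(sma_slow != 0, sma_slow, 1.0)
    # Stop below the low, widened by multiplier * pressure * divergence.
    return low - multiplier * pressure * np.abs(divergence)
```

With perfectly uniform volume the divergence vanishes and the sketch's stop hugs the lows; volume spikes that disagree with price push the stop further away, which is the widening behavior the overview describes.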

Implementation Examples

Compute AVSL from close/low/volume arrays or candle collections:

use vector_ta::indicators::avsl::{avsl, AvslInput, AvslParams};
use vector_ta::utilities::data_loader::{Candles, read_candles_from_csv};

// Using raw slices (need at least slow_period values)
let close: Vec<f64> = vec![/* ... */];
let low: Vec<f64> = vec![/* ... */];
let vol: Vec<f64> = vec![/* ... */];

let params = AvslParams { fast_period: Some(12), slow_period: Some(26), multiplier: Some(2.0) };
let input  = AvslInput::from_slices(&close, &low, &vol, params);
let output = avsl(&input)?;

// Using Candles with defaults (fast=12, slow=26, multiplier=2.0; sources: close/low)
let candles: Candles = read_candles_from_csv("data/sample.csv")?;
let input = AvslInput::with_default_candles(&candles);
let output = avsl(&input)?;

for v in output.values { println!("AVSL: {}", v); }

API Reference

Input Methods
// From raw slices (close, low, volume)
AvslInput::from_slices(&[f64], &[f64], &[f64], AvslParams) -> AvslInput

// From candles with explicit sources
AvslInput::from_candles(&Candles, &str /*close_source*/, &str /*low_source*/, AvslParams) -> AvslInput

// From candles with defaults (close/low; 12/26, 2.0)
AvslInput::with_default_candles(&Candles) -> AvslInput
Parameters Structure
pub struct AvslParams {
    pub fast_period: Option<usize>, // Default: 12
    pub slow_period: Option<usize>, // Default: 26
    pub multiplier: Option<f64>,    // Default: 2.0 (must be > 0)
}
Output Structure
pub struct AvslOutput {
    pub values: Vec<f64>, // AVSL stop values
}
Validation, Warmup & NaNs
  • close.len == low.len == volume.len; otherwise AvslError::DataLengthMismatch.
  • fast_period > 0, slow_period > 0, both ≤ data length; else AvslError::InvalidPeriod.
  • multiplier > 0 and finite; else AvslError::InvalidMultiplier.
  • First valid index is the max of first finite across all three inputs; if none: AvslError::AllValuesNaN.
  • Need at least slow_period valid points after first valid; else AvslError::NotEnoughValidData.
  • Warmup: output is NaN through first + 2*slow_period - 2 due to internal pre‑stage and final slow SMA.
  • Streaming: AvslStream::update returns None during warmup; non‑NaN values thereafter.
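The warmup rule above can be turned into a small helper for locating the first usable output value; `first_avsl_output_index` is a hypothetical name illustrating the documented arithmetic:

```python
import math

def first_avsl_output_index(series, slow_period=26):
    """First index with a non-NaN AVSL value, per the documented warmup rule:
    output is NaN through `first + 2*slow_period - 2`, where `first` is the
    index of the first finite input value."""
    first = next((i for i, v in enumerate(series) if math.isfinite(v)), None)
    if first is None:
        return None  # corresponds to AvslError::AllValuesNaN
    return first + 2 * slow_period - 1  # one past the last NaN slot
```

For clean data with the default `slow_period = 26`, the first non-NaN output lands at index 51; leading NaNs shift it right by the same amount.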
Error Handling
use vector_ta::indicators::avsl::{avsl, AvslError};

match avsl(&input) {
    Ok(out) => process(out.values),
    Err(AvslError::EmptyInputData) => eprintln!("Input is empty"),
    Err(AvslError::AllValuesNaN) => eprintln!("All values are NaN"),
    Err(AvslError::InvalidPeriod { period, data_len }) =>
        eprintln!("Invalid period {} for data length {}", period, data_len),
    Err(AvslError::NotEnoughValidData { needed, valid }) =>
        eprintln!("Need {} valid points, only {}", needed, valid),
    Err(AvslError::DataLengthMismatch { close_len, low_len, volume_len }) =>
        eprintln!("Length mismatch: close={}, low={}, volume={}", close_len, low_len, volume_len),
    Err(AvslError::InvalidMultiplier { multiplier }) =>
        eprintln!("Invalid multiplier: {}", multiplier),
    Err(AvslError::ComputationError(msg)) => eprintln!("Compute error: {}", msg),
}

Python Bindings

Basic Usage

Calculate AVSL from NumPy arrays (defaults: 12/26, multiplier 2.0):

import numpy as np
from vector_ta import avsl

close = np.array([100.5, 101.2, 100.9, 102.0, 101.5])
low   = np.array([ 99.9, 100.7, 100.2, 101.1, 100.8])
vol   = np.array([1.0e6, 1.2e6, 0.9e6, 1.1e6, 1.0e6])

# Defaults
values = avsl(close, low, vol)

# Custom params and kernel
values = avsl(close, low, vol, fast_period=12, slow_period=26, multiplier=2.0, kernel="auto")

print(values)
Streaming
from vector_ta import AvslStream

stream = AvslStream(fast_period=12, slow_period=26, multiplier=2.0)
for c, l, v in feed:
    val = stream.update(c, l, v)
    if val is not None:
        print("AVSL:", val)
Batch Parameter Sweep
import numpy as np
from vector_ta import avsl_batch

close = np.array([...]); low = np.array([...]); vol = np.array([...])

results = avsl_batch(
    close, low, vol,
    fast_range=(8, 16, 4),
    slow_range=(20, 32, 4),
    mult_range=(1.5, 3.0, 0.5),
    kernel="auto"
)

print(results["values"].shape)  # (rows, len)
print(results["fast_periods"])
print(results["slow_periods"])
print(results["multipliers"])
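The ranges are `(start, stop, step)` triples, so the number of result rows is the product of the three grid sizes. A quick sketch, assuming inclusive endpoints as the values in the WASM batch example suggest:

```python
def sweep_values(start, stop, step):
    # Expand an inclusive (start, stop, step) range into its grid points
    # (assumed inclusive of `stop` when it lands exactly on the grid).
    vals = []
    v = start
    while v <= stop + 1e-12:
        vals.append(round(v, 10))  # round to tame float accumulation
        v += step
    return vals

fast = sweep_values(8, 16, 4)       # [8, 12, 16]
slow = sweep_values(20, 32, 4)      # [20, 24, 28, 32]
mult = sweep_values(1.5, 3.0, 0.5)  # [1.5, 2.0, 2.5, 3.0]
rows = len(fast) * len(slow) * len(mult)  # 48 parameter combinations
```

So `results["values"]` for these ranges would have 48 rows, one per `(fast, slow, multiplier)` combination.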
CUDA Acceleration

CUDA helpers are available when the Python package is built with CUDA support. Inputs must be float32; outputs are device arrays (DLPack / __cuda_array_interface__ compatible).

import numpy as np
from vector_ta import avsl_cuda_batch_dev, avsl_cuda_many_series_one_param_dev

# One series (float32)
close_f32 = np.asarray(load_close(), dtype=np.float32)
low_f32 = np.asarray(load_low(), dtype=np.float32)
volume_f32 = np.asarray(load_volume(), dtype=np.float32)

dev = avsl_cuda_batch_dev(
    close_f32=close_f32,
    low_f32=low_f32,
    volume_f32=volume_f32,
    fast_range=(2, 20, 2),
    slow_range=(2, 20, 2),
    mult_range=(0.5, 2.0, 0.5),
    device_id=0,
)

# Many series (time-major)
close_tm_f32 = np.asarray(load_close_time_major_matrix(), dtype=np.float32)
rows, cols = close_tm_f32.shape
close_tm_f32 = close_tm_f32.ravel()
low_tm_f32 = np.asarray(load_low_time_major_matrix(), dtype=np.float32)
low_tm_f32 = low_tm_f32.ravel()
volume_tm_f32 = np.asarray(load_volume_time_major_matrix(), dtype=np.float32)
volume_tm_f32 = volume_tm_f32.ravel()

dev_tm = avsl_cuda_many_series_one_param_dev(
    close_tm_f32=close_tm_f32,
    low_tm_f32=low_tm_f32,
    volume_tm_f32=volume_tm_f32,
    cols=cols,
    rows=rows,
    fast_period=12,
    slow_period=26,
    multiplier=1.0,
    device_id=0,
)
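The flattened time-major layout expected above keeps all series for one timestamp contiguous, so (assuming it comes from a row-major `ravel` of a `(rows, cols)` matrix, as in the snippet) series `s` at time `t` sits at flat index `t*cols + s`:

```python
import numpy as np

# Three series of five bars, time-major: each row is one timestamp across series.
rows, cols = 5, 3  # rows = bars per series, cols = number of series
tm = np.arange(rows * cols, dtype=np.float32).reshape(rows, cols)
flat = tm.ravel()  # row-major flatten => time-major layout

# Element for series s at time t lives at index t*cols + s (assumed layout).
t, s = 4, 2
assert flat[t * cols + s] == tm[t, s]
```

Mixing this up with series-major (column-major) flattening silently scrambles the per-series sequences, so it is worth asserting the layout once before launching the kernel.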

JavaScript/WASM Bindings

Basic Usage

Compute AVSL from close/low/volume arrays:

import { avsl_js } from 'vectorta-wasm';

const close = new Float64Array([100.5, 101.2, 100.9, 102.0, 101.5]);
const low   = new Float64Array([ 99.9, 100.7, 100.2, 101.1, 100.8]);
const vol   = new Float64Array([1.0e6, 1.2e6, 0.9e6, 1.1e6, 1.0e6]);

const values = avsl_js(close, low, vol, 12, 26, 2.0);
console.log('AVSL:', values);
Memory-Efficient Operations

Use zero‑copy buffers for large datasets:

import { avsl_alloc, avsl_free, avsl_into, memory } from 'vectorta-wasm';

const len = close.length;
const cPtr = avsl_alloc(len);
const lPtr = avsl_alloc(len);
const vPtr = avsl_alloc(len);
const oPtr = avsl_alloc(len);

new Float64Array(memory.buffer, cPtr, len).set(close);
new Float64Array(memory.buffer, lPtr, len).set(low);
new Float64Array(memory.buffer, vPtr, len).set(vol);

// Args: close_ptr, low_ptr, vol_ptr, out_ptr, len, fast, slow, mult
avsl_into(cPtr, lPtr, vPtr, oPtr, len, 12, 26, 2.0);
const out = new Float64Array(memory.buffer, oPtr, len).slice();

avsl_free(cPtr, len); avsl_free(lPtr, len); avsl_free(vPtr, len); avsl_free(oPtr, len);
Streaming Context
import { AvslContext, avsl_alloc, avsl_free, memory } from 'vectorta-wasm';

const ctx = new AvslContext(12, 26, 2.0);
const warmup = ctx.get_warmup_period(); // bars before the first non-NaN output

const len = 1024; // number of new points per call
const cPtr = avsl_alloc(len);
const lPtr = avsl_alloc(len);
const vPtr = avsl_alloc(len);
const oPtr = avsl_alloc(len);

// Fill input windows and compute into preallocated output
ctx.update_into(cPtr, lPtr, vPtr, oPtr, len);
const lastChunk = new Float64Array(memory.buffer, oPtr, len).slice();

avsl_free(cPtr, len); avsl_free(lPtr, len); avsl_free(vPtr, len); avsl_free(oPtr, len);
Batch Processing

Sweep fast/slow/multiplier ranges in one call (flattened row‑major output):

import { avsl_batch_into, avsl_alloc, avsl_free, memory } from 'vectorta-wasm';

const len = close.length;
const cPtr = avsl_alloc(len), lPtr = avsl_alloc(len), vPtr = avsl_alloc(len);
new Float64Array(memory.buffer, cPtr, len).set(close);
new Float64Array(memory.buffer, lPtr, len).set(low);
new Float64Array(memory.buffer, vPtr, len).set(vol);

// Allocate out for rows * len; get rows from return value
const fast = [8, 12, 16], slow = [20, 24, 28, 32]; const mult = [1.5, 2.0, 2.5, 3.0];
const rows = fast.length * slow.length * mult.length;
const oPtr = avsl_alloc(rows * len);

const gotRows = avsl_batch_into(
  cPtr, lPtr, vPtr, oPtr, len,
  8, 16, 4,
  20, 32, 4,
  1.5, 3.0, 0.5
);

const flat = new Float64Array(memory.buffer, oPtr, rows * len).slice();
// Row 0..rows-1 each has len values; reshape as needed

avsl_free(cPtr, len); avsl_free(lPtr, len); avsl_free(vPtr, len); avsl_free(oPtr, rows * len);

CUDA Bindings (Rust)

use vector_ta::cuda::CudaAvsl;
use vector_ta::indicators::avsl::AvslBatchRange;

let cuda = CudaAvsl::new(0)?;

let close_f32: Vec<f32> = vec![/* ... */];
let low_f32: Vec<f32> = vec![/* ... */];
let volume_f32: Vec<f32> = vec![/* ... */];
let sweep = AvslBatchRange::default();

let out = cuda.avsl_batch_dev(&close_f32, &low_f32, &volume_f32, &sweep)?;
let _ = out;

Performance Analysis


AMD Ryzen 9 9950X (CPU) | NVIDIA RTX 4090 (GPU) | Benchmarks: 2026-01-08

Related Indicators