Installation

VectorTA is published as a Rust crate on crates.io (vector-ta) and as a Python package on PyPI (vector-ta), with optional WASM bindings and optional CUDA acceleration.

From source (optional)

You can use the published Rust crate or Python wheels without cloning the repository. Clone the repo only if you want to build WebAssembly locally, compile with CUDA, or contribute.

The source lives in the VectorAlpha-dev/VectorTA repository. Before fetching it, make sure the following tools are installed:

  • Rust toolchain (latest stable recommended; rustup update)
  • wasm-pack 0.12+ for WebAssembly builds (cargo install wasm-pack)
  • Node.js 18 or newer if you plan to consume the generated WASM package from a bundler
  • NVIDIA driver + compatible GPU if you enable CUDA support
  • CUDA Toolkit (nvcc) if you need to build PTX from CUDA sources
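As a quick sanity check, a generic shell loop (not a project script) can report which of these tools are already on your PATH:

```shell
# Report which prerequisites are installed; a missing tool only
# matters for the build steps that actually use it.
for tool in rustc cargo wasm-pack node nvcc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```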
Then fetch the source:

git clone https://github.com/VectorAlpha-dev/VectorTA.git
cd VectorTA

Rust crate (library consumers)

VectorTA is published on crates.io as vector-ta. Using cargo add keeps the dependency declaration tidy:

cargo add vector-ta

The resulting Cargo.toml stanza looks like:

[dependencies]
vector-ta = "0.1"

# Optional features
# vector-ta = { version = "0.1", features = ["cuda"] }
# vector-ta = { version = "0.1", features = ["wasm"] }
# vector-ta = { version = "0.1", features = ["python"] }
# vector-ta = { version = "0.1", features = ["nightly-avx"] } # nightly Rust required

Note: the crate name contains a hyphen (vector-ta), but you import it in Rust as vector_ta.

Indicators expose typed inputs and builders. The example below computes RSI values from a price slice and prints the latest reading:

use vector_ta::indicators::rsi::{rsi, RsiInput, RsiParams};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let closes = vec![100.0, 102.0, 101.3, 104.2, 103.5, 105.1, 104.8];

    let params = RsiParams { period: Some(14) };
    let input = RsiInput::from_slice(&closes, params);
    let output = rsi(&input)?;

    if let Some(last) = output.values.last() {
        println!("latest RSI {:.2}", last);
    }

    Ok(())
}

Most indicators also provide builders (for kernel selection) and batch helpers; see the corresponding indicator page for in-depth examples.

CUDA acceleration (optional)

GPU acceleration is available when you build with CUDA support (via the cuda feature).

[dependencies]
vector-ta = { version = "0.1", features = ["cuda"] }
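In code, the cuda feature acts as a compile-time switch. The sketch below shows the standard Cargo feature-gate pattern such a flag enables; the function name is hypothetical, not VectorTA's API:

```rust
// Hypothetical sketch of compile-time backend selection via the `cuda`
// Cargo feature. The name `backend_name` is illustrative only.

#[cfg(feature = "cuda")]
fn backend_name() -> &'static str {
    // Compiled only when built with `--features cuda`.
    "cuda"
}

#[cfg(not(feature = "cuda"))]
fn backend_name() -> &'static str {
    // CPU fallback used in default builds.
    "cpu"
}

fn main() {
    println!("compute backend: {}", backend_name());
}
```

Because the selection happens at compile time, a default build carries no CUDA dependencies at all.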

Building the WebAssembly package

Enable the wasm feature and build with wasm-pack. The command below generates a consumable package in pkg/ that you can publish to npm or import locally.

rustup target add wasm32-unknown-unknown

RUSTFLAGS="-C target-feature=+simd128" \
  wasm-pack build \
  --target web \
  --release \
  --out-dir pkg \
  -- --features wasm

Note that cargo flags such as --features go after the `--` separator; wasm-pack forwards everything past it to cargo.

Inside a browser bundle (Astro, Vite, Next.js, etc.) import the generated glue code and call the WASM-safe helper functions, for example rsi_js:

import init, { rsi_js } from './pkg/vector_ta.js';

await init();

const closes = new Float64Array([100, 102, 101.3, 104.2, 103.5, 105.1, 104.8]);
const values = rsi_js(closes, 14);

console.log(values); // Float64Array with RSI values

When targeting Node.js, swap --target web for --target bundler or --target nodejs so the emitted JavaScript matches your runtime.

Python bindings (optional)

Prebuilt wheels are published on PyPI as vector-ta.

python -m pip install -U pip
python -m pip install vector-ta

Note: the PyPI name contains a hyphen (vector-ta), but you import it in Python as vector_ta.

If you need a custom build (for example to enable CUDA), build from source via maturin:

python -m venv .venv

# Linux/macOS
source .venv/bin/activate

# Windows (PowerShell)
.\.venv\Scripts\Activate.ps1

# Windows (cmd.exe)
.venv\Scripts\activate.bat

python -m pip install -U pip maturin numpy
maturin develop --release --features python

Feature highlights

SIMD-first design

Auto-detects AVX2/AVX-512 with scalar fallbacks for older CPUs.
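The runtime-dispatch pattern behind this can be sketched with std's feature-detection macro. This is the general technique, not VectorTA's internal code, and the kernel names are illustrative:

```rust
// Generic runtime SIMD dispatch: probe CPU features, then route to the
// widest kernel available, falling back to scalar code otherwise.
fn pick_kernel() -> &'static str {
    #[cfg(target_arch = "x86_64")]
    {
        if is_x86_feature_detected!("avx512f") {
            return "avx512";
        }
        if is_x86_feature_detected!("avx2") {
            return "avx2";
        }
    }
    // Non-x86 targets, or older CPUs, use the scalar path.
    "scalar"
}

fn main() {
    println!("selected kernel: {}", pick_kernel());
}
```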

194 indicators

194 documented on this site (target: 300).

Memory safe pipelines

Memory safe with explicit NaN handling for reliable batch calculations.
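As a generic illustration of the warm-up convention common to TA libraries (not VectorTA's actual implementation): rolling indicators pad the first period - 1 slots with NaN instead of emitting partial values, so consumers can detect the warm-up region explicitly.

```rust
// Sketch of explicit NaN handling in a rolling calculation: the
// warm-up region is NaN-padded rather than filled with partial
// averages. Illustrative only.
fn sma(values: &[f64], period: usize) -> Vec<f64> {
    let mut out = vec![f64::NAN; values.len()];
    for i in (period - 1)..values.len() {
        let window = &values[i + 1 - period..=i];
        out[i] = window.iter().sum::<f64>() / period as f64;
    }
    out
}

fn main() {
    let out = sma(&[1.0, 2.0, 3.0, 4.0, 5.0], 3);
    println!("{:?}", out); // first two entries are NaN
}
```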

Cross-platform output

Reuse the same algorithms from native Rust, WebAssembly, and Python (PyO3) bindings.

Next steps