Installation

VectorTA is published as a Rust crate on crates.io (vector-ta) and as a Python package on PyPI (vector-ta), with optional WASM bindings and optional CUDA acceleration.

Latest published version: crates.io 0.1.8, PyPI 0.1.8.

Architecture at a glance

VectorTA ships one core Rust implementation, but the execution surface is broader than a simple crate plus bindings. Every indicator follows the same contract across native kernels, streaming updates, WebAssembly, Python bindings, and optional CUDA execution paths.

CPU + bindings surface
flowchart TB
    Contract["Shared indicator contract"]
    Public["Single + batch indicator APIs"]
    Streaming["Streaming API<br/>O(1) updates"]
    Dispatch["CPU dispatch layer"]
    Scalar["Scalar kernel"]
    Avx2["AVX2 kernel"]
    Avx512["AVX512 kernel"]
    Active["One active CPU kernel<br/>selected for this machine"]
    RustUse["Rust crate"]
    Python["Python bindings<br/>thin call-through to Rust"]
    Wasm["WASM bindings<br/>single • batch • streaming"]
    Simd["SIMD128 when available"]
    Contract --> Public
    Contract --> Streaming
    Public --> Dispatch
    Streaming --> Dispatch
    Dispatch --> Scalar
    Dispatch --> Avx2
    Dispatch --> Avx512
    Scalar --> Active
    Avx2 --> Active
    Avx512 --> Active
    Active --> RustUse
    Active --> Python
    Active --> Wasm
    Wasm --> Simd

Every indicator follows the same CPU-side shape: one shared contract, Rust single and batch entry points, O(1) streaming updates, and a dispatch step that picks the best available kernel for the current machine; the Rust, Python, and WASM bindings all sit on top of that selected kernel.
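The dispatch step can be pictured with a small standalone sketch. This is not VectorTA's actual code: the kernel bodies below are placeholders (all pointing at the scalar implementation), but the selection logic mirrors the idea of probing CPU features at runtime and picking the widest kernel the host supports.

```rust
// Placeholder "kernel": real code would have separate scalar/AVX2/AVX-512 bodies.
fn sum_scalar(xs: &[f64]) -> f64 {
    xs.iter().sum()
}

#[cfg(target_arch = "x86_64")]
fn select_kernel() -> (&'static str, fn(&[f64]) -> f64) {
    // Probe CPU features at runtime, widest first.
    if is_x86_feature_detected!("avx512f") {
        ("avx512", sum_scalar) // a real AVX-512 kernel would go here
    } else if is_x86_feature_detected!("avx2") {
        ("avx2", sum_scalar) // ...and a real AVX2 kernel here
    } else {
        ("scalar", sum_scalar)
    }
}

#[cfg(not(target_arch = "x86_64"))]
fn select_kernel() -> (&'static str, fn(&[f64]) -> f64) {
    // Non-x86 hosts fall back to the scalar kernel.
    ("scalar", sum_scalar)
}

fn main() {
    let (name, kernel) = select_kernel();
    println!("active kernel: {name}, sum = {}", kernel(&[1.0, 2.0, 3.0]));
}
```

The key property is that selection happens once per process (or per call site), so the per-element work always runs in the single kernel chosen for this machine.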

CUDA execution surface
flowchart TB
    ContractCuda["Shared indicator contract"]
    RustBinding["Rust crate + CUDA feature"]
    PythonBinding["Python CUDA bindings"]
    DispatchCuda["CUDA dispatch layer"]
    GpuExec["GPU kernel execution"]
    subgraph EntryStage["CUDA Entry APIs"]
        direction TB
        CudaApi["Batch sweep APIs<br/>Many-series / one-param APIs<br/>Device / ptr variants"]
    end
    subgraph KernelStage["Kernel Patterns"]
        direction TB
        KernelPatterns["1 series × many params<br/>many series × 1 param"]
    end
    subgraph IoStage["I/O API Forms"]
        direction TB
        IoModes["Host transfer + output<br/>ptr-in + ptr-out<br/>host transfer + ptr-out"]
    end
    style EntryStage fill:none,stroke:none
    style KernelStage fill:none,stroke:none
    style IoStage fill:none,stroke:none
    ContractCuda --> RustBinding
    ContractCuda --> PythonBinding
    RustBinding --> CudaApi
    PythonBinding --> CudaApi
    CudaApi --> DispatchCuda
    DispatchCuda --> KernelPatterns
    KernelPatterns --> IoModes
    IoModes --> GpuExec

The CUDA path keeps the shared indicator contract at the top, enters through the Rust crate or Python CUDA bindings, passes through the CUDA entry API variants and dispatch layer, selects the kernel pattern first, then the I/O API form, and only then launches GPU kernel execution.

On the CPU side, each indicator exposes scalar, AVX2, and AVX-512 native kernels for single-output and batch workloads, while streaming APIs provide stateful O(1) updates with SIMD-backed execution where applicable.
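To make "stateful O(1) updates" concrete, here is a self-contained sketch of a Wilder-style streaming RSI. It is not VectorTA's streaming API, but it shows the cost model that API follows: a fixed amount of state (two running averages and the previous close), so each new price costs constant time regardless of how much history has been consumed.

```rust
// Streaming RSI sketch: constant state, constant work per tick.
struct StreamingRsi {
    period: f64,
    prev_close: Option<f64>,
    avg_gain: f64,
    avg_loss: f64,
    seen: u32,
}

impl StreamingRsi {
    fn new(period: u32) -> Self {
        Self { period: period as f64, prev_close: None, avg_gain: 0.0, avg_loss: 0.0, seen: 0 }
    }

    /// Push one close; returns Some(rsi) once enough data has been seen.
    fn update(&mut self, close: f64) -> Option<f64> {
        let prev = self.prev_close.replace(close)?;
        let change = close - prev;
        let (gain, loss) = (change.max(0.0), (-change).max(0.0));
        self.seen += 1;
        if (self.seen as f64) <= self.period {
            // Seed phase: simple averages over the first `period` changes.
            self.avg_gain += gain / self.period;
            self.avg_loss += loss / self.period;
            if (self.seen as f64) < self.period {
                return None;
            }
        } else {
            // Wilder smoothing: the O(1) steady-state update.
            self.avg_gain = (self.avg_gain * (self.period - 1.0) + gain) / self.period;
            self.avg_loss = (self.avg_loss * (self.period - 1.0) + loss) / self.period;
        }
        if self.avg_loss == 0.0 {
            return Some(100.0);
        }
        let rs = self.avg_gain / self.avg_loss;
        Some(100.0 - 100.0 / (1.0 + rs))
    }
}

fn main() {
    let mut rsi = StreamingRsi::new(3);
    for close in [100.0, 101.0, 100.5, 102.0, 103.0] {
        if let Some(v) = rsi.update(close) {
            println!("rsi = {v:.2}");
        }
    }
}
```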

The WASM surface focuses on single, batch, and streaming workflows with SIMD128 variants where supported. Python bindings call the same Rust kernels directly and also expose CUDA kernel families with transfer-based and pointer-oriented APIs.

From source (optional)

You can use the published Rust crate or Python wheels without cloning the repository. Clone the repo only if you want to build WebAssembly locally, compile with CUDA, or contribute.

Source builds originate from VectorAlpha-dev/VectorTA. Before fetching the source, make sure you have a current Rust toolchain (rustup update), wasm-pack 0.12+ if you plan to build the WASM package (cargo install wasm-pack), and Node.js 18 or newer if that WASM output will be consumed from a bundler. If you plan to enable CUDA support, you will also need an NVIDIA driver with a compatible GPU and a CUDA Toolkit installation that provides nvcc.

Clone the repository
git clone https://github.com/VectorAlpha-dev/VectorTA.git
cd VectorTA

Rust crate (library)

VectorTA is published on crates.io as vector-ta. Using cargo add keeps the dependency declaration tidy:

Add the crate
cargo add vector-ta

The resulting Cargo.toml stanza looks like:

Cargo.toml
[dependencies]
vector-ta = "0.1.8"
# Optional features
# vector-ta = { version = "0.1.8", features = ["cuda"] }
# vector-ta = { version = "0.1.8", features = ["wasm"] }
# vector-ta = { version = "0.1.8", features = ["python"] }
# vector-ta = { version = "0.1.8", features = ["nightly-avx"] } # nightly Rust required

Note: the crate name contains a hyphen (vector-ta), but you import it in Rust as vector_ta.

Indicators expose typed inputs and builders. The example below computes RSI values from a price slice and prints the latest reading:

RSI example
use vector_ta::indicators::rsi::{rsi, RsiInput, RsiParams};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let closes = vec![100.0, 102.0, 101.3, 104.2, 103.5, 105.1, 104.8];
    let params = RsiParams { period: Some(14) };
    let input = RsiInput::from_slice(&closes, params);
    let output = rsi(&input)?;
    if let Some(last) = output.values.last() {
        println!("latest RSI {:.2}", last);
    }
    Ok(())
}

Most indicators also provide builders (for kernel selection) and batch helpers—see the corresponding indicator page for in-depth examples.
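As an illustration of what a batch helper does conceptually — one input series, a sweep of parameter values, one output row per parameter — here is a plain-Rust mock using a simple moving average. VectorTA's real batch APIs are optimized and shaped differently; this only shows the access pattern.

```rust
// Naive SMA over one series for a single period.
fn sma(series: &[f64], period: usize) -> Vec<f64> {
    series
        .windows(period)
        .map(|w| w.iter().sum::<f64>() / period as f64)
        .collect()
}

// "1 series × many params" sweep: one output row per period.
fn sweep(series: &[f64], periods: &[usize]) -> Vec<Vec<f64>> {
    periods.iter().map(|&p| sma(series, p)).collect()
}

fn main() {
    let closes = [1.0, 2.0, 3.0, 4.0, 5.0];
    for (p, row) in [2usize, 3].iter().zip(sweep(&closes, &[2, 3])) {
        println!("period {p}: {row:?}");
    }
}
```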

CUDA acceleration (optional)

GPU acceleration is available when you build with CUDA support (via the cuda feature).

Enable CUDA
[dependencies]
vector-ta = { version = "0.1.8", features = ["cuda"] }
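If your own application has an optional GPU mode, a common pattern is to declare a feature in your crate that forwards to vector-ta's cuda feature, then branch on it at compile time. The sketch below uses an illustrative feature name of your own choosing, not anything defined by VectorTA:

```rust
// Compile-time branch on a `cuda` cargo feature of your own crate
// (which would forward to vector-ta's `cuda` feature in Cargo.toml).
fn execution_path() -> &'static str {
    if cfg!(feature = "cuda") {
        "gpu" // GPU-accelerated entry points compiled in
    } else {
        "cpu" // scalar/AVX2/AVX-512 CPU kernels only
    }
}

fn main() {
    println!("running on the {} path", execution_path());
}
```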

Building the WebAssembly package

Enable the wasm feature and build with wasm-pack. The command below generates a consumable package in pkg/ that you can import locally or publish through your own npm workflow. The WASM bindings are not currently published as an official npm package.

Build the WASM package
rustup target add wasm32-unknown-unknown
RUSTFLAGS="-C target-feature=+simd128" \
wasm-pack build \
--target web \
--release \
--out-dir pkg \
-- --features wasm

Inside a browser bundle (Astro, Vite, Next.js, etc.), import the generated glue code and call the WASM-safe helper functions, for example rsi_js:

Browser usage
import init, { rsi_js } from './pkg/vector_ta.js';
await init();
const closes = new Float64Array([100, 102, 101.3, 104.2, 103.5, 105.1, 104.8]);
const values = rsi_js(closes, 14);
console.log(values); // Float64Array with RSI values

When targeting Node.js, swap --target web for --target bundler or --target nodejs so the emitted JS matches your runtime.

Python bindings (optional)

Prebuilt wheels are published on PyPI as vector-ta.

Install from PyPI
python -m pip install -U pip
python -m pip install vector-ta

Note: the PyPI name contains a hyphen (vector-ta), but you import it in Python as vector_ta.

If you need a custom build (for example to enable CUDA), build from source via maturin:

Build Python bindings locally
python -m venv .venv
# Linux/macOS
source .venv/bin/activate
# Windows (PowerShell)
.\.venv\Scripts\Activate.ps1
# Windows (cmd.exe)
.venv\Scripts\activate.bat
python -m pip install -U pip maturin numpy
maturin develop --release --features python