Installation
VectorTA is published as a Rust crate on crates.io (vector-ta) and as a Python package on PyPI (vector-ta), with optional WASM bindings and optional CUDA acceleration.
Latest published version: crates.io 0.1.8, PyPI 0.1.8.
Architecture at a glance
VectorTA ships one core Rust implementation, but the execution surface is broader than a simple crate plus bindings. Every indicator follows the same contract across native kernels, streaming updates, WebAssembly, Python bindings, and optional CUDA execution paths.
Every indicator follows the same CPU-side shape: one shared contract, single-value and batch entry points in Rust, O(1) streaming updates, a dispatch step that chooses the best available kernel for the current machine, and Rust, Python, and WASM bindings layered on top.
The CUDA path keeps the shared indicator contract at the top, enters through the Rust crate or Python CUDA bindings, passes through the CUDA entry API variants and dispatch layer, selects the kernel pattern first, then the I/O API form, and only then launches GPU kernel execution.
On the CPU side, each indicator exposes scalar, AVX2, and AVX-512 native kernels for single-output and batch workloads, while streaming APIs provide stateful O(1) updates with SIMD-backed execution where applicable.
The WASM surface focuses on single, batch, and streaming workflows with SIMD128 variants where supported. Python bindings call the same Rust kernels directly and also expose CUDA kernel families with transfer-based and pointer-oriented APIs.
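To make the "stateful O(1) updates" contract concrete, here is a minimal plain-Python sketch of a streaming Wilder RSI: each tick does constant work regardless of how much history has been consumed. This is illustrative only; VectorTA's actual streaming type and method names may differ.

```python
class RsiStream:
    """Illustrative streaming RSI state machine with O(1) work per update."""

    def __init__(self, period=14):
        self.period = period
        self.prev = None      # last price seen
        self.avg_gain = 0.0   # Wilder-smoothed average gain
        self.avg_loss = 0.0   # Wilder-smoothed average loss
        self.count = 0        # price changes consumed so far

    def update(self, price):
        """Consume one price; returns the RSI, or None during warmup."""
        if self.prev is None:
            self.prev = price
            return None
        change = price - self.prev
        self.prev = price
        gain = max(change, 0.0)
        loss = max(-change, 0.0)
        self.count += 1
        if self.count <= self.period:
            # Simple running mean over the first `period` changes.
            self.avg_gain += (gain - self.avg_gain) / self.count
            self.avg_loss += (loss - self.avg_loss) / self.count
            if self.count < self.period:
                return None
        else:
            # Wilder recursive smoothing: constant work per tick.
            self.avg_gain = (self.avg_gain * (self.period - 1) + gain) / self.period
            self.avg_loss = (self.avg_loss * (self.period - 1) + loss) / self.period
        if self.avg_loss == 0.0:
            return 100.0
        rs = self.avg_gain / self.avg_loss
        return 100.0 - 100.0 / (1.0 + rs)

stream = RsiStream(period=3)
for p in [100.0, 102.0, 101.3, 104.2]:
    print(stream.update(p))
```

The key property is that the state is a handful of scalars, so memory and per-tick cost stay constant no matter how long the stream runs.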
From source (optional)
You can use the published Rust crate or Python wheels without cloning the repository. Clone the repo only if you want to build WebAssembly locally, compile with CUDA, or contribute.
Source builds originate from VectorAlpha-dev/VectorTA.
Before fetching the source, make sure you have a current Rust toolchain
(rustup update), wasm-pack 0.12+ if you plan to build the
WASM package (cargo install wasm-pack), and Node.js 18 or newer if that
WASM output will be consumed from a bundler. If you plan to enable CUDA support, you
will also need an NVIDIA driver with a compatible GPU and a CUDA Toolkit installation
that provides nvcc.
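A quick way to see which of the tools above are already on PATH before cloning is a small script like the following (the tool names mirror the prerequisites listed above; the annotations are informal reminders, not version checks):

```python
import shutil

# Executables named in the prerequisites above, with a reminder of why each is needed.
REQUIRED = {
    "rustc": "current Rust toolchain (rustup update)",
    "wasm-pack": "0.12+ needed only for WASM builds",
    "node": "18+ needed only if bundling the WASM output",
    "nvcc": "needed only for CUDA-enabled builds",
}

def missing_tools():
    """Return the subset of REQUIRED whose executables are not on PATH."""
    return {tool: why for tool, why in REQUIRED.items() if shutil.which(tool) is None}

for tool, why in missing_tools().items():
    print(f"missing: {tool} ({why})")
```

Note that this only checks presence, not versions; wasm-pack and Node version minimums still need to be verified manually.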
```sh
git clone https://github.com/VectorAlpha-dev/VectorTA.git
cd VectorTA
```
Rust crate (library)
VectorTA is published on crates.io as vector-ta. Using cargo add keeps the dependency declaration tidy:
```sh
cargo add vector-ta
```
The resulting Cargo.toml stanza looks like:
```toml
[dependencies]
vector-ta = "0.1.8"

# Optional features
# vector-ta = { version = "0.1.8", features = ["cuda"] }
# vector-ta = { version = "0.1.8", features = ["wasm"] }
# vector-ta = { version = "0.1.8", features = ["python"] }
# vector-ta = { version = "0.1.8", features = ["nightly-avx"] } # nightly Rust required
```
Note: the crate name contains a hyphen (vector-ta), but you import it in Rust as vector_ta.
Indicators expose typed inputs and builders. The example below computes RSI values from a price slice and prints the latest reading:
```rust
use vector_ta::indicators::rsi::{rsi, RsiInput, RsiParams};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let closes = vec![100.0, 102.0, 101.3, 104.2, 103.5, 105.1, 104.8];
    let params = RsiParams { period: Some(14) };
    let input = RsiInput::from_slice(&closes, params);
    let output = rsi(&input)?;
    if let Some(last) = output.values.last() {
        println!("latest RSI {:.2}", last);
    }
    Ok(())
}
```
Most indicators also provide builders (for kernel selection) and batch helpers; see the corresponding indicator page for in-depth examples.
CUDA acceleration (optional)
GPU acceleration is available when you build with CUDA support (via the cuda feature).
```toml
[dependencies]
vector-ta = { version = "0.1.8", features = ["cuda"] }
```
Building the WebAssembly package
Enable the wasm feature and build with wasm-pack. The command below generates a consumable package in
pkg/ that you can import locally or publish through your own npm workflow. There is not currently
an official published npm package for the WASM bindings.
```sh
rustup target add wasm32-unknown-unknown

RUSTFLAGS="-C target-feature=+simd128" \
  wasm-pack build \
  --target web \
  --release \
  --features wasm \
  --out-dir pkg
```
Inside a browser bundle (Astro, Vite, Next.js, etc.) import the generated glue code and call the WASM-safe helper
functions, for example rsi_js:
```js
import init, { rsi_js } from './pkg/vector_ta.js';

await init();

const closes = new Float64Array([100, 102, 101.3, 104.2, 103.5, 105.1, 104.8]);
const values = rsi_js(closes, 14);
console.log(values); // Float64Array with RSI values
```
When targeting Node.js, swap --target web for --target bundler or --target nodejs so the emitted JS matches your runtime.
Python bindings (optional)
Prebuilt wheels are published on PyPI as vector-ta.
```sh
python -m pip install -U pip
python -m pip install vector-ta
```
Note: the PyPI name contains a hyphen (vector-ta), but you import it in Python as vector_ta.
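After installing the wheel, one way to sanity-check its RSI output is against a plain-Python Wilder reference like the sketch below. The reference does not import vector_ta itself; how to invoke the bindings for the comparison is covered on the indicator pages.

```python
def rsi_reference(closes, period=14):
    """Plain-Python Wilder RSI over a full series; NaN during warmup."""
    n = len(closes)
    out = [float("nan")] * n
    if n <= period:
        return out
    # Per-step gains and losses from consecutive price changes.
    gains, losses = [], []
    for a, b in zip(closes, closes[1:]):
        change = b - a
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Seed with a simple mean over the first `period` changes.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period

    def to_rsi(g, l):
        return 100.0 if l == 0.0 else 100.0 - 100.0 / (1.0 + g / l)

    out[period] = to_rsi(avg_gain, avg_loss)
    # Wilder recursive smoothing for the rest of the series.
    for i in range(period, n - 1):
        avg_gain = (avg_gain * (period - 1) + gains[i]) / period
        avg_loss = (avg_loss * (period - 1) + losses[i]) / period
        out[i + 1] = to_rsi(avg_gain, avg_loss)
    return out

closes = [100.0, 102.0, 101.3, 104.2, 103.5, 105.1, 104.8]
print(rsi_reference(closes, period=3))
```

A short period is used here only so the tiny sample series produces values past warmup; with the default period of 14 you would need at least 15 prices.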
If you need a custom build (for example to enable CUDA), build from source via maturin:
```sh
python -m venv .venv

# Linux/macOS
source .venv/bin/activate

# Windows (PowerShell)
.\.venv\Scripts\Activate.ps1

# Windows (cmd.exe)
.venv\Scripts\activate.bat

python -m pip install -U pip maturin numpy
maturin develop --release --features python
```
Documentation paths
Reference, benchmarks, and demo
Three destinations are usually most useful immediately after installation: the API reference, performance context, and a browser-side validation path.
Primary reference
Browse 340 documented indicators
Full inputs, parameters, outputs, warmup notes, streaming variants, and current CUDA coverage across the documented indicator set.
Open indicator catalog →
Performance
Check CPU and CUDA benchmarks
Compare scalar, AVX2, AVX-512, and GPU execution paths before choosing a build surface for your actual workload.
Open benchmark pages →
Browser
Try the interactive demo
Validate indicator behavior in the browser and get a quick feel for the WASM-facing workflow before integrating it elsewhere.
Open demo →
Distribution paths
Published packages and source
Use crates.io or PyPI for released packages. Use the repository when you need source builds, WASM package generation, CUDA toolchain work, or the current workspace.
Rust
crates.io
Cargo-native install path with feature flags such as cuda,
wasm, and python.
Open ↗
Python
PyPI
Easiest route into notebooks, scripts, and PyO3-backed bindings without building the Rust workspace yourself.
Open ↗
Source
GitHub repository
Use the repo when you need local builds, current source, CUDA toolchain work, or WASM package generation from the workspace itself.
Open ↗