
Strategy Development

Strategy development starts to go wrong when indicator output is mistaken for a complete trading idea. Indicators are transforms. A strategy is a claim about when to act, how to size the trade, how to exit, what market friction matters, and what evidence would be strong enough to keep the idea alive after the first attractive backtest. This tutorial is about that boundary.

It walks through the order of work that usually produces strategies you can still explain after the optimizer has touched them.

1. Start with a market claim before you start listing indicators

A useful strategy begins with a sentence that can be wrong. Trend persistence after a volatility contraction is a claim. Mean reversion after an exhausted intraday move is a claim. “Use RSI, MACD, Bollinger Bands, and volume confirmation” reads like a shopping list and misses the actual market claim.

The reason this matters is simple: once the strategy is expressed as a market claim, you can decide what evidence would support it and what conditions should invalidate it. If the strategy begins as a stack of indicators, you usually end up tuning combinations before the actual idea is clear.

2. Define the execution contract before the rules

Before writing entries and exits, state when the signal becomes valid, when an order may be sent, when it may fill, what costs apply, and whether the strategy is long only, short allowed, or both. This is the part people postpone, and it is usually the part that later determines whether the whole backtest meant anything.

If those boundaries are still fuzzy, stop here and read Backtesting Fundamentals. A strategy tutorial is only useful after the simulation contract is legible.
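One way to keep the contract from staying fuzzy is to write it down as data instead of leaving it implicit in the backtest loop. The sketch below is a hypothetical shape for that record; every type and field name is illustrative, not taken from VectorTA or any other library.

```rust
#[allow(dead_code)]
#[derive(Debug, Clone, Copy, PartialEq)]
enum Direction {
    LongOnly,
    ShortOnly,
    Both,
}

#[allow(dead_code)]
#[derive(Debug, Clone, Copy)]
struct ExecutionContract {
    /// Bars between the signal becoming valid and the order being sent.
    signal_to_order_delay_bars: usize,
    /// Fill at the next bar's open rather than the signal bar's close.
    fill_at_next_open: bool,
    /// Assumed round-trip cost (commission plus slippage) in basis points.
    round_trip_cost_bps: f64,
    direction: Direction,
}

impl ExecutionContract {
    /// Round-trip cost as a fraction of notional, ready to apply to returns.
    fn cost_fraction(&self) -> f64 {
        self.round_trip_cost_bps / 10_000.0
    }
}
```

The point is not the particular fields; it is that a backtest can take this value as an argument, so the assumptions are visible in one place instead of scattered through the simulation code.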

3. Keep the first rule set small enough to audit

The first version of a strategy should be boring enough that you can explain every branch without scrolling for a minute. One trend filter, one entry condition, one exit condition, and one sizing rule is enough for a first pass. If the initial strategy requires a wall of exceptions, the rule set is still carrying unresolved uncertainty.

struct Signal {
    enter_long: bool,
    exit_long: bool,
}

/// Long-only moving-average crossover fragment. Guards the first bar,
/// where there is no previous value to compare against (the original
/// index arithmetic would panic on index 0).
fn crossover_signal(fast: &[f64], slow: &[f64], index: usize) -> Signal {
    if index == 0 {
        return Signal { enter_long: false, exit_long: false };
    }

    let crossed_up = fast[index - 1] <= slow[index - 1] && fast[index] > slow[index];
    let crossed_down = fast[index - 1] >= slow[index - 1] && fast[index] < slow[index];

    Signal {
        enter_long: crossed_up,
        exit_long: crossed_down,
    }
}

That snippet covers one clean decision fragment. Fragments like this can be tested, timed, and argued about directly. That gets much harder once the rule set expands into a tangle of interacting thresholds.

4. Separate signal generation from portfolio logic

A signal answers whether the strategy wants exposure. Portfolio logic answers how much exposure it gets, whether another position blocks it, and what happens when several signals compete for capital. Mixing those layers too early makes the strategy harder to reason about because a bad sizing rule can disguise itself as a bad entry rule and vice versa.

The practical benefit of separation is that you can improve sizing, concentration limits, and drawdown controls without having to rewrite the indicator logic each time.
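A minimal sketch of that boundary, with hypothetical names: the signal layer only reports desired direction, and everything about size lives on the other side of the function call.

```rust
/// Portfolio layer sketch. The signal layer only says "want long" or
/// "want flat"; sizing and the concentration cap are decided here.
fn target_weight(wants_long: bool, wants_flat: bool, current: f64, cap: f64) -> f64 {
    if wants_flat {
        0.0
    } else if wants_long {
        cap.min(1.0) // the cap lives here, never inside the signal code
    } else {
        current // no opinion from the signal: hold the existing exposure
    }
}
```

Changing `cap`, or adding a drawdown-based override, touches only this layer; the entry logic stays untouched, so a sizing mistake can no longer masquerade as an entry mistake.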

5. Build the strategy around failure modes

Every real strategy has a way it tends to fail. Trend systems get chewed up in chop. Mean reversion systems get run over during persistent moves. Breakout systems suffer from false breaks and volatility traps. Development improves once you ask the question directly: what does this idea do badly, and how would I notice that in the results?

This is also where risk work begins to matter. Stops, exposure caps, and regime filters exist to stop a known failure mode from becoming fatal.
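One way to make a trend system's chop failure mode observable is a regime gate. The sketch below uses a Kaufman-style efficiency ratio (net movement divided by total movement over a lookback window); the threshold is an assumption to be tested against your own data, not a recommendation.

```rust
/// Efficiency ratio over a lookback window: absolute net change divided
/// by the sum of absolute bar-to-bar changes. Values near 0 mean chop,
/// values near 1 mean a clean directional move.
fn efficiency_ratio(closes: &[f64], lookback: usize) -> Option<f64> {
    if lookback == 0 || closes.len() <= lookback {
        return None;
    }
    let end = closes.len() - 1;
    let start = end - lookback;
    let net = (closes[end] - closes[start]).abs();
    let total: f64 = (start..end).map(|i| (closes[i + 1] - closes[i]).abs()).sum();
    if total == 0.0 { None } else { Some(net / total) }
}

/// Hypothetical regime gate: suppress trend entries when the market is
/// too choppy for the claim to apply.
fn trend_entries_allowed(closes: &[f64], lookback: usize, min_ratio: f64) -> bool {
    efficiency_ratio(closes, lookback).map_or(false, |r| r >= min_ratio)
}
```

A filter like this does not fix the failure mode; it makes the failure mode a named, measurable condition instead of an unexplained loss cluster in the equity curve.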

6. Validation starts before optimization

Optimization is useful, but only after the strategy is coherent enough that a better or worse parameter value still refers to the same underlying idea. If the first strategy draft is too unstable to survive small parameter changes, the optimizer will only produce a cleaner summary of instability.

The usual order should be: express the claim, define the execution contract, implement the smallest credible rule set, test the basic behavior, then optimize if there is still an idea worth searching.
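A small, concrete form of that stability check, runnable before any broad search (the scoring closure and step size below are stand-ins): evaluate the strategy at a parameter value and its immediate neighbors, and treat a score that collapses for small shifts as instability rather than edge.

```rust
/// Score a one-parameter strategy at `center` and its two neighbors.
/// `evaluate` stands in for a full backtest that returns a single score;
/// here it is whatever the caller supplies.
fn neighborhood_scores<F: Fn(usize) -> f64>(evaluate: F, center: usize, step: usize) -> [f64; 3] {
    [
        evaluate(center.saturating_sub(step)),
        evaluate(center),
        evaluate(center + step),
    ]
}
```

If the three scores look nothing alike, the optimizer has nothing coherent to search yet.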

A minimal development loop

  1. Write the market claim in one or two sentences.
  2. State the execution contract explicitly.
  3. Implement the smallest rule set that tests the claim.
  4. Inspect trades and equity path before chasing headline metrics.
  5. Stress the result with costs, nearby parameters, and holdout data.
  6. Only then move to broader optimization.
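The cost part of step 5 can start very small. The sketch below (cost level and trade returns are illustrative) re-prices each trade under a flat round-trip cost assumption and reports whether the average edge survives.

```rust
/// Mean per-trade return after subtracting a flat round-trip cost,
/// expressed as a fraction (e.g. 0.001 = 10 bps). Returns 0.0 for an
/// empty trade list rather than dividing by zero.
fn mean_after_costs(trade_returns: &[f64], round_trip_cost: f64) -> f64 {
    if trade_returns.is_empty() {
        return 0.0;
    }
    let total: f64 = trade_returns.iter().map(|r| r - round_trip_cost).sum();
    total / trade_returns.len() as f64
}
```

An edge that disappears at a plausible cost level is an answer, and getting it here is far cheaper than discovering it after a full optimization run.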

Common ways strategy work gets diluted

  • Adding more indicators while the underlying claim stays blurry.
  • Optimizing before the execution rules are fully defined.
  • Treating risk management as cleanup work after the core design is already set.
  • Judging the strategy by one summary metric while ignoring the full trade path.
  • Confusing a strong in-sample fit with evidence of robustness.

Where VectorTA and VectorGrid fit

VectorTA belongs at the point where you need reliable signal computation and clear indicator behavior. VectorGrid belongs later, when the search itself becomes the bottleneck and you need to evaluate a coherent strategy across a real parameter surface. Neither one removes the need to define the idea cleanly first.

Next reads

If the next question is simulation discipline, continue with Backtesting Fundamentals. If the next question is throughput and parameter search, read Performance Tuning and Backtesting Engine. If you still need to refine what the indicator layer is actually measuring, go back to Technical Indicators Theory.