
Divergence Doesn't Work the Way You Think It Does



Most traders learn divergence as a single rule: when price and an indicator disagree, the trend is exhausted. Short the bear divergence, long the bull divergence, symmetric trade.

Run that rule against 180 days of /ES 5-minute bars with a 6-method ensemble vote and you get a result that's worth pausing on:

HARD-BEAR  n=67   W=57%  Edge=+5.9 pts
HARD-BULL  n=101  W=43%  Edge=-7.1 pts

Bear divergences resolved in the expected direction. Bull divergences resolved against it. Same ensemble, same threshold logic, same gate — opposite outcomes.

That's not a quirk of one tier. The asymmetry held across Soft, Medium, and Hard:

SOFT-BEAR  Edge=+2.0   |   SOFT-BULL  Edge=-4.3
MED-BEAR   Edge=+3.5   |   MED-BULL   Edge=-7.7
HARD-BEAR  Edge=+5.9   |   HARD-BULL  Edge=-7.1

And it held when the same thresholds were re-evaluated on the held-out 30% of the data the indicator had never seen during tuning.


The indicator that produced these numbers is MCAQI (Market Cap-Adjusted Quantity Index) — a 6-method divergence detector with MFE/MAE validation and a walk-forward split baked into the source. It's this week's drop in the thinkScript Library.


Worth thinking about before you take the next divergence call at face value. Why might bear divergences and bull divergences carry different information on the same instrument, in the same regime, on the same timeframe?


Drop your hypotheses in the group thread.


---------- Marketfragments ----------


For the quants


A few methodological notes if you're going to take this seriously:


  • Threshold tuning leak. Method thresholds (corrThreshold=0.30, residZThreshold=1.50, etc.) were iteratively tuned against the same 180 days reported above. The 70/30 walk-forward — re-evaluating fixed thresholds on a held-out partition — is an honest effort, not a clean out-of-sample test (the split is sketched below). True validation requires forward bars that didn't exist when this was written.


  • Confidence on the edge estimate. HARD-BEAR n=67 sits near the lower bound of trustworthy. The normal-approximation 95% CI on the 57% win rate is roughly ±12 pp (arithmetic checked below). The +5.9 pt edge is a point estimate. If you're sizing a position off it, use the lower CI bound, not the mean.


  • Vote independence is overstated. The six methods are correlated, not orthogonal. Slope and ROC differential covary; pivot and percentile divergence also covary. A 5-of-6 Hard-tier vote is closer to “≈3 effectively independent signals agree” once you adjust for redundancy. PCA on the binary fire vectors gives the real signal dimensionality (sketch below).


  • Index basis. The underlying quantity is volume / (1 + sectorBias · log(close·volume + 1)). The log dollar-volume penalty caps high-priced large-cap influence so /ES intraday participation isn't dominated by occasional mega-cap rotations. Swap the basis to a footprint-derived or volume-by-price metric in one line (the formula is restated as code below).


  • MFE/MAE accounting. pTest tracks the running max/min over a 40-bar lookahead, accumulating xh − stV and stV − xl at window close (sketched below). Variance is computed inline against fresh values to avoid stale-reference bias. Edge is gross — a /ES round trip eats ~$15–25 in commissions plus 1–2 ticks of slippage on a bar-close fire. Subtract those before treating the edge as net.


  • Regime dependency. The lookback is dominated by a primary-uptrend regime. The bear/bull asymmetry may invert in primary downtrends. Re-run annually, or split the dataset by a regime classifier and report tier edge per regime (one way to do the split is sketched below).


  • Signal debouncing. Overlapping pTest windows are dropped to avoid double-counting clustered signals. This is a slight conservative bias on n — the actual fire count is 5–15% higher than reported, but the per-signal MFE/MAE statistics remain honest (debounce rule sketched below).
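
Sketches for the notes above, in Python rather than thinkScript, since these checks live outside the indicator. First, the 70/30 split. tune_thresholds and evaluate are hypothetical placeholders; only the chronological partition and the frozen-threshold re-evaluation come from the notes above.

def walk_forward(bars, tune_thresholds, evaluate, train_frac=0.70):
    # Chronological split: no shuffling, tuning sees only the first 70%
    split = int(len(bars) * train_frac)
    train, test = bars[:split], bars[split:]
    thresholds = tune_thresholds(train)    # corrThreshold, residZThreshold, ...
    return evaluate(test, thresholds)      # fixed thresholds on unseen bars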
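
The CI arithmetic, assuming a plain Wald interval (the normal approximation named above):

import math

p, n, z = 0.57, 67, 1.96                      # HARD-BEAR win rate and count
half_width = z * math.sqrt(p * (1 - p) / n)   # ~0.119, i.e. ~±12 pp
print(f"95% CI: {p - half_width:.1%} to {p + half_width:.1%}")  # ~45.1% to ~68.9%

Note that the lower bound dips below 50%, which is exactly why the note says to size off the bound, not the mean.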
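
The PCA dimensionality check, assuming fires is an (n_signals × 6) 0/1 matrix recording which of the six methods fired on each signal; the 95%-variance cutoff is a common convention, not necessarily the one used in the source.

import numpy as np

def effective_dimensionality(fires, var_explained=0.95):
    # Eigen-decompose the covariance of the centered 0/1 fire columns
    cov = np.cov(fires - fires.mean(axis=0), rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    cum = np.cumsum(eigvals) / eigvals.sum()
    # Components needed to explain 95% of the vote variance
    return int(np.searchsorted(cum, var_explained) + 1)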
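
The index basis as code, one line per the note above (array inputs; sector_bias as in the source):

import numpy as np

def mcaqi_basis(close, volume, sector_bias):
    # The log dollar-volume term damps high-priced, high-volume bars
    return volume / (1.0 + sector_bias * np.log(close * volume + 1.0))

Swapping the basis means replacing only that return expression with a footprint-derived or volume-by-price quantity.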
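
The MFE/MAE tally per signal, using the source's names (stV = entry reference, xh/xl = the window's extremes); the array interface is mine, and direction handling for bear vs. bull fires is omitted.

import numpy as np

def mfe_mae(high, low, entry_idx, st_v, lookahead=40):
    # Running max/min over the 40-bar lookahead, settled at window close
    window = slice(entry_idx + 1, entry_idx + 1 + lookahead)
    xh, xl = np.max(high[window]), np.min(low[window])
    return xh - st_v, st_v - xl    # (MFE, MAE) in points, gross of costs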
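
One way to do the per-regime split, assuming a per-signal table with tier, regime, and edge columns (the column names are mine):

import pandas as pd

def edge_by_regime(signals: pd.DataFrame) -> pd.DataFrame:
    # Tier edge reported separately under each regime label
    return (signals.groupby(["regime", "tier"])["edge"]
                   .agg(n="count",
                        win_rate=lambda e: (e > 0).mean(),
                        mean_edge="mean"))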
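
And the debounce, as a greedy first-fire-wins pass; whether the source keeps the first or the strongest fire in a cluster isn't stated.

def debounce(fire_indices, window=40):
    # Drop any fire that lands inside a kept fire's open pTest window
    kept, last = [], None
    for i in sorted(fire_indices):
        if last is None or i - last >= window:
            kept.append(i)
            last = i
    return kept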
