# Keyword cannibalization in ASO: a data-driven fix
Meta description: App Store keyword cannibalization silently kills organic installs. Build a SQLite-backed scoring model to detect and fix it with real ranking data.
Tags: mobile, android, ios, productengineering, startup
## TL;DR
Most mobile teams unknowingly compete against their own listings for the same keywords. We built a SQLite-backed keyword tracking pipeline that scores by install-conversion probability rather than raw search volume, detected cannibalization across three of our own apps, and doubled organic installs in 90 days without shipping a single code change. Below is the framework, the queries, and the ranking factor experiments that made it work.
## The vanity metric trap
Reid Hoffman recently made a sharp observation about the “tokenmaxxing” debate in AI: tracking token usage can gauge adoption, but it should be paired with context and not treated as a direct productivity metric. That same principle applies to App Store Optimization.
Most teams obsess over keyword search volume, the ASO equivalent of token counts. High volume feels productive. But without conversion context, you’re optimizing for visibility, not installs. And if you have multiple apps or localizations targeting overlapping keywords, you’re cannibalizing your own rankings.
I’ve seen this play out repeatedly in production systems for mobile distribution. The numbers are consistent: keyword quality multiplied by conversion probability beats keyword volume every time.
## How store algorithms weight keywords
Both the App Store and the Play Store apply different ranking weights depending on where a keyword appears. The mistake most teams make is treating all fields equally.
| Field | App Store Weight | Play Store Weight | Max Length |
|---|---|---|---|
| Title | ~50% | ~45% | 30 chars |
| Subtitle | ~20% | N/A | 30 chars |
| Keyword Field | ~20% | N/A | 100 chars |
| Short Description | N/A | ~25% | 80 chars |
| Description | ~5%* | ~25% | 4000 chars |
| URL/Package Name | ~5% | ~5% | Varies |
*Apple claims descriptions aren’t indexed, but our experiments showed exact-match phrases in descriptions correlated with marginal ranking lifts for low-competition terms.
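The weighting table suggests a quick way to sanity-check where a keyword lives in your metadata. A minimal sketch, using the approximate iOS weights above (expressed as integer percentages; the field names and helper are illustrative, not any store API):

```python
# Approximate iOS field weights from the table above, as percentages.
IOS_FIELD_WEIGHTS = {
    "title": 50,
    "subtitle": 20,
    "keyword_field": 20,
    "description": 5,
    "url": 5,
}

def placement_score(fields_containing_keyword):
    """Sum the weights of every metadata field that contains the keyword."""
    return sum(IOS_FIELD_WEIGHTS.get(f, 0) for f in set(fields_containing_keyword))

# A keyword in the title plus keyword field carries more weight
# than one spread across subtitle, description, and URL.
print(placement_score(["title", "keyword_field"]))          # 70
print(placement_score(["subtitle", "description", "url"]))  # 30
```

The point isn't the exact numbers, which the stores never publish; it's that moving a term from the description into the title is roughly a 10x change in weight, so placement decisions dominate everything else.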
One finding from our experiments surprised us: title word order matters a lot on iOS. “Budget Tracker - Expense Manager” and “Expense Manager - Budget Tracker” ranked differently for both terms. The first keyword in the title consistently ranked 8-15 positions higher than the second in our A/B tests across six locales. I didn’t expect the gap to be that large.
## Detecting cannibalization with SQLite
The core problem with multi-app keyword strategy is that teams optimize each listing in isolation. We built a lightweight pipeline to detect overlap.
```sql
CREATE TABLE keyword_rankings (
    app_id TEXT,
    keyword TEXT,
    store TEXT,
    locale TEXT,
    rank INTEGER,
    search_volume INTEGER,
    conversion_rate REAL,
    recorded_at DATE
);
```
```sql
-- Detect cannibalization: keywords where multiple
-- owned apps rank in the top 50
SELECT
    keyword,
    store,
    locale,
    COUNT(DISTINCT app_id) AS competing_apps,
    GROUP_CONCAT(app_id || ':' || rank) AS app_ranks,
    MAX(search_volume) AS search_volume,
    AVG(conversion_rate) AS avg_cvr
FROM keyword_rankings
WHERE rank <= 50
  AND recorded_at = DATE('now')
GROUP BY keyword, store, locale
HAVING COUNT(DISTINCT app_id) > 1
ORDER BY MAX(search_volume) * AVG(conversion_rate) DESC;
This query surfaced 23 cannibalized keywords across our three apps. For each, we applied a simple decision framework:
- The app with the highest CVR keeps the keyword in its title/subtitle
- Other apps move it to the keyword field or drop it entirely
- Freed-up character budget goes to untapped long-tail terms
Straightforward, maybe even obvious in hindsight. But nobody on our team had actually checked for this overlap before.
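The decision framework above is mechanical enough to encode. A sketch, assuming each cannibalized keyword comes with the per-app conversion rates from the query (app IDs are hypothetical):

```python
def resolve_cannibalization(contenders):
    """contenders: list of (app_id, conversion_rate) pairs for one
    cannibalized keyword. The highest-CVR app keeps the keyword in its
    title/subtitle; every other app demotes it to the keyword field
    or drops it, freeing character budget for long-tail terms."""
    ranked = sorted(contenders, key=lambda c: c[1], reverse=True)
    keeper, *demoted = ranked
    return keeper[0], [app_id for app_id, _ in demoted]

keeper, demoted = resolve_cannibalization([
    ("com.example.one", 0.031),
    ("com.example.two", 0.052),
    ("com.example.three", 0.018),
])
# keeper == "com.example.two"; the other two apps demote the term
```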
## The scoring model: prioritize by install probability
Raw search volume is misleading. We score keywords using a composite metric:
```sql
-- Install-conversion priority score (latest snapshot only)
SELECT
    keyword,
    search_volume,
    conversion_rate,
    ROUND(search_volume * conversion_rate * (1.0 / NULLIF(rank, 0)), 2)
        AS install_priority_score
FROM keyword_rankings
WHERE app_id = 'com.our.mainapp'
  AND store = 'ios'
  AND recorded_at = DATE('now')
ORDER BY install_priority_score DESC
LIMIT 50;
```
The install_priority_score penalizes high-volume keywords where you rank poorly (and therefore convert poorly) while rewarding moderate-volume keywords where you already have traction. This shifted our keyword strategy dramatically. We dropped three high-volume head terms and replaced them with 11 long-tail phrases that collectively drove more installs. Counterintuitive, but the math doesn’t lie.
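To make the trade-off concrete, here is the same formula as a one-line Python function with hypothetical numbers for a head term versus a long-tail term (the figures are illustrative, not our actual data):

```python
def install_priority_score(search_volume, conversion_rate, rank):
    """Mirrors the SQL above: volume x CVR, discounted by current rank."""
    return round(search_volume * conversion_rate * (1.0 / rank), 2)

# Head term: huge volume, but we rank 40th and convert poorly.
head = install_priority_score(50000, 0.005, 40)   # 6.25
# Long-tail term: modest volume, but rank 5 with solid conversion.
tail = install_priority_score(3000, 0.04, 5)      # 24.0
```

The long-tail term scores roughly 4x higher despite having under a tenth of the search volume, which is exactly the pattern that led us to swap three head terms for eleven long-tail phrases.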
## Localization as a ranking multiplier
One underutilized lever: Apple indexes keywords from multiple locale keyword fields for the same storefront. Setting keywords in both en-US and es-MX for the US App Store effectively doubles your indexable keyword budget from 100 to 200 characters. Our tests showed a 30-40% increase in indexed keywords per storefront using this approach, with no negative ranking signal.
This feels like a loophole, and Apple may close it eventually. But right now it works, and most teams aren’t using it.
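Managing two 100-character fields by hand gets error-prone, so it helps to pack a deduplicated keyword list programmatically. A sketch, assuming Apple's comma-separated, no-spaces keyword field convention (the greedy packing is our own illustrative approach, not anything Apple documents):

```python
MAX_FIELD_CHARS = 100  # Apple's per-locale keyword field limit

def pack_keyword_fields(keywords, n_fields=2):
    """Greedily pack comma-separated keywords into n_fields locale keyword
    fields (e.g. en-US and es-MX), each capped at 100 characters.
    Keywords that fit nowhere are silently dropped."""
    fields = [[] for _ in range(n_fields)]
    lengths = [0] * n_fields
    for kw in keywords:
        for i in range(n_fields):
            added = len(kw) + (1 if fields[i] else 0)  # +1 for the comma
            if lengths[i] + added <= MAX_FIELD_CHARS:
                fields[i].append(kw)
                lengths[i] += added
                break
    return [",".join(f) for f in fields]

fields = pack_keyword_fields(["budget", "expense", "tracker", "spending"])
```

Running a deduplication pass first matters: a term already in the title or subtitle is wasted space in either keyword field.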
## Results
After resolving cannibalization and switching to conversion-weighted keyword selection:
| Metric | Before | After (90 days) |
|---|---|---|
| Organic Installs/Day | ~340 | ~710 |
| Cannibalized Keywords | 23 | 2 |
| Avg. Keyword Rank (Top 10) | 14.2 | 6.8 |
| Long-Tail Keywords Indexed | 87 | 203 |
No app changes. No new features. Just metadata.
## What to do with this
Audit for self-cannibalization now. If you manage more than one app, or one app with heavy localization, run the overlap query above. You are almost certainly splitting ranking power across your own listings. We were, and we had no idea.
Score keywords by install probability, not search volume. Volume without conversion context is a vanity metric, the same trap Hoffman warned about with token counting. Pair every keyword with its conversion rate and your current rank to get a real priority score.
Use secondary locale keyword fields to expand your indexable surface area without touching your primary metadata. Most teams leave this on the table.
The best ASO work looks like engineering, not marketing. Build the pipeline, trust the data, and let dozens of small keyword improvements compound into something no single feature launch can match.