I Built Custom Claude Code Skills for Android Development — Here's How They Work
Every time I started a new coding session with Claude Code, the same thing happened. I’d explain my architecture. MVI with sealed classes for State, Intent, and Effect. Koin for DI with scoped modules per feature. Compose with strict state hoisting. Every. Single. Time.
Claude is smart, but it doesn’t remember your last session. So it reads your files, traces your imports, figures out your patterns, and then starts working. By the time it actually writes useful code, you’ve already burned through a chunk of your context window.
I got tired of repeating myself. So I built custom skills.
What’s a Claude Code skill?
A skill is a markdown file you put in `~/.claude/commands/`. It becomes a slash command. Type `/mvi-skill profile` and Claude scaffolds a complete MVI feature module using your exact architecture. No guessing, no “let me read your codebase first.”
The skill file contains your conventions, templates, and rules. Claude reads it once when you invoke the command and follows it precisely.
Think of it as encoding a senior developer’s brain into a reusable prompt.
The skills I built
I started with four. Each one solves a specific problem I was hitting daily.
/mvi-skill — Full feature scaffolding
This is the one that saves the most time. I type /mvi-skill settings and get:
- Contract file with sealed `SettingsState`, `SettingsIntent`, and `SettingsEffect` classes
- ViewModel with `StateFlow` for state and `Channel` for effects
- Screen composable with proper state hoisting and effect collection
- Domain layer with UseCase stubs
- Koin module wired up and ready
All matching my exact architecture. Not some generic tutorial version with LiveData and an abstract base class I don’t use.
Here’s what the Contract output looks like:
```kotlin
@Immutable
data class SettingsState(
    val isLoading: Boolean = true,
    val error: String? = null,
    val data: List<SettingsItem> = emptyList(),
)

sealed interface SettingsIntent {
    data object LoadData : SettingsIntent
    data object Refresh : SettingsIntent
    data class OnItemClick(val id: String) : SettingsIntent
}

sealed interface SettingsEffect {
    data class ShowError(val message: String) : SettingsEffect
    data class NavigateTo(val route: String) : SettingsEffect
}
```
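The ViewModel that consumes this contract applies the StateFlow-for-state pattern from the list above. Here's a framework-free sketch of the reducer logic; the `reduce` function and `SettingsItem` fields are my illustration, not the skill's literal output:

```kotlin
// Contract types mirroring the generated SettingsState/SettingsIntent.
data class SettingsItem(val id: String, val label: String)

data class SettingsState(
    val isLoading: Boolean = true,
    val error: String? = null,
    val data: List<SettingsItem> = emptyList(),
)

sealed interface SettingsIntent {
    data object LoadData : SettingsIntent
    data object Refresh : SettingsIntent
    data class OnItemClick(val id: String) : SettingsIntent
}

// Pure reducer: given the current state and an intent, produce the next state.
// In the real ViewModel this runs inside viewModelScope and updates a
// MutableStateFlow; one-off events like navigation go out through a Channel.
fun reduce(state: SettingsState, intent: SettingsIntent): SettingsState =
    when (intent) {
        SettingsIntent.LoadData,
        SettingsIntent.Refresh -> state.copy(isLoading = true, error = null)
        is SettingsIntent.OnItemClick -> state // navigation is an Effect, not a state change
    }
```

Keeping the reducer pure like this is what makes the generated modules easy to unit test without any Android dependencies.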
Before this skill, I’d spend 15-20 minutes setting up a new feature module. Now it’s done in seconds and it’s consistent every time.
/koin-di — Dependency injection modules
I use Koin with a specific pattern: constructor DSL, scoped per feature, interfaces for everything, single-activity architecture. Claude doesn’t know this by default. It might generate Hilt code, or use the old lambda DSL, or skip interface binding.
The /koin-di skill encodes all of that. Give it a feature name and it generates a properly structured Koin module:
```kotlin
val settingsModule = module {
    singleOf(::SettingsRepositoryImpl) { bind<SettingsRepository>() }
    singleOf(::SettingsLocalDataSource)
    factoryOf(::GetSettingsUseCase)
    factoryOf(::UpdateSettingsUseCase)
    viewModelOf(::SettingsViewModel)
}
```
It also tells you where to register it in the app module. Small thing, but it keeps the project consistent when you’re adding features quickly.
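The registration step it points you at looks roughly like this. This is a config fragment, not runnable on its own, and the `appModule`/`networkModule` names are placeholders for whatever your app already has:

```kotlin
// Hypothetical Application.onCreate() Koin setup: each feature module
// gets registered alongside the shared ones.
startKoin {
    androidContext(this@MyApp)
    modules(
        appModule,
        networkModule,
        settingsModule, // the module generated by /koin-di
    )
}
```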
/security-check — OWASP audit
This one I run before every release. It scans for:
- Hardcoded API keys and secrets in source code
- OWASP Mobile Top 10 violations (exported components, insecure storage, cleartext traffic)
- Missing ProGuard/R8 rules for release builds
- Network security config issues
- Known vulnerable dependency versions
The output is a scored report with specific file:line references and fix suggestions. It catches things I’d normally miss in a manual code review, especially the subtle stuff like a `Log.d()` that prints user data or a missing `android:exported="false"` on a `BroadcastReceiver`.
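For the logging findings in particular, the suggested fix is usually to redact rather than delete. A minimal sketch of the kind of helper a fix might introduce; the `redact` function is my illustration, not the skill's actual output:

```kotlin
// Mask the local part of an email except its first character, so logs
// stay useful for debugging without leaking user data.
fun redact(email: String): String {
    val at = email.indexOf('@')
    if (at <= 1) return "***" // nothing safe to keep
    return email.take(1) + "***" + email.substring(at)
}

// Before: Log.d("Auth", "login for $email")         <- flagged by /security-check
// After:  Log.d("Auth", "login for ${redact(email)}")
```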
/compose-conventions — Code review on autopilot
Compose is flexible, which means there are many ways to write bad Compose code. This skill checks for:
- State hoisting violations — composables reaching into ViewModels directly instead of receiving state as parameters
- `remember` misuse — remembering cheap values, missing keys on expensive computations
- Recomposition traps — unstable types, lambda allocations, missing `derivedStateOf`
- Preview coverage — every public composable should have light and dark mode previews
- Modifier conventions — modifier as the first optional parameter, applied to the root only
It outputs a before/after diff for each violation. Basically an automated code reviewer that knows my team’s exact Compose conventions.
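The checks themselves are prompt instructions, not a program, but the state-hoisting rule is mechanical enough to sketch in code. A toy heuristic of my own (real tooling would parse the AST, not regex the source) that flags composables taking a ViewModel parameter instead of hoisted state:

```kotlin
// Toy lint: flag @Composable functions whose parameter list mentions a
// ViewModel type. A regex over the signature is enough to show the idea.
val composableSignature = Regex("""@Composable\s+fun\s+\w+\s*\(([^)]*)\)""")

fun findHoistingViolations(source: String): List<String> =
    composableSignature.findAll(source)
        .filter { it.groupValues[1].contains("ViewModel") }
        .map { it.value.substringAfter("fun ").substringBefore("(").trim() }
        .toList()
```

A screen that receives `state: SettingsState` and an `onIntent` lambda passes; one that takes `viewModel: SettingsViewModel` gets flagged.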
How to build your own
A skill file has two parts. YAML frontmatter with a description and optional argument hint, then markdown instructions.
Here’s a minimal example:
```markdown
---
description: Generate a UseCase class following project conventions
argument-hint: <use case name>
---

# UseCase Generator

Generate a UseCase class with these rules:

- Single public `invoke` operator function
- Returns Result<T> wrapping the response
- Constructor-injected repository
- Located in feature/{name}/domain/usecase/

$ARGUMENTS
```
The `$ARGUMENTS` placeholder gets replaced with whatever the user types after the slash command. That’s it. Drop it in `~/.claude/commands/` and it works.
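Following the rules in that skill file, the generated class would be shaped roughly like this. `SettingsRepository` and `Settings` are stand-ins for whatever the feature actually uses:

```kotlin
// Shape of the output the skill above describes: single invoke operator,
// Result-wrapped return, constructor-injected repository.
interface SettingsRepository {
    fun loadSettings(): Settings
}

data class Settings(val darkMode: Boolean)

class GetSettingsUseCase(private val repository: SettingsRepository) {
    // Single public entry point; runCatching turns thrown exceptions
    // into Result.failure per the "Returns Result<T>" rule.
    operator fun invoke(): Result<Settings> =
        runCatching { repository.loadSettings() }
}
```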
The real value isn’t saving time
The time savings are nice. But the real value is consistency.
When Claude works from structured instructions instead of figuring out your codebase on the fly, the output matches your patterns. Every feature module looks the same. Every Koin module follows the same structure. Every composable passes the same conventions check.
On a solo project that might not matter much. On a team, or when you’re building fast and switching between features, it matters a lot. The codebase stays clean without extra effort.
Get the skills
All four skills are open source on GitHub:
github.com/ChrisWW/claude-code-android-skills
Clone the repo, copy the skills to ~/.claude/commands/, and customize them for your stack. The MVI templates assume my architecture, the Koin setup assumes my conventions. Fork it and make it yours.
If you build useful skills for your own workflow, open a PR. The more patterns we encode, the less time we all spend explaining the same things to AI.